id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2307.01764 | Knowledge-Aware Audio-Grounded Generative Slot Filling for Limited
Annotated Data | Manually annotating fine-grained slot-value labels for task-oriented dialogue
(ToD) systems is an expensive and time-consuming endeavour. This motivates
research into slot-filling methods that operate with limited amounts of
labelled data. Moreover, the majority of current work on ToD is based solely on
text as the input modality, neglecting the additional challenges of imperfect
automatic speech recognition (ASR) when working with spoken language. In this
work, we propose a Knowledge-Aware Audio-Grounded generative slot-filling
framework, termed KA2G, that focuses on few-shot and zero-shot slot filling for
ToD with speech input. KA2G achieves robust and data-efficient slot filling for
speech-based ToD by 1) framing it as a text generation task, 2) grounding text
generation additionally in the audio modality, and 3) conditioning on available
external knowledge (e.g. a predefined list of possible slot values). We show
that combining both modalities within the KA2G framework improves the
robustness against ASR errors. Further, the knowledge-aware slot-value
generator in KA2G, implemented via a pointer generator mechanism, particularly
benefits few-shot and zero-shot learning. Experiments, conducted on the
standard speech-based single-turn SLURP dataset and a multi-turn dataset
extracted from a commercial ToD system, display strong and consistent gains
over prior work, especially in few-shot and zero-shot setups. | Guangzhi Sun, Chao Zhang, Ivan Vulić, Paweł Budzianowski, Philip C. Woodland | 2023-07-04T15:05:42Z | http://arxiv.org/abs/2307.01764v1 | # Highlights
* A knowledge-aware audio-grounded (KA2G) generative slot-filling framework is proposed for use with limited annotated data for task-oriented dialogue (ToD). KA2G formulates slot filling as a language generation task with natural language prompts. Slot value generation (SVG) is also grounded in the speech input via an ASR module, in addition to a pre-trained language model (PLM), in order to give robustness to recognition errors.
* KA2G integrates external contextual knowledge by using two tree-constrained pointer generator (TCPGen) components, one for ASR and one for SVG, with shared prefix-tree encoding networks. The use of TCPGen greatly benefits the KA2G slot-filling performance, especially on rare and unseen entities and unseen slot types.
* Experiments on the SLURP dataset with speech input showed that the proposed KA2G framework can produce state-of-the-art slot-filling results. KA2G was further evaluated on an in-house multi-turn ToD dataset, CONCIERGE, to validate the effectiveness of KA2G in real-world applications.
* Large and consistent improvements with the full KA2G framework were obtained over a standard pipeline-based ToD baseline on both datasets. The improvements were most prominent for rare and unseen entities on both datasets, with a 4.6% absolute SLU-F1 increase for few-shot entities, an 11.2% increase for zero-shot entities, and a 13.6% increase for unseen slot types in SLURP. Meanwhile, KA2G improved by more than 20 joint goal accuracy (JGA) points in the multi-turn evaluation on CONCIERGE. The importance and contributions of the two TCPGen components were verified in a series of ablation studies and other analyses.
* The proposed KA2G framework has a number of key differences in the use of TCPGen in ToD compared to the previous conference paper (Sun, Zhang and Woodland, 2023):
* Rather than formulating SLU as a sequence-tagging problem which is an audio-grounded extension of text-based methods that are only able to handle predefined slot types, KA2G handles slot-filling as a generative task that depends on both audio input and a knowledge base. It is able to handle an open set of slot types using natural language queries and generates natural language slot values. We also demonstrate that KA2G is more robust to ASR errors and achieves better performance.
* While TCPGen and structured knowledge were mainly considered from the ASR perspective in Sun et al. (2023), in KA2G, a stacked TCPGen structure with a shared tree encoding network is adopted. Notably, TCPGen is applied to SVG to guide the generation process using the most relevant knowledge from multiple ASR alternatives.
# Knowledge-Aware Audio-Grounded Generative Slot Filling for Limited Annotated Data
Guangzhi Sun\({}^{a}\), Chao Zhang\({}^{a}\), Ivan Vulić\({}^{a}\), Paweł Budzianowski\({}^{b}\) and Philip C. Woodland\({}^{a,*}\)
Corresponding author [email protected] (G. Sun); [email protected] (C. Zhang); [email protected] (I. Vulic); [email protected] (P. Budzianowski); [email protected] (P.C. Woodland)
###### Abstract
Manually annotating fine-grained slot-value labels for task-oriented dialogue (ToD) systems is an expensive and time-consuming endeavour. This motivates research into slot-filling methods that operate with limited amounts of labelled data. Moreover, the majority of current work on ToD is based solely on text as the input modality, neglecting the additional challenges of imperfect automatic speech recognition (ASR) when working with spoken language. In this work, we propose a Knowledge-Aware Audio-Grounded generative slot filling framework, termed KA2G, that focuses on few-shot and zero-shot slot filling for ToD with speech input. KA2G achieves robust and data-efficient slot filling for speech-based ToD by 1) framing it as a text generation task, 2) grounding text generation additionally in the audio modality, and 3) conditioning on available external knowledge (_e.g._ a predefined list of possible slot values). We show that combining both modalities within the KA2G framework improves the robustness against ASR errors. Further, the knowledge-aware slot-value generator in KA2G, implemented via a pointer generator mechanism, particularly benefits few-shot and zero-shot learning. Experiments, conducted on the standard speech-based single-turn SLURP dataset and a multi-turn dataset extracted from a commercial ToD system, display strong and consistent gains over prior work, especially in few-shot and zero-shot setups.
## 1 Introduction
_Slot filling_, as a crucial natural language understanding component of task-oriented dialogue (ToD) systems, aims at filling in the correct value for predefined slots (_e.g._ restaurant and hotel names) (Tur, Hakkani-Tur and Heck, 2010; Tur and De Mori, 2011). As manual fine-grained annotation for slot labels is expensive, time-consuming, and usually requires domain expertise (Casanueva, Vulic, Spithourakis and Budzianowski, 2022), increasing demands have been put on the performance of slot-filling systems under _few-shot_ or even _zero-shot_ learning setups (Hou, Che, Lai, Zhou, Liu, Liu, and Liu, 2020; Liu, Winata, Xu and Fung, 2020; Henderson and Vulic, 2021). Following the now-prevalent use of large Transformer-based pretrained language models (PLM) (Radford, Wu, Child, Luan, Amodei and Sutskever, 2019; Devlin, Chang, Lee and Toutanova, 2019; Raffel, Shazeer, Roberts, Lee, Narang, Matena, Zhou, Li and Liu, 2019) for transfer learning across many NLP tasks, PLMs have also been widely adopted in ToD for slot-filling tasks with limited labelled data (Chen, Zhuo and Wang, 2019; Henderson and Vulic, 2021).
More recently, other text-based approaches have reformulated slot filling as a question-answering (QA) or a sequence generation task, in order to further exploit the power of QA-oriented and generative PLMs (Namazifar, Papangelis, Tur and Hakkani-Tur, 2020; Liu, Yu, Chen and Xu, 2022; Hosseini-Asl, McCann, Wu, Yavuz and Socher, 2020; Madotto, Liu, Lin and Fung, 2020), especially in low-data scenarios (Fuisz, Vulic, Gibbons, Casanueva and Budzianowski, 2022; Du, He, Li, Yu, Pasupat and Zhang, 2021; Lin, Liu, Moon, Crook, Zhou, Wang, Yu, Madotto, Cho and Subba, 2021). However, all these approaches operate directly on 'perfect' text input, thus overestimating the performance of speech-based ToD systems where a loss in performance might occur due to imperfect automatic speech recognition (ASR) (Gerz, Su, Kuszto, Mondal, Lis, Singhal, Mrksic, Wen and Vulic, 2021). Imperfect ASR output can especially harm slot filling that deals with entities infrequent in the general language (e.g., atypical personal, restaurant or hotel names) and is even more pronounced in situations with limited annotated data.
While very recent research has started to explore end-to-end slot-filling tasks for ToD with speech input (Wang, Boumadane and Heba, 2021; Le, Shrivastava, Tomasello, Kim, Livshits, Kalinli and Seltzer, 2022), in this work we focus on a particularly challenging situation which is typically met in production: limited annotated data with many rare entities. Therefore, we propose KA2G, a Knowledge-Aware Audio-Grounded generative slot-filling framework which is tailored towards improving the robustness and performance of slot filling with spoken input.
KA2G integrates information from both audio and text as input to a slot-value generator (SVG) which then generates textual fillers for each slot. Note that the final generation is also grounded in the audio modality. This mitigates the issues arising from noisy ASR-generated transcriptions. KA2G particularly boosts the performance of rare and unseen entities by learning to exploit the available external knowledge (e.g., predefined lists of possible values for slots) via two tree-constrained pointer generator (TCPGen) components (Sun, Zhang and Woodland, 2021, 2022). TCPGen builds a neural shortcut between the _biasing list_, which is a list of entities likely to appear in a given context, and the model output via a pointer generator. Biasing lists are extracted from an external knowledge base (KB) containing possible entities for each slot type and are structured as subword-based prefix trees to be searched. The first TCPGen is applied on the ASR side to reduce ASR errors on high-value biasing entities based on the available context. The second TCPGen is applied on the SVG side to bias the generator's output using sub-trees which contain branches on the prefix-trees that are traversed during ASR beam search. The entire KA2G model is jointly optimised in an end-to-end fashion from the input-speech-'end' to the generated slot-value-'end'. The code for KA2G is available at [https://github.com/the-anonymous-bs/espnet/tree/master/egs/slurp/asr1](https://github.com/the-anonymous-bs/espnet/tree/master/egs/slurp/asr1)
Although our previous conference paper (Sun et al., 2023) explored TCPGen in SLU tasks, the proposed KA2G framework is fundamentally different in the following two key aspects:
* Rather than formulating SLU as a sequence-tagging problem which is an audio-grounded extension of the text-based methods that are only able to handle predefined slot types, KA2G handles slot-filling as a generative task that depends on both audio input and a knowledge base. It handles an open set of slot types using natural language queries and generates natural language slot values. We also demonstrate that KA2G is more robust to ASR errors and achieves better performance.
* While TCPGen and structured knowledge were mainly considered from the ASR perspective in Sun et al. (2023), in KA2G a stacked TCPGen structure with a shared tree encoding network is adopted. Notably, TCPGen is applied to the SVG to guide the generation process using the most relevant knowledge from multiple ASR alternatives.
The main experiments were conducted on two structurally different datasets with speech input, with a focus on few-shot and zero-shot learning scenarios: 1) the single-turn SLURP dataset (Bastianelli, Vanzo, Swietojanski and Rieser, 2020), and 2) an in-house multi-turn ToD dataset extracted from a commercial concierge/booking system (henceforth termed CONCIERGE). While the zero-shot setup stretches the abilities of the tested systems to the extreme, the few-shot learning scenario is more pragmatic and suitable for industry research (Lauscher, Ravishankar, Vulic and Glavas, 2020; Henderson and Vulic, 2021), as a small number of labels for each entity can usually be made available. Large and consistent improvements with the full KA2G framework were found over a standard pipeline-based ToD baseline on both datasets, _e.g._, improving by more than 20 joint goal accuracy (JGA) points in multi-turn evaluations on CONCIERGE. The improvements were most prominent for rare and unseen entities on both datasets. The importance and contributions of the two TCPGen components were verified in a series of ablation studies and other analyses.
The rest of this article is organised as follows. Section 2 reviews related studies. Section 3 introduces the KA2G framework, with a detailed explanation of TCPGen and how it can be applied to slot-filling. Section 4 describes the experimental setup, and Section 5 discusses the results. Finally, conclusions are provided in Section 6.
## 2 Related Work
### Slot Filling as a Text Generation Task
Recent research has seen increased interest in reformulating the slot-filling task beyond standard sequence labelling and classification paradigms Shah, Gupta, Fayazi and Hakkani-Tur (2019); Budzianowski and Vulic (2019); Chen et al. (2019); Coope, Farghly, Gerz, Vulic and Henderson (2020). Namazifar et al. (2020) and Liu et al. (2022) recast slot filling as a QA task and a reading comprehension task, respectively, with both studies focusing on applications with limited data. More recently, Fuisz et al. (2022) performed a comprehensive analysis of the QA approach for slot filling and provided both efficient and effective fine-tuning methods for domain-specific slot-filling models.
Formulating slot filling as a text generation task has recently also become an active research area. Mehri and Eskenazi (2021) proposed a generative slot-filling framework that leverages PLMs fine-tuned on specific tasks and domains to improve task-/domain-specific generation. Another research stream focused on framing dialogue state tracking (DST) for multi-turn ToD as a text generation task. In particular, the T5DST model (Lin et al., 2021) utilised different slot descriptions as the prompt for generation for cross-domain DST. However, previous approaches have only dealt with text input and do not use external knowledge, whereas our proposed KA2G framework is audio-grounded and efficiently leverages external knowledge.
### Knowledge Integration for Slot Filling
Research has also been performed on leveraging external knowledge bases or the ontology of a dialogue system for slot-filling. In Chen, Lv, Wang, Zhu, Tan and Yu (2020), domain-slot relations from the dialogue ontology were encoded using a graph neural network (GNN) to guide the system, while Lin, Tseng and Byrne (2021) further extended the use of the GNNs to capture correlations between slots and values in different domains. For slot filling for ToD with speech input, Wang, Ye, Zhou, Xie and Wang (2021) used a Transformer encoder to encode external knowledge into hidden representations, while Sun et al. (2023) built a neural shortcut from the external knowledge base directly to the slot filling output. While both methods focused on zero-shot learning setups, they relied on the standard sequence labelling formulation of the slot-filling task. In contrast, KA2G adopts a more flexible generative framework, which yields improved performance in few-shot and zero-shot scenarios.
### Contextual Knowledge Integration in ASR
Previous studies on contextual biasing have focused on either shallow-fusion-based score-level interpolation (Williams, Kannan, Aleksic, Rybach and Sainath, 2018; Chen, Jain, Wang, Seltzer and Fuegen, 2019; Zhao, Sainath, Rybach, Rondon, Bhatia, Li and Pang, 2019) or deep neural encoders or representations (Pundak, Sainath, Prabhavalkar, Kannan and Zhao, 2018; Chen, Jain, Wang, Seltzer and Fuegen, 2019; Weng, Miryala, Khatri, Wang, Zheng, Molino, Namazifar, Papangelis, Williams, Bell and Tur, 2020; Huang, Abdel-Hamid, Li and Evermann, 2020). Recent work also explored the combination of deep and shallow approaches for contextual biasing. Specifically, Le, Keren, Chan, Mahadeokar, Fuegen and Seltzer (2021); Le, Jain, Keren, Kim, Shi, Mahadeokar, Chan, Shangguan, Fuegen, Kalinli, Saraf and Seltzer (2021) proposed to apply shallow fusion and deep biasing together in the end-to-end ASR model. More recently, pointer-generator-style shortcuts (Sun et al., 2021; Huber, Hussain, Stuker and Waibel, 2021) or neural-FST (Bruguier, Le, Prabhavalkar, Li, Liu, Wang, Chang, Peng, Kalinli and Seltzer, 2022) approaches that directly modify the final ASR output distribution have been investigated, which allowed joint optimisation of the entire network in an end-to-end fashion. Meanwhile, TCPGen (Sun et al., 2021) also achieved high efficiency by using a symbolic prefix-tree search to handle biasing lists of thousands of words. Further research into TCPGen (Sun et al., 2022) used a graph neural network (GNN) to encode the prefix tree in TCPGen, which achieved further improvements in the recognition accuracy of biasing words. TCPGen with powerful GNN encodings acts as the backbone in both our previous work (Sun et al., 2023) and the proposed KA2G framework.
## 3 Methodology
The KA2G framework is illustrated in Fig. 1. It comprises three key components as follows:
_(A)_ The audio-grounded SVG module combines output representations from the ASR module and the text-only PLM to generate values based on the slot prompt. The audio-grounded SVG module, as explained in Section 3.1, acts as the foundation of KA2G where the two proposed TCPGen components for knowledge integration are added.
_(B)_ The knowledge-aware ASR component integrates external knowledge into KA2G via the first TCPGen component (TCPGen\({}_{\text{ASR}}\)). The TCPGen component and how it is integrated into the ASR module of KA2G are explained in Section 3.2, together with the slot shortlist prediction mechanism dedicated to slot-filling tasks to obtain a more focused biasing list.
_(C)_ The knowledge-aware SVG further integrates knowledge explored during the ASR beam search via the second TCPGen component (TCPGen\({}_{\text{SVG}}\)). TCPGen\({}_{\text{SVG}}\) extends the scope of TCPGen-based contextual knowledge integration from ASR to general natural language generation tasks, and is explained in detail in Section 3.3.
### Audio-Grounded SVG
The audio-grounded SVG module comprises (i) an ASR module, (ii) a causal/autoregressive PLM, (iii) an alignment module, and (iv) the SVG; this is illustrated on the right side of Fig. 1. The SVG is implemented as a single-layer unidirectional LSTM which takes the concatenated vectors from the ASR module and the PLM as the representation of the context to make predictions for the value of a given slot query. The LSTM architecture is used for simplicity and increased stability in low-resource setups, and to avoid over-parameterisation since both the ASR and the PLM have complex model structures with millions of parameters. The ASR model is an attention-based encoder-decoder (AED) whose decoder hidden states, \(\mathbf{h}^{\text{dec}}\), are sent to the SVG.
As the label space for the PLM is too sparse to be used by the ASR module, the ASR module instead uses a much smaller subword token vocabulary than the PLM, and hence the sequence of ASR hidden states and the sequence of PLM output vectors \(\mathbf{h}^{\text{PLM}}\) will be asynchronous. To resolve this _(mis-)alignment issue_, the output of the SVG is first set to have the same subword tokens as the ASR module, which helps to make the best use of the acoustic information. The PLM outputs are then taken for each (full) word instead of every subword at each word end and aligned with \(\mathbf{h}^{\text{dec}}\) at each word ending subword before concatenation. For non-terminal subwords, zero-vector padding is used as placeholders for the PLM output.
An example of this alignment is shown in Fig. 2. The alignment of \(\mathbf{h}^{\text{dec}}\) and \(\mathbf{h}^{\text{PLM}}\) is therefore achieved at word ends, and using the same subwords for both ASR and PLM is not necessary. Moreover, for a slot query which prompts the
Figure 1: The KA2G framework for slot filling. Its three key components are indicated by the labels (A), (B) and (C) and are described in Section 3. The knowledge base shows two example slot types (person and game name) containing possible values structured as wordpiece prefix trees. The example in the figure shows that the first pointer generator network (\(\mathsf{TCPGen_{ASR}}\)) traverses branches of mario, soha and rihanna, which are then included in sub-trees. Another pointer generator network (\(\mathsf{TCPGen_{SVG}}\)) then uses these branches to generate slot values.
generation (e.g. the person is or the game name is), as there is no corresponding input audio, the embedding of the preceding wordpiece is used in place of \(\mathbf{h}^{\text{dec}}\). Note that this alignment mechanism allows the slot value generation to use the PLM as well: Whenever a new word is generated, a new \(\mathbf{h}^{\text{PLM}}\) with the new word can be obtained which is then concatenated with the preceding wordpiece to generate the next one.
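To make the alignment concrete, the following minimal sketch (not the authors' implementation; the `is_word_end` flags and the toy dimensions are assumptions of this illustration) pads non-terminal subword positions with zeros and attaches the word-level PLM vector only at word-ending subwords before concatenation with the ASR decoder states:

```python
import torch

def align_plm_to_subwords(h_dec, h_plm, is_word_end):
    """h_dec: (n_subwords, d_dec) ASR decoder states, one per subword.
    h_plm: (n_words, d_plm) PLM output vectors, one per full word.
    is_word_end: per-subword booleans, True at word-ending subwords."""
    n_sub = h_dec.shape[0]
    aligned = torch.zeros(n_sub, h_plm.shape[1])  # zero padding for non-terminal subwords
    word_idx = 0
    for t, end in enumerate(is_word_end):
        if end:                                   # attach the PLM vector at the word end
            aligned[t] = h_plm[word_idx]
            word_idx += 1
    return torch.cat([h_dec, aligned], dim=-1)    # concatenated context representation

# Toy example: "play mario" -> subwords ["pl", "ay_", "mar", "io_"], i.e. 2 words.
ctx = align_plm_to_subwords(torch.randn(4, 8), torch.randn(2, 6), [False, True, False, True])
print(ctx.shape)  # torch.Size([4, 14])
```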
The SVG is trained end-to-end by jointly optimising the ASR and slot-value generation criteria as shown below:
\[\mathcal{L}_{\text{joint}}=\mathcal{L}_{\text{ASR}}+\mathcal{L}_{\text{SVG}}, \tag{1}\]
where the respective sub-losses are defined as
\[\mathcal{L}_{\text{ASR}} = \log P(\mathbf{y}_{1:n}|\mathbf{x}_{1:T}), \tag{2}\] \[\mathcal{L}_{\text{SVG}} = \log P(\mathbf{s}_{1:m}|\mathbf{q}_{1:k},\mathbf{h}^{\text{dec}},\mathbf{h}^{\text{PLM}}). \tag{3}\]
Here, \(\mathbf{y}_{1:n}\) is the subword ASR sequence, \(\mathbf{s}\) is the generated slot value sequence, \(\mathbf{q}\) represents the query sequence (e.g. the game name is) and \(\mathbf{x}_{1:T}\) is the sequence of acoustic features. Note that \(\mathbf{h}^{\text{PLM}}\) in the SVG loss covers not only the context but also the slot query and value using the aforementioned alignment mechanism. In order to allow the model to also handle predictions for slots not present in the utterance, \(N_{n}\) (randomly sampled) slots that are not mentioned in the context are incorporated in training as negative examples, where \(N_{n}\) is a hyper-parameter. The model should learn to generate None values for those 'not-present' slots. The entire SVG, together with the PLM and the two TCPGen components are optimised using the SVG loss.
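A hedged sketch of this training objective is given below; the callables `asr_nll` and `svg_nll` are illustrative placeholders standing in for the ASR and SVG sequence log-likelihood terms of Eqns. (2)-(3), and the query template is only an example of a natural language slot prompt:

```python
import random

def joint_loss(asr_nll, svg_nll, utterance, present_slots, all_slots, n_neg=10):
    """utterance: dict with 'audio' and 'transcript'; present_slots: slot -> gold value."""
    loss = asr_nll(utterance["audio"], utterance["transcript"])          # L_ASR
    for slot, value in present_slots.items():                            # positive slots
        loss = loss + svg_nll(utterance, f"the {slot.replace('_', ' ')} is", value)
    absent = [s for s in all_slots if s not in present_slots]            # N_n negative slots
    for slot in random.sample(absent, min(n_neg, len(absent))):
        loss = loss + svg_nll(utterance, f"the {slot.replace('_', ' ')} is", "None")
    return loss

# Toy usage with dummy losses, only to show the call pattern.
dummy = lambda *args: 0.0
print(joint_loss(dummy, dummy, {"audio": None, "transcript": "play mario kart"},
                 {"game_name": "mario kart"}, ["game_name", "person", "date"], n_neg=2))
```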
During inference, \(\mathbf{h}^{\text{PLM}}\) is obtained by feeding the 1-best ASR hypothesis to the PLM. The same context is prompted with all possible slot types, and those that do not output a None value are saved. For multi-turn ToD, the dialogue history is encoded by the PLM before the start of the current context.
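For illustration, this inference procedure can be sketched as the following loop, where `generate_value` stands in for audio-grounded SVG decoding (an assumption of this sketch, not the actual API):

```python
def fill_slots(asr_1best, all_slots, generate_value, history=""):
    """Prompt every slot type and keep only the slots whose generated value is not 'None'."""
    context = (history + " " + asr_1best).strip()
    filled = {}
    for slot in all_slots:
        value = generate_value(context, f"the {slot.replace('_', ' ')} is")
        if value != "None":
            filled[slot] = value
    return filled

# Toy usage with a stub generator.
print(fill_slots("play songs by rihanna", ["person", "game_name"],
                 lambda ctx, q: "rihanna" if "person" in q else "None"))  # {'person': 'rihanna'}
```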
### Knowledge-Aware ASR
External knowledge is organised as a dictionary of slots along with the possible values per slot: see the left blue block in Fig. 1. It conditions the ASR via contextual biasing using TCPGen\({}_{\text{ASR}}\) (as shown in Fig. 1). Contextual biasing is an effective method to boost the recognition of rare words or entities in end-to-end trainable ASR systems, which represents the knowledge as a biasing list (Le et al., 2021; Pundak et al., 2018; Chen et al., 2019; Weng et al., 2020; Huang et al., 2020; Huber et al., 2021; Bruguier et al., 2022). The biasing list contains a list of biasing entities that are likely to appear in a given context, such as a particular restaurant name or the name of an artist in a playlist, and the recognition accuracy can be improved if they are included in the biasing list. In slot filling, possible named entities for each slot type can be collected to form a structured KB, and the biasing list can be extracted from the KB as explained in Section 4.
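A minimal sketch of structuring such a KB as per-slot prefix trees is shown below; the character-level tokenizer is a stand-in for the subword WordPiece model and the toy KB entries are purely illustrative:

```python
class TrieNode:
    def __init__(self):
        self.children = {}          # subword piece -> child node
        self.is_entity_end = False  # marks a complete biasing entity

def build_prefix_tree(entities, tokenize):
    root = TrieNode()
    for entity in entities:
        node = root
        for piece in tokenize(entity):
            node = node.children.setdefault(piece, TrieNode())
        node.is_entity_end = True
    return root

# One prefix tree per slot type, built from the KB (slot type -> possible values).
kb = {"person": ["soha", "rihanna", "mario"], "game_name": ["mario kart"]}
tokenize = lambda w: list(w)  # placeholder for the real WordPiece tokenizer
trees = {slot: build_prefix_tree(vals, tokenize) for slot, vals in kb.items()}
print(sorted(trees["person"].children))  # ['m', 'r', 's']
```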
#### 3.2.1 TCPGen
TCPGen (Sun et al., 2021) is a neural component combining symbolic prefix-tree search with a neural pointer generator for contextual biasing, which enables end-to-end optimisation with ASR systems. Although TCPGen is described in this section in terms of TCPGen\({}_{\text{ASR}}\), TCPGen\({}_{\text{SVG}}\), which is presented later in Section 3.3, is based on the same mechanism. At each output step of ASR, TCPGen\({}_{\text{ASR}}\) calculates a distribution over all valid subwords, referred to as the TCPGen\({}_{\text{ASR}}\) distribution, constrained by a subword-level prefix-tree built from the biasing list. TCPGen\({}_{\text{ASR}}\) also predicts a generation probability \(P^{\text{gen}}\) indicating how much contextual biasing is needed at a specific step. If there are valid paths found in the tree, the set of valid subwords is copied to the ASR output by interpolating the TCPGen\({}_{\text{ASR}}\) distribution and the original ASR model output distribution, weighted by the generation probability.
An illustration of the computation of TCPGen using the same example prefix tree as Fig. 1 is shown in Fig. 3. During ASR decoding, a set of valid subwords is obtained by searching the prefix tree with the decoded preceding subwords. Then, scaled dot-product attention is performed to obtain the TCPGen\({}_{\text{ASR}}\) distribution \(P^{\text{ptr}}(y_{i})\) (omitting dependencies on \(y_{1:t-1}\) and \(\mathbf{x}_{1:T}\) for presentation clarity) as follows:
\[P^{\text{ptr}}(y_{i})=\text{Softmax}(\text{Mask}(\mathbf{q}_{i}\mathbf{K}^{\text{T}}/\sqrt{d})), \tag{4}\]
where \(d\) is the dimensionality of \(\mathbf{q}_{i}\) and \(\text{Mask}(\cdot)\) sets the probabilities of subwords that do not form valid paths at the current step to zero. The query vector \(\mathbf{q}_{i}\) is computed from the context vector and the previously decoded token embedding. The key and value vectors are node encodings of corresponding subwords on the prefix tree. To enable
lookahead functionality and obtain more powerful node representations, a graph convolutional network (GCN) (Kipf and Welling, 2017; Sun et al., 2022) was used to encode the nodes on the tree. This node encoding can be done efficiently.
The generation probability is calculated using the decoder hidden states and the weighted combination of node encodings from the attention mechanism. Then, the final output can be calculated as follows:
\[P(y_{i})=P^{\text{mdl}}(y_{i})(1-P_{i}^{\text{gen}})+P^{\text{ptr}}(y_{i})P_{i} ^{\text{gen}}, \tag{5}\]
where \(P^{\text{mdl}}(y_{i})\) represents the output distribution from the standard end-to-end model, and \(P_{i}^{\text{gen}}\) is the generation probability.
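The per-step computation of Eqns. (4)-(5) can be sketched as follows; shapes and names are illustrative assumptions, and the query/key construction is simplified relative to the full model:

```python
import torch

def tcpgen_step(q, node_keys, valid_mask, p_mdl, p_gen):
    """q: (d,) query; node_keys: (V, d) node encodings aligned with the subword vocabulary;
    valid_mask: (V,) bools, True for subwords on valid tree paths (assumed non-empty);
    p_mdl: (V,) standard model distribution; p_gen: scalar generation probability."""
    scores = node_keys @ q / q.shape[-1] ** 0.5           # scaled dot-product attention
    scores = scores.masked_fill(~valid_mask, float("-inf"))
    p_ptr = torch.softmax(scores, dim=-1)                 # Eqn. (4): zero off-tree mass
    return p_mdl * (1 - p_gen) + p_ptr * p_gen            # Eqn. (5): interpolation

V, d = 10, 16
valid = torch.tensor([i in (2, 5) for i in range(V)])
out = tcpgen_step(torch.randn(d), torch.randn(V, d), valid,
                  torch.softmax(torch.randn(V), dim=-1), 0.3)
print(round(out.sum().item(), 4))  # 1.0
```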
#### 3.2.2 Slot Shortlist Prediction for TCPGen in SLU
For slot filling, possible entities for each slot are structured as separate prefix trees, as shown in Fig. 1. In order to have a more focused biasing list, instead of using all slots, a shortlist of slots is predicted at the start of decoding for each word using a class language model (CLM) (Sun et al., 2023; Huang et al., 2020; Bruguier et al., 2022) which takes the decoded word-level history as input. The top \(K\) slot types predicted by the CLM are used as contextual knowledge, where one TCPGen distribution is calculated for each slot type to model the joint distribution of slot types and wordpieces. The TCPGen distribution used for the pointer generator is obtained by marginalising with respect to the slot types, i.e. summing up the probabilities of the same wordpiece across all shortlisted slots, as shown in Eqn. (6):
\[P^{\text{ptr}}(y_{i})=\sum_{s\in S}P^{\text{ptr}}(s,y_{i}) \tag{6}\]
Note that the top \(K\) slot list is updated with the current decoded word history when there is no valid path found on any of the prefix trees.
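As a small illustration of Eqn. (6), assuming one joint pointer distribution per shortlisted slot type, the marginalisation simply sums the per-slot probabilities of each wordpiece:

```python
import torch

def marginalise_over_slots(per_slot_joint):
    """per_slot_joint: dict slot -> (V,) tensor with P^ptr(slot, y_i); returns P^ptr(y_i)."""
    return torch.stack(list(per_slot_joint.values()), dim=0).sum(dim=0)

V = 6
joint = {"person": torch.full((V,), 0.6 / V), "game_name": torch.full((V,), 0.4 / V)}
print(round(marginalise_over_slots(joint).sum().item(), 4))  # 1.0
```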
Moreover, as the generation probability \(P^{\text{gen}}\) provides an indication of how much contextual biasing is needed to decode each subword token, it is concatenated with \(\mathbf{h}^{\text{dec}}\) and sent to the SVG to further indicate where in the context the knowledge has been used. TCPGen\({}_{\text{ASR}}\) is jointly optimised with the SVG module.
### Knowledge-Aware SVG
Alternative hypotheses are an essential resource that can be obtained from the ASR system, especially for low-frequency named entities. To exploit the knowledge available in additional ASR hypotheses, the knowledge-aware SVG module is proposed here: it extracts branches on each prefix-tree during ASR beam-search decoding and forms sub-trees to be used by TCPGen\({}_{\text{SVG}}\) (as shown in Fig. 1) to integrate knowledge into the SVG. In particular, as each prefix tree used in the ASR beam search decoding is searched, a valid path that leads from the root node to a leaf node will be saved, which corresponds to a valid named entity belonging to that slot type. After decoding, the lists of entities corresponding to the valid paths found for each slot are gathered and organised into prefix trees. These prefix trees are essentially sub-trees of the original prefix trees for each slot type.
Figure 3: Illustration of the TCPGen component for ASR (TCPGen\({}_{ASR}\)) with corresponding terms in Eqn. (5). (a). If “so” is the preceding token, “phi” and “ha\({}_{-}\)” are two valid wordpieces with non-zero \(P^{\text{ptr}}\). Note that TCPGen\({}_{\text{SVG}}\), introduced next, is also based on this mechanism. (b). \(P^{\text{ptr}}(y_{i})\) is the TCPGen distribution. \(P^{\text{mdl}}(y_{i})\) is the distribution generated by a standard end-to-end model. \(P(y_{i})\) is the final output distribution. \(P_{i}^{\text{gen}}\) is the generation probability.
Next, sub-trees are encoded using the same GCN as used in Section 3.2 and are searched when generating slot values in the same way as with TCPGen\({}_{\text{ASR}}\). In contrast to TCPGen\({}_{\text{ASR}}\), for TCPGen\({}_{\text{SVG}}\) the query comes from the SVG hidden state at each decoding step, while the keys and values are taken from the node encodings on the sub-trees. In the example shown in Fig. 1, the beam search traverses the entities "rihanna" and "soha" in the person slot, and hence the sub-tree of the person slot is constructed using these two entities. When prompting the system with the slot person, this sub-tree is subsequently used to generate values.
In this manner, possible entities that are not covered by the 1-best hypothesis but are explored in the ASR beam search can be effectively used via the copy mechanism in the SVG. In addition to the benefit of exploring other hypotheses, TCPGen\({}_{\text{SVG}}\) on the generation side also improves the performance on rare and unseen entities even if they are correctly recognised, as the SVG might still be unable to pick them out due to insufficient training samples. TCPGen\({}_{\text{SVG}}\), as a pointer generator mechanism, enables the SVG to directly copy entities from the relevant knowledge that is also filtered by the ASR system, even if they are not seen in training.
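The sub-tree construction can be approximated by the sketch below. In the actual framework the valid root-to-leaf paths are recorded directly during the tree-constrained beam search; here, a post-hoc substring match over the n-best hypotheses is used purely as a simplification, and the resulting per-slot entity sets would then be re-organised as prefix sub-trees for TCPGen\({}_{\text{SVG}}\):

```python
def collect_subtree_entities(hypotheses, kb):
    """Return, for each slot, the KB entities found in any beam hypothesis."""
    found = {}
    for slot, entities in kb.items():
        hits = sorted({e for e in entities for hyp in hypotheses if e in hyp})
        if hits:
            found[slot] = hits
    return found

kb = {"person": ["soha", "rihanna", "mario"], "game_name": ["mario kart"]}
beam = ["play songs by rihanna", "play songs by soha", "play songs by solar"]
print(collect_subtree_entities(beam, kb))  # {'person': ['rihanna', 'soha']}
```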
## 4 Experimental Setup
### Training and Evaluation Data
Experiments were performed on two structurally different speech-based English ToD datasets as described below.
**SLURP** (Bastianelli et al., 2020) is a collection of 72K audio recordings of single-turn user interactions with a home assistant, annotated with scenarios, actions and entities. We ran experiments in two data setups. 1) The official training, validation and test splits were used and, following Arora, Dalmia, Denisov, Chang, Ueda, Peng, Zhang, Kumar, Ganesan, Yan, Vu, Black and Watanabe (2022), synthesised audio was used for training. 2) A simulated zero-shot setup following Sun et al. (2023) was used: all utterances containing entities of five randomly selected 'unseen' slots were held out from training, and the held-out utterances were then used for testing.
External knowledge was organised as a simple dictionary for SLURP, referred to as the knowledge base (KB), where keys were slot types and values were lists of possible named entities for that type. It was created for experimental purposes by gathering named entities that appeared in the entire SLURP data for each slot type (including train, validation and test sets), as a simulation of a real-world task environment. The average size of these lists was 106: the largest list was person which contained 872 entities, and the smallest list was transport_agency, which only contained 2 entities.
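For illustration only, such a KB could be assembled from slot annotations roughly as follows; the annotation format here is an assumption of this sketch, not the SLURP schema:

```python
from collections import defaultdict

def build_kb(annotations):
    """annotations: iterable of (slot_type, entity_value) pairs gathered from the data."""
    kb = defaultdict(set)
    for slot, value in annotations:
        kb[slot].add(value.lower())
    return {slot: sorted(values) for slot, values in kb.items()}

kb = build_kb([("person", "Rihanna"), ("person", "Soha"), ("transport_agency", "Uber")])
print({slot: len(vals) for slot, vals in kb.items()})  # {'person': 2, 'transport_agency': 1}
```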
**CONCIERGE** is a multi-turn dialogue dataset obtained from a commercial system, and it represents a standard and challenging few-shot learning setup typically met in production. The dataset contains a collection of 8 kHz noisy phone-call conversations of real customer interactions with a concierge voice bot covering the restaurant, shop and bar domains. The audio was upsampled to 16 kHz to match the ASR model. Example dialogues from CONCIERGE are shown in Fig. 4. Some dataset statistics are given in Table 1. Each entity in the test set had only 1 to 5 occurrences in the training portion of the dataset and hence can be considered a 'few-shot' scenario.
Figure 4: Two example dialogues from the CONCIERGE dataset along with the state tracking labels. Left: the slot filling label for the restaurant name in the last turn is None. Right: The user changed the goal in the third turn.
### KA2G: Main Setup
The ASR model used an encoder with 16 Conformer blocks (Gulati, Qin, Chiu, Parmar, Zhang, Yu, Han, Wang, Zhang, Wu and Pang, 2020) with a 512-dim hidden state and 4-head attention, a 1024-dim single-head location-sensitive attention and a 1024-dim single-layer unidirectional LSTM decoder. Each 10 ms frame of the input audio was represented by an 80-dim mel-scale filterbank feature. A suffix-based unigram WordPiece model Kudo (2018) with 600 distinct WordPieces was used for the output.
GPT-2 (Radford et al., 2019) with a 768-dim output representation was used as the autoregressive PLM in all experiments. Both TCPGen components adopt one 256-dim single-head attention layer. A two-layer 256-dim GCN was used with the input node encodings set to the WordPiece embeddings of the ASR decoder, based on the default suggestions from prior work Sun et al. (2022). The dimensions of the slot-value generator and projection layers in TCPGen were all determined by the stated dimensionalities. The CLM used to predict a shortlist of slots for SLURP experiments was a single-layer 2048-dim LSTM.
### Main Baseline
The proposed KA2G framework was compared to a pipeline system which used the same ASR model to generate the 1-best hypothesis. The 1-best hypothesis was used as input text to the GPT-2 model for slot-value generation. As for KA2G, the GPT-2 model was also finetuned by generating slot values given slot prompts. In addition, the pipeline system can also be equipped with TCPGen\({}_{\text{ASR}}\), acting as a stronger baseline, which achieves better recognition accuracy on rare entities. The purpose of comparing to this system is to showcase the improvement attained by KA2G beyond just the improvement in recognition accuracy.
### KA2G: Training and Inference
KA2G was implemented in ESPNet (Arora et al., 2022). The ASR AED models, together with the ASR-TCPGen component, were pretrained for 20 epochs on the Librispeech 960-hour English audiobook data (Sun et al., 2022). When training the SVG (see Section 3.1), \(N_{n}=10\) negative 'not-present' slots were randomly chosen and added for each utterance in SLURP, and \(N_{n}=3\) for CONCIERGE. The training was run on a single Nvidia A100 GPU.
Both the TCPGen\({}_{\text{SVG}}\) and TCPGen\({}_{\text{ASR}}\) components were trained in the same way as in (Sun et al., 2022), where the full biasing list was first defined for each dataset, and the biasing list for each utterance was organised by selecting biasing entities in the reference transcription and adding a certain number of distractors, following Le et al. (2021). For SLURP, the full biasing list was defined by selecting entities in the KB with fewer than 30 examples in the training set, including unseen entities. There were altogether 5,000 biasing entities. For TCPGen\({}_{\text{ASR}}\), the number of distractors was randomly picked between 100 and 200, whereas for TCPGen\({}_{\text{SVG}}\), 20 distractors from the same slot type were used, which was close to the size of the prefix-tree the model would see during inference. A random drop of 30% of the reference biasing entities was applied to both TCPGen\({}_{\text{ASR}}\) and TCPGen\({}_{\text{SVG}}\). The same full biasing list selection criterion, as well as the training procedure for TCPGen, was also applied to the CONCIERGE data.
During inference, a beam size of 30 was used for ASR decoding. For SLURP, entities in the KB with fewer than 30 examples in training were used as biasing entities, and the CLM predicted the top 2 slot types at each word boundary. For experiments using the SLURP zero-shot learning split, a biasing list with 2k entities incorporated all entities in the unseen slots since they were all unseen entities, and was used as a whole during inference, as the CLM could not predict unseen slot types. For few-shot learning on CONCIERGE, all entities in the test set were included in the biasing list since they all appeared fewer than 5 times, which formed a biasing list of 105 entities. Since this was a much smaller biasing list, the entire list was used without CLM prediction. In the experiments, entities that appeared fewer than 5 times are referred to as _few-shot entities_. For the CONCIERGE data, all biasing entities were few-shot entities. Unless stated otherwise, greedy decoding was used for SVG.
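The per-utterance biasing-list construction used during training can be sketched as below (a hedged illustration: the exact sampling and shuffling details are assumptions beyond the ranges stated above):

```python
import random

def training_biasing_list(reference_entities, full_biasing_list,
                          n_distractors_range=(100, 200), drop_prob=0.3):
    """Keep reference entities (with a 30% random drop) and add random distractors."""
    kept = [e for e in reference_entities if random.random() > drop_prob]
    pool = [e for e in full_biasing_list if e not in reference_entities]
    distractors = random.sample(pool, min(random.randint(*n_distractors_range), len(pool)))
    biasing = kept + distractors
    random.shuffle(biasing)
    return biasing

print(len(training_biasing_list(["rihanna"], [f"entity_{i}" for i in range(300)])))  # 100-201
```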
\begin{table}
\begin{tabular}{l c c c} \hline \hline Split & \# Turns & \# Dialogues & Time (hours) \\ \hline Train & 1934 & 829 & 3.17 \\ Valid & 212 & 97 & 0.36 \\ Test & 428 & 225 & 0.86 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics of the CONCIERGE dataset, including the number of dialogue turns (# Turns), the number of dialogues (# Dialogues) and the total time of speech (# Time) in hours.
### Evaluation Metrics
The ASR output is evaluated using the standard word error rate (WER) measure. For slot filling, the SLU-F1 (Bastianelli et al., 2020) and the micro Entity-F1 scores are used to measure performance, offering insights into both word-level and character-level F1 scores. Moreover, for multi-turn dialogues, joint goal accuracy (JGA) is reported: JGA counts a turn as correct only if all slots in the dialogue state are correctly filled in that turn.
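As a small illustration, JGA can be computed as in the sketch below (the per-turn dictionary representation of the dialogue state is an assumption of this example):

```python
def joint_goal_accuracy(pred_states, gold_states):
    """pred_states, gold_states: lists of dicts (slot -> value), one per dialogue turn."""
    correct = sum(1 for p, g in zip(pred_states, gold_states) if p == g)
    return correct / len(gold_states) if gold_states else 0.0

print(joint_goal_accuracy([{"restaurant": "nandos"}, {}],
                          [{"restaurant": "nandos"}, {"restaurant": "nandos"}]))  # 0.5
```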
## 5 Results and Discussion
### Experiments on SLURP
This section describes the set of experiments performed on the SLURP dataset. It begins with a summary of the main results, followed by a discussion of key aspects. This includes a comparison to prior work, a discussion of the practical aspects of few-shot and zero-shot learning for slot filling, an ablation study, the impact of training data size, the impact of beam search, and the impact of incorporating an external LM, as well as the performance under the zero-shot setup.
#### 5.1.1 Main Results
The results on SLURP are summarised in Table 2. Both the SLU-F1 and Entity-F1 scores reveal that, compared to the pipeline system, using the audio-grounded SVG model (Row 2 in the table) achieved better performance overall, with much better F1 scores on biasing entities (_i.e._, their occurrence frequency was 0 \(<f<\) 30) and few-shot entities (\(0<f<\) 5). Similar improvements from audio-grounding were observed when comparing the full KA2G framework to the pipeline system with a contextual ASR using TCPGen. Overall, KA2G achieved a 1.4% absolute SLU-F1 increase compared to the baseline pipeline system, with a 4.6% SLU-F1 improvement on few-shot entities and an 11.2% SLU-F1 improvement on unseen entities. This indicates that audio grounding helped the system to better deal with few-shot entities as long as at least some examples were exposed to it. Similar trends but larger improvements were found in Entity-F1 than in SLU-F1 using KA2G, especially on entities in the biasing list, as TCPGen is able to guide the slot-value generation to complete the entire entity correctly by following a specific path on the prefix-tree. Furthermore, similar to Le et al. (2022), the model was also able to generate entities that were incorrectly recognised by ASR: for the audio-grounded system, 10% of the correctly filled entities were not correctly recognised by the ASR system, whereas that ratio was only 3% for the pipeline system.
In order to further investigate the influence of audio-grounding in slot filling, a case study was performed on SLURP. Two examples where the pipeline system and audio-grounded SVG had the same ASR output are shown in Fig. 5. In Case 1, while the semantic meanings of "launch" and "lunch" were completely different, their pronunciations were similar. Therefore, the ASR error "lunch" confused the pipeline system which produced the wrong slot type and value, which was not recoverable in the SVG-end since the audio information was lost in the pipeline system. However, with audio-grounding, the pronunciation similarities of "launch" and "lunch" can be considered by the SVG with the strong GPT-2 based PLM, which therefore successfully directed the model to look at the correct entity. In Case 2, although "abs" was in the ASR output, the ASR was uncertain about this prediction and "abbas" also had a good ASR score. Therefore, while both systems were able to fill the correct slot type, the audio-grounded SVG was able to replace the entity with the one seen in the training set that had similar pronunciation. This also explains why the pipeline-based
\begin{table}
\begin{tabular}{l c|c|c c c} \hline \hline
**System** & **WER (\%) \(\downarrow\)** & **Overall** F1 (\%) & 0 \(<\) f \(<\) 30 F1 (\%) & 0 \(<\) f \(<\) 5 F1 (\%) & f = 0 F1 (\%) \\ \cline{2-6} Pipeline & 12.7 & 79.2 (73.0) & 72.4 (69.5) & 69.2 (69.1) & 21.1 (7.2) \\ Audio-grounded SVG & 12.5 & 79.6 (73.9) & 74.3 (71.8) & 72.2 (71.9) & 20.9 (6.5) \\ \cline{2-6} Pipeline + Contextual ASR & 12.4 & 79.7 (73.7) & 73.5 (70.2) & 70.8 (70.5) & 24.5 (9.8) \\ \cline{2-6} Full KA2G & **12.1** & **80.6 (75.1)** & **75.7 (73.4)** & **73.8 (73.7)** & **32.3 (18.6)** \\ \hline \hline \end{tabular}
\end{table}
Table 2: WER and F1 scores, including SLU-F1, and Entity-F1 (in parentheses) on SLURP. F1 scores were measured for all entities (Overall), as well as for biasing entities (occurrence frequency \(f<\) 30), few-shot entities (\(f<\) 5) and unseen entities (\(f=\) 0). _Pipeline_ represented the pipeline system using the same AED-based ASR model to get the 1-best hypothesis, followed by GPT-2 for the slot-value generation. _Contextual ASR_ uses TCPGen\({}_{\text{ASR}}\) in the pipeline system.
baseline performed slightly better than the audio-grounded SVG baseline on unseen entities (see the last column of the first two rows in Table 2).
When TCPGen\({}_{\text{ASR}}\) was used (Row 3 in the table) with the pipeline system, the main performance boost of the system was attributed to a better WER. The improvements in WER and both F1 scores were mainly on biasing and unseen entities as those were included in the biasing list for TCPGen. Finally, the full KA2G framework achieved higher overall SLU-F1 and Entity-F1 scores compared to the pipeline system and particularly large improvements were observed on few-shot entities and unseen entities. Moreover, due to the benefit of multi-task training, KA2G achieved a lower overall WER: this reflected the fact that the slot-filling task can positively impact the ASR performance. We further noted that the WER on words in the biasing entities was similar to that of the pipeline system: this indicated that the gain with KA2G on low-frequency entities did not only come from improved ASR.
#### 5.1.2 Comparison to Baselines from Prior Work
The baseline pipeline system using contextual ASR with the overall SLU-F1 score of 79.7 in Table 2 already achieved better performance than the best system proposed in Sun et al. (2023). This was mainly due to the formulation of slot filling as a sequence generation task. Arora, Dalmia, Yan, Metze, Black and Watanabe (2022) adopted a sequence labelling approach and reported an overall SLU-F1 score of 78.0 on SLURP (_cf._, 80.6 reported with KA2G in Table 2). Note that the work of Arora et al. (2022) used the more powerful WavLM pretrained speech representations (Chen, Wang, Chen, Wu, Liu, Chen, Li, Kanda, Yoshioka, Xiao, Wu, Zhou, Ren, Qian, Qian, Wu, Zeng, Yu and Wei, 2022), which resulted in a much lower WER of 9%. Concerning generation-based slot filling systems, Wang et al. (2021) achieved a score of 78.9 using a sequence generator with wav2vec 2.0 representations. We also found that our pipeline system, which was used as the main baseline, was usually much better than other pipeline systems reported, as our pipeline adopted the pre-trained GPT2 for slot filling whereas others usually compared to an NLU network with a similar size to the NLU modules in their end-to-end systems (Arora et al., 2022).
Figure 5: Two examples of utterances from SLURP where audio-grounding helped slot filling.
Figure 6: SLU-F1 (%) over different training set occurrence frequencies for entities in SLURP. _Overall_ SLU-F1 scores were also provided as horizontal lines.
#### 5.1.3 Few-Shot versus Zero-Shot
The preliminary comparison of results between few-shot and unseen entities from Table 2 showed that, even when only a handful of examples were provided, the model was able to achieve a sizable jump in performance. To provide more insight, a finer-grained frequency bin analysis was conducted on SLURP, as shown in Fig. 6. The results show that there is a very large increase in SLU-F1 after only providing a single sample for an entity (i.e., moving from zero-shot to one-shot), with SLU-F1 scores increasing from \(\sim\)30 to \(\sim\)70: this corroborated the major benefits of few-shot learning over zero-shot learning Lauscher et al. (2020). Fig. 6 again validates that KA2G provides improvements over the baseline system in such low-resource scenarios.
#### 5.1.4 Ablation Study
The results of the ablation experiments for KA2G are given in Table 3, where the last row in the table corresponds to the system which used the audio-grounded SVG (i.e., the second row in Table 2). By removing the TCPGen\({}_{\text{SVG}}\), the most significant change was the decrease in performance on unseen entities, especially in terms of Entity-F1. When TCPGen\({}_{\text{SVG}}\) was included, the SVG was fully guided by the biasing entities, and hence the model was more likely to predict complete entities. This observation was also found by comparing the system without TCPGen\({}_{\text{ASR}}\) to the system without TCPGen\({}_{\text{SVG}}\). Moreover, since the WERs for unseen entities were much higher for the 1-best hypothesis, extracting information from alternative hypotheses is much more useful for unseen entities.
While TCPGen\({}_{\text{ASR}}\) contributed to the performance improvement, the use of the copy probability \(P^{\text{gen}}\) also contributed to the improvement, especially for rare and unseen entities, as it provided the indication of where knowledge has been used. This was particularly useful when the entity was correctly recognised but SVG was not able to generate it due to not seeing enough examples.
#### 5.1.5 Impact of Training Data Size
The impact of the training data size on the performance of few-shot entities was also analysed. Specifically, the 2.6k utterances which contain few-shot entities were retained while the rest of the training set was sub-sampled. These subsets were then used to train the pipeline system and the full KA2G framework. Unlike other sequence-to-sequence slot-filling frameworks Wang et al. (2021) with speech input, the ASR component in KA2G can potentially benefit from training as a standalone module on domain-specific ASR data. Since ASR annotation is usually easier to obtain than SLU annotation, the ASR component in both the pipeline system and KA2G could benefit from the ASR annotation of the full SLURP training data. To this end, two sets of experiments were conducted to investigate the impact of the training data size, where the first experiment trained both the ASR and the SLU on the same selected subset with TCPGen, whereas the second experiment used the full SLURP training set to train the ASR with TCPGen and the selected subset to train the SLU.
The results are shown in Fig. 7. Reducing other training data indeed had a strong impact on SLU-F1 scores of few-shot entities despite the fact that the utterances covering those entities were retained. With the ASR component trained on the full SLURP data (i.e. the second set), the reduction in SLU-F1 became much smaller. The full KA2G again consistently outperformed the baseline system across different training data sizes, with a larger difference when the ASR module was trained on the full data.
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**System** & **Overall (\%)** & \(0<f<5\) (\%) & \(f=0\) (\%) \\ KA2G framework & **80.6 (75.1)** & **73.8 (73.7)** & **32.3 (18.6)** \\ without TCPGen\({}_{\text{SVG}}\) & 80.6 (74.7) & 73.6 (73.5) & 27.5 (8.1) \\ without TCPGen\({}_{\text{ASR}}\) & 79.7 (74.3) & 72.5 (72.3) & 27.4 (13.5) \\ without TCPGen\({}_{\text{SVG}}\) and \(P^{\text{gen}}\) input & 80.2 (74.5) & 73.1 (73.0) & 24.4 (7.5) \\ without TCPGen\({}_{\text{SVG}}\) and TCPGen\({}_{\text{ASR}}\) & 79.6 (73.9) & 72.2 (71.9) & 20.9 (6.5) \\ \hline KA2G framework + SVG beam search & **80.6 (75.2)** & **74.2 (74.1)** & **32.4 (18.8)** \\ without SVG-TCPGen and ASR-TCPGen & 79.6 (73.9) & 72.6 (72.2) & 21.1 (7.2) \\ \hline \hline \end{tabular}
\end{table}
Table 3: An ablation study on SLURP based on SLU-F1 (Entity-F1 in parentheses). F1 scores were measured for all entities, as well as for few-shot entities and unseen entities. The first row referred to the complete KA2G framework, and each subsequent row represented removing the corresponding components.
#### 5.1.6 Impact of Beam Search
Beam search can be performed for both ASR and SVG. For ASR, it was found that a larger beam size only yielded very marginal improvements in WER while taking significantly more decoding time. From the perspective of the biasing lists for TCPGen\({}_{\text{SVG}}\), only 50% of entities were covered by the 1-best hypothesis. With a beam size of 30, 70% of entities were covered in the biasing lists for TCPGen\({}_{\text{SVG}}\), which was the main source of improvement for TCPGen\({}_{\text{SVG}}\). However, this coverage only improved by 2% with a beam size of 100, whereas the average size of the biasing lists increased from 10 to 15, which introduced more noise into the biasing lists. Therefore, a beam size of 30 was used for the rest of the experiments. On the SVG side, beam search only provided a marginal improvement, but the time for decoding was 4-5 times longer as the lengths of alternative slot values were in general longer than None.
#### 5.1.7 Impact of External Language Model (LM) Fusion
ASR models usually integrate external language information effectively via LM fusion to further boost their performance, hence it is worthwhile studying the effect of using an external LM in the SLU context for both pipeline systems and KA2G. As AED models used in this paper already inherently contained an implicit internal LM that is trained on the SLURP data, an external LM has to contain richer LM information to be effective for ASR. Therefore, a powerful GPT2 PLM finetuned on the text of the SLURP training set was employed. As the modelling unit of GPT2 was different to the AED model, the finetuned GPT2 was used to perform rescoring for the 30-best list from the AED model. For the pipeline system, the re-ranked 1-best hypothesis was used by the text-based SVG, and for KA2G, the hidden states of the 1-best hypothesis were cached and used by the audio-grounded SVG module. The results for both systems are shown in Table 4.
Although rescoring with GPT2 further reduced the WER by 0.4% absolute for both systems, it did not improve the SLU performance. In fact, the external LM tended to reinforce common correlations in the text that had already been well modelled by the SVG module, whereas it was the performance on low-frequency entities that needed to be improved. As a result, external LM integration had a very limited or even negative influence on slot-filling.
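For illustration, the n-best rescoring described above can be sketched as follows, with `lm_loglik` standing in for the finetuned GPT2 log-likelihood (an assumption of this sketch) and a scaling factor of 0.3:

```python
def rescore_nbest(nbest, lm_loglik, lm_scale=0.3):
    """nbest: list of (hypothesis_text, asr_log_score) pairs from the AED beam search."""
    rescored = [(hyp, asr + lm_scale * lm_loglik(hyp)) for hyp, asr in nbest]
    return max(rescored, key=lambda pair: pair[1])[0]   # re-ranked 1-best hypothesis

nbest = [("play lunch time", -4.2), ("play launch time", -4.5)]
print(rescore_nbest(nbest, lambda h: -1.0 if "launch" in h else -3.0))  # play launch time
```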
\begin{table}
\begin{tabular}{l c c} \hline \hline
**System** & WER (\%) & SLU-F1 \\ \cline{2-3} Pipeline & 12.7 & 79.2 \\ Pipeline + GPT2 & 12.3 & 79.0 \\ KA2G framework & 12.1 & **80.6** \\ KA2G framework + GPT2 & 11.7 & 80.5 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Investigation of external LM effects on the pipeline system and KA2G. GPT2, as the external LM, was finetuned on the text of the full SLURP training data. The LM scaling factor was set to 0.3.
Figure 7: SLU-F1 on few-shot entities when subsampling the part of the training set not containing few-shot entities. (a). The ASR component in the pipeline and KA2G systems were trained on the same subset. (b). The ASR was first trained on the full SLURP training set but SLU was only trained on the selected subset.
#### 5.1.8 SLURP: Zero-Shot Setup
The results of the zero-shot setup are summarised in Table 5. The WER for the zero-shot learning test set was 18.0% for ASR without TCPGen, which decreased to 16.9% for KA2G. Compared to the pipeline system, KA2G achieved worthwhile improvements in both SLU-F1 and Entity-F1 scores. Therefore, KA2G provided an effective way of leveraging knowledge for zero-shot slot filling by bridging the SVG and ASR alternative hypotheses via a neural shortcut provided by TCPGen. Moreover, while zero-shot slot filling in Sun et al. (2023) relied on manually tuned hyper-parameters, KA2G removed the need for TCPGen-related hyperparameter tuning during inference, which also improved the robustness and reliability of KA2G for zero-shot slot-filling.
### Experiments on CONCIERGE
This section discusses the results of the CONCIERGE data under single-turn and multi-turn evaluation metrics.
#### 5.2.1 Single-Turn Evaluation
The proposed KA2G framework was also validated on a real-world use case with the CONCIERGE dataset. The WER on this test set was \(\sim\)35% due to limited audio data for training, which made the slot-filling task with speech input on CONCIERGE even more challenging. Single-turn evaluation was first performed, with the same metrics as used with SLURP. The results are provided in Table 6.
In this challenging setup, external knowledge played an even more important role, leading to much larger performance improvements with the KA2G framework. As with SLURP, having a few examples of entities yielded much better performance than zero-shot learning: providing a handful of examples in the training set resulted in very large performance improvements. This again corroborated the fact that although zero-shot learning is an attractive research problem, few-shot learning is often more pragmatic for industrial applications, and this is also the case when using generative systems such as KA2G.
\begin{table}
\begin{tabular}{l c c} \hline \hline System & SLU-F1 (\%) & Entity-F1 (\%) \\ \hline Pipeline & 37.2 & 26.9 \\ KA2G framework & **56.6** & **42.5** \\ w/o TCPGen & 41.9 & 29.3 \\ \hline Pipeline zero-shot & 1.6 & 1.6 \\ KA2G zero-shot & 12.4 & 2.6 \\ \hline \hline \end{tabular}
\end{table}
Table 6: SLU-F1 and Entity-F1 scores on CONCIERGE. Zero-shot results were obtained by removing dialogues containing test set entities from training.
\begin{table}
\begin{tabular}{l c} \hline \hline System & JGA (\%) \\ \hline Pipeline & 21.0 \\ KA2G framework & **41.5** \\ w/o TCPGen & 24.3 \\ \hline \hline \end{tabular}
\end{table}
Table 7: JGA on the CONCIERGE dataset with multi-turn dialogue state tracking.
\begin{table}
\begin{tabular}{l c c} \hline \hline System & SLU-F1 (\%) & Entity-F1 (\%) \\ \hline Pipeline & 10.1 & 3.7 \\ KA2G framework & **23.7** & **12.9** \\ w/o TCPGen & 9.3 & 3.0 \\ \hline \hline \end{tabular}
\end{table}
Table 5: SLU-F1 and Entity-F1 scores for unseen slots under the zero-shot learning setup on SLURP. Note that w/o TCPGen refers to the system with both TCPGen\({}_{\text{ASR}}\) and TCPGen\({}_{\text{SVG}}\) removed.
#### 5.2.2 Multi-Turn Evaluation
For multi-turn experiments, entity mapping was applied in order to group different expressions of the same entity together. Further, the ASR 1-best hypothesis from the history of user inputs was included as input to the PLM. The JGA scores are summarised in Table 7. There were large improvements with the full KA2G framework, which were mostly due to the use of TCPGen\({}_{\text{SVG}}\). As JGA is more closely related to Entity-F1, and the TCPGen\({}_{\text{SVG}}\) module in the KA2G framework provided particular benefits to Entity-F1, the KA2G framework resulted in a clear improvement in JGA.
## 6 Conclusions
A novel knowledge-aware audio-grounded generative slot-filling framework for speech-based ToD, called KA2G has been proposed. The framework is especially suited for low-resource slot-filling tasks and for handling rare and unseen entities/values. KA2G comprises an audio-grounded SVG, together with two TCPGen components. The first TCPGen integrates knowledge from an external knowledge base containing possible entities for all slots into the ASR module, while the second TCPGen exploits entities found in alternative ASR hypotheses. A comprehensive evaluation has been performed on two different datasets with speech input: i) single-turn SLURP data and ii) multi-turn CONCIERGE data obtained from a commercial ToD system. The usefulness of KA2G has been experimentally validated on both datasets, with clear performance gains over current state-of-the-art systems. KA2G was especially useful in few-shot and zero-shot setups.
KA2G, as a prompt-based SLU framework with speech input, also possesses potential for future investigation with the advent of large language models (LLMs) and prompt-based generative AI. We believe KA2G can serve as a promising speech front-end for LLMs. The explicit knowledge integration component that allows dynamic contextual knowledge to be incorporated in a prompt-based system may also potentially benefit the performance of LLMs on factual and domain-specific enquiries.
|
2303.13697 | Soy: An Efficient MILP Solver for Piecewise-Affine Systems | Piecewise-affine (PWA) systems are widely used for modeling and control of
robotics problems including modeling contact dynamics. A common approach is to
encode the control problem of the PWA system as a Mixed-Integer Convex Program
(MICP), which can be solved by general-purpose off-the-shelf MICP solvers. To
mitigate the scalability challenge of solving these MICP problems, existing
work focuses on devising efficient and strong formulations of the problems,
while less effort has been spent on exploiting their specific structure to
develop specialized solvers. The latter is the theme of our work. We focus on
efficiently handling one-hot constraints, which are particularly relevant when
encoding PWA dynamics. We have implemented our techniques in a tool, Soy, which
organically integrates logical reasoning, arithmetic reasoning, and stochastic
local search. For a set of PWA control benchmarks, Soy solves more problems,
faster, than two state-of-the-art MICP solvers. | Haoze Wu, Min Wu, Dorsa Sadigh, Clark Barrett | 2023-03-23T22:22:07Z | http://arxiv.org/abs/2303.13697v2 | # _Soy_: An Efficient MILP Solver for Piecewise-Affine Systems
###### Abstract
Piecewise-affine (PWA) systems are widely used for modeling and control of robotics problems including modeling contact dynamics. A common approach is to encode the control problem of the PWA system as a Mixed-Integer Convex Program (MICP), which can be solved by general-purpose off-the-shelf MICP solvers. To mitigate the scalability challenge of solving these MICP problems, existing work focuses on devising efficient and strong formulations of the problems, while less effort has been spent on exploiting their specific structure to develop specialized solvers. The latter is the theme of our work. We focus on efficiently handling one-hot constraints, which are particularly relevant when encoding PWA dynamics. We have implemented our techniques in a tool, _Soy_, which organically integrates logical reasoning, arithmetic reasoning, and stochastic local search. For a set of PWA control benchmarks, _Soy_ solves more problems, faster, than two state-of-the-art MICP solvers.
## I Background and Motivation
Piecewise-affine (PWA) systems [26] are widely used to model highly nonlinear behaviors such as contact dynamics in robotics. Given an initial condition and a goal, a trajectory that drives the state to the goal while respecting the PWA dynamics and state/control constraints can be obtained by solving a mixed-integer convex programming (MICP) problem. This approach has seen success in important robotic applications such as push recovery [17] and footstep planning [13] (as illustrated in Fig. 1). The MICP approaches are sound and complete for a discrete-time PWA model and a fixed horizon - they find feasible solutions if they exist. However, completeness comes at a high computational cost due to the inherent complexity of solving MICP problems.
Previous work on controlling PWA systems has focused on obtaining efficient and strong formulations of the MICP problems in different application scenarios [4, 21, 17, 22, 13, 2, 20]. The actual solving is typically off-loaded to _general-purpose_ off-the-shelf MICP solvers [5, 11, 16, 3]. While the performance of off-the-shelf solvers has improved dramatically in the past decade, these solvers were originally developed with non-robotic applications (e.g., operations research) in mind. Little has been done to tailor solvers specifically to robotics applications. A natural question is then: **can we exploit the structure of problems arising from PWA systems to design specialized MICP solvers that are faster than general-purpose ones?**
In MICP encodings of problems from the planning domain, logical constraints are typically encoded using arithmetic. When it comes to PWA dynamics, a ubiquitous logical constraint type is the _one-hot constraint_, which encodes the fact that at any time "the system is in exactly one mode." For example, each footstep of the robot in Fig. 1 must be on exactly one of the "stepping stones."
Another way to encode one-hot constraints, however, is to use propositional logic directly. _Our key insight is that reasoning about modes explicitly at the logical level can be beneficial._ To get some intuition for why this is the case, suppose we know that a certain mode combination is infeasible. To rule out this mode combination, we could either encode it using integer arithmetic constraints or as a single disjunction at the propositional logic level. Empirically, it can be observed that the addition of a few hundred arithmetic (linear) constraints can significantly increase the runtime of a solver. At the same time, propositional solvers can process thousands of disjunctions in milliseconds.
A tight integration of propositional and theory (e.g., arithmetic) reasoning is at the heart of the highly successful satisfiability modulo theories (SMT) paradigm [6], with most implementations based on the popular DPLL(T) framework [14]. In this paper, we adapt the DPLL(T) framework for the setting of PWA planning. In particular, we develop a sound and complete DPLL(T) procedure which integrates propositional reasoning with reasoning about _mixed-integer linear programming_ (MILP) problems (a subset of MICP problems) with one-hot constraints.
The underlying convex solver in our approach can only operate on convex relaxations of the one-hot constraints, i.e., integer variables are relaxed to real variables. To further tailor our solver to our problem domain, we propose to "softly" guide the convex solver with information about the precise one-hot constraints before branching on them. Inspired by the sum-of-infeasibilities method in convex optimization [8], we define a cost function which represents the degree to which the current solution violates the one-hot constraints. If an assignment is found with cost zero, then not only is the assignment a solution for the convex relaxation, but it also solves the precise problem.
Fig. 1: Footstep planning (from [13]): the robot must find a path using only the available “stepping stones.” We evaluate a similar problem later (see Fig. 4).

The aforementioned cost function is concave piecewise-linear, which is challenging to minimize directly. We observe, however, that for any specific mode sequence, the system collapses into a set of _linear_ constraints, which can be optimized by an LP solver. Minimizing the linear cost function provides a way to evaluate "how feasible" the corresponding mode sequence is. Leveraging this insight, we propose to use Markov chain Monte Carlo (MCMC) sampling to efficiently navigate towards mode sequences at the global minimum of the cost function. In addition, we propose a novel propagation-based proposal strategy for MCMC sampling, which guarantees that the sampled mode sequence is 1) non-repetitive; and 2) does not match any known infeasible mode combinations.
Our end result is a specialized solver, _Soy_,1 that combines the strength of SMT, MILP, and stochastic local search to efficiently reason about PWA systems. _Soy_ takes in MILP problems defined in the standard MPS format, which is supported by most MICP solvers. This makes it easy for users of MICP solvers to try their problem on _Soy_. While _Soy_ is still an early prototype, it can already be used to solve PWA control problems appearing in the literature significantly faster than was previously possible (using existing MILP solvers alone). The closest related work is [25], which also shows that combining logical and arithmetic reasoning can be beneficial. Our work goes further by proposing domain-specific solutions for PWA dynamics, combining complete search with local search, and implementing a tool that is friendly to practitioners accustomed to using MICP solvers.
Footnote 1: _Soy_ is available at [https://github.com/stanford-centaur/Soy](https://github.com/stanford-centaur/Soy)
To summarize, our contributions include:
1. an instantiation of the DPLL(T) framework for MILP problems with one-hot constraints;
2. FastSoI, a novel local search procedure based on the sum-of-infeasibilities method and MCMC sampling;
3. a propagation-based strategy for MCMC sampling;
4. _Soy_, a specialized MILP solver for PWA control that combines the proposed techniques;
5. an evaluation of _Soy_ on PWA-control benchmarks.
## II Preliminaries and Definitions
**Feasibility of MILP.** Let \(\mathcal{X}\) be a set of _real variables_. A _linear constraint_ has the form \(\sum_{x_{i}\in\mathcal{X}}c_{i}\cdot x_{i}\bowtie d\), where \(\bowtie\in\{\leq,<,=\}\) and the \(c_{i}\)'s and \(d\) are rational constants. An _integral constraint_ has the form \(x\in\mathbb{Z}\), and a _binary constraint_ has the form \(x\in\{0,1\}\), where \(x\in\mathcal{X}\). A _solution_\(\alpha:\mathcal{X}\mapsto\mathbb{R}\) is a mapping from variables to real values. \(\alpha\) is a _feasible_ solution for a set of constraints \(\phi\), written \(\alpha\models\phi\), if replacing each variable \(x\) in \(\phi\) by \(\alpha(x)\) results in a set of true statements. If no feasible solution exists for \(\phi\), \(\phi\) is _infeasible_. The MILP feasibility problem is to determine whether a feasible solution exists for a set of constraints.
A _convex relaxation_\(\widetilde{\phi}\) of \(\phi\) can be obtained by dropping all the integral constraints in \(\phi\) and replacing binary constraints \(x\in\{0,1\}\) with \(0\leq x\leq 1\). The feasibility of \(\widetilde{\phi}\) can be determined with convex optimization (e.g., the simplex algorithm [12]). If \(\widetilde{\phi}\) is infeasible, then \(\phi\) is infeasible. But the reverse is not true, because a feasible solution to \(\widetilde{\phi}\) might not satisfy the integral or binary constraints.
**One-hot constraints.** One-hot constraints can encode requirements like "the system must be in exactly one mode." For example, in locomotion, the modes could correspond to the regions where the robot can step [13]; and in manipulation, the modes could correspond to disjoint contact scenarios (e.g., contact and no contact) [21]. A one-hot constraint can be encoded over a set \(\mathcal{B}\) of real variables as follows.
\[o(\mathcal{B}):=\Big{(}\sum_{x_{i}\in\mathcal{B}}x_{i}=1\Big{)}\wedge\Big{(} \bigwedge_{x_{i}\in\mathcal{B}}x_{i}\in\{0,1\}\Big{)} \tag{1}\]
Equation 1 states that exactly one variable in \(\mathcal{B}\) is 1 and the rest must be 0. For a PWA system with \(m\) modes, \(\mathcal{B}\) would contain \(m\) variables, each corresponding to one mode. Alternatively, the one-hot constraint can be encoded in propositional logic. One way to do this is by introducing a set \(\mathcal{P}=\{p_{1},\ldots,p_{m}\}\) of propositional variables, one for each mode. The constraint is then:
\[o_{L}(\mathcal{P}):=\big{(}\bigvee_{p\in\mathcal{P}}p\big{)}\wedge\big{(} \bigwedge_{1\leq i<j\leq m}(\neg p_{i}\vee\neg p_{j})\big{)} \tag{2}\]
The first conjunct requires at least one variable in \(\mathcal{P}\) to be true, and the rest enforce that at most one variable in \(\mathcal{P}\) is true. In order to be able to use both arithmetic and logical constraints together, we need a way to connect the variables. We assume a bijection \(e:\mathcal{B}\mapsto\mathcal{P}\) from real variables to the corresponding propositional variables (e.g., \(e(x_{i})=p_{i}\)).
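As a small illustration of the propositional encoding (a sketch written for this discussion, not code from _Soy_), the clauses of \(o_{L}(\mathcal{P})\) for \(m\) modes can be generated as follows, with propositional variables represented by the integers \(1,\ldots,m\) and negative integers denoting negated literals; the arithmetic counterpart (1) is just one linear equality plus \(m\) binary constraints.

```python
def one_hot_propositional(m):
    """Clauses of the propositional one-hot encoding over modes 1..m.

    One at-least-one clause (p1 v ... v pm) followed by the pairwise
    at-most-one clauses (~pi v ~pj)."""
    clauses = [list(range(1, m + 1))]
    clauses += [[-i, -j] for i in range(1, m + 1) for j in range(i + 1, m + 1)]
    return clauses

print(one_hot_propositional(3))
# [[1, 2, 3], [-1, -2], [-1, -3], [-2, -3]]
```

For \(m\) modes this yields \(1+\binom{m}{2}\) clauses, which a SAT solver can propagate over very cheaply.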
**PWA Control as MILP.** When each mode is a polytope (i.e., a bounded polyhedron), PWA dynamics can be encoded with one-hot constraints and linear constraints. A PWA control problem is defined with respect to an initial condition, a goal, and a horizon \(T\). The question is whether the system can reach the goal within \(T\) steps starting from the initial condition. This can be encoded with a constraint of the form:
\[\phi:=\mathcal{C}_{init}(\mathcal{X}^{1})\wedge\bigwedge_{t=1}^{T}(\mathcal{C}_{ pwa}(\mathcal{X}^{t})\wedge o(\mathcal{B}^{t}))\wedge\mathcal{C}_{goal}( \mathcal{X}^{T}) \tag{3}\]
where \(\mathcal{X}^{t}\in\mathcal{X}\) are real variables for step \(t\) and \(\mathcal{B}^{t}\subset\mathcal{X}^{t}\) correspond to the modes at step \(t\). \(\mathcal{C}_{init}\) describes the initial state, and \(\mathcal{C}_{goal}\) describes the goal. In this paper, we assume \(\mathcal{C}_{init}\) and \(\mathcal{C}_{goal}\) only contain linear constraints, so \(\phi\) is a set of linear and one-hot constraints. Solving the PWA control problem amounts to checking the feasibility of \(\phi\)
**Sum-of-Infeasibilities.** In convex optimization [8, 19], the _sum-of-infeasibilities_ (SoI) method can be used to check the feasibility of a set of linear constraints: the feasibility problem is cast as an optimization problem, with a cost function \(f(\mathcal{X})\) representing the total _violation_ of the constraints by the current solution (e.g.., the sum of the distances from each out-of-bounds variable to its closest bound). The lower bound of \(f\) is 0 and is achieved only if the current solution is feasible. In Section IV, below, we build on this idea, proposing a cost function \(f_{\text{soil}}(\mathcal{X})\) that represents the total violation of the one-hot constraints by the current solution and a stochastic minimization solution.
## III Checking Feasibility
DPLL(T) is a framework for solving SMT problems.2 A DPLL(T)-like procedure for MILP problems with one-hot constraints is shown in Alg. 1. The procedure takes as inputs the linear constraints \(\mathcal{C}\) and the one-hot constraints \(\mathcal{O}\) and checks if \(\mathcal{C}\cup\mathcal{O}\) is feasible. During the solving process, it accumulates new information about the one-hot constraints at the propositional level, which it stores as a set of propositional constraints, \(\mathcal{E}\), initialized with a propositional encoding of the one-hot constraints as described in (2) (Line 4). The following invariant is preserved throughout the execution:
Footnote 2: Here, we only touch upon DPLL(T) at a high level; a detailed presentation can be found in [6].
**Condition 1**: _If \(\mathcal{C}\cup\mathcal{O}\) is feasible, then \(\mathcal{E}\) is satisfiable.3_
Footnote 3: A propositional formula is satisfiable if there exists an assignment to its variables that makes the formula true.
Checkfeas invokes the recursive function RecCheckFeas with input arguments \(\mathcal{C}\) and \(\mathcal{O}\), an empty set of decisions, and \(\mathcal{E}\). RecCheckFeas first checks the satisfiability of \(\mathcal{E}\) (Line 11). If it is unsatisfiable, then due to Condition 1, \(\mathcal{C}\cup\mathcal{O}\) must be infeasible. If \(\mathcal{E}\) is satisfiable, we check the feasibility of the convex relaxation \(\mathcal{C}\cup\widetilde{O}\cup\mathcal{D}\) with checkConvAndExplain (Line 12).
checkConvAndExplain calls a convex feasibility checker with the capability of generating _explanations_ in the case of infeasibility. An explanation is an (ideally minimal) infeasible subset of the input constraints [6]. The first output of the method is either feas or infeas, indicating whether the input is feasible. If so, a feasible solution \(\alpha\) is returned and \(\mathcal{L}\) is empty. If not, \(\mathcal{L}\) contains an explanation. We only care about the part of the explanation coming from the decisions in \(\mathcal{D}\), so we restrict \(\mathcal{L}\) accordingly.
For example, suppose \(\mathcal{D}=\{x_{i}=1,x_{j}=1,x_{k}=1\}\), corresponding to a certain combination of modes. In addition to deducing that this mode combination is infeasible, the convex procedure might further deduce that \(\mathcal{C}\cup\widetilde{\mathcal{O}}\cup\{x_{i}=1,x_{k}=1\}\) is already infeasible. In this case, we would get \(\mathcal{L}=\{x_{i}=1,x_{k}=1\}\). Efficiently generating explanations for infeasible linear constraints is a well-studied problem (see [15]). Explanations, also called _theory lemmas_, can be used to prune the search space.
The pruning could be done at the arithmetic level by adding a linear constraint \(x_{i}+x_{k}<2\). However, as we accumulate lemmas during the search, this could lead to a drastic slowdown of the convex procedure. An alternative approach (and the one we take) is to record this information as a propositional constraint \(\neg e(l):=\neg(e(x_{i})\wedge e(x_{k}))\) and rely on checkSat (which in general is much faster than the convex solver) to rule out infeasible mode combinations.
If checkConvAndExplain finds a feasible solution \(\alpha\) that is also a feasible solution to the precise constraints \(\mathcal{C}\cup\mathcal{O}\), then the solver returns feas (Line 15). If \(\alpha\) does not satisfy the precise constraints, the analysis is inconclusive, and branching is required to make progress (Line 16): the branch method selects one of the one-hot constraints in \(\mathcal{O}\) and performs case analysis on it. For example, suppose it chooses \(o(\{x_{1},x_{2}\})\). Then, branch will return:
\[\{\langle\mathcal{O}\backslash\{o\},\{x_{1}=1,x_{2}=0\},\mathcal{E}\cup\{e(x_{1})\wedge\neg e(x_{2})\}\rangle,\] \[\ \langle\mathcal{O}\backslash\{o\},\{x_{1}=0,x_{2}=1\},\mathcal{E}\cup\{\neg e(x_{1})\wedge e(x_{2})\}\rangle\}.\]
The procedure iteratively solves each of the sub-problems (Lines 16-19), accumulating theory lemmas (Line 19), and using them for later iterations (Line 17). The procedure returns feas if one of the sub-problems returns feas. Otherwise, the procedure returns infeas along with all the detected theory lemmas.
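Since the pseudocode of Alg. 1 is not reproduced here, the following is a minimal Python sketch of the recursive procedure just described. It only mirrors the control flow of the text; the helper functions (checkSat, checkConvAndExplain, the relaxation, the precise-feasibility test, branch, and the conversion of a theory lemma into a propositional blocking clause) are assumed to be supplied by the caller and are not part of _Soy_'s actual API, and constraint sets are modelled as plain Python sets.

```python
FEAS, INFEAS = "feas", "infeas"

def rec_check_feas(C, O, D, E, *, check_sat, check_conv_and_explain,
                   relax, satisfies, branch, to_blocking_clause):
    """Sketch of the DPLL(T)-style feasibility check of Section III.

    C: linear constraints        O: remaining one-hot constraints
    D: decisions made so far     E: propositional constraints (encoding + lemmas)
    Returns (status, solution, theory lemmas)."""
    lemmas = []
    # If E is unsatisfiable, Condition 1 implies that C u O is infeasible.
    if not check_sat(E):
        return INFEAS, None, lemmas
    # Check the convex relaxation together with the current decisions.
    r, alpha, explanation = check_conv_and_explain(C | relax(O) | D)
    if r == INFEAS:
        # Keep only the part of the explanation coming from the decisions.
        lemmas.append(explanation & D)
        return INFEAS, None, lemmas
    if satisfies(alpha, C | O):          # the relaxed solution is already exact
        return FEAS, alpha, lemmas
    # Branch: case analysis on one of the remaining one-hot constraints.
    for O_sub, D_sub, E_sub in branch(O, D, E):
        blocked = {to_blocking_clause(l) for l in lemmas}
        r, alpha, more = rec_check_feas(
            C, O_sub, D | D_sub, E_sub | blocked,
            check_sat=check_sat, check_conv_and_explain=check_conv_and_explain,
            relax=relax, satisfies=satisfies, branch=branch,
            to_blocking_clause=to_blocking_clause)
        lemmas.extend(more)
        if r == FEAS:
            return FEAS, alpha, lemmas
    return INFEAS, None, lemmas
```

Note how theory lemmas learned in one branch are turned into propositional blocking clauses for the remaining branches, which is exactly the interaction between the convex solver and the SAT solver that the text describes.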
## IV Sum of Infeasibilities for Mode Sequences
Alg. 1 is sound and complete. It returns feas if and only if there is a feasible solution satisfying both the linear constraints and the precise one-hot constraints. However, since checkConvAndExplain operates on convex relaxations and is unaware of the binary constraints, it is likely to find spurious solutions that violate the binary constraints, leading to costly branching.
To mitigate this, we would like to softly bias the convex procedure towards assignments that also respect the precise integral requirements. We next introduce our \(f_{\text{soil}}\) function for one-hot constraints, consider the challenge of its minimization, and present a stochastic local search solution.
### _The Sum of Infeasibilities_
As mentioned above, in convex optimization, a sum-of-infeasibilities function represents how much the current solution violates the convex constraints. Here, we build on this idea by introducing a cost function \(f_{\text{soil}}\), which computes the sum of errors introduced by the convex relaxation of one-hot constraints.4 We need \(f_{\text{soil}}\) to meet the following condition:
Footnote 4: Similar ideas have been used in our previous work to reason about different types of non-linear constraints in a different setting [28].
**Condition 2**: _Given a set of linear constraints \(\mathcal{C}\) and a set of one-hot constraints \(\mathcal{O}\) defined over variables \(\mathcal{X}\), a solution \(\alpha\) is feasible for \(\phi:=\mathcal{C}\cup\mathcal{O}\) iff \(\alpha\) is a feasible solution to \(\widetilde{\phi}\) and \(f_{\text{soil}}(\alpha)\leq 0\)._
If Condition 2 is met, then feasibility of \(\phi\) reduces to the following minimization problem:
\[\begin{split}\operatorname*{minimize}_{\mathcal{X}}& f_{\text{soil}}(\mathcal{X})\\ \text{subject to}&\widetilde{\phi}\end{split} \tag{4}\]
To formulate \(f_{\text{soil}}\) for a problem with one-hot constraints, we first define the error in a single one-hot constraint \(o(\mathcal{B})\) as: \(\mathbf{E}(\mathcal{B})=1-\max(\mathcal{B})\). Note that \(\mathbf{E}(\mathcal{B})\) subject to \(\widetilde{o}\) is non-negative and \(\mathbf{E}(\mathcal{B})=0\) iff \(o\) is satisfied. We define \(f_{\text{soil}}\) as the sum of errors in each individual one-hot constraint:
\[f_{\text{soil}}=\sum_{o(\mathcal{B})\in\mathcal{O}}\mathbf{E}(\mathcal{B}) \tag{5}\]
**Theorem 1**: \(f_{\text{soil}}\) _as given by (5) satisfies Condition 2._
Now, observe that:
\[f_{\text{soil}}=\sum_{o(\mathcal{B})\in\mathcal{O}}\mathbf{E}( \mathcal{B})=\sum_{o(\mathcal{B})\in\mathcal{O}}\big{(}\min_{b\in\mathcal{B}}( 1-b)\big{)}\] \[\quad=\min\Big{(}\big{\{}f\mid f=\sum_{o(\mathcal{B})\in\mathcal{O }}(1-b_{i}),\quad b_{i}\in\mathcal{B}\big{\}}\Big{)}. \tag{6}\]
Thus, \(f_{\text{soil}}\) is the minimum over a set, which we denote \(S_{\text{soil}}\), of linear functions. Although \(f_{\text{soil}}\) is concave piecewise-linear and cannot be directly minimized with convex optimization, we could minimize each individual function \(f\in S_{\text{soil}}\) and take the minimum over all functions. Notice that each linear function in \(S_{\text{soil}}\) has a semantic meaning: it corresponds to a particular mode sequence, i.e., a choice of modes at each time step. For notational convenience, we define \(\mathit{cost}(f,\phi)\) to be the minimum of \(f\) subject to \(\phi\). Thus, the minimization problem (4) can be restated as searching for a mode sequence \(f\in S_{\text{soil}}\), where \(\mathit{cost}(f,\widetilde{\phi})\) is minimal.
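To make this restatement concrete, the following sketch evaluates \(\mathit{cost}(f,\widetilde{\phi})\) for every mode sequence of a tiny two-step toy instance (invented for illustration; it is not one of the paper's benchmarks). Each one-hot group has two modes, the groups are coupled by an assumed linear constraint \(b_{1}-b_{3}=0\), and each member of \(S_{\text{soil}}\) is minimized with an off-the-shelf LP solver.

```python
# Variables: b1, b2 (one-hot group 1) and b3, b4 (one-hot group 2).
from itertools import product
from scipy.optimize import linprog

A_eq = [[1, 1, 0, 0],    # b1 + b2 = 1
        [0, 0, 1, 1],    # b3 + b4 = 1
        [1, 0, -1, 0]]   # b1 - b3 = 0  (assumed coupling constraint)
b_eq = [1, 1, 0]
bounds = [(0, 1)] * 4     # convex relaxation of the binary constraints

names = ["b1", "b2", "b3", "b4"]
for i, j in product([0, 1], [2, 3]):           # pick one mode per group
    c = [0.0] * 4
    c[i] = c[j] = -1.0                          # minimize (1 - b_i) + (1 - b_j)
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print(f"mode sequence ({names[i]}, {names[j]}): cost = {2.0 + res.fun:.1f}")
# mode sequences (b1, b3) and (b2, b4) reach cost 0, the other two have cost 1
```

A mode sequence with cost \(0\) corresponds to a feasible assignment of the precise problem, so the search can stop as soon as one is found.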
### _Minimizing the SoI with MCMC Sampling_
In the worst case, minimizing \(f_{\text{soil}}\) requires enumerating and minimizing each mode sequence. In practice, the search can terminate once a mode sequence \(f\) is found such that \(\mathit{cost}(f,\widetilde{\phi})=0\). A stochastic local search algorithm that "greedily" navigates towards the minimum can be used to search for such mode sequence. One such method is _Markov chain Monte Carlo_ (MCMC) [9], which can be viewed as a class of intelligent hill-climbing algorithms robust against local optima. In our context, MCMC methods can be used to generate a chain of mode sequences \(f_{0},f_{1},f_{2}...\in S_{\text{soil}}\), with the desirable property that in the limit, the sampled mode sequences are more frequently from the minimum region of \(\mathit{cost}(f,\widetilde{\phi})\).
```
1:Input:\(\mathcal{C}\), \(\mathcal{O}\), \(\mathcal{E}\)
2:Output: feas/infeas, feasible solution \(\alpha\), theory lemmas \(\mathcal{L}\)
3:Parameters: Sampling budget \(T\)
4:functionFastsoi(\(\mathcal{C},\mathcal{O},\mathcal{E}\))
5:\(r,\alpha_{0},\mathcal{L}\mapsto\textsc{CheckConvandExplain}(\mathcal{C}\cup \widetilde{\mathcal{O}})\)
6:if \(r=\textsc{infeas}\vee\alpha_{0}\models\mathcal{C}\cup\mathcal{O}\) then return \(r,\alpha_{0},\mathcal{L}\)
7:\(k,f\mapsto 0,\textsc{initialCost}(\alpha_{0},\mathcal{O})\)
8:\(\alpha,c\mapsto\textsc{optConv}(f,\mathcal{C}\cup\widetilde{\mathcal{O}})\)
9:\(\mathcal{L}\mapsto\{f>0\}\)
10:while \(c>0\wedge\neg\textsc{exhausted}()\wedge k<T\) do
11:\(f^{\prime}\mapsto\textsc{propose}(f,\alpha,\mathcal{E}\cup\neg e(\mathcal{L}))\)
12:\(\alpha^{\prime},c^{\prime}\mapsto\textsc{optConv}(f^{\prime},\mathcal{C}\cup \widetilde{\mathcal{O}})\)
13:if\(c^{\prime}>0\)then\(\mathcal{L}\mapsto\mathcal{L}\cup\{f^{\prime}>0\}\)
14:if\(\textsc{accept}(c,c^{\prime})\)then\(f,c,\alpha\mapsto f^{\prime},c^{\prime},\alpha^{\prime}\)
15:else\(k\mapsto k+1\)
16:if \(\textsc{exhausted}()\wedge c>0\) then
17:return\(\textsc{infeas},\alpha,\mathcal{L}\)
18:else
19:return\(\textsc{feas},\alpha,\mathcal{L}\)
```
**Algorithm 2** The FastSoI procedure
We use the Metropolis-Hastings (M-H) algorithm [10], a widely applicable MCMC method, to construct the sequence. The algorithm maintains a current mode sequence \(f\) and proposes to replace \(f\) with a new mode sequence \(f^{\prime}\). The proposal comes from a _proposal distribution_\(q(f^{\prime}|f)\) and is accepted with a certain _acceptance probability_\(m(f\)\(\rightarrow\)\(f^{\prime})\). If the proposal is accepted, \(f^{\prime}\) becomes the new current mode sequence. Otherwise, another proposal is considered. This process is repeated until one of the following scenarios happen: 1) we find a mode sequence \(f\) where \(\mathit{cost}(f,\widetilde{\phi})=0\); 2) a predetermined computational budget is exhausted; 3) all possible mode sequences have been considered. The last scenario rarely happens unless the space of possible mode sequences is small.
In order to employ the algorithm, we transform \(\mathit{cost}(f,\widetilde{\phi})\) into a probability distribution \(p(f)\) using a common method [18]: \(p(f)\propto\exp\bigl{(}-\beta\cdot\mathit{cost}(f,\widetilde{\phi})\bigr{)}\), where \(\beta\) is a configurable parameter. We use the following acceptance probability (often referred to as the _Metropolis ratio_) [18]: \(m(f\)\(\rightarrow\)\(f^{\prime})=\min(1,\frac{p(f^{\prime})}{p(f)})\).
Importantly, under this acceptance probability, _a proposal reducing the value of the cost function is always accepted, while a proposal that does not may still be accepted_ (with a probability that is inversely correlated with the increase in the cost). This means that the algorithm always greedily moves to a lower-cost mode sequence whenever it can, but it also has an effective means for escaping local minima.
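Written out directly (a sketch; \(\beta\) is the configurable parameter introduced above), the acceptance test is just a few lines:

```python
import math
import random

def accept(cost_current, cost_proposed, beta=1.0, rng=random):
    """Metropolis-Hastings acceptance test for a proposed mode sequence.

    Proposals that do not increase the cost are always accepted; otherwise the
    proposal is accepted with probability exp(-beta * (increase in cost))."""
    if cost_proposed <= cost_current:
        return True
    return rng.random() < math.exp(-beta * (cost_proposed - cost_current))
```

Because the acceptance probability decays exponentially with the cost increase, the walk is greedy in expectation but can still climb out of local minima.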
### _Stochastic Optimization for One-Hot Constraints_
The stochastic optimization procedure FastSoI is shown in Alg. 2. It takes as input a set of linear constraints \(\mathcal{C}\) and a set of one-hot constraints \(\mathcal{O}\) and stochastically searches for a feasible solution to \(\phi:=\mathcal{C}\cup\mathcal{O}\). It is intended as a drop-in replacement of the checkConvAndExplain method in Line 12 of Alg. 1 in order to more efficiently find solutions that satisfy not only the convex relaxation but also the binary constraints. Concretely, we will replace Line 12 with
\[r,\alpha,\mathcal{L}\mapsto\textsc{FastSoI}(\mathcal{C}\cup\mathcal{D}, \mathcal{O},\mathcal{E})\]
FastSoI follows the standard two-phase convex optimization approach. Phase I (Lines 5-6) finds a feasible solution \(\alpha_{0}\) to \(\tilde{\phi}\), and phase II (Lines 7-16) attempts to optimize \(f_{\text{soi}}\) using the M-H algorithm. Phase II uses a standard convex optimization procedure optConv which takes an objective function \(f\) and a set of convex constraints as inputs and returns a pair \((\alpha,c)\), where \(\alpha\models\phi\) and \(c=cost(f,\phi)\) is the optimal value of \(f\). Phase II chooses an initial mode sequence \(f\) based on \(\alpha_{0}\) (Line 7) and computes its optimal value \(c\). The M-H algorithm repeatedly proposes a new mode sequence \(f^{\prime}\) (Line 11), computes its optimal value \(c^{\prime}\), and decides whether to accept \(f^{\prime}\) as the current mode sequence \(f\). Moreover, if a mode sequence \(f\) is found to be infeasible (e.g., \(cost(f,\tilde{\phi})>0\)), we record this as a theory lemma.
The procedure returns infeas if the convex relaxation is infeasible (Line 6) or if the mode sequences are exhausted and none of them is feasible (Line 16). Otherwise, the procedure returns feas. Moreover, when a mode sequence with cost 0 is found, the returned solution \(\alpha\) is a feasible solution to the precise constraints \(\mathcal{C}\cup\mathcal{O}\). Finally, the theory lemmas accumulated through the process are also returned for use by the main search algorithm.
Importantly, under the hood, the same convex optimization procedure is used in both phases. Therefore, from the perspective of the convex optimizer, FastSoI solves a sequence of convex optimization problems that differ only in the objective functions, and each problem can be solved incrementally by updating the objective function without the need for a restart.
The accept method decides whether a proposal is accepted based on the Metropolis ratio. Function initialCost proposes the initial mode sequence \(f\) by rounding the relaxed binary variables in \(\alpha_{0}\) to the nearest integer. The propose method is more intricate, as explained below.
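Before turning to the proposal strategy, the rounding performed by initialCost can be pictured with the small sketch below. It assumes the relaxed solution is a dict from variable names to values and each one-hot constraint is given as the list of its variable names; picking the largest relaxed value in a group coincides with rounding to the nearest integer whenever that value exceeds 0.5.

```python
def initial_mode_sequence(alpha0, one_hot_groups):
    """Mirror of initialCost: build the starting member of S_SoI from alpha0.

    For every one-hot group, select the variable with the largest relaxed
    value, i.e. the one that rounds to 1."""
    return [max(group, key=lambda var: alpha0[var]) for group in one_hot_groups]

# Example: alpha0 = {"b1": 0.8, "b2": 0.2, "b3": 0.4, "b4": 0.6}
# initial_mode_sequence(alpha0, [["b1", "b2"], ["b3", "b4"]])  ->  ["b1", "b4"]
```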
### _Propagation-based proposal strategy_
The proposal strategy for the M-H algorithm is key to its convergence efficiency. One popular proposal strategy was introduced in Walksat [24]. In our context, it amounts to randomly changing the mode of a currently unsatisfied one-hot constraint. While this works reasonably well, there are two drawbacks: 1) it can get stuck in local optima (as one could cycle between mode sequences); 2) it does not take known theory lemmas into consideration. To tackle these two issues at once, we further leverage the fast propagation technology in the SAT solver to make sure that the proposed mode sequence is consistent with known propositional constraints. Our proposal strategy is described in Alg. 3.
```
1:Input: current mode sequence \(f:=\sum_{\alpha(\mathcal{B})\in\mathcal{O}}(1-b_{i}),\quad b_{i}\in\mathcal{B}\), current solution \(\alpha\), all propositional constraints \(\mathcal{E}\)
2:Output: proposed mode sequence \(f^{\prime}\)
3:functionpropose(\(f,\alpha,\mathcal{E}\))
4:\(\mathcal{O}^{\prime}\mapsto\textsc{unsatisfied}(\mathcal{O},\alpha)\)
5:\(o(\mathcal{B}^{t})\mapsto\textsc{randomChoice}(\mathcal{O}^{\prime})\)
6:\(b^{t}\mapsto\textsc{randomChoice}(\mathcal{B}^{t}\backslash\{b^{t}\})\)
7:\(Q\mapsto[e(b^{t}),e(b^{1}),\ldots,e(b^{t-1}),e(b^{t+1}),\ldots,e(b^{T})]\)
8:while \(\neg\textsc{checkSat}(\mathcal{E}\cup Q)\) do
9:\(Q\).popBack()
10:return \(\textsc{satSolutionToModeSequence}()\)
```
**Algorithm 3** Propagation-based proposal strategy.
The procedure propose takes as input the current mode sequence, the current solution \(\alpha\) and all propositional constraints \(\mathcal{E}\) (including all the theory lemmas found so far). Similar to Walksat, it first randomly selects a currently unsatisfied one-hot constraint (Lines 4-5) and randomly changes its mode to \(b^{t}\) (Line 6). Then, instead of directly returning this "adjacent" mode sequence, the procedure uses a Boolean satisfiability (SAT) solver to propagate the effect of this mode switch. Concretely, we first check whether the proposed mode combination (constructed in Line 7) is consistent with \(\mathcal{E}\) (Line 9) using a SAT solver. If an inconsistency is detected, we leave one of the one-hot constraints unassigned (Line 11) and check the SAT-level consistency of the remaining partial mode combination. This process is repeated until we find a partial mode combination that is consistent with \(\mathcal{E}\). At this point, we construct a full mode sequence setting the unassigned one-hot constraints according to the assignment found by the SAT solver.
**Theorem 2**: _If \(\mathcal{E}\) is satisfiable, Alg. 3 always terminates with a proposed mode sequence that is consistent with \(\mathcal{E}\)._
In practice, the repeated invocation of checkSat does not incur significant runtime overhead (\(<5\%\) of the total runtime of FastSoI). This overhead pays off in practice because it efficiently rules out infeasible mode sequences that would otherwise be attempted by convex optimization.
## V _Soy_: An MILP Solver for PWA-Control
We now present _Soy_, a specialized MILP solver for PWA control that combines the proposed techniques in Secs. III and IV. _Soy_ is implemented in \(\sim\)15K lines of original C++ code with 100+ unit tests. Fig. 3 exhibits the architectural design of _Soy_.
**Parser** _Soy_ takes as input MILP problems in the standard mps format, which is supported by most off-the-shelf MICP solvers and can be generated by popular robotic software such as Drake [27]. This makes it relatively straightforward to use _Soy_ or compare it with existing solvers. Our mps parser transforms the given constraints into an internal representation (IR). A unique feature is that it will automatically extract _one-hot constraints_ from the mps file and create for them both the standard arithmetic representation and a propositional logic level representation.
**Pre-solver** Before entering the main solving loop described in Alg. 1, _Soy_ first performs interval analysis on both the linear constraints and the logical constraints. For example, if in a certain equation, all variables but one are bounded, the analysis will derive sound bounds for the unbounded variable. In practice, we find pre-solving can significantly reduce the runtime of the convex procedure.
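As an illustration of the kind of deduction this pre-solving step performs (a simplified sketch written for this discussion, not _Soy_'s actual pre-solver), consider an equality \(\sum_{i}c_{i}x_{i}=d\) in which every variable except one has finite bounds; interval arithmetic then yields sound bounds for the remaining variable.

```python
def derive_bounds(coeffs, d, bounds, free):
    """Bound variable `free` from sum_i coeffs[i]*x_i = d, given finite
    bounds (lo, hi) for every other variable."""
    lo_rest = hi_rest = 0.0
    for i, (c, (lo, hi)) in enumerate(zip(coeffs, bounds)):
        if i == free:
            continue
        a, b = sorted((c * lo, c * hi))   # interval of the term c * x_i
        lo_rest += a
        hi_rest += b
    c_free = coeffs[free]
    # x_free = (d - rest) / c_free with rest ranging over [lo_rest, hi_rest]
    return tuple(sorted(((d - hi_rest) / c_free, (d - lo_rest) / c_free)))

# Example: 2x + 3y - z = 6 with x in [0, 1] and y in [0, 2] bounds z:
print(derive_bounds([2, 3, -1], 6, [(0, 1), (0, 2), (None, None)], free=2))
# (-6.0, 2.0), since z = 2x + 3y - 6 and 2x + 3y lies in [0, 8]
```

Tighter variable bounds of this kind shrink the polytopes the convex solver has to reason about, which is why pre-solving pays off in practice.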
**Engine** After pre-solving, the updated IR (with tighter variable bounds) is passed on to the main solving engine, which has three components: a SAT solver, a convex solver, and the FastSoI engine. The first two combined execute the DPLL(T) procedure described in Alg. 1. We instantiate checkSat with the Cadical SAT solver [7], and checkConvAndExplain/optConv with the LP solver in the Gurobi Optimizer [16], which is capable of generating explanations [1]. We leverage incremental solving in Cadical and Gurobi whenever possible.
There are two reasons we implement our own DPLL(T) solver rather than building on top of existing SMT solvers. First, LP engines in SMT-solvers use arbitrary-precision rational arithmetic, which can be many times slower than off-the-shelf floating-point LP solvers. While precise arithmetic is necessary for ensuring soundness in formal verification, it is not as crucial in our setting, where the goal is to _find solutions_. Secondly, we hope to support more general MICP in the future, but support for convex programming beyond LP is less mature in SMT solvers.
## VI Experimental Evaluation on Control of PWA Systems
We evaluate _Soy_ on MILP encodings of PWA control problems. We compare _Soy_ with two state-of-the-art MICP solvers - Gurobi [16] and Mosek [5] - and perform ablation studies to show the effectiveness of our techniques. The main evaluation criterion is the run-time performance of finding a feasible solution (a trajectory).
### _Solver configurations_
We denote our best configuration as Soy and compare with Gurobi and Mosek in their default configurations. Anecdotally, Gurobi has a particular strength in MILP, which is in fact the setting we consider. _We remark that the relative performance of these solvers on the specific benchmarks we study may not be representative of their relative performance on other kinds of problems._
We run three ablation configurations, each of which differs from Soy by one feature: (1) Soy\(\backslash\)CDCL does not extract theory lemmas from the convex procedure or perform logical reasoning with the SAT solver (i.e., Line 11 of Alg. 1 is not executed). This configuration is a bare-bones branch-and-bound-like complete search that runs FastSoI at each search state; (2) Soy\(\backslash\)SoI does not perform the FastSoI procedure but runs a normal convex procedure during the search, making it less likely to find feasible solutions until it reaches the bottom of the search tree; (3) Soy\(\backslash\)Prop uses the Walksat-based strategy in Sec. IV rather than the propagation-based proposal strategy in FastSoI.
### _Benchmarks_
We evaluate on two types of existing PWA control problems, namely _Stepping Stones_[22] and _Ball and Paddle_[21].
_Stepping Stones_ [22]. In these problems, an agent must navigate from a starting position to a destination, stepping only on the (convex) blue and red regions, as illustrated in Fig. 4. We use the same dynamics as [22], but design two new maps, SS\({}_{1}\) and SS\({}_{2}\). The goal is to reach the destination position with zero velocity within a fixed number of time steps. The white regions are not reachable, while the blue and red regions have different costs. The one-hot constraint in this case is used to specify the fact that the agent is in one of the convex regions, and the number of modes is equal to the number of convex regions. This problem can be seen as an abstraction of the footstep planning scenario in Fig. 1, although we remark that the latter requires quadratic constraints, so the problem becomes MIQP rather than MILP. Supporting MIQP in _Soy_ is a direction for future work.
We generated \(100\) MILP encoding instances from map SS\({}_{1}\) by randomly varying the vertical positions of the starting and ending points in \([0,11]\times[2,9]\), respectively. We generated another \(100\) MILP problems from map SS\({}_{2}\) by randomly selecting \(5\) (red regions in Fig. 4(b)) out of the \(16\) convex regions to have a lower control parameter, which makes navigation harder. We refer to the benchmark sets as SS\({}_{1}\) and SS\({}_{2}\) in Table I.
Fig. 3: Architectural overview of _Soy_.

_Ball and Paddle_ [21]. This problem, as shown in Fig. 5, is to rotate a two-dimensional ball using a paddle (bottom in red) under a fixed ceiling (top in gray). The original system has 7 modes: no contact, ball in contact with the paddle (sticking, sliding left or right), and ball in contact with the ceiling (sticking, sliding left or right). We fix the terminal state and generate different instances by varying the initial state. The first benchmark set, denoted as BP easy, is generated by randomly varying the horizontal position of the ball in the interval \([0,l/2]\), where \(l\) is the width of the paddle. The second benchmark set, denoted as BP hard, is generated by randomly varying the horizontal position and the angle of the ball in the interval \([0,l]\times[-\pi,\pi]\). Each benchmark set contains \(100\) instances.
_Experimental setup._ Each benchmark is stored in MPS format. Each configuration was given one thread and a 20 minute CPU-timeout per benchmark. Experiments were performed on a cluster equipped with Intel Xeon E5-2637 v4 CPUs running Ubuntu 20.04. We measure the CPU time each configuration takes on each instance. The instances that timed out are assigned the time limit as their running time.
### _Comparison with existing solvers_
The runtime performance of all configurations running on all benchmarks is shown in Table I. We first compare Soy with Gurobi and Mosek (the **Main** block). On the Stepping Stones benchmarks (the SS\({}_{1}\) and SS\({}_{2}\) columns), both Soy and Gurobi can efficiently solve all the instances, while Mosek fails to solve 25 instances within the time limitation. Gurobi is overall faster on those benchmarks, with an average runtime of 8.7 seconds per instance on SS\({}_{1}\) and 2.12 seconds per instance on SS\({}_{2}\); in contrast, Soy has an average runtime of 10.6 and 4.6 seconds, respectively.
The Ball and Paddle benchmarks take significantly more time to solve. Mosek struggles in this more challenging use case and only solves 27 instances. On the other hand, Soy performs significantly better than Gurobi. Not only does it take 57% less time on BP easy and 69% less time on BP hard, it also solves all of the 200 instances while Gurobi times out on 1 of them. Running with a longer time limit reveals that **it takes Gurobi 7 hours and 39 minutes to find a feasible solution for this instance**. In contrast, Soy solves this instance in 501 seconds.
A scatter plot of the runtime of Soy and Gurobi on all benchmarks is shown in Fig. 6. Overall, Soy performs best, but the two solvers frequently show complementary behaviors. Indeed, as shown in Table I, if we consider a virtual portfolio strategy that runs Soy and Gurobi in parallel (Gurobi+Soy), further performance gain can be obtained on _all_ of the four benchmark sets. This suggests that running multiple competitive solvers in parallel is advisable in practice. Marginal gain can be obtained if we also include Mosek in the portfolio (All).
### _Ablation studies_
The **Ablation** block of Tab. I shows the runtime performance of the ablation configurations. The number of solved instances drops significantly if we turn off logical reasoning (Soy\(\backslash\)CDCL) or if we do not perform FastSoI (Soy\(\backslash\)SoI). Without FastSoI, the performance of Soy is disastrous on the Ball and Paddle benchmarks, solving only 14 of the 200 instances. Interestingly, we observe a similar pattern on the performance of Mosek and Soy\(\backslash\)SoI: they both fail on a fraction of SS\({}_{1}\), solve all of SS\({}_{2}\), and perform badly on BP easy and BP hard. This leads us to speculate that Mosek does not invest in local search during MILP solving. Finally, if we use a Walksat-like proposal strategy instead of the propagation-based proposal strategy in the FastSoI procedure, we are still able to solve all instances, but the overall runtime degrades by 44% (from 6367 to 9155 seconds). The conclusion of these ablation studies is that each of the proposed techniques contributes to the competitive performance of Soy.
## VII Conclusion and Future Directions
We have introduced _Soy_, a specialized MILP solver for PWA control problems. We instantiated the DPLL(T) procedure for deciding the feasibility of combinations of linear and one-hot constraints. We also presented FastSoI, a specialized optimization procedure that stochastically minimizes the sum-of-infeasibilities to search for feasible mode sequences. _Soy_ is already competitive against highly-optimized off-the-shelf MICP solvers, suggesting that this direction of designing specialized MICP solvers for PWA control is promising.
Fig. 4: Two types of maps with example solutions (planning trajectories from _Soy_) in the Stepping Stones benchmark.
Fig. 5: Flipping the ball by \(180^{\circ}\) with a paddle. The top right image is the _initial_ state and the bottom right is the _terminal_ state. These images are sampled frames from the animation video in the supplementary materials.
Fig. 6: Scatter plot of the runtime of Soy and Gurobi.
We hope the work will garner interest in _Soy_ and welcome contributions of new benchmarks for _Soy_.
**Limitations and Future Work.**_Soy_ is still a research prototype, and as such has many limitations compared to mature solvers: 1) We do not yet support more general convex constraints, such as quadratic constraints, which the MICP formulation in many existing works (e.g., [13]) requires; 2) We do not yet support other logical constraints common in robotic applications (e.g., "at-least-one", "at-most-one"); 3) We only perform feasibility checks instead of finding optimal solutions.
We hope to address all these limitations in the future. We note that the proposed techniques are compatible with more general convex constraints and can be in principle extended to other logical constraints, though actually doing these efficiently requires non-trivial research and engineering effort. On the other hand, a strong feasibility checker is the first step towards a strong optimizer. In the SMT community, there is an active effort to extend SMT solvers to do optimization (a line of work called "optimization modulo theories") [23] and insights there could potentially be borrowed.
Supporting parallelism, incrementality, and APIs to encode constraints are also important next steps for our tool.
**Acknowledgment.** We thank Tobia Marcucci, Gustavo Araya, Richard McDaniel, Jorg Hoffmann, Aina Niemetz, Mathias Preiner, and Makai Mann for the helpful discussions.
|
2307.16218 | Products of Traceless and Semi-Traceless Matrices over Division Rings
and their Applications | We study the problem when every matrix over a division ring is representable
as either the product of traceless matrices or the product of semi-traceless
matrices, and also give some applications of such decompositions. Specifically,
we establish the curious facts that every matrix over a division ring is a
product of at most twelve traceless matrices as well as a product of at most
four semi-traceless matrices. We also examine finitary matrices and certain
images of non-commutative polynomials by applying the obtained so far results
showing that the elements of some finite-dimensional algebras over a special
field as well as that these of the matrix algebra over any division ring
possess some rather interesting and non-trivial decompositions into products of
at most four generalized commutators. | Peter V. Danchev, Truong Huu Dung, Tran Nam Son | 2023-07-30T12:58:53Z | http://arxiv.org/abs/2307.16218v1 | # Products of Traceless and Semi-Traceless Matrices over Division Rings
###### Abstract.
We study the problem when every matrix over a division ring is representable as either the product of traceless matrices or the product of semi-traceless matrices, and also give some applications of such decompositions. Specifically, we establish the curious facts that every matrix over a division ring is a product of at most twelve traceless matrices as well as a product of at most four semi-traceless matrices. We also examine finitary matrices and certain images of non-commutative polynomials by applying the obtained so far results showing that the elements of some finite-dimensional algebras over a special field as well as that these of the matrix algebra over any division ring possess some rather interesting and non-trivial decompositions into products of at most four generalized commutators.
Key words and phrases: Traceless matrices, Semi-traceless matrices, Fields, Division rings, Images, Non-commutative polynomials, Vershik-Kerov group.

2020 _Mathematics Subject Classification_. 16U60; 16S34; 16U99.

* Corresponding author: Peter V. Danchev
## 1. Introduction
In this paper we study the following properties of matrices: the representability of every matrix over a division ring as a product of traceless matrices, and as a product of semi-traceless matrices. Throughout, \(D\) denotes a division ring and \(\operatorname{M}_{n}(D)\) stands for the ring of all \(n\times n\) matrices over \(D\); a matrix in \(\operatorname{M}_{n}(D)\) is called _traceless_ if its trace is equal to zero. Let \(\mathbb{H}\) be the real quaternion division ring with \(i,j,k\) satisfying \(i^{2}=j^{2}=k^{2}=-1\) and \(ij=-ji=k\). Then the matrix

\[\begin{pmatrix}i&j\\ -j&i\end{pmatrix}\in\operatorname{M}_{2}(\mathbb{H})\]

is nilpotent (its square is the zero matrix, see Remark 3.5 below), but the matrix \(\begin{pmatrix}i&j\\ -j&i\end{pmatrix}\) is not traceless. Note that similar matrices do not have the same trace in \(\operatorname{M}_{n}(D)\). For instance, if \(a\) and \(b\) are elements of \(D\) that do not commute, then the matrix \(\begin{pmatrix}a&0\\ 0&-a\end{pmatrix}\) is traceless, but the product

\[\begin{pmatrix}1&0\\ 0&b\end{pmatrix}\begin{pmatrix}a&0\\ 0&-a\end{pmatrix}\begin{pmatrix}1&0\\ 0&b\end{pmatrix}^{-1}\]

is not traceless. Therefore, we shall proceed with semi-traceless matrices in \(\operatorname{M}_{n}(D)\). So, we denote by \(\operatorname{sl}_{n}(D)\) the set of all traceless matrices in \(\operatorname{M}_{n}(D)\).
The objective which motivates the writing of this article is to initiate an in-depth study of the decomposition properties of matrices over division rings into products of traceless and semi-traceless matrices. We also focus on the exploration of the decomposition properties of finitary matrices over division rings and connect them with traceless and semi-traceless matrices. We apply what we achieved to the images of non-commutative polynomials.
Concretely, our work is organized thus: In the next second section, we investigate the decomposition of matrices from the general linear group over an arbitrary division ring into products of traceless matrices of the matrix ring (see Theorem 2.6 and Proposition 2.8). In the subsequent third section, we examine the decomposition of matrices over a non-commutative division ring into products of semi-traceless matrices (see Theorem 3.6 and Proposition 3.7). In the fourth section, we focus on the so-called _finitary_ matrices over division rings by proving some deep results pertaining to their decomposition into semi-traceless matrices (see Theorems 4.3, 4.4 and 4.9). In the fifth section, we concentrate on the images of non-commutative polynomials by applying the results from the foregoing sections in order to establish certain decompositions of elements of some algebras (see Theorem 5.7, Proposition 5.8 and Corollary 5.9). We finish our study with an important problem which seems to be extremely difficult (see Problem 5.10).
## 2. Products of traceless matrices
We begin here with the following statement.
**Theorem 2.1**.: _If \(D\) is a field and \(n\geq 2\) is an integer, then each matrix in \(\operatorname{M}_{n}(D)\) is a product of two traceless matrices in \(\operatorname{M}_{n}(D)\)._
Proof.: Assume that \(D\) is a field. Let \(A\in\operatorname{M}_{n}(D)\). Then, \(\operatorname{sl}_{n}(D)\) is a linear hyperplane of \(\operatorname{M}_{n}(D)\). Owing to [22, Theorem 3 and Proposition
12], we conclude that \(A\) is a product of two matrices in \(\operatorname{sl}_{n}(D)\), as expected.
However, for matrices over division rings the situation is rather more complicated. So, for our successful presentation, our next pivotal assertion is the following.
**Theorem 2.2** (Bruhat decomposition).: _Let \(D\) be a division ring and \(n\geq 2\) an integer. If \(A\in\operatorname{M}_{n}(D)\), then \(A\) can be expressed as \(LPHU\) in which \(L\) is a lower triangular matrix in \(\operatorname{GL}_{n}(D)\) whose entries on the main diagonal are \(1\), \(P\) is a permutation matrix in \(\operatorname{M}_{n}(D)\), \(H\) is a diagonal matrix in \(\operatorname{M}_{n}(D)\) and \(U\) is an upper triangular matrix in \(\operatorname{GL}_{n}(D)\) whose entries on the main diagonal are \(1\)._
Proof.: This can be found in [10, Theorem 9.2.2, Page 349].
Further, to prove our main result in this section, we need a series of technical claims as follows:
**Lemma 2.3**.: _Let \(D\) be a division ring. Then, the following statements are true._
1. _If_ \(a_{1},a_{2}\in D\)_, then_ \(\begin{pmatrix}a_{1}&0\\ 0&a_{2}\end{pmatrix}\) _is a product of two traceless matrices in_ \(\operatorname{M}_{2}(D)\)_._
2. _If_ \(a_{1},a_{2},a_{3}\in D\setminus\{0\}\)_, then_ \(\begin{pmatrix}a_{1}&0&0\\ 0&a_{2}&0\\ 0&0&a_{3}\end{pmatrix}\) _is a product of three traceless matrices in_ \(\operatorname{M}_{3}(D)\)_._
Proof.: (i) If \(a_{1},a_{2}\in D\), then an easy check shows that
\[\begin{pmatrix}a_{1}&0\\ 0&a_{2}\end{pmatrix}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\begin{pmatrix}0&a_{2}\\ a_{1}&0\end{pmatrix},\]
as required.
(ii) If \(a_{1},a_{2},a_{3}\in D\setminus\{0\}\), then a routine inspection shows that
\[\begin{pmatrix}a_{1}&0&0\\ 0&a_{2}&0\\ 0&0&a_{3}\end{pmatrix}=\begin{pmatrix}0&a_{3}&0\\ -a_{1}&a_{3}&0\\ 0&0&-a_{3}\end{pmatrix}\begin{pmatrix}1&-a_{1}^{-1}a_{2}&0\\ a_{3}^{-1}a_{1}&0&0\\ 0&0&-1\end{pmatrix},\]
as required.
**Lemma 2.4**.: _Let \(D\) be a division ring and \(n\geq 2\) an integer. If the elements \(a_{1},a_{2},\ldots,a_{n}\in D\), then the diagonal matrix with entries \(a_{1},a_{2},\ldots,a_{n}\) is a product of two traceless matrices in \(\operatorname{M}_{n}(D)\)._
Proof.: This claim is directly inferred from Lemma 2.3 by simple induction.
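To illustrate just one even-dimensional instance of this induction (for \(n=4\) only; this is merely an illustration and not the full inductive argument), the two-by-two pattern from Lemma 2.3(i) extends blockwise as

\[\begin{pmatrix}a_{1}&0&0&0\\ 0&a_{2}&0&0\\ 0&0&a_{3}&0\\ 0&0&0&a_{4}\end{pmatrix}=\begin{pmatrix}0&0&1&0\\ 0&0&0&1\\ 1&0&0&0\\ 0&1&0&0\end{pmatrix}\begin{pmatrix}0&0&a_{3}&0\\ 0&0&0&a_{4}\\ a_{1}&0&0&0\\ 0&a_{2}&0&0\end{pmatrix},\]

where both factors have zero main diagonal and are thus traceless.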
**Lemma 2.5**.: _Let \(D\) be a division ring and \(n\geq 2\) an integer. If \(A\) is an upper (resp., lower) triangular matrix in \(\mathrm{M}_{n}(D)\) whose entries on the main diagonal are \(1\), then \(A\) is a product of at most four traceless matrices in \(\mathrm{M}_{n}(D)\)._
Proof.: Suppose that \(A\) is an upper triangular matrix in \(\mathrm{M}_{n}(D)\) such that the entries on the main diagonal are \(1\). By virtue of a plain technical manipulation, which we leave to the interested reader to be inspected, the matrix \(A\) can be expressed as the product \(BC\) of matrices \(B\) and \(C\) in which \(B\) is a product of two traceless matrices in \(\mathrm{M}_{n}(D)\) and \(C\) is a diagonal matrix in \(\mathrm{M}_{n}(D)\) whose main diagonal has \(1\)'s in all but at most one position. So, with Lemma 2.4 at hand, \(A\) is a product of at most four traceless matrices in \(\mathrm{M}_{n}(D)\), as expected.
We are now ready to prove our main theorem of this section.
**Theorem 2.6**.: _Let \(D\) be a division ring and \(n\geq 2\) an integer. If \(A\in\mathrm{M}_{n}(D)\), then \(A\) is a product of at most twelve traceless matrices in \(\mathrm{M}_{n}(D)\)._
Proof.: Given \(A\in\mathrm{M}_{n}(D)\). Invoking Theorem 2.2, \(A\) can be expressed as \(LPHU\) in which \(L\) is a lower triangular matrix in \(\mathrm{GL}_{n}(D)\) having the entries on the main diagonal exactly \(1\), \(P\) is a permutation matrix in \(\mathrm{M}_{n}(D)\), \(H\) is a diagonal matrix and \(U\) is an upper triangular matrix in \(\mathrm{GL}_{n}(D)\) possessing the entries on the main diagonal precisely \(1\). Now, Lemma 2.5 allows us to derive that \(L\) and \(U\) are products of at most four traceless matrices in \(\mathrm{M}_{n}(D)\). Since \(P\) is a matrix over a subfield of \(D\), we deduce from Theorem 2.1 that \(P\) is a product of two traceless matrices in \(\mathrm{M}_{n}(D)\). Moreover, Lemma 2.4 leads us to the fact that \(H\) is a product of two traceless matrices in \(\mathrm{M}_{n}(D)\). Finally, \(A\) is a product of at most twelve traceless matrices in \(\mathrm{M}_{n}(D)\), as stated.
We end this section with the following two useful statements.
**Proposition 2.7**.: _Let \(D\) be a division ring. Then, every matrix in \(\mathrm{M}_{2}(D)\) is a product of at most six traceless matrices in \(\mathrm{M}_{2}(D)\)._
Proof.: Choose \(A=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\mathrm{M}_{2}(D)\). If \(a\neq 0\), then
\[A=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\begin{pmatrix}1&0\\ -ca^{-1}&-1\end{pmatrix}\begin{pmatrix}0&a\\ 1&0\end{pmatrix}\begin{pmatrix}0&d-ca^{-1}b\\ 1&0\end{pmatrix}\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\begin{pmatrix}1&a^{-1}b\\ 0&-1\end{pmatrix}.\]
If \(a=0\) and \(b\neq 0\), then
\[A=\begin{pmatrix}0&b\\ c&d\end{pmatrix}=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\begin{pmatrix}1&0\\ -db^{-1}&-1\end{pmatrix}\begin{pmatrix}0&b\\ c&0\end{pmatrix}.\]
If \(a=b=0\) and \(c\neq 0\), then
\[A=\begin{pmatrix}0&0\\ c&d\end{pmatrix}=\begin{pmatrix}0&0\\ c&0\end{pmatrix}\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\begin{pmatrix}1&c^{-1}d\\ 0&-1\end{pmatrix}.\]
If \(a=b=c=0\), then
\[A=\begin{pmatrix}0&0\\ 0&d\end{pmatrix}=\begin{pmatrix}0&0\\ 1&0\end{pmatrix}\begin{pmatrix}0&d\\ 1&0\end{pmatrix}.\]
This substantiates our claim.
**Proposition 2.8**.: _Let \(R\) be a ring and \(n\geq 2\) an integer. If \(A\in\mathrm{M}_{n}(R)\), then \(A\) is a sum of two products of pairs of traceless matrices in \(\mathrm{M}_{n}(R)\)._
Proof.: Choose \(A=(a_{i,j})\in\mathrm{M}_{n}(R)\). So, \(A\) can be expressed as the sum \(A=B+C\) in which \(B=(b_{i,j})\in\mathrm{M}_{n}(R)\) is a lower-triangular matrix and \(C=(c_{i,j})\in\mathrm{M}_{n}(R)\) is an upper-triangular matrix satisfying the equations \(b_{1,1}=a_{1,1},c_{1,1}=0,b_{2,2}+c_{2,2}=a_{2,2},b_{3,3}+c_{3,3}=a_{3,3},\ldots,b_{n-1,n-1}+c_{n-1,n-1}=a_{n-1,n-1},b_{n,n}=0,c_{n,n}=a_{n,n}\). Now let \(B_{1}\) be the matrix
\[\begin{pmatrix}b&b_{1,1}&0&\cdots&0\\ 0&b_{2,1}&b_{2,2}&\cdots&0\\ 0&b_{3,1}&b_{3,2}&\ddots&0\\ \vdots&\vdots&\vdots&\ddots&b_{n-1,n-1}\\ 0&b_{n,1}&b_{n,2}&\cdots&b_{n,n-1}\end{pmatrix}\]
in which the element \(b\) is chosen so that this matrix will have trace \(0\), and let \(B_{2}\) be the matrix
\[\begin{pmatrix}0&&&&\\ 1&0&&&\\ &1&\ddots&&\\ &&\ddots&0&0\\ &&&1&0\end{pmatrix}.\]
Also, let \(C_{1}\) be the matrix
\[\begin{pmatrix}c_{1,2}&c_{1,3}&\cdots&c_{1,n}&0\\ c_{2,2}&c_{2,3}&\cdots&c_{2,n}&0\\ 0&c_{3,3}&\cdots&c_{3,n}&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&c_{n,n}&c\end{pmatrix},\]
where the element \(c\) is chosen so that this matrix will be traceless, and let \(C_{2}\) be the transpose matrix of \(B_{2}\). Therefore,
\[A=B_{1}B_{2}+C_{1}C_{2}\]

is a sum of two products of pairs of traceless matrices in \(\mathrm{M}_{n}(R)\), as required.
## 3. Products of semi-traceless matrices
Everywhere in this section, let \(D\) be a division ring and \(n>1\) a natural number and we use the notations: \(\mathrm{M}_{n}(D)\) and \(\mathrm{GL}_{n}(D)\) are respectively the ring of matrices of degree \(n\) over \(D\) and the general linear group of matrices of degree \(n\) over \(D\). Assume that \(f(x)=x^{n}+a_{n-1}x^{n-1}+\cdots+a_{0}\in D[x]\) is a monic polynomial in a variable \(x\) with coefficients \(a_{i}\) in \(D\). _A companion matrix_\(C(f)\) of \(f(x)\) is defined as
\[C(f)=\begin{pmatrix}0&&&-a_{0}\\ 1&&&-a_{1}\\ &\ddots&&\vdots\\ 0&&1&-a_{n-1}\end{pmatrix}\in\mathrm{M}_{n}(D).\]
We start here with the following.
**Lemma 3.1**.: _If \(A\in\mathrm{GL}_{n}(D)\) and \(B\) is a companion matrix in \(\mathrm{GL}_{n}(D)\), then \(A\) can be expressed as the product \(B^{\prime}C\) of matrices \(B^{\prime}\) and \(C\) in which \(B^{\prime}\) is similar to \(B\) and \(C\) is similar to a companion matrix in \(\mathrm{GL}_{n}(D)\)._
Proof.: This is a special case of [25, Theorem 10].
Our basic technicality asserts thus.
**Lemma 3.2**.: _Each companion matrix in \(\mathrm{M}_{n}(D)\) is a product of two traceless matrices in \(\mathrm{M}_{n}(D)\)._
Proof.: Let \(A\) be a companion matrix in \(\mathrm{M}_{n}(D)\), namely
\[A=\begin{pmatrix}0&&&a_{0}\\ 1&&&a_{1}\\ &\ddots&&\vdots\\ 0&&1&a_{n-1}\end{pmatrix}.\]
First, we consider \(n=2\). If \(a_{0}=0\), then
\[A=\begin{pmatrix}0&0\\ 1&a_{1}\end{pmatrix}=\begin{pmatrix}0&0\\ 1&0\end{pmatrix}\begin{pmatrix}1&a_{1}\\ 0&-1\end{pmatrix}.\]
If \(a_{0}\neq 0\), then
\[A=\begin{pmatrix}0&a_{0}\\ 1&a_{1}\end{pmatrix}=\begin{pmatrix}a_{1}&-a_{0}\\ 1+a_{1}a_{0}^{-1}a_{1}&-a_{1}\end{pmatrix}\begin{pmatrix}1&0\\ a_{0}^{-1}a_{1}&-1\end{pmatrix}\]
is a product of two traceless matrices. Now, we consider \(n>2\). Then, \(A\) can be expressed as the product \(BC\) in which
\[B=\begin{pmatrix}0&0&0&\cdots&0&-a_{0}\\ 0&1&0&\cdots&0&-a_{1}\\ 0&0&1&\cdots&0&-a_{2}\\ \vdots&\vdots&\ddots&\ddots&\ddots&\vdots\\ 0&0&\cdots&0&1&-a_{n-2}\\ 1&-1&0&\cdots&0&-(n-2)\end{pmatrix}\]
and
\[C=\begin{pmatrix}1&0&0&\cdots&1&a_{n-1}-(n-2)\\ 1&0&0&\cdots&0&0\\ 0&1&0&\cdots&0&0\\ \vdots&\ddots&\ddots&\vdots&\vdots&\\ 0&0&\cdots&1&0&0\\ 0&0&0&\cdots&0&-1\end{pmatrix}\]
are traceless matrices, as required.
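Both factorisations in this proof can be checked mechanically. The SymPy sketch below (ours) verifies the \(2\times 2\) case with \(a_{0}\neq 0\) using non-commuting symbols, and then an \(n=4\) instance of the general pattern; in the latter the entries are only multiplied by integers, so commutativity plays no role there.

```python
import sympy as sp

# 2x2 case, a0 != 0, with non-commuting a0, a1 (a0i stands for a0**(-1))
a0, a1, a2, a3 = sp.symbols('a0 a1 a2 a3', commutative=False)
a0i = a0**-1
A2 = sp.Matrix([[0, a0], [1, a1]])
X  = sp.Matrix([[a1, -a0], [1 + a1*a0i*a1, -a1]])
Y  = sp.Matrix([[1, 0], [a0i*a1, -1]])
print((X*Y - A2).applyfunc(sp.expand), X.trace(), Y.trace())   # zero matrix, 0, 0

# n = 4 instance of the n > 2 pattern: the companion matrix equals B*C
A4 = sp.Matrix([[0,0,0,a0],[1,0,0,a1],[0,1,0,a2],[0,0,1,a3]])
B  = sp.Matrix([[0,0,0,-a0],[0,1,0,-a1],[0,0,1,-a2],[1,-1,0,-2]])
C  = sp.Matrix([[1,0,1,a3-2],[1,0,0,0],[0,1,0,0],[0,0,0,-1]])
print((B*C - A4).applyfunc(sp.expand), B.trace(), C.trace())   # zero matrix, 0, 0
```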
**Remark 3.3**.: Let \(R\) be a ring. We know that, if \(n\geq 3\), then every companion matrix in \(\mathrm{M}_{n}(R)\) is a product of two traceless matrices in \(\mathrm{M}_{n}(R)\). However, if \(n=2\), then every companion matrix in \(\mathrm{M}_{2}(R)\) is a product of four traceless matrices in \(\mathrm{M}_{2}(R)\). Indeed, a simple calculation shows that
\[\begin{pmatrix}0&a_{0}\\ 1&a_{1}\end{pmatrix}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\begin{pmatrix}0&1\\ a_{0}&0\end{pmatrix}\begin{pmatrix}0&-1\\ 1&0\end{pmatrix}\begin{pmatrix}1&a_{1}\\ 0&-1\end{pmatrix}.\]
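A quick SymPy check of this four-factor identity (ours, again with non-commuting \(a_{0},a_{1}\)) is, for instance:

```python
import sympy as sp

a0, a1 = sp.symbols('a0 a1', commutative=False)
lhs = sp.Matrix([[0, a0], [1, a1]])
rhs = (sp.Matrix([[0, 1], [1, 0]]) * sp.Matrix([[0, 1], [a0, 0]])
       * sp.Matrix([[0, -1], [1, 0]]) * sp.Matrix([[1, a1], [0, -1]]))
print((rhs - lhs).applyfunc(sp.expand))   # expected: the zero matrix
```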
The next claim is well-known, but is formulated here only for convenience and completeness.
**Lemma 3.4**.: _Each nilpotent matrix in \(\mathrm{M}_{n}(D)\) is a semi-traceless matrix._
Proof.: This is implied directly from [2, Lemma 3.2].
The next comments are worthwhile to explain somewhat the more complicated situation here.
**Remark 3.5**.: Let \(\mathbb{H}\) be the real quaternion division ring with \(i,j,k\) satisfying \(i^{2}=j^{2}=k^{2}=-1\) and \(ij=-ji=k\). Thus, if
\[A=\begin{pmatrix}i&j\\ -j&i\end{pmatrix}\in\mathrm{M}_{2}(\mathbb{H}),\]
then \(A^{2}=0\) and \(A\) is a nilpotent matrix whose trace is \(2i\neq 0\). Therefore, there is a nilpotent matrix over a non-commutative division ring which is not traceless.
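The computation can be reproduced with SymPy's quaternion arithmetic; the sketch below (ours, not part of the paper) expands the four entries of \(A^{2}\) by the usual rule for multiplying \(2\times 2\) matrices entrywise and displays the non-zero trace.

```python
from sympy.algebras.quaternion import Quaternion

i  = Quaternion(0, 1, 0, 0)
j  = Quaternion(0, 0, 1, 0)
mj = Quaternion(0, 0, -1, 0)          # -j, written explicitly

# A = [[i, j], [-j, i]]; the entries of A*A via the 2x2 multiplication rule
print(i*i + j*mj, i*j + j*i)          # first row of A^2: both zero
print(mj*i + i*mj, mj*j + i*i)        # second row of A^2: both zero
print(i + i)                          # trace of A: 2i, which is non-zero
```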
Our chief result in the present section is the following one.
**Theorem 3.6**.: _If \(D\) is a non-commutative division ring, then each matrix in \(\mathrm{M}_{n}(D)\) is a product of at most four semi-traceless matrices in \(\mathrm{M}_{n}(D)\)._
Proof.: Assume that \(D\) is a non-commutative division ring. Let \(A\in\mathrm{M}_{n}(D)\). If \(A\in\mathrm{GL}_{n}(D)\), then, combining Lemma 3.1 and Lemma 3.2, one verifies that \(A\) is a product of three semi-traceless matrices in \(\mathrm{M}_{n}(D)\). Now, we consider \(A\notin\mathrm{GL}_{n}(D)\). According to [9, Corollary 7], there exist \(B\in\mathrm{GL}_{n}(D)\) and a nilpotent \(C\in\mathrm{M}_{n}(D)\) such that \(A\) can be expressed as the product of \(B\) and \(C\), writing \(A=BC\). In conjunction with the previous argument, one checks that \(B\) is a product of three semi-traceless matrices in \(\mathrm{M}_{n}(D)\). On the other hand, Lemma 3.4 applies to get that \(C\) is a semi-traceless matrix. Therefore, \(A\) is a product of four semi-traceless matrices in \(\mathrm{M}_{n}(D)\), as promised.
A question which immediately arises is whether this estimate of the number of factors is sharp or, in other words, whether the number of matrices in the decomposition can be decreased. In this direction, we can offer only the following.
**Proposition 3.7**.: _Let \(D\) be a division ring with center \(F\) and \(n\geq 2\) an integer. If \(F\) is perfect, then every matrix in \(\mathrm{M}_{n}(D)\) is a product of two semi-traceless matrices in \(\mathrm{M}_{n}(D)\)._
Proof.: If \(F\) is perfect, then, employing [24, Proposition 1], we deduce that every matrix in \(\mathrm{M}_{n}(D)\) is similar to a matrix over a subfield of \(D\) containing \(F\). Furthermore, consulting with Theorem 2.1, each matrix in \(\mathrm{M}_{n}(D)\) is a product of two semi-traceless matrices in \(\mathrm{M}_{n}(D)\), as required.
## 4. Finitary matrices
Let \(D\) be a division ring. Denote by \(\mathrm{M}_{\infty}(D)\) the ring of all _finitary matrices_, that is, countably infinite matrices with only finitely many non-zero entries. Note that for any matrix \(A\in\mathrm{M}_{\infty}(D)\), we may find a positive integer \(k\) such that \(A\) can be expressed as the block matrix
\[A=\begin{pmatrix}A^{\prime}&0\\ 0&0\end{pmatrix},\]
where \(A^{\prime}\in\mathrm{M}_{k}(D)\). So, \(A\) is called a traceless matrix in \(\mathrm{M}_{\infty}(D)\) if \(A^{\prime}\) is a traceless matrix in \(\mathrm{M}_{k}(D)\).
Let \(F\) be a field and let \(F\langle\mathfrak{X}\rangle\) be the free algebra generated by the set \(\mathfrak{X}=\{\mathbf{x}_{1},\mathbf{x}_{2},\ldots\}\), that is, the algebra of non-commutative polynomials in the variables \(\mathbf{x}_{1},\mathbf{x}_{2},\ldots\). For any \(F\)-algebra \(A\) and \(f=f(\mathbf{x}_{1},\ldots,\mathbf{x}_{m})\in F\langle\mathfrak{X}\rangle\), let \(f(A)=\{f(a_{1},\ldots,a_{m})\mid a_{1},\ldots,a_{m}\in A\}\) and we call \(f(A)\) the _image_ of \(f\). Note that \(f(A)\) is invariant under conjugation. The polynomial \(f=f(\mathbf{x}_{1},\ldots,\mathbf{x}_{m})\in F\langle\mathfrak{X}\rangle\) is called _multilinear_ if \(f\) is of the form:
\[f=\sum_{\sigma\in S_{m}}\lambda_{\sigma}\mathbf{x}_{\sigma(1)}\mathbf{x}_{ \sigma(2)}\cdots\mathbf{x}_{\sigma(m)}\]
in which \(\lambda_{\sigma}\in F\) and \(S_{m}\) is the symmetric group of degree \(m\).
**Proposition 4.1**.: _[_26_, Corollary 1.2]_ _Let \(F\) be an infinite field and let \(f\in F\langle\mathfrak{X}\rangle\) be a non-zero multilinear polynomial. Then, any traceless finitary matrix over \(F\) is an image of \(f\) evaluated on \(\mathrm{M}_{\infty}(F)\)._
We thus have the following corollary to Theorem 2.1 and Proposition 4.1.
**Corollary 4.2**.: _Let \(F\) be an infinite field. Then, any matrix in \(\mathrm{M}_{\infty}(F)\) is a product of two traceless matrices in \(\mathrm{M}_{\infty}(F)\). As a result, any matrix in \(\mathrm{M}_{\infty}(F)\) is a product of two images of non-zero multilinear polynomials in \(F\langle\mathfrak{X}\rangle\) evaluated on \(\mathrm{M}_{\infty}(F)\)._
From Theorem 2.6, we conclude that, if \(D\) is a non-commutative division ring, then any matrix in \(\mathrm{M}_{\infty}(D)\) is a product of at most twelve traceless matrices in \(\mathrm{M}_{\infty}(D)\). Regarding semi-traceless matrices, we can decrease the number of matrices in the decomposition of Theorem 3.6.
We now manage to establish the following two chief results.
**Theorem 4.3**.: _Let \(D\) be a non-commutative division ring. Then, any matrix in \(\mathrm{M}_{\infty}(D)\) is a product of two semi-traceless matrices in \(\mathrm{M}_{\infty}(D)\)._
Proof.: Let \(A\in\mathrm{M}_{\infty}(D)\). Then, there exists a positive integer \(k\) such that \(A\) can be expressed as the block matrix
\[A=\begin{pmatrix}A^{\prime}&0\\ 0&0\end{pmatrix},\]
where \(A^{\prime}\in\mathrm{M}_{k}(D)\). According to [11, Section 8.4, Page 505], \(A^{\prime}\) is similar to the block diagonal matrix \(A^{\prime}_{k_{1}}\oplus\cdots\oplus A^{\prime}_{k_{t}}\) of companion matrices \(A^{\prime}_{k_{1}}\in\mathrm{M}_{k_{1}}(D),\ldots,A^{\prime}_{k_{t}}\in\mathrm{ M}_{k_{t}}(D)\) in which \(k_{1}+\cdots+k_{t}=k\), that is, there exists \(P\in\mathrm{GL}_{k}(D)\) such that \(P^{-1}A^{\prime}P=A^{\prime}_{k_{1}}\oplus\cdots\oplus A^{\prime}_{k_{t}}\). We divide the proof into the following cases:
**Case 1:** If \(k_{1},\ldots,k_{t}\) are all different from \(1\), then by Lemma 3.2, \(A^{\prime}\) is a product of two semi-traceless matrices in \(\mathrm{M}_{k}(D)\). Therefore, \(A\) is a product of two semi-traceless matrices in \(\mathrm{M}_{\infty}(D)\).
**Case 2:** There is only one \(k_{i}\in\{k_{1},\ldots,k_{t}\}\) such that \(k_{i}=1\). Without loss of generality, we can assume \(k_{t}=1\). Instead of considering \(A^{\prime}\), we set \(A^{\prime\prime}=\begin{pmatrix}A^{\prime}&0\\ 0&0\end{pmatrix}\in\mathrm{M}_{k+1}(D)\). Then, one verifies that
\[\begin{pmatrix}P&0\\ 0&1\end{pmatrix}^{-1}A^{\prime\prime}\begin{pmatrix}P&0\\ 0&1\end{pmatrix}=A^{\prime}_{k_{1}}\oplus\cdots\oplus A^{\prime}_{k_{t-1}} \oplus\begin{pmatrix}A^{\prime}_{k_{t}}&0\\ 0&0\end{pmatrix}.\]
Note that \(\begin{pmatrix}A^{\prime}_{k_{t}}&0\\ 0&0\end{pmatrix}\) is a diagonal matrix in \(\mathrm{M}_{2}(D)\) and \(A^{\prime}_{k_{1}},\ldots,A^{\prime}_{k_{t-1}}\) are companion matrices of size greater than \(1\). According to Lemma 2.3 and Lemma 3.2, \(A^{\prime\prime}\) is a product of two traceless matrices in \(\mathrm{M}_{k+1}(D)\). Therefore, \(A\) is a product of two semi-traceless matrices in \(\mathrm{M}_{\infty}(D)\).
**Case 3:** There are at least two \(k_{i}\in\{k_{1},\ldots,k_{t}\}\) such that \(k_{i}=1\). Similarly, by using Lemma 2.3 and Lemma 3.2, \(A^{\prime}\) is a product of two traceless matrices in \(\mathrm{M}_{k}(D)\). Therefore, \(A\) is a product of two semi-traceless matrices in \(\mathrm{M}_{\infty}(D)\).
**Theorem 4.4**.: _Let \(D\) be a non-commutative division ring with center \(F\). Then, any matrix in \(\mathrm{M}_{\infty}(D)\) is a product of at most seven images of non-zero multilinear polynomials in \(F\langle\mathfrak{X}\rangle\) evaluated on \(\mathrm{M}_{\infty}(D)\)._
To prove Theorem 4.4, we need a series of technical claims, as follows.
In [16, Lemma 2.1], it was proven that, if \(D\) is finite dimensional over its center, then every non-central matrix in \(\mathrm{GL}_{n}(D)\) is similar to \(XHY\), where \(X\) is a lower triangular matrix whose main diagonal entries are \(1\), \(Y\) is an upper triangular matrix whose main diagonal entries are \(1\), and \(H\) is the diagonal matrix whose main diagonal entries are \(1,1,\ldots,1,h\) for some \(h\in D\setminus\{0\}\). In fact, this result still holds for arbitrary division rings, because the technique of the proof does not use the finiteness of their dimensions.
We, thereby, arrive at the following.
**Lemma 4.5**.: _[_16_, Lemma 2.1]_ _Let \(D\) be a division ring and \(n\geq 2\) an integer. If \(A\in\mathrm{GL}_{n}(D)\) is non-central, then there exists \(P\in\mathrm{GL}_{n}(D)\) such that \(P^{-1}AP\) has the form_
\[P^{-1}AP=XHY,\]
_where \(X\) is a lower triangular matrix whose main diagonal entries are \(1\), \(Y\) is an upper triangular matrix whose main diagonal entries are \(1\), and \(H\) is the diagonal matrix whose main diagonal entries are \(1,1,\ldots,1,h\) for some \(h\in D\setminus\{0\}\)._
Let \(D\) be a division ring. For two elements \(\alpha,\beta\in D\), denote by
\[J_{n}(\alpha,\beta)=\begin{pmatrix}\alpha&\beta&0&\cdots&0&0&0\\ 0&\alpha&\beta&\cdots&0&0&0\\ 0&0&\alpha&\cdots&0&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\ 0&0&0&\cdots&\alpha&\beta&0\\ 0&0&0&\cdots&0&\alpha&\beta\\ 0&0&0&\cdots&0&0&\alpha\end{pmatrix}\in\mathrm{M}_{n}(D)\]
the upper triangular matrix whose entries on the main diagonal and on the first superdiagonal are respectively \(\alpha\) and \(\beta\). For short, we will just write \(J_{n}(\alpha)\) for \(J_{n}(\alpha,1)\).
We also need the following technicality which we list below for completeness of the exposition.
**Lemma 4.6**.: _[_13_, Theorem 7]_ _Let \(D\) be a division ring with center \(F\), \(n\) a positive integer and \(A\in\mathrm{M}_{n}(D)\). If \(A\) is triangularizable and algebraic over \(F\), then there exist positive integers \(s,m_{1},\ldots,m_{s}\) and elements \(\alpha_{1},\alpha_{2},\ldots,\alpha_{s},\beta_{1},\beta_{2},\ldots,\beta_{s}\in D\) such that \(A\) is similar to \(\bigoplus_{i=1}^{s}J_{m_{i}}(\alpha_{i},\beta_{i})\) in which \(m_{1}+m_{2}+\cdots+m_{s}=n\) and \(\beta_{i}\notin\{\alpha_{i}a-a\alpha_{i}\mid a\in D\}\) for every \(i=1,2,\ldots,s\). Additionally, for each \(i=1,2,\ldots,s\), if \(\alpha_{i}\) is separable over \(F\), then \(\beta_{i}\) may be chosen to be \(1\)._
We now have all the ingredients necessary to show the truthfulness of the following key result.
**Proposition 4.7**.: _Let \(D\) be a division ring and \(n\geq 2\) an integer. If \(A\in\mathrm{M}_{n}(D)\) is unipotent, then \(A\) is similar to the diagonal block matrix \(\bigoplus_{i=1}^{s}J_{m_{i}}(1)\) in which \(m_{1}+m_{2}+\cdots+m_{s}=n\)._
Proof.: Let \(A\in\mathrm{M}_{n}(D)\) be unipotent. Then, \(\mathrm{I}_{n}-A\) is nilpotent. Thanks to [2, Lemma 3.2], one sees that \(\mathrm{I}_{n}-A\) is similar to a strictly upper triangular matrix, which implies that \(A\) is similar to an upper triangular matrix whose diagonal entries are all \(1\). Employing Lemma 4.6, one checks that \(A\) is similar to the diagonal block matrix \(\bigoplus_{i=1}^{s}J_{m_{i}}(\alpha_{i},\beta_{i})\) in which \(m_{1}+m_{2}+\cdots+m_{s}=n\) and \(\beta_{i}\notin\{\alpha_{i}a-a\alpha_{i}\mid a\in D\}\) for every \(i=1,2,\ldots,s\). Moreover, since \(A\) is similar to an upper triangular matrix whose diagonal entries are all \(1\), we have that the \(\alpha_{i}\)'s are \(1\). Furthermore, using Lemma 4.6 again, we deduce that the \(\beta_{i}\)'s are \(1\). Therefore, \(A\) is similar to the diagonal block matrix \(\bigoplus_{i=1}^{s}J_{m_{i}}(1)\), as required.
We now continue with
_The proof of Theorem 4.4._ Let \(A\in\mathrm{M}_{\infty}(D)\). Then, there exists a positive integer \(n\) such that \(A\) can be expressed as the block matrix
\[A=\begin{pmatrix}A^{\prime}&0\\ 0&0\end{pmatrix},\]
where \(A^{\prime}\in\mathrm{M}_{n}(D)\). By using [9, Corollary 7], there exist \(G\in\mathrm{GL}_{n}(D)\) and a nilpotent \(N\in\mathrm{M}_{n}(D)\) such that \(A^{\prime}\) can be expressed as the product of \(G\) and \(N\), writing \(A^{\prime}=GN\). If \(G\) is central, then by [14, §21, Theorem 1, Page 140], there exists \(\lambda\in F\) such that \(G=\lambda\mathrm{I}_{n}\). Therefore, if \(G\) is central, then by using Corollary 4.2, the block matrix
\[\begin{pmatrix}G&0\\ 0&0\end{pmatrix}\]
is a product of two images of non-zero multilinear polynomials in \(F\langle\mathfrak{X}\rangle\) evaluated on \(\mathrm{M}_{\infty}(D)\). Now, we consider the case where \(G\) is non-central. According to Lemma 4.5, there exists \(P\in\mathrm{GL}_{n}(D)\) such that \(P^{-1}GP\) has the form
\[P^{-1}GP=XHY,\]
where \(X\) is a lower triangular matrix whose main diagonal entries are \(1\), \(Y\) is an upper triangular matrix whose main diagonal entries are \(1\), and \(H\) is the diagonal matrix whose main diagonal entries are \(1,1,\ldots,1,h\) for some \(h\in D\setminus\{0\}\). According to Proposition 4.7, both \(X\) and \(Y\) are similar to matrices over \(F\). On the other hand, \(H\) is a matrix over the subfield \(F(h)\) generated by \(h\) over \(F\). Again, by using Corollary 4.2, the block matrices \(\begin{pmatrix}X&0\\ 0&0\end{pmatrix},\begin{pmatrix}Y&0\\ 0&0\end{pmatrix}\) and \(\begin{pmatrix}H&0\\ 0&0\end{pmatrix}\) are products of two images of non-zero multilinear polynomials in \(F\langle\mathfrak{X}\rangle\) evaluated on \(\mathrm{M}_{\infty}(D)\). Therefore, the block matrix \(\begin{pmatrix}G&0\\ 0&0\end{pmatrix}\) is a product of six images of non-zero multilinear polynomials in \(F\langle\mathfrak{X}\rangle\) evaluated on \(\mathrm{M}_{\infty}(D)\). Furthermore, by using [2, Lemma 3.2], \(N\) is similar to a traceless matrix over \(F\), and applying Proposition 4.1, the block matrix \(\begin{pmatrix}N&0\\ 0&0\end{pmatrix}\) is an image of a non-zero multilinear polynomial in \(F\langle\mathfrak{X}\rangle\) evaluated on \(\mathrm{M}_{\infty}(D)\). Consequently, \(A\) is a product of seven images of non-zero multilinear polynomials in \(F\langle\mathfrak{X}\rangle\) evaluated on \(\mathrm{M}_{\infty}(D)\), as needed.
In view of Proposition 3.7, and by using Proposition 4.1, we deduce the following result.
**Proposition 4.8**.: _Let \(D\) be a division ring with center \(F\). If \(F\) is perfect, then every matrix in \(\mathrm{M}_{\infty}(D)\) is a product of two images of non-zero multilinear polynomials in \(F\langle\mathfrak{X}\rangle\) evaluated on \(\mathrm{M}_{\infty}(D)\)._
We now arrange to prove the following major result.
**Theorem 4.9**.: _Let \(D\) be a non-commutative division ring which is finite-dimensional over the center \(F\) of \(D\). Then, any matrix in \(\mathrm{M}_{\infty}(D)\) is a product of at most five images of non-zero multilinear polynomials in \(F\langle\mathfrak{X}\rangle\) evaluated on \(\mathrm{M}_{\infty}(D)\)._
Proof.: Let \(A\in\mathrm{M}_{\infty}(D)\). Then, there exists a positive integer \(n\) such that \(A\) can be expressed as the block matrix
\[A=\begin{pmatrix}A^{\prime}&0\\ 0&0\end{pmatrix},\]
where \(A^{\prime}\in\mathrm{M}_{n}(D)\). The first part of this proof is similar to the proof of Theorem 4.4. Invoking [9, Corollary 7], there exist \(G\in\mathrm{GL}_{n}(D)\) and a nilpotent \(N\in\mathrm{M}_{n}(D)\) such that \(A^{\prime}\) can be expressed as the product of \(G\) and \(N\), writing \(A^{\prime}=GN\). Then, \(\begin{pmatrix}N&0\\ 0&0\end{pmatrix}\) is an image of a non-zero multilinear polynomial in \(F\langle\mathfrak{X}\rangle\) evaluated on \(\mathrm{M}_{\infty}(D)\). If \(G\) is central, then
\[\begin{pmatrix}G&0\\ 0&0\end{pmatrix}\]
is a product of two images of non-zero multilinear polynomials in \(F\langle\mathfrak{X}\rangle\) evaluated on \(\mathrm{M}_{\infty}(D)\). Now, we consider the case where \(G\) is non-central. According to Lemma 4.5, there exists \(P\in\mathrm{GL}_{n}(D)\) such that \(P^{-1}GP\) has the form
\[P^{-1}GP=XY\]
where \(X\) is a lower triangular matrix whose main diagonal entries are \(1\) and \(Y\) is an upper triangular matrix whose main diagonal entries are \(1,1,\ldots,1,h\) for some \(h\in D\setminus\{0\}\). If \(h=1\), then, by using an argument similar to the one in the proof of Theorem 4.4, both \(X\) and \(Y\) are products of two images of non-zero multilinear polynomials in \(F\langle\mathfrak{X}\rangle\) evaluated on \(\mathrm{M}_{\infty}(D)\). Now, we consider \(h\neq 1\). By increasing \(n\) if necessary, we can assume \(n\) is even. Since \(D\) is a non-commutative division ring which is finite-dimensional over \(F\), the center \(F\) of \(D\) is infinite. If \(h\) and \(h^{-1}\) are conjugate, then, by using [5, Lemma 2.3], one can choose \(\alpha\in F\setminus\{0\}\) such that \((\alpha h)\) and \((\alpha h)^{-1}\) are non-conjugate, so without loss of generality, we can assume \(h\) and \(h^{-1}\) are non-conjugate. On the other hand, by putting \(t=\frac{n-2}{2}\) and using [5, Lemma 2.3] again,
since \(F\) is infinite, we can choose \(x_{1},x_{1}^{-1},\ldots,x_{t},x_{t}^{-1}\in F\setminus\{-1,0,1\}\) such that
\[x_{1},x_{1}^{-1},\ldots,x_{t},x_{t}^{-1},h,h^{-1},1\]
are pairwise non-conjugate. Let \(Q\) be the diagonal matrix with entries on the main diagonal are
\[x_{1},x_{1}^{-1},\ldots,x_{t},x_{t}^{-1},h,1.\]
By using [6, Lemma 3.2], the matrix \(XQ\) is similar to \(Q\) and the matrix \(Q^{-1}Y\) is similar to the diagonal matrix \(Q^{\prime}\) with entries on the main diagonal are
\[x_{1}^{-1},x_{1},\ldots,x_{t}^{-1},x_{t},h^{-1},h.\]
Note that both \(Q\) and \(Q^{\prime}\) are matrices over the field \(F(h)\) generated by \(h\) over \(F\). Therefore, both block matrices \(\begin{pmatrix}X&0\\ 0&0\end{pmatrix}\) and \(\begin{pmatrix}Y&0\\ 0&0\end{pmatrix}\) are products of two images of non-zero multilinear polynomials in \(F\langle\mathfrak{X}\rangle\) evaluated on \(\operatorname{M}_{\infty}(D)\). Consequently, \(A\) is a product of at most five images of non-zero multilinear polynomials in \(F\langle\mathfrak{X}\rangle\) evaluated on \(\operatorname{M}_{\infty}(D)\), as stated.
## 5. Images of non-commutative polynomials
Continuing the previous section, in this section we first study products of images of multilinear polynomials and then products of images of non-commutative polynomials with zero constant terms. Throughout this section, we use all the notation of the preceding section.
First, we will begin with
**Proposition 5.1**.: _Let \(\mathbb{H}\) be the real quaternion division ring with \(i,j,k\) satisfying \(i^{2}=j^{2}=k^{2}=-1\) and \(ij=-ji=k\). Then, every element in \(\mathbb{H}\) is a product of two images of non-zero and non-central multilinear polynomials in \(\mathbb{R}\langle\mathfrak{X}\rangle\) evaluated on \(\mathbb{H}\)._
Proof.: Set \(\alpha\in\mathbb{H}\) and write \(\alpha=x+yi+zj+tk\) for some \(x,y,z,t\in\mathbb{R}\). By using [28, Lemma 2.1], there exists \(p\in\mathbb{H}\setminus\{0\}\) such that
\[p^{-1}\alpha p=x+\sqrt{y^{2}+z^{2}+t^{2}}i.\]
On the other hand,
\[x+\sqrt{y^{2}+z^{2}+t^{2}}i=j(-xj+\sqrt{y^{2}+z^{2}+t^{2}}k).\]
According to [1, Theorem 6, Page 12], the elements \(j\) and \(-xj+\sqrt{y^{2}+z^{2}+t^{2}}k\) are images of non-zero and non-central multilinear polynomials in \(\mathbb{R}\langle\mathfrak{X}\rangle\) evaluated on \(\mathbb{H}\). Therefore, \(\alpha\) is a product of two
images of non-zero and non-central multilinear polynomials in \(\mathbb{R}\langle\mathfrak{X}\rangle\) evaluated on \(\mathbb{H}\), as required.
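The quaternionic identity used in this proof can also be checked directly; in the SymPy sketch below (ours), the symbol \(s\) abbreviates \(\sqrt{y^{2}+z^{2}+t^{2}}\).

```python
import sympy as sp
from sympy.algebras.quaternion import Quaternion

x, s = sp.symbols('x s', real=True)
j   = Quaternion(0, 0, 1, 0)
rhs = Quaternion(0, 0, -x, s)      # -x*j + s*k
print(j * rhs)                     # expected: Quaternion(x, s, 0, 0), i.e. x + s*i
```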
Similarly to Proposition 4.8, we obtain an analogous version of the assertion for finite matrices over the real quaternion division ring, which states the following.
**Corollary 5.2**.: _Let \(\mathbb{H}\) be the real quaternion division ring with \(i,j,k\) satisfying \(i^{2}=j^{2}=k^{2}=-1\) and \(ij=-ji=k\) and \(n\geq 1\) a positive integer. Then, every element in \(\mathrm{M}_{n}(\mathbb{H})\) is a product of four images of non-zero and non-central multilinear polynomials in \(\mathbb{R}\langle\mathfrak{X}\rangle\) evaluated on \(\mathrm{M}_{n}(\mathbb{H})\)._
Proof.: Put \(A\in\mathrm{M}_{n}(\mathbb{H})\). Owing to [23, Theorem 5.5.3, Page 98], the matrix \(A\) is similar to a matrix over the field of complex numbers. According to [7, Theorem 2.1], \(A\) is a product of two diagonalizable matrices, writing \(A=BC\). On the other side, invoking Proposition 5.1, any diagonal matrix is a product of two images of non-zero and non-central multilinear polynomials in \(\mathbb{R}\langle\mathfrak{X}\rangle\) evaluated on \(\mathrm{M}_{n}(\mathbb{H})\), so are \(B\) and \(C\). Consequently, \(A\) is a product of four images of non-zero and non-central multilinear polynomials in \(\mathbb{R}\langle\mathfrak{X}\rangle\) evaluated on \(\mathrm{M}_{n}(\mathbb{H})\), as needed.
The next two propositions are pivotal for our further results.
**Proposition 5.3**.: _Let \(F\) be a field. Then, every element in \(\mathrm{M}_{2}(F)\) is a product of two images of non-zero and non-central multilinear polynomials in \(F\langle\mathfrak{X}\rangle\) evaluated on \(\mathrm{M}_{2}(F)\)._
Proof.: Let \(A\in\mathrm{M}_{2}(F)\). Thus, Theorem 2.1 tells us that \(A\) is a product of two traceless matrices in \(\mathrm{M}_{2}(F)\). Consulting with [20, Theorem 1], any traceless matrix is an image of a non-zero and non-central multilinear polynomial in \(F\langle\mathfrak{X}\rangle\) evaluated on \(\mathrm{M}_{2}(F)\). Therefore, \(A\) is a product of two images of non-zero and non-central multilinear polynomials in \(F\langle\mathfrak{X}\rangle\) evaluated on \(\mathrm{M}_{2}(F)\), as required.
**Proposition 5.4**.: _Let \(D\) be a division ring with center \(F\). Then, every element in \(\mathrm{M}_{2}(D)\) is a product of seven images of non-zero and non-central multilinear polynomials in \(F\langle\mathfrak{X}\rangle\) evaluated on \(\mathrm{M}_{2}(D)\). In particular, if \(D\) is finite dimensional over \(F\), then every element in \(\mathrm{M}_{2}(D)\) is a product of five images of non-zero and non-central multilinear polynomials in \(F\langle\mathfrak{X}\rangle\) evaluated on \(\mathrm{M}_{2}(D)\)._
Proof.: The first part of the proof uses a similar argument as in the proof of Theorem 4.4. In fact, let \(A\in\mathrm{M}_{2}(D)\). Using [9, Corollary 7], there exist \(G\in\mathrm{GL}_{2}(D)\) and a nilpotent \(N\) in \(\mathrm{M}_{2}(D)\) such that \(A\) can be
expressed as the product of \(G\) and \(N\), writing \(A=GN\). If \(G\) is central, then, in view of [14, §21, Theorem 1, Page 140], there exists \(\lambda\in F\) such that \(G=\lambda\mathrm{I}_{2}\). An application of Proposition 5.3 ensures that \(G\) is a product of two images of non-zero and non-central multilinear polynomials in \(F\langle\mathfrak{X}\rangle\) evaluated on \(\mathrm{M}_{2}(D)\). Now, assume that \(G\) is non-central. Then, in accordance with Lemma 4.5, there exists \(P\in\mathrm{GL}_{2}(D)\) such that \(P^{-1}GP\) has the form
\[P^{-1}GP=XHY,\]
where \(X\) is a lower triangular matrix whose main diagonal entries are \(1\), \(Y\) is an upper triangular matrix whose main diagonal entries are \(1\), and \(H\) is the diagonal matrix whose main diagonal entries are \(1,h\) for some \(h\in D\setminus\{0\}\). Exploiting Proposition 4.7, both \(X\) and \(Y\) are similar to matrices over \(F\). On the other side, \(H\) is a matrix over the subfield \(F(h)\) generated by \(h\) over \(F\). Furthermore, employing [2, Lemma 3.2], \(N\) is similar to a traceless matrix over \(F\), so \(N\) is an image of a non-zero and non-central multilinear polynomial in \(F\langle\mathfrak{X}\rangle\) evaluated on \(\mathrm{M}_{2}(D)\) in view of [20, Theorem 1]. According to Proposition 5.3, the matrices \(X,Y\) and \(H\) are products of two images of non-zero and non-central multilinear polynomials in \(F\langle\mathfrak{X}\rangle\) evaluated on \(\mathrm{M}_{2}(D)\). Therefore, \(A\) is a product of seven images of non-zero and non-central multilinear polynomials in \(F\langle\mathfrak{X}\rangle\) evaluated on \(\mathrm{M}_{2}(D)\), as required.
Concerning the second part, in the case that \(D\) is finite dimensional over \(F\), the proof is completed by an argument analogous to that in the proof of Theorem 4.9.
It is well known from [15] that every matrix over an arbitrary field is a product of two additive commutators; indeed, the argument is similar to that of Theorem 2.1, combined with the fact that every traceless matrix over a field is an additive commutator. In virtue of Theorem 2.1, Theorem 3.6 and [3, Theorem 2.4], we extract the following surprising result.
**Proposition 5.5**.: _Each matrix over a division ring is a product of at most four additive commutators._
Next, we consider generalized commutators in matrix rings. Recall that any element of the form \(abc-cba\), where \(a,b,c\) are elements of a ring with unity, is called a _generalized commutator_ (see, e.g., [18] or [12], respectively). It is clear that every additive commutator is a generalized commutator. Therefore, we have immediately from Proposition 5.5 the following interesting and non-trivial consequence.
**Corollary 5.6**.: _Each matrix over a division ring is a product of at most four generalized commutators._
We are now planning to show the validity of the following main assertion.
**Theorem 5.7**.: _Let \(R\) be a finite-dimensional algebra over a field \(F\) of characteristic \(0\) which has no direct summands that are fields in the Wedderburn decomposition. Then, every element in \(R\) is a product of at most four generalized commutators in \(R\)._
Proof.: Choose \(\alpha\in R\). Since \(F\) has characteristic \(0\), the Wedderburn decomposition of \(R\) is the following
\[R\cong\operatorname{M}_{n_{1}}(D_{1})\times\cdots\times\operatorname{M}_{n_{t }}(D_{t})\]
in which the numbers \(n_{i}\)'s are positive integers and the \(D_{i}\)'s are division rings that are finite-dimensional over \(F\). By assumptions, if there exists \(i\in\{1,\ldots,t\}\) such that \(n_{i}=1\), then \(D_{i}\) is not a field and so [17, Corollary 3.8] ensures that every element in \(D_{i}\) is a generalized commutator in \(D_{i}\). If, however, there exists an index \(i\in\{1,\ldots,t\}\) such that \(n_{i}\neq 1\), then, in conjunction with Corollary 5.6, we observe that every matrix in \(\operatorname{M}_{n_{i}}(D_{i})\) is a product of at most four generalized commutators in \(\operatorname{M}_{n_{i}}(D_{i})\). Therefore, we can conclude that each element of \(R\) is a product of at most four generalized commutators of \(R\), as stated.
Note that, under the circumstances of Theorem 5.7, for finite-dimensional algebras whose elements are products of unipotents we refer the interested reader to [4].
The last two statements of ours manifestly demonstrate what happens in the case of an algebraically closed field.
**Proposition 5.8**.: _Let \(F\) be an algebraically closed field and \(n>1\) an integer. Then, every element in \(\operatorname{M}_{n}(F)\) is a product of two images of non-commutative polynomials with zero constant terms in \(F\langle\mathfrak{X}\rangle\) evaluated on \(\operatorname{M}_{n}(F)\) in which the evaluations of such polynomials on \(F\) contain at least a non-zero element._
Proof.: Let \(A\in\operatorname{M}_{n}(F)\). Since \(F\) is an algebraically closed field, \(F\) is infinite. According to [7, Theorem 2.1] and [8, Theorem 2.2], \(A\) is a product of two diagonalizable matrices. On the other hand, by [27, Lemma 3.1], any diagonal matrix is an image of a non-commutative polynomial with zero constant term in \(F\langle\mathfrak{X}\rangle\) evaluated on \(\operatorname{M}_{n}(F)\) in which the evaluation of such a polynomial on \(F\) contains at least a non-zero element, and hence so is any diagonalizable matrix. Thus,
the matrix \(A\) is a product of two images of non-commutative polynomials with zero constant terms in \(F\langle\mathfrak{X}\rangle\) evaluated on \(\mathrm{M}_{n}(F)\) in which the evaluations of such polynomials on \(F\) contain at least a non-zero element, as needed.
What we now derive as a consequence is the following curious claim.
**Corollary 5.9**.: _Let \(F\) be an algebraically closed field of characteristic \(0\) and let \(A\) be a locally finite \(F\)-algebra. Then, every element in \(A\) is a product of two images of non-commutative polynomials with zero constant terms in \(F\langle\mathfrak{X}\rangle\) evaluated on \(A\) in which the evaluations of such polynomials on \(F\) contain at least a non-zero element._
Proof.: Choose \(\alpha\in A\). With no loss of generality, we may assume that \(A\) is a finite dimensional \(F\)-algebra. Since \(F\) is an algebraically closed field of characteristic \(0\), by the Wedderburn-Artin theorem,
\[A\cong\mathrm{M}_{n_{1}}(F)\times\cdots\times\mathrm{M}_{n_{t}}(F),\]
where the \(n_{i}\)'s are positive integers. Now, write
\[\alpha=(\alpha_{1},\cdots,\alpha_{t})\in\mathrm{M}_{n_{1}}(F)\times\cdots \times\mathrm{M}_{n_{t}}(F)\]
in which \(\alpha_{i}\in\mathrm{M}_{n_{i}}(F)\) for each \(1\leq i\leq t\). In accordance with [27, Lemma 3.1] and Proposition 5.8, each element \(\alpha_{i}\) is a product of two images of non-commutative polynomials with zero constant terms in \(F\langle\mathfrak{X}\rangle\) evaluated on \(\mathrm{M}_{n_{i}}(F)\) in which the evaluations of such polynomials on \(F\) contain at least a non-zero element. So, \(\alpha\) is also a product of two images of non-commutative polynomials with zero constant terms in \(F\langle\mathfrak{X}\rangle\) evaluated on \(A\) in which the evaluations of such polynomials on \(F\) contain at least a non-zero element, as required.
We end up the investigation with the following challenging question of some interest and importance.
**Problem 5.10**.: Decide what are the traceless and semi-traceless matrices in the Vershik-Kerov group, and find a decomposition of such a group into products of images of non-commutative polynomials.
**Acknowledgement.** The authors are very thankful to Professor Zachary Mesyan from the University of Colorado at Colorado Springs for the valuable private communications on the present subject.
**Funding:** The first-named author, P.V. Danchev, of this research paper was partially supported by the Junta de Andalucia under Grant FQM 264, and by the BIDEB 2221 of TUBITAK. |
2307.09338 | What is a reference frame in General Relativity? | In General Relativity, reference frames must be distinguished from
coordinates. The former represent physical systems interacting with the
gravitational system, aside from possible approximations, while the latter are
mathematical artefacts. We propose a novel three-fold distinction between
Idealised Reference Frames, Dynamical Reference Frames and Real Reference
Frames. In doing so, the paper not only clarifies the physical significance of
reference frames, but also sheds light on the similarities between idealised
reference frames and coordinates. It also analyses the salience of reference
frames to define local Dirac observables and to propose a physical
interpretation to diffeomorphism gauge symmetry. | Nicola Bamonti | 2023-07-18T15:20:26Z | http://arxiv.org/abs/2307.09338v3 | # What is a reference frame in General Relativity?
###### Abstract
In General Relativity, the terms'reference frame' and 'coordinate system' must be distinguished. The former refers to physical systems that are dynamically coupled with the gravitational field, aside from possible approximations, while the latter refers to a set of mathematical variables that are representative artefacts. This necessary distinction is lost in pre-general relativistic physics, where we can choose as a reference frame a system of real physical objects that is not affected and cannot affect the physical system under consideration. Therefore, we can make it coincide with a coordinate system without the need for approximations. We propose a a novel three-fold distinction between three types of reference frames, considered as material systems. In particular, we discern between Idealised Reference Frames, Dynamical Reference Frames and Real Reference Frames, depending on their increasing physical role in the total dynamics. Using a Bianchi I model in Minisuperspace, we give a cosmological example of the use of a gravitational Dynamical Reference Frame, namely a reference frame constructed with gravitational degrees of freedom in the standard case where the gravitational stress-energy tensor is not defined in the Einstein field equations. We also analyse the role of active and passive diffeomorphisms in changing a reference frame.
1
Footnote 1: See [Norton(1993)]
## 1 Introduction
In the 'post-Einsteinian' physical and philosophical literature, due to some ambiguity which can be traced back to Einstein himself1, it has been customary to conflate the terms 'reference frame' and 'coordinate system', which have been used somewhat interchangeably, or at least have not always been clearly distinguished. To give just an example, in [Bergmann(1962), p.207] the author states:
In all that follows we shall use the terms 'curvilinear four-dimensional coordinate system' and 'frame of reference' interchangeably.
This problem has been thoroughly highlighted in its philosophical-historical components and addressed in [Norton(1989)] and [Norton(1993)]. In the present paper, we claim that the issue is still open: the two terms have not yet been properly differentiated. We believe that our contribution is relevant because there is no discussion of the differences between the two concepts in the recent literature. To date, to the best of our knowledge, the works that deal most with this issue are those of Norton mentioned above (see also [Norton(1985)]). The purpose of this paper is to clarify the nature of a reference frame in General Relativity (GR), by first proposing a criterion to distinguish it from a set of coordinates and then classifying three ways to understand the notion of a reference frame in GR, considered as a material system [Rovelli(1991a)]. The benefit of this constraint is essentially due to the fact that GR is only deparametrizable for some specific material models. For such models, one is able to derive gauge-invariant Dirac observables, _i.e._, quantities invariant under the gauge group of the theory.2 In particular, we call Idealised Reference Frames (**IRFs**) those physical systems where both the dynamical equations and the stress-energy contribution to the Einstein Field Equations (EFEs) are neglected. The second class is that of Dynamical Reference Frames (**DRFs**), whose backreaction on the spacetime metric is neglected, but the frame satisfies a specific dynamical equation. Finally, we name Real Reference Frames (**RRFs**) those whose stress-energy contribution to the EFEs is taken into account, as well as their dynamics. Although **RRFs** are systems of great interest, as they are physically more realistic in principle, in the remainder of the paper we deal exclusively with **IRFs** and **DRFs**.
Footnote 2: In the case of GR the gauge group is the four-dimensional diffeomorphism group. Gauge transformations lead to redundant descriptions of physical states. This means that different mathematical representations can describe the same physical state. The redundancy due to gauge symmetries poses challenges in providing a unique physical interpretation of the theories. This makes it difficult to associate physically meaningful observables. Dirac’s [Dirac(2001)] proposal of gauge-invariant observables helps in addressing this issue by providing a set of observables that remain unaffected under gauge transformations, ensuring their physical relevance and interpretability. For further discussion on the concept of ‘observable’ in GR, see [Bergmann(1961a)], [Henneaux and Teitelboim(1994)], [Rovelli and Gaul(2000)], [Earman(2006)], [Thebault(2012)], [Gryb and Thébault(2016)], [Pitts(2022)].
Before proceeding, a clarification on the choice of nomenclature is worthwhile. There is a tendency in the literature to conflate two distinct things, namely, 'idealisation' and 'approximation' [Norton(2012)]. In the correct characterisation, 'idealisation' means replacing a target system under study with a distinct fictitious novel system whose properties provide an inexact description of some aspects of the target system. 'Approximation' means treating certain quantities of the target system as negligible compared to others. Namely, it is an inexact description of the target system. The crucial difference, then, lies in whether one introduces a novel system. Here, we understand **IRFs** (as well as **DRFs**) as approximations, not idealisations. Indeed, as we shall see
in more detail, we proceed with approximations to a non-approximated reference frame target system, which we shall designate by the name of **RRF**. However, to be fair, **RRFs** are divided into a subclass that conceals an approximation to a _real_ physical system in the strict sense of the term, understood as a target reference frame where no approximation is implemented.
The proposed classification allows us to find a possible reason why the notions of reference frame and coordinate system are usually conflated in GR. According to us, the confusion stems from the practical, but not conceptual, equivalence that exists between **IRFs** and coordinate systems. Furthermore, we propose a formal argument to distinguish between **DRFs** and coordinates. Specifically, we posit that we can switch between two **DRFs** through a transformation linking two different gauge choices, and this amounts to using an active diffeomorphism3. On the other hand, coordinates are related by passive diffeomorphisms.
Footnote 3: For a good introduction to the distinction between active and passive diffeomorphisms see [Rovelli and Gaul(2000)] and [Rovelli(2004)]
Using a very simple cosmological example, namely a Bianchi I Minisuperspace model [Bianchi(1989)] with two equal scale factors, we introduce a class of **DRFs**, which we name 'gravitational dynamical reference frame', or **gDRFs**. A **gDRF** is represented by gravitational degrees of freedom, in the standard case where the gravitational stress-energy tensor is not defined in the EFEs. Early attempts to use purely gravitational degrees of freedom as a reference frame were proposed by Bergmann and Komar in ([Bergmann(1961a)],[Bergmann(1961b)],[Bergmann and Komar(1960)],[Komar(1958)]), who constructed space-time scalars from gravitational degrees of freedom that serve as dynamically coupled reference frames and as 'locators' for points. Our example is interesting because it may open the way for future work on characterising reference frames consisting of gravitational and non-material degrees of freedom in the case of pure vacuum GR. Such work, however, will not be carried out in this paper. For the purpose of our discussion, the role of the Bianchi I model will be twofold. Firstly, it will allow us to introduce a general-relativistic case in which a change of a **DRF** is implemented by an active diffeomorphism. Secondly, it will also clarify in what sense a passive diffeomorphism can implement a change of **DRF**. To this end, the distinction between **gDRF** and **DRF** is superfluous, although conceptually important and rightly worth exploring. The most relevant application of the Minisuperspace framework corresponds to the case of a homogeneous Universe, which is described by a class of cosmologies known as Bianchi models. The construction of the Hamiltonian representation of Bianchi models dynamics is usually performed by using a set of variables known as Misner variables \((\alpha,\beta_{\pm})\) [Misner(1969)]. In particular, the variable \(\alpha\) is related to the volume of the Universe, which scales as \(e^{3\alpha/2}\). The variables \(\beta_{\pm}\) represent the spatial anisotropies and correspond to the two physical degrees of freedom of gravity.
This paper clarifies the nature of an important and ubiquitous concept in physics: that of reference frame. In fact, whenever we set up an experiment or formalise the
behaviour of a physical system, we implicitly or explicitly use a reference frame. The main question of the present work is what a reference frame in GR is. One reason why it is important to reach this end is that researchers in contemporary physics and philosophy of physics are interested in quantum reference frames (see [Rovelli(1991b)], [Giacomini et al.(2019)], [Hoehn and Vanrietvelde(2020)]). In fact, all physical systems are, to our knowledge, ultimately quantum. We argue that, before we can really have a discussion on quantum reference frames in quantum gravity, we should know properly what reference frames are in classical GR.
The paper is structured as follows.
In Section 2, we analyse the role of reference frames in General Relativity and pre-general relativistic physics.
Section 3 contains the detailed classification of three possible ways of understanding a reference frame, supported by some concrete examples.
In Section 4, we provide some examples that demonstrate the origin of the confusion between the notions of reference frame and coordinate system in GR.
In Section 5, we analyse the definition of reference frame as provided by Earman and Norton. We then compare this class of reference frames with the Brown-Kuchar dust.
In Section 6, we develop an argument with the aim of distinguishing between a **DRF** and a coordinate system, based on the different kinds of maps that implement a **DRF** change and a coordinate change, respectively. By introducing a simplified Bianchi I model (Section 6.1), we also give a cosmological example of a **gDRF**. This will shed more light on the differences between **DRFs** and coordinate systems.
## 2 Reference Frames vs. Coordinate systems: 'three decades of (missing) dispute'
After Norton and Earman's analysis of the concepts of coordinate system and reference frame between the 1970s and 1990s, it has become common practice within the literature to consider this conceptual problem solved. However, we argue that it is necessary to revive the debate and recognise its specific relevance within the analysis of the foundations of GR. The main motivations, which will be analysed below, concern the physical interpretation of gauge freedom, as well as interpretation of vacuum GR (Section 3); the fictitious role of coordinates in GR and the exposition of a new perspective on why the two concepts are often used interchangeably without much care (Section 4).
The conventional informal way to distinguish the concept of reference frame from that of coordinate system is to point out that only a reference frame has a link to an observer's state of motion ([DiSalle(2020)], see also [Pooley(2017)]). Following the work of Earman (in particular [Earman(1974)]) and Norton, the modern literature dealing with spatiotemporal theories in the broadest sense formally associates to a reference frame a timelike four-velocity field on a manifold (see _e.g._ [Bradley(2021)]). Equivalently a
reference frame is defined as a congruence of worldlines in space-time characterising the state of motion of a physical system. We will show that these kinds of definitions do not fully exhaust the characterisation of reference frames. On the other hand, some ambiguity still appears in [Read(2020)], in which the two terms are not clearly differentiated. In fact, on page 215 a (reference) frame-dependent object is defined as a 'non-tensorial object', that is thought of as a 'coordinate-dependent object' (ivi, p. 217).
The need to separate the two concepts does not emerge in pre-general relativistic (pre-GR) physics, since a reference frame can be represented as a set of degrees of freedom 'provided from outside' [Henneaux and Teitelboim(1994)]. For example, within Maxwell's theory in Minkowskian spacetime, the Maxwell field is understood as a subsystem of the Universe that does not affect the global inertial reference frame that can be defined on the fixed background structure. For instance, we can define locations in spacetime by means of non electrically charged objects, which constitute the reference frame. This point is elucidated in the following passage in [Einstein(1905), p.38]:
The theory to be developed--like every other electrodynamics--is based upon the kinematics of rigid bodies, since the assertions of any such theory concern relations between rigid bodies (systems of coordinates), clocks, and electromagnetic processes.
This passage can be interpreted to mean that the special relativistic theory is concerned with the relations between electromagnetic processes and material bodies that are 'dynamically _external_' to the electromagnetic system under study, which Einstein calls'system of coordinates'.
In contrast, GR has no available 'outside'. Namely, to be rigorous, we should not be allowed to consider reference frames as dynamically _external_, as we do in pre-GR physics. This follows from the fact that no existing physical system is gravitationally neutral. In other words, there is no way to disregard the interaction between the gravitational field and the reference objects. As we shall see, the fact that we cannot disregard the interaction between the gravitational field and the reference frame is true, unless approximations are made to the system constituting the reference frame itself. But it is precisely here that the lines become blurry. In pre-GR physics there is no need for approximations: it is always possible to define a reference frame as a physically 'irrelevant' real system and to make it coincide with the notion of coordinate system4. In contrast, in GR the only _physically significant_ way to introduce a physical coordination is to have it substantiated by a gravitationally interacting system. Therefore, the concept of a coordinate system, even if it is widely used throughout the general relativistic literature, should be considered as a representational artefact without a physical interpretation. By this, we mean that in pre-GR physics a coordinate system may correspond to a system of physical objects external
to the dynamics of the problem, but in GR a coordinate system has no underlying physical content. In other words, it has no direct physical referent. If this were not the case, it would be a set of gravitationally charged degrees of freedom and thus not dynamically 'external'. Consequently, in GR a reference frame cannot be understood as equivalent to a set of non-dynamic spacetime coordinates. As we shall see, the only way to make the reference frame 'dynamically external' is to implement approximations. So, it is clear that, conceptually, the notions of reference frame and coordinate system do not naturally coincide in GR.
Having clarified the physical status of reference frames, however, it is still necessary to define their nature. What is a reference frame? Let us summarise the main definitions of reference frame that can be found in the literature, from which all others are derived. The one that best meets the emphasised necessary distinction between the two concepts in GR is found in [Rovelli(1991a)]. Here, the most basic way to introduce a reference frame is to define it as a set of variables representing a material system, for example a set of physical bodies or a matter field, such that these degrees of freedom can be used to define a spatiotemporal location in the relational sense. This definition will ground our classification in Section 3. From [Rovelli(1991a)] also comes the suggestion that in GR reference frames can be considered dynamically external only if they are approximated. Moreover, as we already said, in the above-mentioned work of Norton and Earman (see also [Earman and Glymour(1978)]), as is now common in the vast majority of the physical and philosophical literature of GR, a reference frame is defined by a smooth, timelike four-velocity vector field \(U^{a}\) tangent to the worldlines of a material system to which an equivalence class of coordinates is locally adapted (see also [Brown(2005)])5. We will see more details in Section 5. Clearly, for this definition of a reference frame to be physically relevant, the vector field, exemplified _e.g._ by massive particle worldlines, should take into account the coupled dynamics between the particle system and the gravitational degrees of freedom. On the contrary, the vector field is usually treated as not _fully6_ coupled with gravitational dynamics.
Footnote 5: Here, we are using the abstract index notation (see[Penrose and Rindler(1987)]) to stress that it is a geometrical object independent from the choice of a coordinate representation.
Footnote 6: More on this in the next section.
## 3 Irf, Drf, Rrf
In this section and throughout the paper, following [Rovelli(1991a)] and [Rovelli(2004)], we define a reference frame at the most basic level as a gravitationally interacting material system. We argue that in GR we can mean three different things by the term 'reference frame'. This novel three-fold division is based on the degree of approximation applied to a material system, which makes its dynamics more or less intertwined with that of the gravitational field. Although the paper follows the reasoning presented in [Rovelli(1991a)] that
adopting approximations in defining a reference frame will blur its physical significance, while reconsidering its stress-energy presence and the dynamics will bring the physical significance back into focus, our proposal provides an independent contribution to this topic. In particular, the work makes a new contribution in that it complements previous work, incorporates additional tools and engages with more philosophical literature than Rovelli's original proposal. To give a concrete example, in [Rovelli(1991a)] the author completely ignores the class of **DRFs**, which are dealt with extensively here. This fact is curious because, as we shall see, a particularly relevant physical example of **DRFs** is precisely GPS coordinates, introduced in [Rovelli(2002a)]. Finally, in contrast to Rovelli's work, we place special emphasis on recognising **IRFs** as possible reference frames. This is because we argue that they play a fundamental role in fully understanding the role of coordinates in GR. Also, this work offers an in-depth analysis of reference frames as usually defined in the literature in the light of the new three-fold perspective (Section 5) and adopts a methodology that can provide a clear conceptual map between the possible reference frames in GR.
### Idealised Reference Frame
In what we name Idealised Reference Frame (**IRF**), the presence of the matter constituting the reference frame is completely ignored in the total dynamics, so that it remains dynamically decoupled from the other physical degrees of freedom. In particular (see [Rovelli(1991a)]), two approximations are adopted:
* (a) In the EFEs, the stress-energy tensor of the matter field used as a reference frame is neglected;
* (b) In the system of dynamical equations, the entire set of equations that determines the dynamics of the matter field is neglected.
Step (b) introduces some underdetermination in the evolution of the total system (gravity plus matter). We argue that the class of **IRFs** represents what Rovelli calls 'undetermined physical coordinates' in [Rovelli(2004), p.62]. The reason for this designation is clearly expressed by the author when he states:
We obtain a system of equation for the gravitational field and other matter, expressed in terms of coordinates \(X^{\mu}\) that are interpreted as the spacetime location of reference objects whose dynamics we _have chosen_ to ignore. This set of equation is underdetermined: same initial conditions can evolve into different solutions. However, the interpretation of such underdetermination is simply that we have chosen to neglect part of the equations of motion.
Basically, when we use **IRFs**, similarly to the use of coordinates in GR, the whole system of equations is not deterministic7. To be precise, there is no real indeterminism here,
since this is the result of an unexpressed gauge freedom in the dynamics that allows the same initial data to evolve into two different physical solutions. Different physical solutions with the same initial conditions represent two gauge-related physical configurations. The difference between **IRFs** and coordinates is very subtle and will be discussed in Section 4. We believe that the origin of the confusion between the concepts of reference frame and coordinate system lies precisely in the fact that pragmatically it is completely equivalent to describe physics in terms of **IRFs** or coordinates.8 The real difference lies only at the level of interpretation. For this reason, it is impossible to give a practical example of a physical system playing the role of an **IRF**, as it would actually coincide with a generic coordinate system.
Footnote 8: All we mean here is that in theoretical practice, the use of **IRFs** or coordinates is indistinguishable. This does not mean that being aware of what one is using is irrelevant (see below).
However, as we will say later, in order to be able to give a physical interpretation to gauge symmetries, which characterise all the physical theories we know, it is important to recognise that we are using reference frames that we are approximating and not coordinates.
### Dynamical Reference Frame
If we assume only the first of the above approximations, namely (a), we get a Dynamical Reference Frame (**DRF**). Thus, in the case of a **DRF** we neglect the stress-energy tensor of the matter field in the EFEs, but we consider its presence as far as the total dynamics of the system is concerned. Consequently, unlike in the case of **IRFs**, we now have the possibility of using the dynamical equations of matter to fix the gauge freedom present in the theory and obtain a deterministic dynamical system. In this case, the use of a **DRF** in a theory corresponds to a gauge-fixed formulation of the same theory when we do not use a material system as the reference frame. The reason is simply that we can use the equations of motion of the matter in the same way in which we use a gauge-fixing condition when we deal with coordinates. This fact supports the position expressed in [10], according to which the existence of a gauge suggests the relational nature of the physical degrees of freedom of the physical system under analysis. The correspondence of a **DRF** with a set of gauge-fixed coordinates is also consistent with the definition given in [11, p.3], which states that a gauge theory is a theory
[...] in which the dynamical variables are specified with respect to a 'reference frame' whose choice is arbitrary.
According to our approach, the reference frame to which they refer is precisely a **DRF**.
In the following, we will give three examples of a **DRF**.
Before doing so, to give a concrete and simple idea of what it means to fix a **DRF**, we propose a parallel to the case of a parametrized Newtonian system in one spatial dimension described by canonical variables \([q(t),p(t)]\). This is by no means intended to be a realistic example of a **DRF**, but at best a valuable analogy9. Furthermore, this formalism will be useful to fully grasp the example given in Section 6.1. Through the parametrization process we extend the configuration space \(\mathcal{C}=\{q(t)\}\rightarrow\mathcal{C}_{\text{ext}}=\{q(\tau),t(\tau)\}\) and unfreeze the time coordinate \(t\), which can now be treated as a dynamical variable on the same footing as the \(q^{i}\) variables. Both dynamical variables depend on an arbitrary parameter \(\tau\). The extended action of the parametrized system reads as
Footnote 9: The same analogy applies to **RRFs**. However, as we already said, we do not deal with **RRFs** in this paper.
\[S_{\text{ext}}=\int d\tau\left[p_{t}\frac{dt}{d\tau}+p\frac{dq}{d\tau}-N(\tau) \left(p_{t}+\frac{p^{2}}{2m}\right)\right], \tag{1}\]
while the Hamilton equations are
\[\begin{cases}\frac{dt}{d\tau}=N(\tau),&\frac{dp_{t}}{d\tau}=0\\ \frac{dq}{d\tau}=N(\tau)\frac{p}{m},&\frac{dp}{d\tau}=0\end{cases} \tag{2}\]
The extended system is subject to the reparametrization symmetry \(\tau\rightarrow\tau^{\prime}(\tau)\), and different choices of the Lagrange multiplier \(N(\tau)\), also known as the 'lapse function', amount to considering the gauge dynamics in different parametrizations \(\tau\). We partially gauge-fix the system through the gauge choice \(N=1\), which amounts to having a parametrization \(\tau\) in which \(t(\tau)\) grows linearly, as can easily be seen from Hamilton's equations. The dynamics written in terms of the gauge-fixed parameter \(\tau\), however, is still a gauge dynamics and not a physical one, since it is expressed in terms of a non-physical parameter. Thus, we still have a gauge redundancy in the system and we cannot define gauge invariant quantities representing Dirac observables.
A well-known approach to constructing gauge-invariant relational observables [16] is to impose the canonical gauge condition \(t(\tau)\equiv t_{0}\), which completely eliminates any residual gauge redundancy. Geometrically, this condition defines a slice that cuts all the gauge orbits on the constraint surface once and only once. This amounts to going to the reduced phase space. That way, we can write relational observables which are understood as gauge-invariant extensions of a gauge-fixed quantity. In particular, a relational observable can be defined as the coincidence of \(q\) with \(t\), when \(t\) reads the value \(t_{0}\), or explicitly: \(q(\tau)|_{t(\tau)=t_{0}}:=q(\tau)+p/m\left[t_{0}-t(\tau)\right]\). Note that this is the definition of an evolving constant of motion (see [1]).
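As a quick illustration (our own addition, not part of the original treatment), the gauge invariance of this evolving constant can be verified symbolically: its Poisson bracket with the constraint \(C=p_{t}+p^{2}/2m\) vanishes identically. A minimal SymPy sketch of this check:

```python
import sympy as sp

q, p, t, p_t, t0, m = sp.symbols('q p t p_t t_0 m', real=True)

# Constraint of the parametrized Newtonian particle and the evolving constant
C = p_t + p**2 / (2 * m)
O = q + (p / m) * (t0 - t)

def poisson(f, g):
    """Poisson bracket on the extended phase space (q, p; t, p_t)."""
    return (sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
            + sp.diff(f, t) * sp.diff(g, p_t) - sp.diff(f, p_t) * sp.diff(g, t))

print(sp.simplify(poisson(O, C)))  # prints 0: O is gauge invariant (a Dirac observable)
```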
An equivalent approach to pick the variable \(t\) as the 'temporal reference frame' (also referred to as the internal, or relational 'clock') with respect to which the dynamics of the relational observable \(q(t)\) is described, is to simply invert the relation \(t(\tau)=\tau\leftrightarrow\tau(t)=t\). Note that this can be done since we are able to solve Hamilton's equations for the considered system. By inserting the quantity \(\tau(t)\) within the gauge-dependent quantity \(q(\tau)\) we obtain a gauge-invariant relational observable \(q(t)\), defined for any given value of \(t\), thus deparametrizing the system. Consequently, we recover the formalism of the unparametrized case in which \(t\) represented a mere coordinate. However, in such a case the physical interpretation of the time \(t\) as a dynamical variable is now revealed10. In fact, now \(q(t)\) describes the complete gauge invariant relational evolution of \(q\) with respect to the dynamical variable \(t\). Furthermore, the dynamical theory written in relational terms (_i.e._ deparametrized) becomes deterministic and without any gauge redundancy, thus coinciding with the unparametrized theory, written in coordinate \(t\).
Footnote 10: In this sense, it represents a good analogy of a component of a **DRF**.
This example shows that even in pre-GR physics we can define a system whose degrees of freedom, although acting _as if_ they were coordinates, hide an approximation procedure applied to what are really internal dynamical degrees of freedom. The difference with GR is that in pre-GR physics such an approximation _need not_ exist, since physical systems can be made to correspond _exactly_ with coordinate systems.
The first proper example of a **DRF** is the so-called _test fluid_ reference frame (see [20]). In short, the test fluid is affected by the metric field (it is acted upon), but the metric field is not affected by the test fluid (it does not act): thus, the back-reaction on gravity is neglected. As a toy model for a test fluid, we consider a set of four real, massless, free, Klein-Gordon scalar fields in a curved spacetime. We do not account for the stress-energy tensor of the scalar fields to describe the coupled total dynamics, but each scalar field \(\phi_{A},A=1,2,3,4\) has its own equations of motion (in abstract index notation)
\[\square\phi_{A}\equiv\nabla^{a}\nabla_{a}\phi_{A}=0 \tag{3}\]
and the system of the four scalar fields can be used as a reference frame (a clock and three rulers) with respect to which the phenomena can be described11. More clearly, to describe the dynamics of the scalar fields we first need to know the metric \(g_{ab}\) of spacetime in order to define the compatible connection \(\nabla_{a}\). In that sense, the metric acts on the test fluid, but it is not affected by it. It is straightforward that the total dynamics reads as12
Footnote 11: Arguably, the scalar field selected to play the role of the timelike variable (say \(\phi_{1}\)) needs to satisfy some properties such as a homogeneity condition \(\nabla^{i}\nabla_{i}\phi_{1}=0\), where \(i=1,2,3\) are spatial indices, as well as a ‘monotonicity condition’ connected with some assumptions on its potential.
Footnote 12: We are neglecting the contribution of the cosmological constant \(\Lambda\).
\[\begin{cases}G_{ab}=T_{ab}^{\text{others}}\\ \square\phi_{A}=0,\end{cases} \tag{4}\]
where \(T_{ab}^{\text{others}}\) indicates the stress-energy contribution from material sources other than the matter fields we want to use as a reference frame. It is worth noting that the four scalar fields satisfy the same equation that is written when harmonic gauge-fixing is
imposed on the coordinates. It makes clear what we have said about the correspondence between using a **DRF** and a gauge-fixed formulation of the theory written in coordinates. Of course, if we use the set of Klein-Gordon fields \(\phi^{A}\) as the reference frame, we can in principle write relational observables, _e.g_, \(g_{ab}[\phi^{A}]\), thus highlighting the role of the set of four scalar fields as the reference frame in which the dynamics takes place. In this case, as also shown above, it is clear that it is not necessary to perform a complete gauge-fixing procedure to choose a dynamical reference frame and write explicit relational observables. _If_ one is able to solve the equations of motion of the degrees of freedom to be used as space and time standards, one only needs to invert, _e.g._, the relation \(\phi^{1}(X^{\mu})\) written in some coordinates \(\{X^{\mu}\}\) and insert the inverted expression \(X^{\mu}(\phi^{1})\) into \(g_{\mu\nu}(X^{\mu})\) in order to obtain the relational observable \(g_{\mu\nu}[X^{\mu}(\phi^{1})]\). Taking away any reference to manifold points, one obtains a well-defined notion of local observables in GR and a definition of a physical spacetime point in terms of 'Einsteinian coincidences' [Einstein(1916)]. Physical objects do not localise relative to the manifold, but relative to one another. As we have shown, in practice one has to express the spatiotemporal localisation of observables through matter fields, which play the role of reference frames.
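To make this correspondence explicit, recall the standard fact that a harmonic (de Donder) coordinate system is defined by the condition that each coordinate function, viewed as a scalar on spacetime, satisfies the covariant wave equation

\[\square X^{\mu}\equiv\nabla^{a}\nabla_{a}X^{\mu}=0,\]

which is formally identical to Eq. (3) obeyed by the four Klein-Gordon fields \(\phi_{A}\). Describing the dynamics relative to the test fluid therefore reproduces, equation by equation, the harmonic gauge-fixed formulation written in coordinates.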
In accordance with the previously mentioned literature (see Section 2), a **DRF** could also be represented by a timelike four-velocity vector field \(U^{a}\) tangent to a congruence of worldlines of a particle system, or a matter fluid. We choose to analyse this particular case in Section 5, as we believe it warrants a closer examination and it is particularly significant from an historical and philosophical point of view.
Finally, a more realistic and well-known example of a **DRF** is represented by the set of the so-called _GPS coordinates_ ([Rovelli(2002b)], [Rovelli(2004)], [Rovelli(2002a)]). The idea is to consider the system formed by GR coupled with four test bodies, referred to as 'satellites', which are treated as point particles following timelike geodesics. Each particle is associated with its own proper time \(\tau\). Accordingly, we can uniquely associate four numbers \(\tau^{A},A=1,2,3,4\) to each spacetime point \(P\). These four numbers represent the four physical variables that constitute the **DRF**.
The importance of introducing **DRFs** can be summarised as follows. By using coordinates or approximating reference frames to **IRFs** we introduce redundant gauge freedom. Quantities written in terms of coordinates or **IRFs** are not Dirac observables, but gauge-dependent quantities. This undermines the predictive power of the theory. We can solve this problem by considering the coupled dynamics of the reference frames and the gravitational field. In this way, we can define, following a relational approach, Dirac observables.
### Real Reference Frame
Finally, it is possible to take into account both the dynamics of the chosen material system and its stress-energy tensor. In such a case, we get a Real Reference Frame (**RRF**). Examples of **RRFs** include pressureless dust fields ([Brown and Kuchar(1995)],
[Giesel et al.(2010)]) and massless scalar fields ([Rovelli and Smolin(1994)], [Domagala et al.(2010)]). It is worth noting that such reference frames are considered for reasons of mathematical convenience, rather than for their clear phenomenology. Moreover, while they lead to a complete deparameterization of the theory, this is not always the case for other **RRFs**. In fact, it is possible to propose a sub-classification of **RRFs** based on the possibility of being able to deparametrize the theory. As a matter of fact, in some cases approximations can be made to the Hamiltonian of the material field used as the **RRF**, thereby implementing a deparametrization procedure. However, when no such approximations are made, the resulting approach, while physically more stringent, is formally more complicated and does not allow for the analytical control of relational observables [Tambornino(2012)]. It should be pointed out that this subdivision also applies to **DRFs**. Actually, as previously mentioned, it is not always possible to solve Hamilton's equations and deparametrize the theory. In the remainder of the paper we will primarily focus on **IRFs** and **DRFs**, leaving the study of **RRFs** for future research.
Coordinate Systems: What about coordinate systems? A coordinate system in an N-dimensional manifold corresponds to the choice of a local _chart_, _i.e._ an open set and a homeomorphism \(\gamma\) which assigns N labels to a point of the manifold. Thus, it has no direct physical interpretation. Formally, a coordinate system (also referred to as a 'coordinate chart') is a \(1:1\), onto map \(\gamma:\mathcal{S}\to\mathbb{R}^{N}\) from an open patch \(\mathcal{S}\subset M\) of the manifold \(M\) into the N-fold product of the real numbers.
We conclude by noting the usefulness of considering **IRFs** and **DRFs** as possible classes of reference frames. If we disregard any stress-energy contribution from other material sources, the solutions of the EFEs will be vacuum solutions. Following [Rovelli(1991a)], we can say that vacuum GR can be seen as an approximate theory, in which we make use of **IRFs** or **DRFs**. In other words, we are suggesting that _exact_ vacuum solutions may not exist in nature; only _approximated_ matter solutions that behave like vacuum solutions may be permitted.
## 4 IRFs vs. Coordinates: what is the source of the confusion between reference frames and coordinate systems?
It can be argued that the distinction between a reference frame and a coordinate system, while conceptually relevant, is widely overlooked in both theoretical and experimental practice. In fact, from the theoretical point of view, in GR local coordinate systems are employed to compute solutions of EFEs. A straightforward example is the use of Schwarzschild coordinates \((t,r,\theta,\phi)\). Of course, the Schwarzschild geometry can be expressed in a range of different choices of coordinates. As far as experimental practice is concerned, a notable success was the detection of gravitational waves by the LIGO project [Abbott et al.(2016)]. The gravitational contribution of the mirrors used to detect gravitational waves on Earth is completely disregarded and their degrees of freedom are treated as coordinates. Even theoretically, the components of the metric are calculated within a particular gauge, namely the so-called Transverse-Traceless gauge (TT gauge).
Therefore, in GR a reference frame is often used as a coordinate system. We mean that, strictly speaking, the Schwarzschild coordinates \((t,r,\theta,\phi)\) should represent some approximated material degrees of freedom13. However, they are approximated to non-physical coordinates without _any_ connection to a material system substantiating them. Similarly, the components of the metric measured by LIGO should not be interpreted as quantities written in coordinates, but as written in some reference frame that represents the mirrors of the experimental setting to some degree of approximation. This is because we understand from GR that localisation is relative between fields. The most we can do is to approximate the physical systems we choose as reference frames. We will thus have **IRFs** that behave exactly like coordinates, but do not coincide with coordinates. The point is that in GR coordinates should be interpreted _at least_ as variables of an **IRF**, but the two are confused since they are pragmatically identical. However, acknowledging that we are using reference frames can help us understand the physical reasons for the presence of gauge freedom in our theories, as stated in Section 3.
Footnote 13: The implications of this statement will require further investigation in the future. As already mentioned at the end of Section 3, adopting such an approach could have significant consequences for our understanding of vacuum solutions in GR.
We maintain that these facts properly demonstrate the confusion between the concepts of reference frame and coordinate system.
The puzzle, then, is why such approximations work so well that the difference between the two concepts can be overlooked. Or, stated differently, why changes to the dynamics by the reference frame seem to play no role compared to the dynamics written in coordinates. This issue is clearly expressed in [Thiemann(2006), p.2], where the author remarks14:
Footnote 14: Thiemann is referring here to Dirac Observables. That is, complete ones. For Rovelli [Rovelli(2002b)], a partial observable can be observed, in the sense of measured, even if not gauge-invariant. It is worth noting that Rovelli’s distinction between partial and complete observables offers a way to identify observables that retain physical significance even if not gauge-invariant. Although theories can only predict Dirac observables, _i.e._ complete ones, Rovelli argues that gauge-dependent quantities such as partial observables play a fundamental role in physical theories, as we use them to describe physical observations.
Why is it that the FRW equations describe the physical time evolution which is actually observed for instance through red shift experiments, of physical, that is observable, quantities such as the scale parameter? The puzzle here is that these observed quantities are mathematically described by functions on the phase space which _do not Poisson commute with the constraints_! Hence they are not gauge invariant and therefore should not be observable in obvious
contradiction to reality.
Simply put, in theoretical and experimental practice reference frames are approximated and made to coincide with coordinate systems. At the practical level, it is impossible to distinguish between the two concepts. However, this leads to the situation where all general-relativistic physics incorrectly interprets the dynamical equations of systems as physical evolution equations 'rather than what they really are, namely gauge transformation equations' [Thiemann(2006)] (ivi, p.3), as they are written in coordinates. As stated in [Tambornino(2012), p.4] 'a natural resolution of this apparent paradox, from the relational point of view, is the following: the coordinates which one measures are the readings of some physical coordinate system', namely they are gauge-dependent partial observables constituting a reference frame. When the dynamical equations of matter used as a reference frame are taken into account, then the dynamics of the system is a physical dynamics of complete gauge-invariant observables.
What is the underlying source of this confusion? According to us, one possible reason is that reference frames are approximated to such an extent that they play the role of **IRFs**. However, once these approximations are made it becomes impossible to realise that approximated physical systems in the sense of **IRFs**, rather than coordinate systems, are being used. The relevant point is that _in practice_ there is no difference between a coordinate system and an **IRF**. Both come in the form of a set of non-dynamic variables that can be used to define a spatiotemporal location. The difference between **IRFs** and coordinate systems all plays out on the conceptual level. An **IRF** is a physical system that would, by nature, interact with all other degrees of freedom in the theory, but to which we apply _a posteriori_ various approximations that are useful when 'doing the math'. On the other hand, a coordinate system is a set of mathematical variables that naturally have no dynamics whatsoever. In a nutshell, **IRFs** hide an approximation procedure, while coordinates do not, since they are naturally non-dynamic. Coordinates are mathematical tools used to represent **IRFs**. Basically, we can define **IRFs** as 'physically substantiated' coordinates and an **IRF** can be seen as a system of coordinates to which a physical _interpretation_ can be assigned.
To sum up, the interpretative distinction between **IRFs** and coordinates is rooted in the fact that coordinates are not partial observables [Rovelli(2002b)], whereas the variables that constitute an **IRF** are, and can be associated with measurements performed by instruments15.
Footnote 15: It should be noted that the quantities that define a **DRF**, such as the GPS coordinates, are also partial observables that can be associated with a measuring procedure. However, in the case of partial observables that form an **IRF**, we neglect their dynamics.
Before delving into the differences between **DRFs** and coordinates, in the next section, we analyse in detail the notion of **DRF** as a timelike vector field tangent to the trajectories of a physical system in spacetime.
## 5 DRFs in the orthodox view
The 'orthodox' point of view (that is how we name the view introduced by Earman and Norton) recognises as the only viable characterisation of a reference frame the expression of matter's state of motion, _i.e._ the assignment of a four-velocity (timelike) vector field \(U^{a}\) tangent to the worldlines of a material system, satisfying some dynamical equation and to which a coordinate system \((X^{0},X^{1},X^{2},X^{3})\) is locally adapted. A straightforward example of such a reference frame is a fluid of matter falling (not necessarily freely) towards, _e.g._, a black hole16. Of course, physical space is the set of points of the fluid, _viz._ the hypersurface orthogonal to the four-velocity, while the degree of freedom playing the role of time is a scalar quantity which grows monotonically along the fluid trajectory. In this case, the fluid and its physical ingredients, like energy density or pressure, satisfy a precise prescription for their dynamical evolution, but since such matter typically produces only a small perturbation on the spacetime structure of the black hole, its physical back-reaction on spacetime is neglected (namely, its stress-energy tensor on the r.h.s. of the EFEs is set to zero).
Footnote 16: Arguably, this example loses its validity near the singularity, where quantum effects become dominant. Furthermore, at the singularity there is no longer a congruence of worldlines, as the geodesics intersect.
In the following, we analyse whether such a class of objects can be fitted within the class of **DRFs**. Of course, a **DRF** is defined as a physical material system that satisfies equations of motion and depends on, but does not affect, the gravitational field \(g_{ab}\). Therefore, we can firmly assert that the orthodox view considers only **DRFs** as possible reference frames and not **IRFs**, nor **RRFs**. However, we will offer a more in-depth analysis in order to show some possible differences between **DRFs** and reference frames _a la_ Earman-Norton. Given the novelty of the proposed division between three types of reference frames in GR, this comparison can be useful both for a better understanding of **DRFs** and, above all, for providing a more delimited conceptual context to reference frames as usually used in the literature.
Earman and Norton adopt what we will name the '_comoving hypothesis_': they consider a locally adapted coordinate chart such that the four-velocity takes the form \(U^{\mu}=(dX^{0}/d\tau,0,0,0)\), where \(\tau\) is the proper time defined at each point of the fluid. This is obvious in [Earman(1974), p.270], where he states:
In this context a reference frame is defined by a smooth, timelike vector field \(V\).
[...]Alternatively, a frame can, at least locally, be construed as an equivalence class of coordinate systems. The coordinate system \(\{x^{i}\}\), \(i=1,2,3,4\), is said to be adapted to the frame \(F\) if the trajectories of the vector field which defines \(F\) have the form \(x^{a}=\mbox{const},a=1,2,3\). If \(\{x^{i}\}\) is adapted to \(F\), then so is \(\{x^{\prime i}\}\) where \(x^{\prime a}=x^{\prime a}(x^{b})\), \(x^{\prime 4}=x^{\prime 4}(x^{b},x^{4})\); such a transformation is called an internal coordinate transformation. \(F\) may be identified with a maximal class of internally related class of coordinate systems.
and it is also clear in [Norton(1985), p.209]:
[...] it is now customary to represent the intuitive notion of a physical frame of reference as a congruence of time-like curves. Each curve represents the world line of a reference point of the frame. [...] A coordinate system \(\{x^{i}\}\) (\(i=1,2,3,4\)) is said to be 'adapted' to a given frame of reference just in case the curves of constant \(x^{1}\), \(x^{2}\) and \(x^{3}\) are the curves of the frame. These three coordinates are'spatial' coordinates and the \(x^{4}\) coordinate a 'time' coordinate.
We now want to confront the orthodox way of characterising a **DRF** with a well-known case of a material reference frame: the Brown-Kuchar dust. In Section 3 we said that a Brown-Kuchar dust represents an **RRF**. Here, we will interpret it as a **DRF**, thus neglecting its stress-energy contribution to the EFEs. It is enough to simply imagine that the dust is immersed in a _given_ generic gravitational field17. Briefly, the dust (representing the **DRF**) is described in terms of eight scalar fields, four of which, _i.e._ \([T(X^{\mu}),Z^{i}(X^{\mu})]\), represent the spatiotemporal degrees of freedom to be used as physical coordinates. The set \(\{X^{\mu}\}\) represents a set of generic mathematical coordinates. In particular, it can be shown that the dynamics constrains the three scalar fields \(Z^{i}\) to be constant along the geodesics generated by the dust four-velocity \(U^{\mu}\) (this requirement corresponds to the comoving gauge condition). Furthermore, the scalar field \(T\) measures proper time (this is due to the fact that the dust fluid is a geodesic one, thus the synchronous case applies). We notice that the four-velocity can be written as \(U^{\mu}=-\partial T/\partial X^{\mu}+\sum_{i}[W_{i}\partial Z^{i}/\partial X^{\mu}]\). Namely, it is defined by its decomposition in the basis \((T,Z^{i})\) and it is a function of the set of scalar degrees of freedom of the dust. Consequently, according to the simple notation used above, it is possible to write relational observables of the form \(O(T,Z^{i})\). In this sense, the dust used as a reference frame constitutes a 'dynamic coordinate system'.
Footnote 17: We stress that the formalism is preserved, since the dynamical equations of the source matter are usually postulated. It is interesting to note that this fact conceptually constitutes an additional approximation adopted in GR to cope with the non-linearity of EFEs (see [Wald(1984)]).
Such a description is, in fact, consistent with the one given in Section 3, where we stated that using a **DRF** as a reference frame implies the possibility of writing relational observables of the form \(O(U^{a})\), where \(U^{a}\) represents a generic four-vector constituted by four physical scalar degrees of freedom and is a solution of a specific dynamical equation.
From what has been said, it becomes apparent that the purpose of the definition provided by Earman and Norton is clearly to make the set of adapted coordinates coincide with the scalar degrees of freedom of some test fluid, in order to label the points of the fluid through the adapted comoving coordinates. This is exactly what happens in the case of a Brown-Kuchar dust, in which the set of variables \((T,Z^{i})\) constitutes a set of coordinates of the 'dust manifold'18 [Brown and Kuchar(1995)]. Thus, each point of the spacetime
manifold is labelled by a set of physical parameters \(\tau\equiv T(X^{\mu})\) and \(z^{i}\equiv Z^{i}(X^{\mu})\). Indeed, we point out the similarity between Norton's 'relative space' and 'frame time' as defined in [11] and the dust space manifold \(\mathcal{S}\) and the scalar field \(T\), respectively.
However, we would like to highlight some differences between the dust characterisation of a **DRF** (which matches ours) and the orthodox one.
First of all, according to the orthodox characterisation of a reference frame as a maximal class of adapted coordinate systems, there is still the risk of conceptually confusing the set of adapted coordinates \(\{X^{\mu}\}\) that are non-dynamic variables, external to the physical system they are labelling, with the set \(\{T,Z^{i}\}\) of four dynamically relevant scalar variables that constitute the spatiotemporal degrees of freedom of the reference dust. In fact, the presence of these degrees of freedom is never made explicit, nor is the formalism from which the coincidence between the two sets is derived. This is related to the fact that the authors do not strictly think in terms of degrees of freedom and equations of motion in a broad sense. Rather, they think in terms of degrees of freedom of 3-dimensional matter in motion through spacetime. What we mean is that our example in Section 3 of a **DRF** as a set of four massless, free scalar fields would fall outside their case study. This observation is supported by the fact that both authors relate the concept of a reference frame to the concept of a state of motion of some observer. In this regard, it is worth mentioning that in the orthodox view there is a tendency to conceptually separate a reference frame from some sort of ideal observer comoving with that reference frame. This is because a reference frame is conceptually associated with a physical motion of 3-dimensional systems in spacetime, as is easily deduced from the fact that it is represented by a 4-velocity vector field. From the orthodox perspective, a reference frame is a space-filling system of instruments moving with arbitrary velocities. This is clear in [11, p.837], where he states:
If one conceives of a frame of reference as a space filling system of hypothetical instruments moving with arbitrary velocities, then the minimum information needed to pick out the frame is the specification of the world lines of its elements.
In contrast, according to our definition, a reference frame _is_ an observer (and vice versa) and measurements correspond to interactions between the reference frame and other physical systems. Furthermore, a reference frame is represented by four scalar fields forming a 4-vector, without necessarily requiring any interpretation in terms of 4-velocity.
Secondly, taking the orthodox definition of a reference frame literally, the **DRF** is directly represented by the four-velocity, which is expressed in terms of the comoving coordinate system as \(U^{\mu}=(dX^{0}/d\tau,0,0,0)\). Thus, the relational observables should take the form \(O(U^{0},\vec{0})\equiv O[1/\sqrt{g_{00}(X^{0},X^{i})},\vec{0}]\), since only one of the four components of the four-velocity is non-zero.
Thirdly, the orthodox case assumes the comoving hypothesis, but not synchrony. Thus there is no exact coincidence between the set \((T,Z^{i})\) and the set \((X^{0},X^{i})\). Of course, in the comoving but not synchronous case, we can always establish the relationship between
the coordinate time and proper time, _i.e._, \(d\tau=\sqrt{g_{00}(X^{0},X^{i})}dX^{0}\). This is feasible since a **DRF** is defined once a metric is given.
To summarise, in the orthodox approach it is both formally and conceptually less clear how to construct relational observables using matter as a standard of space and time. This is understandable given that the earliest work on dust as a reference frame dates back to the 1990s, some twenty years after Earman's article we referred to. Indeed, the similarities between the two approaches reveal the author's foresight and deep understanding of Einstein's gravitation theory. However, as we have seen in Section 2, this orthodox definition is still used today. It is legitimate, therefore, to analyse its strengths and weaknesses.
## 6 DRF vs. Coordinate systems
The differences between **DRFs** and coordinates in GR are clear and can be summarised as follows. The total system of dynamical equations is deterministic when using a **DRF** and not deterministic when using coordinates. The variables used as coordinates are partial observables in the case of **DRF**, but not in the case of coordinates. Observables are gauge invariant if they are functions of the variables that constitute a **DRF**, but they are not gauge invariant when we use coordinates. For the sake of clarity, let us give a practical example of such differences. Let \(\{X^{0},X^{i}\}\) be a set of coordinates and \(\{T,Z^{i}\}\) a set of four scalar degrees of freedom similar to those introduced in the previous section for a dust. We consider the ADM space+time analysis19 of a model of GR plus a dust fluid. In total we have \(6\times\infty^{3}\) degrees of freedom of the 3-metric \(h_{ij}(X^{0},X^{i})\) written in coordinates and \(4\times\infty^{3}\) physical degrees of freedom of the material system. By removing the \(4\times\infty^{3}\) gauge degrees of freedom of the metric, \(6\times\infty^{3}\) physical degrees of freedom remain. In the case where we use the material system as a reference frame, the metric \(h_{ij}(T,Z^{i})\) written in terms of a **DRF** has \(6\times\infty^{3}\) physical degrees of freedom. This is because we are able to use the dynamics of the material system to eliminate the gauge redundancy of the theory. Thus, repeating what was said in Section 3, the same dynamical theory, when written in relational terms (_i.e._ using a **DRF**) becomes deterministic and without any gauge freedom to be fixed. Furthermore, the quantity \(h_{ij}(T,Z^{i})\) commutes with all constraints of the theory, hence it is a Dirac observable.
Footnote 19: The ADM formalism [Arnowitt et al.(1960)] is a Hamiltonian formulation of GR. It involves a foliation of the spacetime manifold into a set of spacelike 3-hypersurfaces labelled by a timelike variable. The gravitational dynamical variables of this formalism are taken to be the 3-metric tensor \(h_{ij}\) and its conjugate momentum \(p^{ij}\).
In the following, we want to propose a formal argument to distinguish a **DRF** from a coordinate system. This argument is based on the difference between the transformations relating coordinate systems and **DRFs**, respectively. Let us analyse what we mean by 'change a **DRF**', assuming the case of 'temporal reference frames'. In particular, we deal with the situation in which we have multiple internal degrees of freedom of the same physical system that can be used as a time variable. Thus, we have different physical degrees of freedom that could correspond to a gauge-fixed time variable. As in Section 3, we again use the parametrized Newtonian particle as an analogy. In this case, we could identify either of the two variables \((q(\tau),t(\tau))\) of the extended configuration space as a relational internal clock. The map between the two different relational clocks \(q\) and \(t\) does not correspond to a passive diffeomorphism, but rather to a map relating the two different gauge-fixings selecting either \(q\) or \(t\) as a relational clock. In particular, this map will be an active diffeomorphism, namely a field-dependent map (see [Goeller et al.(2022)]), depending on the dynamical degrees of freedom \((q,t)\). In fact, the two gauge-fixing conditions for \(q\) and \(t\) can be seen as applications of an active diffeomorphism on the dynamical variables \(q\) and \(t\), respectively. Thus, the map between the two gauge-fixings is a composition of two active diffeomorphisms, which results in an active diffeomorphism. Geometrically, the map that links one time gauge choice to another is _analogous_ to a coordinate transformation on a manifold. In this case, however, the role of the manifold is played by the space of gauge orbits defined on the constraint surface (see [Hoehn and Vanrietvelde(2020)] for further discussion on that point). The situation is summarised in Figure 1. In the figure, \(\mathcal{M}\) and \(\mathcal{C}\) indicate the spacetime manifold and the constraint surface, respectively. \(\mathcal{P}_{q}\) and \(\mathcal{P}_{t}\) represent the reduced phase spaces resulting from imposing the gauge condition \(q=q_{0}\) and \(t=t_{0}\), respectively. The \(\gamma:\mathcal{S}\to\mathbb{R}^{N}\) map, already defined in Section 3, assigns a set of coordinates to the manifold points. We can assign two different sets of coordinates to the same point depending on whether we use the \(\gamma_{1}\) or \(\gamma_{2}\) map. A passive diffeomorphism consists in the composition map \(\gamma_{2}\circ\gamma_{1}^{-1}\). The \(\phi_{q}:\mathcal{C}\to\mathcal{P}_{q}\) map associates to each gauge orbit in \(\mathcal{C}\) its intersection point with the gauge-fixing surface setting \(q\) as the relational clock. The same holds for \(\phi_{t}:\mathcal{C}\to\mathcal{P}_{t}\), relative to time \(t\). An active diffeomorphism consists in the composition map \(\phi_{t}\circ\phi_{q}^{-1}\).

Figure 1: In (a), a schematic diagram illustrating the analogy between a change of coordinates and a change of gauge fixing. On the left, we have a change of coordinates; on the right, a change of gauge fixing. In (b), we propose a pictorial, geometrical interpretation of the diagram. It is evident that a gauge fixing consists of defining a slice that cuts all the gauge histories, represented by dotted lines, once and only once.
Naively, what is needed in order to change a gauge-fixing choice (say, _e.g._, we want to switch between relational evolution in \(q\) and \(t\) time) is to go back to the non-gauge-fixed level of the constraint surface (through the map \(\phi_{q}^{-1}\)), thus reintroducing a description in terms of the coordinate time \(\tau\) and then fix a new gauge to choose the new temporal variable (through the map \(\phi_{t}\)). Of course, if we want to continue describing relational evolution in terms of the same gauge-selected internal clock, we are still free to go back to the non-gauge-fixed level, operate a passive diffeomorphism on the coordinate parameter \(\tau\) and then choose _the same_ gauge condition as before. We will see in the next section an example of this procedure within a cosmological setting.
### Minisuperspace Bianchi I model
In the following, we will clarify in which sense a change of **DRF** can be implemented by a passive diffeomorphism. We will also provide a general-relativistic example of a change of **DRF** implemented by an active diffeomorphism. As mentioned in the introduction, and as we will make explicit below, we will actually be dealing with the case of **gDRF**. This is made clear by the fact that Bianchi's models are vacuum models. For the sake of
convenience, however, we will refer to generic **DRFs** in the remainder of the discussion, as the material-gravitational **DRF** distinction, while conceptually of great importance, is not decisive for the purpose of this section.
Let us consider a simple Minisuperspace model, corresponding to a Bianchi I model with two equal scale factors. This means that the configuration space consists only of one anisotropy variable \(\beta(\tau)\) and the volume-of-the-Universe variable \(\alpha(\tau)\). Notice that the dynamical degrees of freedom, due to the model homogeneity, depend on coordinate time \(\tau\) only. This system is formally similar to a parametrized system, as is evident once we make the action explicit
\[S=\int d\tau\left(p_{\alpha}\dot{\alpha}+p_{\beta}\dot{\beta}-N(\tau)\left[e^{ -3\alpha/2}\left(-p_{\alpha}^{2}+p_{\beta}^{2}\right)\right]\right), \tag{5}\]
where \(\overset{\cdot}{(...)}\equiv d(...)/d\tau\) and the \(p\)'s are the conjugate momenta of the corresponding variables. We observe that the extended Hamiltonian
\[H_{E}=Ne^{-3\alpha/2}(-p_{\alpha}^{2}+p_{\beta}^{2}) \tag{6}\]
constitutes a first-class secondary constraint. By operating a deparametrization through a gauge choice on either variable \(\alpha\) or \(\beta\), we recover the so-called reduced phase space formulation of the theory. It is clear that we are completely free to choose either \(\alpha\) or \(\beta\) as possible internal clocks (thought of as temporal **DRFs**) with respect to which the cosmological dynamics can be described. Evidently, in line with the discussion above, in this case we are dealing with two reference frames understood as degrees of freedom of the same physical system: the gravitational field under the assumptions of homogeneity, anisotropy and absence of matter. Let us assume we want to choose the variable \(\alpha\) as the internal clock. Using Hamilton's equations
\[\begin{cases}\dot{\alpha}=-2p_{\alpha}Ne^{-3\alpha/2}\,,\,\dot{p}_{\alpha}=0\\ \dot{\beta}=2p_{\beta}Ne^{-3\alpha/2}\,,\,\dot{p}_{\beta}=0\end{cases} \tag{7}\]
we impose the following (partial) gauge condition
\[\dot{\alpha}=-2Np_{\alpha}e^{-3\alpha/2}\equiv 1, \tag{8}\]
which states just the coincidence (aside from a non-physical constant) of the variable \(\alpha\) with the label time \(\tau\). Imposing the gauge condition (8) turns out to be equivalent to fixing the following expression for the lapse function:
\[N(\alpha)\equiv-\frac{e^{3\alpha/2}}{2p_{\alpha}}\,. \tag{9}\]
Exactly as in the case of the parametrized particle given in Section 3, here we are able to solve Hamilton's equations. Thus, we can invert the relation \(\alpha(\tau)\leftrightarrow\tau(\alpha)\) and explicitly
write the relational observable \(\beta(\alpha)\). However, for the sake of our argument we choose to write relational observables via a complete gauge-fixing procedure. This amounts to choosing a _specific_ fixed value of \(\alpha\equiv\alpha_{0}\), thus going to the reduced phase space. Here, this procedure consists of imposing the following gauge conditions:
\[\begin{cases}\dot{\alpha}\equiv 1\\ \alpha(\tau)\equiv\alpha_{0},\end{cases} \tag{10}\]
where the first condition represents the partial gauge-fixing implemented by the choice of the Lapse and the second one indicates a choice of a particular value of the quantity \(\alpha(\tau)\) along the gauge orbit on the constraint surface. As a result, the relational observable \(\beta(\alpha)\) is written as an evolving constant \(\beta(\tau)|_{\alpha(\tau)=\alpha_{0}}:=\beta(\tau)-(p_{\beta}/p_{\alpha})[ \alpha_{0}-\alpha(\tau)]\).
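For illustration (our own addition, not part of the original derivation), note that dividing the two Hamilton equations (7) gives \(d\beta/d\alpha=-p_{\beta}/p_{\alpha}\), which is exactly the slope appearing in the evolving constant above. Its gauge invariance can also be checked symbolically: its Poisson bracket with the Hamiltonian constraint (6) (with the lapse factored out) is proportional to the constraint itself, hence it vanishes weakly on the constraint surface. A minimal SymPy sketch:

```python
import sympy as sp

alpha, beta, p_a, p_b, alpha0 = sp.symbols('alpha beta p_alpha p_beta alpha_0', real=True)

# Hamiltonian constraint of the Bianchi I minisuperspace model (lapse stripped off)
H = sp.exp(-sp.Rational(3, 2) * alpha) * (-p_a**2 + p_b**2)

# Evolving constant beta(tau)|_{alpha(tau)=alpha_0}
O = beta - (p_b / p_a) * (alpha0 - alpha)

def poisson(f, g):
    """Poisson bracket on the (alpha, p_alpha; beta, p_beta) phase space."""
    return (sp.diff(f, alpha) * sp.diff(g, p_a) - sp.diff(f, p_a) * sp.diff(g, alpha)
            + sp.diff(f, beta) * sp.diff(g, p_b) - sp.diff(f, p_b) * sp.diff(g, beta))

bracket = sp.simplify(poisson(O, H))
# The bracket is proportional to H itself, so it vanishes on the constraint surface H = 0
print(sp.simplify(bracket / H))   # a regular phase-space function, with no dependence on H
```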
Recalling the link between the variable \(\alpha\) and the volume of the Universe \(V\propto e^{3\alpha/2}\), we see that the choice \(\dot{\alpha}\equiv+1\) corresponds to an expanding Universe, while the choice \(\dot{\alpha}\equiv-1\) corresponds to a collapsing one. We now demonstrate that we can switch from the gauge \(\{\dot{\alpha}\equiv 1;\alpha=\alpha_{0}\}\) to the gauge \(\{\dot{\alpha}\equiv-1;\alpha=\alpha_{0}\}\), that is from the expanding Universe to the contracting Universe case, through a passive diffeomorphism. Such a change of gauge can be interpreted as a change of **DRF** in the same sense as a frame re-coordination [Earman and Glymour(1978)].
Starting from the reduced phase space framework, we have to return to the non-gauge-fixed level where the dynamics is encoded by both variables \(\{\alpha(\tau),\beta(\tau)\}\). Subsequently, we perform the \(\tau^{\prime}(\tau)=-\tau\) temporal passive diffeomorphism, and finally we fix again the same complete gauge choice (10), but now relative to coordinate time \(\tau^{\prime}\).
In the notation used at the beginning of the section, such a change is implemented by the map \(\phi_{\alpha}\circ\gamma_{\tau\rightarrow-\tau}\circ\phi_{\alpha}^{-1}\equiv \gamma_{\tau\rightarrow-\tau}\), which turns out to be a passive diffeomorphism, as we wanted to show. Here, the passive diffeomorphism composite map which changes the coordinate \(\tau\) in \(\tau^{\prime}=-\tau\) is represented by the symbol \(\gamma_{\tau\rightarrow-\tau}\). The map \(\phi_{\alpha}\) indicates the complete gauge choice (10). Of course, the map \(\phi_{\beta}\circ\phi_{\alpha}^{-1}\) from the gauge choice \(\{\dot{\alpha}\equiv 1;\alpha=\alpha_{0}\}\) to the gauge choice \(\{\dot{\beta}\equiv 1;\beta=\beta_{0}\}\) is not a passive diffeomorphism, but an active diffeomorphism.
We point out that with such an example we also made it clear that \(\alpha\) meets all the requirements of a **(g)DRF**, since its equations of motion are employed in the gauge-fixing procedure and the stress-energy tensor related to \(\alpha\) is necessarily neglected, since it is not even definable. This is due to the well-known fact that in GR there is no meaningful local expression for the gravitational stress-energy tensor. Therefore, in such a cosmological sector it is possible to use gravitational degrees of freedom to define a dynamical reference frame, as we have done with the variable \(\alpha\) playing the role of the internal relational clock. In such a case, it is as if the gravitational field plays the role of both the dynamical variable and the reference frame relative to which the dynamics takes place.
## 7 Conclusion
We reviewed the notion of reference frame in physics and set out the need for the separation between the concepts of reference frame and coordinate system within GR.
We proposed three distinct classes of reference frames in GR, according to their increasing physical role in the gravitational dynamics. Indeed, we considered as 'idealised' (**IRF**) those reference frames whose physical nature does not enter in any way into the physical picture, as 'dynamical' (**DRF**) those which are associated with a specific set of dynamical equations, and as 'real' (**RRF**) those whose stress-energy tensor also contributes to the EFEs.
We stressed how reference frames are often confused with coordinates in theoretical and experimental practice. This is because reference frames are approximated as **IRFs** and are supposed to coincide with coordinate systems. An **IRF** behaves _as if_ it were a coordinate system. However, coordinates are representational tools of **IRFs**, which are constituted in fact by a set of partial observables to which a measuring device can be assigned. The difference between a coordinate system and a reference frame is also exemplified by the fact that a reference frame is employed within a relational approach and its degrees of freedom represent 'physical coordinates', whereas a coordinate system is a mathematical structure without a physical referent.
We proposed a comparison between the orthodox definition of a reference frame and the Brown-Kuchar dust. This discussion showed that the two definitions are similar. Nevertheless, the orthodox definition has some formal and conceptual difficulties in defining relational observables.
Finally, we presented a rather formal method to determine the difference between coordinate systems and **DRFs**. In short, a change of coordinate system is directly implemented by a passive diffeomorphism. A change of **DRF** can also be represented by a map which links different gauge-fixings. Only in the case where we do not change the choice of dynamical variables to be gauge-fixed does a passive diffeomorphism directly represent a **DRF** change, which can be understood analogously to a frame re-coordination. This was clearly demonstrated using a simple Bianchi I Universe model.
The role of reference frames, as defined in this paper, has implications both for the increasingly studied notion of quantum reference frame and for future discussions on the nature of vacuum solutions of EFEs. In particular, it remains to be clarified how vacuum solutions can be reconsidered in terms of matter solutions where the stress-energy tensor is neglected. Consequently, the role of 'idealised matter' in the derivation of EFEs solutions in vacuum, such as the Schwarzschild solution describing a black hole, will have to be the subject of further analysis. Likewise, it remains to be clarified why and to what extent the approximation of reference frames as mere coordinates works so well that the difference between the two concepts can be overlooked. The measurement of gravitational waves, which is one of the greatest experimental successes in GR, is the clearest example of such a conundrum.
By introducing **gDRFs**, our work paves the way for future work on the analysis of gravitational, non-material reference frames. The close connection to the problem of defining a stress-energy tensor for the gravitational field underlines the relevance of these issues to the foundations of GR. In summary, in this paper we stated the _necessity_ of distinguishing between the terms reference frame and coordinate system in GR. This distinction, while conceptually relevant, is sometimes blurred because in some cases, as in the case of **IRFs**, it has no bearing on theoretical practice. In other circumstances it becomes unavoidable in light of the relevant result of being able to write local Dirac observables in GR. However, clarifying this distinction and refining the definitions provided in the literature so far may have significant implications for the foundations of the Einsteinian theory of gravity.
2306.06322 | Towards Arabic Multimodal Dataset for Sentiment Analysis | Multimodal Sentiment Analysis (MSA) has recently become a centric research
direction for many real-world applications. This proliferation is due to the
fact that opinions are central to almost all human activities and are key
influencers of our behaviors. In addition, the recent deployment of Deep
Learning-based (DL) models has proven their high efficiency for a wide range of
Western languages. In contrast, Arabic DL-based multimodal sentiment analysis
(MSA) is still in its infantile stage due, mainly, to the lack of standard
datasets. In this paper, our investigation is twofold. First, we design a
pipeline that helps building our Arabic Multimodal dataset leveraging both
state-of-the-art transformers and feature extraction tools within word
alignment techniques. Thereafter, we validate our dataset using
state-of-the-art transformer-based model dealing with multimodality. Despite
the small size of the outcome dataset, experiments show that Arabic
multimodality is very promising | Abdelhamid Haouhat, Slimane Bellaouar, Attia Nehar, Hadda Cherroun | 2023-06-10T00:13:09Z | http://arxiv.org/abs/2306.06322v1 | # Towards Arabic Multimodal Dataset for Sentiment Analysis
###### Abstract
Multimodal Sentiment Analysis (MSA) has recently become a centric research direction for many real-world applications. This proliferation is due to the fact that opinions are central to almost all human activities and are key influencers of our behaviors. In addition, the recent deployment of Deep Learning-based (DL) models has proven their high efficiency for a wide range of Western languages. In contrast, Arabic DL-based MSA is still in its infantile stage due, mainly, to the lack of standard datasets. In this paper, our investigation is twofold. First, we design a pipeline that helps build our Arabic multimodal dataset, leveraging both state-of-the-art transformers and feature extraction tools within word alignment techniques. Thereafter, we validate our dataset using a state-of-the-art transformer-based model dealing with multimodality. Despite the small size of the outcome dataset, experiments show that Arabic multimodality is very promising.
Sentiment Analysis, Multimodal Learning, Transformers, Arabic Multimodal Dataset.
Footnote †: This research is performed under the MESRS Project PRFU: C00L07N030120220002
## I Introduction
The field of Multimodal Machine Learning (MML) has been growing rapidly over the past few decades, driven by the increasing availability of multimodal data and the need for more sophisticated and effective machine learning models. MML entails integrating and modeling data from various modalities (text, audio, image, video).
Multimodal Sentiment Analysis (MSA), for instance, is an important and growing area of MML that aims to automatically determine the sentiment expressed in various modalities [1]. Early works in MSA deal with feature extraction and fusion processes in straightforward ways using standard machine learning algorithms [2, 3]. Over time, more complex methods, mainly deep learning models, were developed, such as CNN [4], RNN and its architectural variants [5], and Multimodal Multi-Utterance models [6].
Arabic MSA is a promising area for academic research and practical applications due to the widespread use of the Arabic language and the increasing popularity of multimedia content. Furthermore, Arabic MSA is challenging due, on the one hand, to the complexity and the richness of the Arabic language, and on the other hand, to the significant cultural and linguistic variety of the Arab world. Therefore, it is still in its infancy [7].
Despite these challenges, there has been some limited work in Arabic MSA that has shown promising results [8, 9]. Hence, there is still much room for improvement in accuracy, efficiency, flexibility, and the ability to handle diverse modalities. In this paper, our investigation is twofold:
1. First, we design a pipeline that facilitates the construction of a novel Arabic multimodal dataset. We accomplish this by leveraging state-of-the-art transformers and feature extraction tools alongside word alignment methods.
2. To assess the effectiveness of our Arabic multimodal dataset, we employ cutting-edge transformer models that are intended to handle multimodality.
The remainder of this paper is structured as follows. Section II introduces some basic concepts concerning SA and MML required to understand the rest of the paper. Section III provides an overview of previous research on English and Arabic MSA. In Section IV, we describe the proposed methodology for multimodal dataset collection. We also present the models that were used to evaluate the designed dataset. Section V deals with experiments and interpretation of the empirical findings. Finally, Section VI outlines the conclusions and future works.
## II Preliminaries
Before diving into the details of our approach, we start with the terminologies and background concepts that concern Sentiment Analysis (SA) and Multimodal Machine Learning
(MML) elements: data representation, modality fusion methods, alignment, and pre-trained models.
### _Sentiment Analysis_
Sentiment Analysis (SA), also referred to as opinion analysis, is the process of obtaining and examining the views, ideas, and perceptions of the public on a wide range of topics, products, subjects, and services. Corporations, governments, and people may all benefit from public opinion when gathering data and making choices based on it [10].
Let us mention that the words _emotion_ and _sentiment_ are usually used interchangeably in daily life, while they are in fact two different concepts. _Emotion_ is defined as a complex psychological state. There are six basic emotions, i.e., happiness, sadness, anger, fear, surprise, and disgust. This list is enriched by adding emotions such as pride, excitement, embarrassment, contempt, and shame. On the other hand, _sentiment_ describes a mental attitude that is founded on emotion [11]. Positive, neutral, and negative are the three fundamental polarities, and SA is therefore often framed as a polarity categorization task. There are several methods for performing SA, including rule-based methods, machine learning-based methods, and hybrid approaches. Some popular machine learning-based methods include Naive Bayes, Support Vector Machines (SVM), and Deep Learning-based (DL) models. DL-based models, however, have proven the most efficient and constitute the current state-of-the-art (SOTA).
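To give a concrete flavour of the classical machine learning route, a simple polarity classifier can be obtained by feeding TF-IDF features to a linear SVM. The toy corpus and labels below are hypothetical examples of ours, not data used in this paper; the snippet is only a minimal sketch of the approach:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny hypothetical corpus with polarity labels (for illustration only)
texts = ["I love this product", "Terrible service, very disappointed",
         "Absolutely fantastic experience", "Worst purchase I have ever made"]
labels = ["positive", "negative", "positive", "negative"]

# Classical baseline: TF-IDF features followed by a linear SVM classifier
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(texts, labels)

print(clf.predict(["I love this movie"]))  # most likely ['positive'] on this toy data
```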
Human natural perception refers to our ability to perceive and understand information from multiple modalities in a seamless and integrated way, such as seeing a picture and hearing a sound simultaneously to understand a concept. Multimodal SA aims to replicate this natural perception by combining information from multiple modalities (text, audio and image/video, and more) to improve the accuracy and efficiency of learning systems.
### _Multimodal Machine Learning_
Multimodal Machine Learning (MML) involves integrating and modeling multiple communicative modalities, such as linguistic (text), acoustic (sound), and visual (image and video) data, from a variety of diverse and interconnected sources [12]. By leveraging the strengths of different modalities, multimodal learning can help overcome the limitations of individual modalities and enhance overall learning performance.
Liang et al. proposed a taxonomy of six core features in MML, namely modality representation, alignment, reasoning, generation, transference, and quantification [13], which are understudied in conventional unimodal machine learning. Considering their importance for our study, we focus on two of these features: i) Representation: we focus mainly on which representation is adequate for each modality, and then on how and when to fuse and integrate information from two or more modalities, effectively reducing the number of separate representations. ii) Alignment: alignment between modalities is also challenging and involves identifying connections between modality elements.
#### II-B1 Fusion Methods
Basically, there are two main methods for fusing modalities. The first is _Early Fusion_, in which the modalities are combined before any decision is made, e.g., via concatenation, summation, or a cross-attention mechanism. The second is _Late Fusion_, which makes a prediction based on each modality alone and then combines the individual decisions to get a final prediction [14]. In our approach, we deploy an early fusion strategy.
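The sketch below contrasts the two strategies in PyTorch. The feature dimensions, the number of classes, and the averaging rule for late fusion are illustrative assumptions of ours, not choices taken from this paper:

```python
import torch
import torch.nn as nn

TEXT_DIM, AUDIO_DIM, VISUAL_DIM, NUM_CLASSES = 768, 74, 35, 3  # illustrative sizes

class EarlyFusion(nn.Module):
    """Concatenate modality features first, then classify the joint representation."""
    def __init__(self):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(TEXT_DIM + AUDIO_DIM + VISUAL_DIM, 128),
            nn.ReLU(),
            nn.Linear(128, NUM_CLASSES),
        )

    def forward(self, text, audio, visual):
        return self.classifier(torch.cat([text, audio, visual], dim=-1))

class LateFusion(nn.Module):
    """Classify each modality on its own, then combine the per-modality decisions."""
    def __init__(self):
        super().__init__()
        self.text_head = nn.Linear(TEXT_DIM, NUM_CLASSES)
        self.audio_head = nn.Linear(AUDIO_DIM, NUM_CLASSES)
        self.visual_head = nn.Linear(VISUAL_DIM, NUM_CLASSES)

    def forward(self, text, audio, visual):
        # Simple combination rule: average the per-modality logits
        return (self.text_head(text) + self.audio_head(audio) + self.visual_head(visual)) / 3
```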
#### II-B2 Pre-trained Models
The deployment of semantic and Deep Learning-based approaches generally leads us to use pre-trained models such as GloVe, multimodal bi-transformer (MMBT) models [15], CLIP [16], BERT [17], and AraBERT [18]. For the purpose of this paper, we have deployed AraBERT.
BERT, which is the basis of AraBERT, is a Bidirectional Encoder Representation from Transformers developed by Google [17]. BERT-large encompasses \(24\) encoder blocks with \(16\) bidirectional self-attention heads, trained on unlabeled data extracted from the BooksCorpus and English Wikipedia.
AraBERT [18] is a pre-trained BERT transformer built for Arabic NLP tasks. It is trained on \(70\) million sentences, corresponding to \(24\)GB of Arabic text. AraBERT uses the same configuration as BERT: it has \(12\) encoder blocks, \(768\) hidden dimensions, \(12\) attention heads, a \(512\) maximum sequence length, and \(110\)M parameters.
## III Related Work
This section reviews the relevant studies in the field of multimodal sentiment analysis (MSA), including traditional, machine learning, and deep learning approaches for both English and Arabic languages.
### _English Multimodal Sentiment Analysis_
The study of Zadeh [2] is considered one of the pioneering works in the field of MSA; it is the first to tackle the challenge of tri-modal (visual, audio, and textual) sentiment analysis. The author created a dataset of 47 videos from YouTube, each annotated with a positive, negative, or neutral label. Moreover, the paper identifies a specific subset of audio-visual features relevant to sentiment analysis and gives guidance for integrating these features. In the experiments, the author uses a Hidden Markov Model (HMM) classifier. The findings demonstrate the promise of MSA despite the small size of the dataset and the straightforward text analysis method.
Poria et al. [3] propose a novel methodology for performing MSA based on sentiment extraction from online videos, using the dataset initially created by [2]. The authors describe the feature extraction process for the various modalities (text, audio, and visual); these features are fused using different techniques (feature-level and decision-level). The authors used multiple supervised machine learning classifiers (Support Vector Machine (SVM), Extreme Learning Machine (ELM), Naive Bayes (NB), and Neural Networks) to validate their approach. Finally, a comparative study carried out on the selected dataset reveals that their proposed MSA system outperforms the then state-of-the-art systems, with the best performance achieved by the Extreme Learning Machine (ELM) method.
The study in [4] provides a detailed review that explores the applicability, challenges, and issues for textual, visual, and MSA using CNNs. Several enhancements have been proposed, such as combining CNN and long short-term memory (LSTM) techniques.
Tembhurne and Diwan [5] study the role of sequential deep neural networks in MSA. They thoroughly examined applicability, problems, issues, and methodologies for textual, visual, and MSA based on RNN and its architectural variants.
Recently, Abdu et al. [6] drew up a survey on MSA using deep learning. They categorized \(35\) cutting-edge models, recently proposed for video sentiment analysis, into eight categories based on the specific architecture employed in each model. After a detailed examination of the results, the authors conclude that the _Multimodal Multi-Utterance_ based architecture is the most powerful for the task of MSA.
Before concluding this section, we point out that the two models we use to evaluate our dataset, the transformer-based Multimodal transformer (Mult) [19] and the LF-LSTM [20], are described in Section IV-B.
### _Arabic Multimodal Sentiment Analysis_
In contrast to MSA studies for the English language, work on the Arabic language encompasses only a limited number of studies.
Najadat and Abushaqra [8] aim to address the issue of MSA for Arabic. They start by building their dataset from YouTube, extracting different features (linguistic, audio, and visual) from the collected videos, and augmenting the data using the _Weka_ re-sampling option. For training and testing purposes, the authors annotate their dataset with positive, negative, and neutral polarities. In the experimental stage, the authors use different machine learning classifiers (Decision Trees, Support Vector Machine (SVM), k-Nearest Neighbor (KNN), Naive Bayes (NB), and Neural Networks). The obtained results reveal that the Neural Network classifier performs best when using only the audio modality, and could be further improved by enriching the dataset with more features.
In their paper [9], Alqarafi et al. tackle the problem of sentiment analysis in online opinion videos for Modern Standard Arabic. They begin by constructing their Arabic Multimodal Dataset (AMMD) from \(40\) different YouTube videos. They first feed their dataset with the extracted features (text, video), and then add metadata about the videos, including audio, transcription, visual motions, and sentiment polarities. To conduct experiments, the authors use a Support Vector Machine (SVM) classifier. Despite the limited size of the dataset, the experimental results demonstrate the validity of the constructed dataset. Additionally, the results indicate that for several sentiment analysis tasks, including subjectivity and polarity classification, the fusion of different features (utterance, visual) improves performance compared to using utterance features alone.
## IV Methodology
In order to tackle the problem of Arabic Multimodal Sentiment Analysis _AMSA_, in our study, we conduct two main investigations.
First, we design a pipeline that eases building a multimodal dataset for sentiment analysis, one that respects dataset collection engineering and harnesses transformers and SOTA feature extraction tools (Section IV-A).
Second, we assess our built dataset using SOTA transformer-based models that deal with multimodality. The transformers are chosen to leverage inherent semantics while multimodality is deployed to improve the sentiment learner (Section IV-B).
### _Multimodal Dataset collection Methodology_
As mentioned above, our targeted multimodal dataset for Arabic Sentiment Analysis involves dataset collection engineering and aims to leverage transformers and SOTA feature extraction techniques. We propose the following generic pipeline to build the multimodal dataset:
1. Data Inventory, Collection, and Preprocessing.
2. Annotation.
3. Data representation.
Our methodology is inspired by both MOSEI [21] and CMU-MOSI [22] dataset-building processes while taking into account Arabic specificities.
#### IV-A1 Data Inventory, Collection, and Preprocessing
We rely mainly on videos from YouTube and social media platforms, which provide various information such as audio, visual gestures, metadata, and possibly transcripts. First, we identify sources that guarantee subjective passages, such as the channels of video-bloggers, political analysts, and influencers; we also rely on some TV talk shows. To gather a large number of videos, we automatically build search lists and use APIs to ease scraping the videos and their related metadata.
For our targeted NLP task, pre-processing the collected videos includes objective segment removal, speech extraction, text extraction, and video/audio segmentation. All these processing steps are semi-automatic, using open-source tools together with some manual intervention.
#### IV-A2 Annotation
Annotating the polarity of video segments, as well as their associated text and speech, is the most challenging and resource-intensive task in our work, requiring a significant amount of time and resources to do accurately. We have opted to rely on manual crowdsourcing and manual annotation through a homemade platform. A guideline was devised in order to make the annotation uniform. In this step, we use the classic polarities \([-1,0,1]\) for negative, neutral, and positive sentiments, respectively. The annotation is evaluated through the standard automatic Inter-Annotator Agreement method.
#### IV-A3 Data Representation
The three targeted modalities are represented in such a way that they exhibit more information on the inherent sentiment.
**Text**
Concerning the text, one can use either word-embedding or pretrained transformers. However, the latter allows learning contextual relationships between words in a sentence through a bidirectional attention mechanism. This means that we represent words taking into account both the left and right context of each word in a sentence, giving it a more comprehensive understanding of the semantic meaning and providing more accurate representations of textual modality.
#### 4.1.1 Visual Features
The combination of body gestures and facial features can convey a more nuanced range of sentiments and emotions.
Body gestures refer to physical movements of a person's body, such as open/crossed arms, nodding, shaking the head, and shrugging shoulders. Facial features refer to the characteristics of a face, such as facial expressions and movements, that are used to convey the emotions and sentiments of an individual. These features commonly include: smiling, frowning, raised eyebrows, squinted eyes, lip biting, tears, and blushing.
For this version of our pipeline, we have opted to rely on facial features, as they are the most commonly used features and are also easier to capture from videos than body gestures. Facial features can be extracted using computer vision techniques and may include measurements of facial landmarks, facial action units, and head movements. The main descriptors we extract are:
* Action Units (AUs): these are facial muscle movements that are associated with various facial expressions such as brow raise, lip stretch, and eye closure.
* Head pose: it estimates the orientation of the head in three dimensions, including pitch, yaw, and roll.
* Eye gaze: it captures the direction of the eye gaze, including the location of the gaze and the direction of the gaze vector.
* Facial Action Coding System (FACS): FACS is a system that describes facial expressions based on AUs.
* Facial symmetry: it informs about the symmetry of the face by comparing the left and right sides of the face.
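As a concrete illustration of this step, the following minimal sketch assumes the OpenFace 2.x `FeatureExtraction` binary is installed and that the output column names follow its usual conventions; it runs the toolkit on one video segment and keeps only the AU, head-pose, and gaze descriptors.

```python
from pathlib import Path
import subprocess
import pandas as pd

def extract_facial_features(video_path: str, out_dir: str) -> pd.DataFrame:
    """Run OpenFace on one video segment and keep AU, head-pose and gaze descriptors."""
    # Assumes the OpenFace 2.x FeatureExtraction binary is installed and on the PATH.
    subprocess.run(["FeatureExtraction", "-f", video_path, "-out_dir", out_dir], check=True)

    csv_path = Path(out_dir) / (Path(video_path).stem + ".csv")
    df = pd.read_csv(csv_path)
    df.columns = [c.strip() for c in df.columns]   # OpenFace pads column names with spaces

    # Column names follow OpenFace output conventions (AU*_r/_c, pose_R*, gaze_*).
    keep = [c for c in df.columns
            if c.startswith("AU") or c.startswith("pose_R") or c.startswith("gaze_")]
    return df[keep]
```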
#### 4.1.2 Acoustic Features
The speech extracted from the video can be characterized at different levels: acoustic, phonological, or prosodic. The acoustic features are the most effective ones as they are language-independent, relying only on the physical properties of the signal. Prosodic features, however, are also essential, as they capture cues related to the emotion in the speaker's speech. These features are commonly used in speech recognition, speaker identification, and sentiment and emotion recognition systems. In our study, we extract the following main features:
* Mel-Frequency cepstral coefficients (MFCCs): A set of coefficients that represent the spectral envelope of a speech signal.
* Prosody: These include fundamental frequency (F0), speaking rate, and energy.
* Voice quality: These include jitter, shimmer, harmonic-to-noise ratio (HNR), and glottal waveform features.
* Emotion-related: These include pitch slope, pitch variance, and various modulation features.
* Spectral: These include spectral centroid, spectral flux, and spectral roll-off.
* Formant features: These include the first three formants, which are resonant frequencies of the vocal tract.
* Timing features: These include various measures of speech timing, such as pause duration and speech rate.
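A minimal sketch of this extraction step is shown below using the `opensmile` Python wrapper; the eGeMAPS feature set and the segment-level functionals are illustrative assumptions, since the exact 52-feature configuration used for our dataset is not reproduced here.

```python
import opensmile

# Segment-level acoustic functionals: the eGeMAPS set covers MFCC-related, prosodic (F0,
# loudness), voice-quality (jitter, shimmer, HNR) and spectral descriptors.
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,
    feature_level=opensmile.FeatureLevel.Functionals,
)

def extract_acoustic_features(wav_path: str):
    """Return one row of segment-level acoustic functionals for a speech file."""
    return smile.process_file(wav_path)
```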
#### 4.1.3 Alignment Techniques
One of the crucial aspects of the MML is the alignment of multimodal data, which involves synchronizing the different modalities, such as text, audio, and video. This is typically done by aligning the timestamps of each modality and mapping them onto a common timeline.
Since text is a key modality in our work, we perform this alignment in two stages. In the first stage, we align text and audio at the word level. In the second stage, we align video and text. We thus obtain a global alignment over the common text modality. Both stages use forced alignment techniques.
#### 4.1.4 Forced Alignment Text-Audio
Within this alignment, a transcript is synchronized with an audio recording by mapping each speech segment to its corresponding words. This process of forced alignment typically involves breaking down the audio and transcript into smaller segments and using algorithms to compare the speech and text segments to determine their correspondence. The algorithms consider various factors, such as speech timing, pronunciation of words, and speech sounds. After the forced alignment process, we get a time-stamped representation of speech.
#### 4.1.5 Pivot-Based Multimodal Alignment
For effective handling of multimodal time series data featuring multiple views at different frequencies, it is crucial to align them to a designated "pivot" modality, which is typically done through textual modality. This involves grouping feature vectors from other modalities into bins based on the timestamps of the pivot modality and then applying a specific processing function, known as the "collapse function", to each bin. This function, often a pooling function, merges multiple feature vectors from another modality into a single vector, resulting in sequences of equal lengths across all modalities (matching the length of the pivot modality) in all-time series.
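The following sketch illustrates the collapse step under the assumption that the pivot is the text modality and that mean pooling is used as the collapse function:

```python
import numpy as np

def align_to_pivot(frame_feats, frame_times, word_intervals, collapse=np.mean):
    """Align frame-level features (audio or visual) to word-level pivot intervals.

    frame_feats:    (n_frames, d) feature matrix from one modality
    frame_times:    (n_frames,) timestamp of each frame in seconds
    word_intervals: list of (start, end) times, one per word of the transcript (the pivot)
    collapse:       pooling ("collapse") function applied to each bin
    """
    aligned = []
    for start, end in word_intervals:
        mask = (frame_times >= start) & (frame_times < end)
        if mask.any():
            aligned.append(collapse(frame_feats[mask], axis=0))
        else:  # no frame falls inside this word interval
            aligned.append(np.zeros(frame_feats.shape[1]))
    return np.stack(aligned)   # (n_words, d): same length as the pivot modality
```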
### _Models_
In this section, we provide a detailed explanation of the selected models used to validate our dataset. Our explanation includes a discussion of crucial multimodal learning (MML) techniques used in these models, such as fusion, modeling, and alignments.
The initial state-of-the-art model, known as the Multimodal transformer (Mult) [19], is a transformer-based model that deploys an attention mechanism. It allows each element of the input sequence \(X_{i}\) to attend to all the other elements, resulting in a new weighted sequence \(\widehat{X}_{i}\). This process is referred to as
self-attention, as it enables the elements to focus on the most relevant ones, \(i\) representing the modality among {text, video, audio}.
Mult integrates these \(\widehat{X}_{t}\), \(\widehat{X}_{a}\), and \(\widehat{X}_{v}\) by utilizing a pairwise feed-forward approach. This is achieved through the implementation of a deep Cross Attention Block (CAB), defined below, where each modality pair is fed into two CABs that alternate between the Query, Key, and Value matrices. Before making predictions, the Multimodal Transformer model concatenates the \(Z_{i\to j}\) and \(Z_{k\to j}\) matrices obtained from the CABs, where \(i,\ j,\) and \(k\) represent distinct modalities. This process results in the \(Z_{j}\) matrix. As previously mentioned, the prediction classifier is applied to the concatenated or summed \(\widehat{Y}\), which is composed of the \(Z_{t}\), \(Z_{a}\), and \(Z_{v}\) matrices, where \(t\), \(a\), and \(v\) refer to text, audio, and video, respectively.
Cross Attention Block (CAB): In order to implement the fusion of multiple modalities, cross-modal attention allows the model to focus on relevant information from each modality and to weigh the contribution of the different modalities in the final prediction. CAB can be expressed with the mathematical formulation below. Let us consider an MML model with two modalities, \(X\) and \(Y\). The query, key, and value matrices for each modality can be represented as follows:
\[Q_{X}=W_{X}^{Q}X,\quad K_{X}=W_{X}^{K}X,\quad V_{X}=W_{X}^{V}X \tag{1}\]
\[CAB_{x\to y}=Softmax(\frac{Q_{x}.K_{y}^{T}}{\sqrt{d_{k}}}).V_{y}=Z_{x\to y}, \tag{2}\]
where \(W_{X}^{Q}\), \(W_{X}^{K}\), \(W_{X}^{V}\), \(W_{Y}^{Q}\), \(W_{Y}^{K}\), \(W_{Y}^{V}\) are the weight matrices for the query, key, and value computations of each modality, and \(Q,K,V\in\mathbb{R}^{l\times d}\), where \(d\) is the modality dimension and \(l\) is the length of the input sequence \(X\).
Fig. 1 illustrates the cross attention block. The main purpose of using these two sub-layers before and after CAB layers is to make our model focus only on dependent features.
In light of Equations (1) and (2) described above, we can express for each modality latent representation \(Z_{i}\) by Equation (3).
\[Z_{x}=[CAB_{y\to x};CAB_{k\to x}] \tag{3}\]
where \(x\), \(y\), and \(k\) are all possible modalities.
The output \(\widehat{Y}\) of the Mult model is given by Equation (4):
\[\widehat{Y}=\sum{[Z_{t};Z_{a};Z_{v}]} \tag{4}\]
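For illustration, a minimal PyTorch sketch of the CAB of Equations (1)-(2) and its use to form a modality latent as in Equation (3) is given below; the feature dimension and sequence length are arbitrary, and the sketch omits the sub-layers placed before and after the CAB mentioned above.

```python
import torch
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    """Sketch of the CAB in Eqs. (1)-(2): CAB_{x->y} uses queries from x, keys/values from y."""

    def __init__(self, dim: int):
        super().__init__()
        self.w_q = nn.Linear(dim, dim, bias=False)   # W^Q of the query modality
        self.w_k = nn.Linear(dim, dim, bias=False)   # W^K of the key/value modality
        self.w_v = nn.Linear(dim, dim, bias=False)   # W^V of the key/value modality
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        q, k, v = self.w_q(x), self.w_k(y), self.w_v(y)                     # (l, d)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ v                                                      # Z_{x->y}

# Eq. (3): the latent of the text modality concatenates the two CABs directed at it.
d = 64                                                 # illustrative feature dimension
cab_a2t, cab_v2t = CrossAttentionBlock(d), CrossAttentionBlock(d)
x_t, x_a, x_v = (torch.randn(20, d) for _ in range(3))  # sequences aligned to the pivot
z_t = torch.cat([cab_a2t(x_a, x_t), cab_v2t(x_v, x_t)], dim=-1)  # [CAB_{a->t}; CAB_{v->t}]
```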
On the other hand, we also select a deep learning model that uses a different fusion approach. This model consists of three separate Long Short-Term Memory (LSTM) networks [20], one for each modality (text (t), visual (v), and acoustic (a)). Here we use late fusion: the different modalities are processed separately to obtain their respective feature representations, and these features are then combined using a fusion mechanism to make the final prediction. In the first stage, the model takes the three modalities \(X_{\{t,a,v\}}\) as input and extracts features \(h1_{i},h2_{i}\) from each of them using two stacked LSTMs with a normalization layer between them, as shown in the equations below.
\[i_{t} =\sigma(W_{i}[X_{\{t/a/v\}},h_{t-1}]+b_{i})\] \[f_{t} =\sigma(W_{f}[X_{\{t/a/v\}},h_{t-1}]+b_{f})\] \[o_{t} =\sigma(W_{o}[X_{\{t/a/v\}},h_{t-1}]+b_{o})\] \[g_{t} =\tanh(W_{c}[X_{\{t/a/v\}},h_{t-1}]+b_{c})\] \[c_{t} =f_{t}\odot c_{t-1}+i_{t}\odot g_{t}\]
\[O_{lstm1}(X)=h1_{t} =o_{t}\odot\tanh(c_{t})\] \[O_{lstm2}(h1_{t})=h2_{t} =o_{t}\odot\tanh(c_{t})\]
These features are then concatenated and normalized, before being fed into a fully connected layer with a ReLU activation function and dropout regularization. Finally, the output is generated using another fully connected layer defined as:
\[h1_{i}=O_{lstm1}(X_{i}),\qquad h2_{i}=O_{lstm2}\left(Norm\_Layer(h1_{i})\right),\qquad i\in\{t,a,v\}\]
\[Output\_model=FC\left(Concat[h2_{t};h1_{t};h2_{a};h1_{a};h2_{v};h1_{v}]\right)\]
Where \(x_{t}\) represents the input at time step \(t\), \(h_{t-1}\) represents the hidden state at the previous time step, \(i_{t}\), \(f_{t}\), and \(o_{t}\) represent the input, forget, and output gates at time step \(t\), respectively. The symbol \(\odot\) denotes element-wise multiplication, and \(\sigma\) and \(\tanh\) are the sigmoid and hyperbolic tangent activation functions, respectively.
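A minimal PyTorch sketch of this late-fusion design is given below; the hidden size, dropout rate, number of classes, and the use of LayerNorm as the intermediate normalization layer are illustrative assumptions rather than the exact configuration used in our experiments.

```python
import torch
import torch.nn as nn

class LateFusionLSTM(nn.Module):
    """Sketch of the LF-LSTM: one stacked-LSTM branch per modality with a normalization
    layer between the two LSTMs, followed by late fusion through concatenation."""

    def __init__(self, dims, hidden=64, n_classes=3, p_drop=0.3):
        super().__init__()
        self.lstm1 = nn.ModuleDict({m: nn.LSTM(d, hidden, batch_first=True) for m, d in dims.items()})
        self.norm = nn.ModuleDict({m: nn.LayerNorm(hidden) for m in dims})
        self.lstm2 = nn.ModuleDict({m: nn.LSTM(hidden, hidden, batch_first=True) for m in dims})
        self.head = nn.Sequential(
            nn.Linear(2 * len(dims) * hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, inputs):  # inputs: dict of (batch, seq_len, feat_dim) tensors
        feats = []
        for m in inputs:
            h1, _ = self.lstm1[m](inputs[m])
            h2, _ = self.lstm2[m](self.norm[m](h1))
            feats += [h2[:, -1], h1[:, -1]]        # last-step states h2_i and h1_i per modality
        return self.head(torch.cat(feats, dim=-1))

# Example with the feature dimensions of our dataset (768-d text, 52-d audio, 45-d visual).
model = LateFusionLSTM({"t": 768, "a": 52, "v": 45})
```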
## V Experiments
In this section, we first describe the implementation details of our proposed Arabic multimodal dataset pipeline as well as the collected dataset itself. We then empirically evaluate the built dataset with both the Mult and LF-LSTM models.
Fig. 1: The cross-modal CAB allows the model to integrate input modalities. In this illustration, CAB combines video and text information (\(X_{v}\) and \(X_{t}\)) through an attention mechanism.
### _Data collection_
Following the pipeline designed above, our dataset is gathered from video-bloggers' videos, mainly on YouTube and some other social media platforms. The videos are retrieved and scraped automatically using a predefined list of keywords, which ensures the presence of subjective information in the collected Arabic content.
All videos have been checked manually to keep the ones most suitable for our study. Then, using a homemade collaborative front-end tool 1, we segmented each video by placing _start_ and _end_ markers so that each video segment contains one piece of subjective information. We then extracted from each video segment its Arabic transcription and speech, using both the Klaam tool 2 and the _Almufaroagh_ tool 3 for Arabic speech recognition. The automatically extracted transcripts are also checked manually to fix any transcription errors. The forced and pivot alignments are performed on the fly using the Audacity tool 4.
Footnote 1: [https://github.com/belgats/Arabic-Multimodal-Dataset/](https://github.com/belgats/Arabic-Multimodal-Dataset/)
Footnote 2: [https://github.com/ARBML/kaam](https://github.com/ARBML/kaam)
Footnote 3: [https://almufaroagh.com/](https://almufaroagh.com/)
Footnote 4: [https://www.audacityteam.org/](https://www.audacityteam.org/)
Let us mention that word alignment is a challenging task. The quality of alignment can be negatively affected when dealing with speeches in which words are not fully enunciated by the speaker.
Concerning the annotation process, each segment is labeled by \(5\) in-lab annotators. A guideline was designed to achieve more consistent annotations. The Inter-Annotator Agreement method is applied to assign a final label.
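As an illustration of this step, the sketch below aggregates the five annotators' labels by majority vote and reports Fleiss' kappa as the agreement score; the majority-vote rule and the use of statsmodels are assumptions made for the example rather than a description of our platform.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

def finalize_labels(labels: np.ndarray):
    """labels: (n_segments, 5) matrix, one column per annotator, values in {-1, 0, 1}."""
    counts, _ = aggregate_raters(labels)           # per-segment counts of each category
    kappa = fleiss_kappa(counts)                   # agreement across the 5 annotators
    majority = np.array([np.bincount(row + 1).argmax() - 1 for row in labels])
    return majority, kappa
```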
The resulting three modalities (Video, text, and audio) are then preprocessed to exhibit more information about their inherent sentiment, as described below.
Our method for extracting word vectors, padded to the maximum length, from these transcripts is based on the AraBERT [18] transformer. AraBERT is a BERT-based model that learns contextual relationships between words in a sentence through bidirectional attention mechanisms: words are represented taking into account both the left and right context of each word in a sentence, giving a more comprehensive understanding of the semantic meaning and more accurate representations of the textual modality. Our text embeddings are \(768\)-dimensional vectors.
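A minimal sketch of this extraction is given below using the HuggingFace `transformers` API; the specific AraBERT checkpoint name is an assumption, and any AraBERT variant with 768-dimensional hidden states can be substituted.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Checkpoint name is an assumption; any AraBERT model with 768-d hidden states works here.
name = "aubmindlab/bert-base-arabertv02"
tokenizer = AutoTokenizer.from_pretrained(name)
arabert = AutoModel.from_pretrained(name)

def embed_transcript(text: str, max_len: int = 50) -> torch.Tensor:
    """Return per-token 768-d contextual embeddings, padded to a fixed length."""
    enc = tokenizer(text, padding="max_length", truncation=True,
                    max_length=max_len, return_tensors="pt")
    with torch.no_grad():
        out = arabert(**enc)
    return out.last_hidden_state.squeeze(0)    # (max_len, 768)
```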
Concerning the visual features, we opted for facial features. Using the OpenFace toolkit [23], we extracted \(45\) facial features belonging to the categories described above.
The acoustic features are extracted using the OpenSmile tool 5. We extracted the \(52\) features described previously.
Footnote 5: https://OpenSmilettphys
Table I reports more details on the built dataset.
After that, we construct data formatted as a dictionary of multiple computational sequences using CMU-multimodal SDK [22].
A sample of our formatted dataset is available in our repository 6.
Footnote 6: [https://github.com/belgats/Arabic-Multimodal-Dataset/](https://github.com/belgats/Arabic-Multimodal-Dataset/)
### _Results and Discussion_
The deployed models are measured through three metrics: Accuracy, F1 score and Mean Absolute Error (MAE). MAE tells us the mean absolute difference between predicted sentiment scores and the true sentiment scores.
\[MAE=\frac{\sum_{i=1}^{n}|y_{i}-x_{i}|}{n}\]
Accuracy measures the proportion of true positives and true negatives out of total predictions.
F1 score measures the harmonic mean of precision and recall.
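For completeness, a small sketch of how these three metrics can be computed with scikit-learn is given below; mapping the continuous predictions to the three polarity classes by rounding and using weighted F1 averaging are assumptions of the example.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, mean_absolute_error

def evaluate(y_true: np.ndarray, y_score: np.ndarray):
    """y_true: labels in {-1, 0, 1}; y_score: continuous sentiment predictions."""
    y_pred = np.clip(np.rint(y_score), -1, 1)     # map scores to the three polarity classes
    return {
        "MAE": mean_absolute_error(y_true, y_score),
        "Accuracy": accuracy_score(y_true, y_pred),
        "F1": f1_score(y_true, y_pred, average="weighted"),
    }
```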
Figures 3 and 4 report the performance of the Mult model (respectively the LF-LSTM model) on our Arabic multimodal dataset in terms of Accuracy, F1, and MAE. For each model, four variants are learned: three uni-modal models that consider the Text, Audio, and Video modalities alone, and the TVA multimodal model that fuses the three modalities.
Let us mention that the uni-modal models for Mult are obtained by feeding the features of that specific modality with self-attention so that the CAB is replaced by a self-attention mechanism.
The results show that the TVA Mult-based learner outperforms the uni-modal models on all metrics. It improves accuracy by \(15.15\)%, \(19.63\)%, and \(18.22\)% over the Text, Audio, and Video uni-modal models, respectively, while MAE is improved by 2% to 4% compared to the uni-modal models.
Concerning the LF-LSTM-based models, the same trend is observed: the TVA learner outperforms the uni-modal models on all metrics, although with a smaller improvement. Multimodality enhances the F1 score by \(3.9\)%, \(10.19\)%, and \(5.6\)% over the Text, Audio, and Video uni-modal models, respectively. Furthermore, the MAE decreases by more than \(8.64\)% compared to the text uni-modal model.
One can observe that the achieved multimodal performances are not very high: F1 scores are about \(63.8\)% and \(58.9\)% for the Mult and LF-LSTM-based models, respectively. However, these models clearly show their superiority compared to uni-modal models.
One could argue that these results are impacted by two factors. Firstly, the dataset size is relatively modest and needs to be expanded to ensure greater accuracy. Secondly, the alignment process is highly challenging; as previously mentioned, we encountered significant difficulties with speeches in which the speaker did not fully enunciate words, which negatively affected the word alignment.
Another result worth underlining is the superiority of early fusion of modalities (Mult) compared to late fusion (LF-LSTM), at least for our built dataset. This result is expected, as it has also been confirmed for multimodal models in other languages [19].
## VI Conclusion and Future Work
In this paper, we have addressed the topic of multimodal sentiment analysis, which has the potential to revolutionize our understanding and analysis of human emotions, opening up new avenues for research and practical uses. To address the issue of the scarcity of Arabic multimodal datasets, we have developed a methodology for creating such a dataset. Subsequently, we assessed the effectiveness of our constructed dataset using state-of-the-art transformer models designed to handle multimodality.
Despite the relatively small size of the constructed dataset, the findings show that considering multimodality is crucial for accurate Arabic sentiment analysis.
As further work, we intend to expand our Arabic multimodal dataset to meet the size requirements for deep learning algorithms. Furthermore, we conjecture that enhancing the alignment techniques used in the dataset can considerably improve the accuracy and effectiveness of sentiment analysis.
Fig. 2: Illustration of an instance of our Dataset.
Fig. 3: Performances of Mult-based Models.
|
2304.07645 | Magnitude Invariant Parametrizations Improve Hypernetwork Learning | Hypernetworks, neural networks that predict the parameters of another neural
network, are powerful models that have been successfully used in diverse
applications from image generation to multi-task learning. Unfortunately,
existing hypernetworks are often challenging to train. Training typically
converges far more slowly than for non-hypernetwork models, and the rate of
convergence can be very sensitive to hyperparameter choices. In this work, we
identify a fundamental and previously unidentified problem that contributes to
the challenge of training hypernetworks: a magnitude proportionality between
the inputs and outputs of the hypernetwork. We demonstrate both analytically
and empirically that this can lead to unstable optimization, thereby slowing
down convergence, and sometimes even preventing any learning. We present a
simple solution to this problem using a revised hypernetwork formulation that
we call Magnitude Invariant Parametrizations (MIP). We demonstrate the proposed
solution on several hypernetwork tasks, where it consistently stabilizes
training and achieves faster convergence. Furthermore, we perform a
comprehensive ablation study including choices of activation function,
normalization strategies, input dimensionality, and hypernetwork architecture;
and find that MIP improves training in all scenarios. We provide easy-to-use
code that can turn existing networks into MIP-based hypernetworks. | Jose Javier Gonzalez Ortiz, John Guttag, Adrian Dalca | 2023-04-15T22:18:29Z | http://arxiv.org/abs/2304.07645v2 | # Magnitude Invariant Parametrizations Improve Hypernetwork Learning
###### Abstract
Hypernetworks, neural networks that predict the parameters of another neural network, are powerful models that have been successfully used in diverse applications from image generation to multi-task learning. Unfortunately, existing hypernetworks are often challenging to train. Training typically converges far more slowly than for non-hypernetwork models, and the rate of convergence can be very sensitive to hyperparameter choices. In this work, we identify a fundamental and previously unidentified problem that contributes to the challenge of training hypernetworks: a magnitude proportionality between the inputs and outputs of the hypernetwork. We demonstrate both analytically and empirically that this can lead to unstable optimization, thereby slowing down convergence, and sometimes even preventing any learning. We present a simple solution to this problem using a revised hypernetwork formulation that we call Magnitude Invariant Parametrizations (MIP). We demonstrate the proposed solution on several hypernetwork tasks, where it consistently stabilizes training and achieves faster convergence. Furthermore, we perform a comprehensive ablation study including choices of activation function, normalization strategies, input dimensionality, and hypernetwork architecture; and find that MIP improves training in all scenarios. We provide easy-to-use code that can turn existing networks into MIP-based hypernetworks.
## 1 Introduction
Hypernetworks, neural networks that predict the parameters of another neural network, are increasingly important models in a wide range of applications such as Bayesian optimization [27, 39, 54], generative models [1, 10, 44, 61], amortized model learning [3, 11, 22, 38, 58], continual learning [12, 21, 57], multi-task learning [34, 49, 53], and meta-learning [6, 62, 63]. Despite their advantages and growing use, training hypernetworks is challenging. Compared to non-hypernetwork-based models, training existing hypernetworks is often unstable. At best this increases training time, and at worst can prevent training from converging at all. This burden limits their adoption, negatively impacting many applications. Existing hypernetwork heuristics, like gradient clipping [15, 27], are insufficient in many instances, while existing techniques aimed at improving standard neural network training often fail when applied to hypernetworks.
This work addresses a cause of training instability. We identify and characterize a previously unidentified hypernetwork design problem and provide a straightforward solution to address it. We demonstrate analytically and empirically that the typical choices of architecture and parameter initialization in hypernetworks cause a proportionality relationship between the scale of the hypernetwork inputs and the scale of the parameter outputs (Fig. 1a). The resultant fluctuations in predicted parameter scale lead to large fluctuations in the scale of the gradients during optimization, resulting in unstable training and slow convergence. In some cases, this phenomenon even prevents any meaningful learning. To overcome this issue, we propose a straightforward revision to hypernetwork models: Magnitude Invariant Parametrizations (MIP). MIP effectively eliminates the influence of the scale of hypernetwork inputs on the scale of the predicted parameters, while retaining the representational power of existing formulations. We demonstrate the effectiveness of our proposed solution across several hypernetwork learning tasks, providing evidence that hypernetworks using MIP achieve faster convergence without compromising model accuracy (Fig. 1b).
Our main contributions are:
* We characterize a previously unidentified optimization problem in hypernetwork training, and show that it leads to large gradient variance and unstable training dynamics.
* We propose a solution: Magnitude Invariant Parametrizations (MIP), a hypernetwork formulation that addresses the issue without introducing additional training or inference costs.
* We rigorously study the proposed parametrization. We first compare it with the standard formulation and against popular normalization strategies, showing that it consistently leads to faster convergence and more stable training. We then extensively test it using various
choices of optimizer, input dimensionality, hypernetwork architecture, and activation function, finding that it improves hypernetwork training in all evaluated settings.
* We release our implementation as an open-source PyTorch library, HyperLight1. HyperLight facilitates the development of hypernetwork models and provides principled choices for parametrizations and initializations, making hypernetwork adoption more accessible. We also provide code that enables using MIP seamlessly with existing models. Footnote 1: Source code at [https://github.com/JJGO/hyperlight](https://github.com/JJGO/hyperlight)
## 2 Related Work
Research into training stability and efficiency of neural networks involves a variety of strategies, including parameter initialization strategies, normalization techniques, and adaptive optimization.
**Parameter Initialization**. Deep neural networks experience unstable training dynamics in the presence of exploding or vanishing gradients [14]. Weight initialization plays a critical role in the magnitude of gradients, particularly during the early stages of training. Commonly, weight initialization strategies focus on preserving the magnitude of activations during the forward pass and maintaining the magnitude of gradients during the backward pass [13, 16]. Our work demonstrates that existing initialization strategies can be ineffective when applied to hypernetworks.
**Normalization Techniques**. Normalization techniques control the distribution of weights and activations, often leading to improvements in convergence by smoothing the loss surface [7, 23, 31, 48]. Batch normalization is widely used to normalize activations using minibatch statistics, and methods like layer or group normalization instead normalize across features [2, 55, 59]. Other methods reparametrize the weights using weight-normalization strategies or using self-normalizing networks [26, 41, 47]. As we show in our experiments, these strategies fail to resolve the proportionality issue we study. They either maintain the proportionality relationship (as in batch normalization), or eliminate proportionality by rendering the predicted weights independent of the hypernetwork input (as in layer normalization), eliminating the utility of the hypernetwork itself.
**Adaptive Optimization**. High gradient variance can be detrimental to model convergence in stochastic gradient methods [24, 46]. Solutions to mitigate gradient variance encompass adaptive optimization techniques, which aim to decouple the effect of gradient direction and gradient magnitude by normalizing by a history of previous gradient magnitudes [25, 60]. Similarly, applying momentum reduces the instantaneous impact of stochastic gradients [36, 40] by using parameter updates based on an exponentially decaying average of past gradients. These strategies are implemented by many widely-used optimizers, such as Adam [5, 25]. Our experiments show that although adaptive optimizers like Adam enhance hypernetwork optimization, they do not address the root cause of the identified proportionality issue, and most convergence problems still persist.
**Fourier Features**. High-dimensional Fourier projections have been used in feature engineering [42] and for positional encodings in language modeling applications to account for both short and long range relationships [56, 51]. Additionally, implicit neural representation models benefit from sinusoidal representations [50, 52]. Our work also uses low dimensional Fourier projections. We demonstrate their use as a means to project hypernetwork inputs to a vector space with constant Euclidean norm, mitigating the training challenge.
**Residual Forms**. Residual and skip connections are widely used in deep learning models and often improve model training, particularly with increasing network depth [18, 19, 28, 56]. Building on this intuition, instead of the hypernetworks predicting the network parameters directly, our proposed hypernetworks predict parameter _changes_, mitigating part of the proportionality problem at hand.
Figure 1: **(a) Proportionality Issue**. With _default_ formulations, the scale of the predicted parameters \(\theta\) (measured in standard deviation) is directly proportional to the scale of the hypernetwork input \(\gamma\) at initialization (initial), and even after training the model (final). Our proposed Magnitude Invariant Parametrizations (MIP) mitigates this proportionality issue with respect to \(\gamma\). **(b) Convergence Improvements**. Using MIP leads to faster convergence and results in reduced variance across network initializations compared to the default hypernetwork formulation.
## 3 The Hypernetwork Proportionality Problem
**Preliminaries**. Deep learning tasks most often involve a model \(f(x;\theta)\to y\), with learnable parameters \(\theta\). In hierarchical models using hypernetworks, the parameters \(\theta\) of the _primary network_\(f\) are predicted by a _hypernetwork_\(h(\gamma;\omega)\to\theta\) based on a input vector \(\gamma\). Instead of learning the parameters \(\theta\) of the primary network \(f\) directly, only the learnable parameters \(\omega\) of the hypernetwork \(h\) are optimized using backpropagation. The specific nature of the hypernetwork inputs \(\gamma\) varies across applications, but regularly corresponds to a low dimensional quantity that models properties of the learning task, and is often a simple scalar or embedding vector [3, 11, 22, 29, 38, 54, 58].
**Assumptions**. For our analysis we assume the following about the hypernetwork formulation: 1) The architecture is a series of fully connected layers of the form \(\phi(Wx+b)\) where \(W\) are the parameters, \(b\) the biases and \(\phi(x)\) the non-linear activation function; 2) The activation satisfies \(\phi(x)=\max(\alpha x,0)+\min(\beta x,0)\) for some coefficients \(\alpha\) and \(\beta\). This encompasses common choices such as ReLU, LeakyReLU or PReLU; 3) Bias vectors \(b\) are initialized to zero. Existing hypernetworks satisfy these properties for the large majority of applications [8, 10, 11, 15, 29, 33, 38, 54, 57, 58].
**Input-Output Proportionality**. We demonstrate that under these widely-used settings, inputs and outputs of hypernetworks involve a proportionality relationship, and describe how this can impede hypernetwork training. We show that 1) at initialization, any intermediate feature vector \(x^{(k)}\) at layer \(k\) will be proportional to the hypernetwork input \(\gamma\), even under the presence of non-linear activation functions, and 2) this leads to large gradient magnitude fluctuations detrimental to optimization.
We first consider the case where \(\gamma\in\mathbb{R}\) is a scalar value. Let \(h(\gamma;\omega)\) use a fully connected architecture composed of a series of fully connected layers
\[h(\gamma;\omega) = W^{(n)}x^{(n)}+b^{(n)}\] \[x^{(k+1)} = \phi(W^{(k)}x^{(k)}+b^{(k)}) \tag{1}\] \[x^{(1)} = \gamma\]
where \(x^{(k)}\) is the input vector of the \(k^{\text{th}}\) fully connected layer with learnable parameters \(W^{(k)}\) and biases \(b^{(k)}\). To prevent gradients from exploding or vanishing when chaining several layers, it is common to initialize the parameters \(W^{(i)}\) and biases \(b^{(i)}\) so that either the magnitude of the activations is approximately constant across layers in the forward pass (known as _fan in_), or so that the magnitude of the gradients is constant across layers in the backward pass (known as _fan out_) [13, 16]. In both settings, the parameters \(W^{(i)}\) are initialized using a zero mean Normal distribution and bias vectors \(b^{(i)}\) are initialized to zero. If \(\gamma>0\), and \(\phi(x)\) has the common form specified above, at initialization the \(i^{\text{th}}\) entry of vector \(x^{(2)}\) is
\[x^{(2)}_{i}=\phi(W^{(1)}_{i}\gamma+b^{(1)})=\gamma\phi(W^{(1)}_{i})\propto \gamma, \tag{2}\]
since \(b^{(1)}=0\) and \(\phi(W^{(1)}_{i})\) is independent of \(\gamma\). Using induction, we assume that for layer \(k,\ x^{(k)}_{j}\propto\gamma\,\forall j\), and show this property for layer \(k+1\). The value of the \(i^{\text{th}}\) element of vector \(x^{(k+1)}\) is
\[x^{(k+1)}_{i}=\phi\Big{(}b^{(k)}_{i}+\sum_{j}W^{(k)}_{ij}x^{(k)}_{j}\Big{)}= \gamma\phi\Big{(}\sum_{j}W^{(k)}_{ij}\alpha^{(k)}_{j}\Big{)}\propto\gamma, \tag{3}\]
since \(b^{(k)}_{i}=0\), and the term inside \(\phi\) is independent of \(\gamma\). If \(\gamma\) is not strictly positive, we can reach the same proportionality result, but with separate constants for the positive and the negative range. This dependency holds regardless of the number of layers and the number of neurons per hidden layer, and also holds when residual connections are employed. When \(\gamma\) is a vector input, we find a similar relationship with the overall magnitude of the input and the magnitude of the output. Given the absence of bias terms, and the lack of multiplicative interactions in the architecture, the fully connected network propagates magnitude changes in the input. We provide further details in the supplementary material.
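The following short sketch illustrates this analysis numerically: a fully connected hypernetwork with zero-initialized biases and LeakyReLU activations (hidden sizes 16 and 128, as described in Section 5; the output dimensionality is arbitrary) produces parameter vectors whose norm scales linearly with a scalar input \(\gamma\).

```python
import torch
import torch.nn as nn

# Empirical check of Eq. (3): with zero biases and LeakyReLU activations, the norm of the
# predicted parameters scales linearly with a scalar hypernetwork input gamma.
hyper = nn.Sequential(nn.Linear(1, 16), nn.LeakyReLU(),
                      nn.Linear(16, 128), nn.LeakyReLU(),
                      nn.Linear(128, 1000))          # 1000 output parameters, arbitrary choice
for layer in hyper:
    if isinstance(layer, nn.Linear):
        nn.init.kaiming_normal_(layer.weight, mode="fan_out")
        nn.init.zeros_(layer.bias)

with torch.no_grad():
    for gamma in [0.01, 0.1, 1.0, 10.0]:
        theta = hyper(torch.tensor([[gamma]]))
        print(f"gamma={gamma:5}  ||theta|| = {theta.norm():.3f}")
# The printed norms grow proportionally to gamma (e.g. 10x the input gives ~10x the norm).
```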
**Training implications**. Since \(\theta=x^{(n+1)}\), this result leads to a proportionality relationship for the magnitude of the predicted parameters \(||\theta||_{2}\propto||\gamma||\) and their variance \(\text{Var}(\theta)\propto||\gamma||^{2}\). As the scale of the primary network parameters \(\theta\) will depend on \(\gamma\), this will affect the scale of the layer outputs and gradients of the primary network. In turn, these large gradient magnitude fluctuations lead to unstable training dynamics for stochastic gradient descent methods [13].
**Further Considerations**. Our analysis relies on biases being at zero, which only holds at initialization, and does not include normalization layers that are sometimes used. However, in our experiments, we find that biases remain near zero during early training, and hypernetworks with alternative choices of activation function, input dimensionality, or with normalization layers, still suffer from the identified issue and consistently benefit from our proposed parametrization (see Section 6).
## 4 Magnitude Invariant Parametrizations
To address the proportionality dependency, we make two straightforward changes to the typical hypernetwork formulation: 1) We introduce an encoding function that maps inputs into a constant-norm vector space, and 2) we treat hypernetwork predictions as additive _changes_ to the main network parameters, rather than as the parameters themselves. These changes make the primary network weight distribution
non-proportional to the hypernetwork input and stable across the range of hypernetwork inputs. Figure 2 illustrates these changes to the hypernetwork.
**Input Encoding.** To address the proportionality problem, we map the inputs \(\gamma\in[0,1]\) to a space with a constant Euclidean norm \(||\text{E}_{\text{L2}}(\gamma)||_{2}\!=\!1\) using the function \(\text{E}_{\text{L2}}(\gamma)\!=\![\cos(\gamma\pi/2),\!\sin(\gamma\pi/2)]\). With this change, the input magnitude to the hypernetwork is constant \(||\text{E}_{\text{L2}}(\gamma)||=1\ \forall\gamma\), so \(||x^{(1)}||\not\propto\gamma\). For higher-dimensional inputs, we apply this transformation to each input individually, leading to an output vector with double the number of dimensions. This transformation results in an input representation with a constant norm, thereby eliminating the proportionality effect.
For our input encoding we first map each dimension of the input vector to the range \([0,1]\) to maximize output range of \(\text{E}_{\text{L2}}\). We use min-max scaling of the input: \(\gamma^{\prime}=(\gamma-\gamma_{\min})/(\gamma_{\max}-\gamma_{\min})\). For unconstrained inputs such as Gaussian variables, we first apply the logistic function \(\sigma(x)\!=\!1/(1+\exp(-x))\). If inputs span several orders of magnitude, we take the log before the min-max scaling as in [3, 11].
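A minimal sketch of this input encoding (min-max scaling followed by the cosine/sine projection) is shown below:

```python
import math
import torch

def encode_inputs(gamma: torch.Tensor, gamma_min: float = 0.0, gamma_max: float = 1.0):
    """E_L2: map each input dimension to a (cos, sin) pair with constant Euclidean norm.

    gamma: tensor of shape (..., d); the output has shape (..., 2*d).
    """
    g = (gamma - gamma_min) / (gamma_max - gamma_min)   # min-max scale to [0, 1]
    angle = g * math.pi / 2
    return torch.cat([torch.cos(angle), torch.sin(angle)], dim=-1)
```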
**Output Encoding.** Residual forms have become a cornerstone in contemporary deep learning architectures [18, 28, 56]. Motivated by these methods, we replace the typical hypernetwork framework with one that learns primary network \(f\) parameters (what is typically learned in existing formulations), and then uses the hypernetwork predictions as _additive_ changes to these parameters. We introduce a set of learnable parameters \(\theta^{0}\), and compute the primary network parameters as \(\theta\!=\!\theta^{0}+h(\text{E}_{\text{L2}}(\gamma);\omega)\).
This output additive encoding predicts weights that are independent of the input by decomposing the hypernetwork contribution as a combination of an independent term \(\theta^{0}\) and a dependent term \(h(\text{E}_{\text{L2}}(\gamma);\omega)\). The output encoding also offers a straightforward and principled mechanism for initializing hypernetwork weights. First, the hypernetwork weights \(\omega\) can be initialized using common initialization methods for fully connected layers. Then, the independent parameters \(\theta^{0}\) can be initialized taking into consideration their role in the primary network.
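A minimal sketch of a primary-network layer parametrized this way is given below; it is not the HyperLight implementation, and bias terms and the exact hypernetwork initialization details are simplified.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MIPHyperLinear(nn.Module):
    """A linear primary-network layer whose weights are theta = theta0 + delta,
    where delta is predicted by a small hypernetwork from the encoded input."""

    def __init__(self, in_f: int, out_f: int, enc_dim: int = 2, hidden=(16, 128)):
        super().__init__()
        self.theta0 = nn.Parameter(torch.empty(out_f, in_f))
        nn.init.kaiming_normal_(self.theta0)             # initialized like an ordinary layer
        layers, d = [], enc_dim
        for h in hidden:
            layers += [nn.Linear(d, h), nn.LeakyReLU()]
            d = h
        layers.append(nn.Linear(d, out_f * in_f))        # predicts the additive change delta
        self.hyper = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor, gamma_enc: torch.Tensor) -> torch.Tensor:
        delta = self.hyper(gamma_enc).view_as(self.theta0)
        return F.linear(x, self.theta0 + delta)          # theta = theta0 + delta

# Usage: a scalar gamma=0.3 encoded as E_L2(gamma) = [cos(gamma*pi/2), sin(gamma*pi/2)].
layer = MIPHyperLinear(32, 10)
gamma_enc = torch.tensor([0.891, 0.454])
out = layer(torch.randn(8, 32), gamma_enc)
```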
**Connection to Learned Embeddings.** Our input encoding approach can be understood in relation to the encoding of categorical hyperparameters in hypernetworks. Commonly, embedding layers transform categorical inputs into learnable parameters [8, 15]. For a scalar input \(\gamma\), the first fully connected layer's weights, \(W\in\mathbb{R}^{N\times 1}\), act similarly to a learned embedding vector \(e\). For scalar inputs \(\gamma\) represented by magnitude, the traditional formulation scales this embedding vector linearly, \(x^{(2)}=\gamma e\). In contrast, our input encoding method uses \(W\in\mathbb{R}^{N\times 2}\), which can be decomposed into two embedding vectors, \(W=\{e_{0},e_{1}\}\). From this perspective, our encoding function can be viewed as interpolating between two learnable embedding vectors, \(x^{(2)}\!=\!\cos(\gamma\pi/2)e_{0}\!+\!\sin(\gamma\pi/2)e_{1}\).
## 5 Experimental Setup
### Tasks
We evaluate our proposed parametrization on several tasks involving hypernetwork-based models.
**Bayesian Neural Networks**. Hypernetwork models have been used to learn families of functions conditioned on a prior distribution [54]. During training, the prior representation \(\gamma\!\in\!\mathbb{R}^{d}\) is sampled from the prior distribution \(\gamma\!\sim\!p(\gamma)\) and used to condition the hypernetwork \(h(\gamma;\omega)\!\rightarrow\!\theta\) to predict the parameters of the primary network model \(f(x;\theta)\). Once trained, the family of posterior networks is then used to estimate parameter uncertainty or to improve model calibration. For illustrative purposes we first evaluate a setting where \(f(x;\theta)\) is a feed-forward neural network used to classify the MNIST dataset. Then, we tackle a more complex setting where \(f(x;\theta)\) is a ResNet-like model trained on the OxfordFlowers-102 dataset [37]. In both settings, we use the prior \(\mathcal{N}(0,1)\) for each input.
Figure 2: **Magnitude Invariant Parametrizations for Hypernetworks**. MIP first projects the hypernetwork inputs \(\gamma\) to a constant norm vector space. Then the outputs of the hypernetwork \(\Delta\theta\) are treated as additive changes to a set of independent learnable parameters \(\theta^{0}\) to generate the primary network weights \(\theta\). In blue we highlight the main components of MIP, the input encoding \(\text{E}_{\text{L2}}\) and the residual formulation \(\theta\!=\!\theta^{0}+\Delta\theta\).
**Hypermorph**. Learning-based medical image registration networks \(f(x_{m},x_{f};\theta)\rightarrow\phi\) register a moving image \(x_{m}\) to a fixed image \(x_{f}\) by predicting a flow or deformation field \(\phi\) between them. The common (unsupervised) loss balances an image alignment term \(\mathcal{L}_{\text{sim}}\) and a spatial regularization (smoothness) term \(\mathcal{L}_{\text{reg}}\). The learning objective is then \(\mathcal{L}=(1-\gamma)\mathcal{L}_{\text{sim}}(x_{m}\circ\phi,x_{f})+\gamma \mathcal{L}_{\text{reg}}(\phi)\), where \(\gamma\) controls the trade-off. In Hypermorph [22], multiple regularization settings for medical image registration are learned jointly using hypernetworks. The hypernetwork is given the trade-off parameter \(\gamma\) as input, sampled stochastically from \(\mathcal{U}(0,1)\) during training. We follow the same experimental setup, using a U-Net architecture for the primary (registration) network and training with MSE for \(\mathcal{L}_{\text{sim}}\) and total variation for \(\mathcal{L}_{\text{reg}}\). We train models on the OASIS dataset. For evaluation, we use the predicted flow field to warp anatomical segmentation label maps of the moving image, and measure the volume overlap to the fixed label maps [4].
**Scale-Space Hypernetworks**. We also use a hypernetwork to efficiently learn a family of models with varying internal rescaling factors in the downsampling and upsampling layers, as recently done in [38]. In this setting, \(\gamma\) corresponds to the _scale factor_. Given hypernetwork input \(\gamma\), the hypernetwork \(h(\gamma;\omega)\rightarrow\theta\) predicts the parameters of the primary network, which performs the spatial rescaling operations according to the value of \(\gamma\). We study a setting where \(f(x;\theta)\) is a convolutional network with variable resizing layers, the rescaling factor is sampled from \(\mathcal{U}(0,0.5)\), and evaluate using the OxfordFlowers-102 classification problem and the OASIS segmentation task.
### Experiment Details
**Model**. We implement the hypernetwork as a neural network with fully connected layers and LeakyReLU activations [32] for all but the last layer, which has linear output. Hypernetwork weights are initialized using Kaiming initialization [17] on _fan out_ mode and biases are initialized to zero. Unless specified otherwise, the hypernetwork architecture has two hidden layers with 16 and 128 neurons respectively. We use this implementation for both the default (existing) hypernetworks, and our proposed (MIP) hypernetworks.
**Training**. We use two popular choices of optimizer: SGD with Nesterov momentum, and Adam [36, 25]. We search over a range of initial learning rates and report the best performing models; further details are included in the Supplementary material A.
**Implementation.** An important contribution of our work is the release of HyperLight, our PyTorch hypernetwork framework. HyperLight not only implements our proposed hypernetwork parametrization but provides a modular and composable API that facilitates the development of hypernetwork models. Using HyperLight, practitioners can employ existing non-hypernetwork model definitions and pretrained model weights, and can easily build models using hierarchical hypernetworks. HyperLight source code available at [https://github.com/JJGO/hyperlight](https://github.com/JJGO/hyperlight).
## 6 Experimental Results
### Effect of Proportionality on Parameter and Gradient Distributions
First, we empirically show how the proportionality phenomenon affects the distribution of predicted weights \(\theta\) and their corresponding gradients for the Bayesian neural networks on MNIST. Figures 3(a) and 3(b) compare the distributions of the primary network weights and layer outputs for a range of values of hypernetwork input \(\gamma\). While the default hypernetwork parametrization is highly sensitive to changes in the input, the proposed MIP eliminates this dependency, with the resulting distribution closely matching that of the non-hypernetwork models. Furthermore, Figure 1(a) (in the introduction) shows that using the default formulation, the scale of the weights correlates linearly with the value of the hypernetwork input, and that, crucially, this correlation is still present after the training process ends. In contrast, MIP parametrizations lead to a weight distribution that is robust to the input \(\gamma\), both at the start and end of training.
We also analyze how the proportionality affects the early phase of hypernetwork optimization by studying the distribution of gradient norms during training. Figure 3(c) shows the norm of the predicted parameter gradients \(||\nabla_{\theta}\mathcal{L}||\) as training progresses. As our analysis predicted, hypernetworks with the default parametrization experience large swings in gradient magnitude because of the proportionality relationship between inputs and predicted parameters. In contrast, the MIP strategy leads to a substantially smaller variance and more stable gradient magnitude compared to the standard formulation.
### Model Training Improvements
In this experiment, we analyze how MIP affects model convergence for the considered tasks. For all experiments, we found that MIP hypernetworks did not introduce a measurable impact in training runtime, so we report per-epoch steps.
Figure 4(a) shows the training loss and test accuracy for Bayesian networks trained on MNIST. We find that MIP parametrizations result in better loss and higher accuracy sooner during training. MIP also achieves substantially reduced variance across network initializations. The default parametrization suffers from sporadic training instabilities (spikes in the training loss), while MIP leads to stable training. We found similar results for models trained with SGD.
Figures 4(b) and 4(c) present convergence curves for the other two tasks. For Hypermorph, MIP parametrizations are crucial when using SGD with momentum, as otherwise the model fails to meaningfully train: for all choices of learning rate the default hypernetwork failed to converge, whereas with the MIP parametrization it converged for a large range of values. With Adam, networks train meaningfully, and MIP models consistently achieve similar Dice scores substantially faster and are less sensitive to weight initializations. While in this setting the Adam optimizer partially mitigates the gradient variance issue by normalizing by a history of previous gradients, the MIP parametrization leads to substantially faster convergence. Furthermore, for the Scale-Space segmentation, we find that for both optimizers MIP models achieve substantially faster convergence and better final accuracy compared to those with the default parametrization.
**Comparison to normalization strategies.** We compare the proposed parametrization to popular choices of normalization layers found in the deep learning literature. Using the default formulation, where the predicted weights start proportional to the hypernetwork input, we found that existing normalization strategies fall into two categories: they either keep the proportionality relationship present (such as batch normalization), or remove the proportionality by making the predicted weights independent of the hypernetwork input (such as layer or weight normalization). We provide further details in Section B of the supplemental material.
We test several normalization strategies. **BatchNorm-P**, adding batch normalization layers to the primary network. **LayerNorm-P**, adding feature normalization layers to the primary network. **LayerNorm-H**, adding feature normalization layers to the hypernetwork layers. **WeightNorm**, performing weight normalization, which decouples the gradient magnitude and direction, on the weights predicted by the hypernetwork [23, 2, 47]. Figure 5(a) shows the evolution of the test accuracy for the Scale-Space hypernetworks trained on OxfordFlowers. We report wall clock time, as some normalization strategies, such as BatchNorm, substantially increase the computation time required per iteration. For networks trained with SGD, normalization strategies enable training, but they do not significantly improve on default hypernetworks when trained with Adam. Models trained with SGD momentum and hypernetwork feature normalization (LayerNorm-H) diverged early into training for all considered hyperparameter settings. Models trained with the proposed MIP parametrization lead to substantially faster convergence and better final model accuracy.
**Ablation Analysis.** We study the contribution of each of the two main components of the MIP parametrizations: input encoding and additive output formulation. Figure 4(b) shows the effect on convergence for two tasks. We found that both components reduce the proportionality dependency between the hypernetwork inputs and outputs, and that each component independently achieves substantial improvements in model convergence. However, we find that best results
Figure 3: **Distributions of primary network parameters (a) and layer activations (b). Measurements are taken at initialization for a default hypernetwork, our proposed MIP hypernetwork, and a conventional neural network with the same primary architecture. Distributions are shown as kernel density estimates (KDE) of the values due to the high degree of overlap between the distributions. In contrast, the MIP strategy leads to little change across input values and its distribution closely matches that of the non-hypernetwork model. **Evolution of Gradients (c)** Gradient magnitude with respect to hypernetwork outputs \(||\nabla_{\theta}\mathcal{L}||\) during early training. Standard deviation is computed across minibatches in the same epoch. MIP leads to substantially smaller magnitude and improved robustness compared to the default parametrization.**
(fastest convergence) are consistently achieved when both components are used jointly during training.
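To make the two components concrete, the following is a minimal Python/PyTorch sketch combining a bounded input encoding with an additive (offset) output head. The specific encoding used here (a cosine/sine pair) and the initialization scale are illustrative assumptions for this sketch, not the exact formulation of the published MIP method.

```python
import math
import torch
import torch.nn as nn

class MIPStyleHypernetwork(nn.Module):
    """Sketch of the two ingredients discussed above: (1) a bounded encoding of
    the scalar hypernetwork input, and (2) an additive output, so predicted
    weights are offsets around an independently initialized base rather than
    being proportional to the input. Details are illustrative only."""

    def __init__(self, n_primary_params: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(2, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, n_primary_params),
        )
        # Base weights initialized like an ordinary (non-hyper) network.
        self.base = nn.Parameter(0.02 * torch.randn(n_primary_params))

    def forward(self, gamma: torch.Tensor) -> torch.Tensor:
        # gamma is assumed to lie in [0, 1]; both encoding components are O(1),
        # so the output magnitude no longer scales with the raw input value.
        enc = torch.stack([torch.cos(math.pi * gamma),
                           torch.sin(math.pi * gamma)], dim=-1)
        return self.base + self.body(enc)
```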
### Robustness Analysis
**Number of Input Dimensions to the Hypernetwork.** We study the effect of the number of dimensions of the input to the hypernetwork model. We evaluate on the Bayesian neural network task, and we vary the number of dimensions of the input prior. We train models with a geometrically increasing number of input dimensions, dim\((\gamma)=1,2,\ldots,32\). Figure 5c shows that the proposed MIP strategy leads to improvements in model convergence and final model accuracy as we increase the dimension of the hypernetwork input \(\gamma\).
**Choice of Hypernetwork Architecture.** We assess model performance when varying the properties of the hypernetwork architecture. We vary the width (number of hidden neurons per layer) and depth (number of layers): fully connected networks with 3, 4 and 5 layers, with 16 and 128 neurons per layer, as well as an exponentially growing number of neurons per layer, dim\((x^{n})=16\cdot 2^{n}\). We find that the MIP improvements generalize to all tested hypernetwork architectures, with analogous improvements in model training. We provide additional results in Section B of the supplement.
**Nonlinear Activation Function Ablation.** While our method is motivated by the training instability present in hypernetworks with (Leaky)-ReLU nonlinear activation functions, we explored applying it to other common choices of activation functions found in the literature: Tanh, GELU and SiLU [43, 20]. Figure 8 (in Section B of the supplement) shows that MIP consistently helps for all choices of nonlinear activation function, and the improvements are similar to those of the LeakyReLU models.
## 7 Limitations
A limitation of this work is that all hypernetwork models used in our experiments are composed of fully connected layers and use activation and initialization choices commonly recommended in the literature. Similarly, we focused on two optimizers in our experiments, SGD with momentum and Adam. We believe that we would see similar results for other less common architectures and optimizers, but this remains an area of future work. Furthermore, we focus on training models from scratch. Given that hypernetworks are becoming increasingly popular in transfer learning tasks using pretrained models, we believe this will be an interesting avenue for future analysis of MIP.
## 8 Conclusion
We showed through analysis and experimentation that traditional hypernetwork formulations are susceptible to training instability, caused by the effect of the magnitude of hypernetwork input values on primary network weights and gradients, and that standard methods such as batch and layer normal
Figure 4: **Model convergence improvements.** Comparison between default hypernetworks and hypernetworks with MIP for the Bayesian networks on MNIST (a), HyperMorph (b) and Scale-Space hypernetworks trained on OASIS (c). In all cases, we find that the MIP parametrization leads to faster model convergence without any sacrifice in final model accuracy compared to the default parametrization. In (a) we observe that the default hypernetworks experience sporadic training instabilities (spikes in the training loss), whereas MIP hypernetworks present more stable training. In (b) and (c), we find that for default hypernetworks, using the Adam optimizer substantially helps the training process; however, incorporating MIP leads to even better training dynamics.
ization do not solve the problem. We then proposed the use of a new method, Magnitude Invariant Parametrizations (MIP), for addressing this problem. Through extensive experiments, we demonstrated that MIP leads to substantial improvements in convergence times and model accuracy across multiple hypernetwork architectures, training scenarios, and tasks.
To further help with the adoption of hypernetworks, we release our hypernetwork learning library, HyperLight, which not only implements MIP but also provides a modular and composable API to facilitate the development of hypernetwork based models. Given that using MIP never reduces model performance and can dramatically improve training, we expect the method to be widely useful for training hypernetworks.
|
2302.12878 | Causal quartets: Different ways to attain the same average treatment
effect | The average causal effect can often be best understood in the context of its
variation. We demonstrate with two sets of four graphs, all of which represent
the same average effect but with much different patterns of heterogeneity. As
with the famous correlation quartet of Anscombe (1973), these graphs dramatize
the way in which real-world variation can be more complex than simple numerical
summaries. The graphs also give insight into why the average effect is often
much smaller than anticipated. | Andrew Gelman, Jessica Hullman, Lauren Kennedy | 2023-02-24T20:27:00Z | http://arxiv.org/abs/2302.12878v1 | # Causal quartets: Different ways to attain the same average treatment effect+
###### Abstract
The average causal effect can often be best understood in the context of its variation. We demonstrate with two sets of four graphs, all of which represent the same average effect but with much different patterns of heterogeneity. As with the famous correlation quartet of Anscombe (1973), these graphs dramatize the way in which real-world variation can be more complex than simple numerical summaries. The graphs also give insight into why the average effect is often much smaller than anticipated.
Given that real-world effects vary, and statistics is the study of variation, why does the causal inference literature focus on average effects?
Causal inference in statistics and economics focuses on the average causal effect. The purpose of this paper is to raise awareness of different patterns of heterogeneous causal effects: examples where the average effect does not tell the whole story.
Given that real-world effects vary, and statistics is the study of variation, it seems obvious to look at the variation of causal effects across different populations, different scenarios, different time frames, etc. We are not alone in advocating for the value of seeking to understand sources of variation: other authors, such as Baribault et al. (2018), Bryan, Tipton, and Yeager (2021), and Yarkoni (2022), have argued for the importance of varying effects, both for theoretical understanding and practical decision-making. Indeed, the very phrase "average causal effect" implicitly considers how the effect might vary; otherwise one could simply say "causal effect" without the modifier.
Perhaps surprisingly, though, much of the literature of statistics and econometrics focuses on the estimation of average causal effects without much discussion of variation. Before proceeding to discuss the importance of varying treatment effects, it behooves us to consider why there has been such an interest in averages.
There are several good reasons for the traditional approach of considering the treatment effect to be a single parameter to be estimated:
* In a randomized experiment, the average difference comparing treatment and control groups yields an unbiased estimate of the sample average treatment effect. It makes sense to study this average effect, as this is what can be estimated from the data.
* More generally, under different assumptions in observational studies, various local average treatment effects are what can be identified (Imbens and Angrist, 1994).
* When estimating a causal model using linear regression without interactions, the coefficient of the treatment variable represents the causal effect. In the presence of varying treatment effects, this coefficient represents an average treatment effect, in the same way that fitting a linear model to nonlinear data can be considered to estimate some sort of average regression line. Hence it can make sense to speak of "the" causal effect in the same way that we would speak of "the" regression coefficient \(\beta\), as representing a single parameter in a model or a population average quantity.
* Interactions can be hard to estimate; indeed, under some reasonable assumptions you need 16 times the sample size to estimate an interaction than to estimate a main effect (Gelman, 2018a). Thus it can make sense to fit a model with constant treatment effect even if you think there may be interactions in reality.
* Under the assumption of a constant treatment effect (the "Fisher null hypothesis"), it is possible to obtain exact confidence intervals for randomized experiments.
For all these reasons, along with the convenience of single-number summaries, it has become standard practice either to fit a model assuming a constant treatment effect or to aggregate to obtain an estimated average treatment effect when fitting models in which effects vary; see, for example, Hill (2011) and Wager and Athey (2018).
That all said, we have become convinced through work in many application areas that thinking about varying effects can be essential for understanding causal inference, and consequently for making decisions based on estimates, such as in implementing policies or interventions beyond the lab. In this article we present causal quartets as a graphical tool for helping reform how we think about effects. Sections 2 and 3 demonstrate and explain the value of such tools. Section 4 presents a software package that researchers or consumers of research can use to create causal quartets in order to reflect on their own research goals or interrogate effects in the literature. In Section 5 we discuss implications of treatment-effect heterogeneity for statistical practice in the context of the reasons discussed above for traditionally focusing on the average.
## 2 Two causal quartets
### Plots of latent causal effects
We dramatize variation in causal effects with two "quartets": sets of four plots with the same average effect but much different patterns of individual effects. All the displays plot the causal effect vs. a hypothetical individual-level predictor, \(x\). The first quartet shows examples of unpredictable or random variation, so that \(x\) is essentially just an index of units. The second quartet shows effects that vary as different systematic functions of \(x\). More generally, \(x\) could represent different types of units and could be an observed predictor or a latent quantity. The quartets are conceptual plots of different scenarios, not direct graphs of data.
Figure 1 shows four very different scenarios corresponding to an average treatment effect of 0.1. Figure 1a shows the simplest case, often implicitly assumed in discussions of "the" treatment effect. Figure 1b shows an effect that is always positive across units but with magnitude varying between 0 and 0.2. In Figure 1c, there is high variation and the effect could be positive or negative at the level of the units. Finally, in Figure 1d the treatment effect is usually zero but is high among some small subset of units with nonzero effects.
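As an illustration only (the paper's own tooling is the R package described in Section 4), here is a small Python sketch that simulates individual effects under the four patterns of Figure 1; the particular distributions below are hypothetical choices whose average effect is 0.1 in expectation, not the ones used to draw the figure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100  # hypothetical units; x is just an index here (unsystematic variation)

effects = {
    "constant effect":          np.full(n, 0.1),
    "low variation":            rng.uniform(0.0, 0.2, n),                 # always positive
    "high variation":           rng.normal(0.1, 0.3, n),                  # either sign
    "occasional large effects": np.where(rng.random(n) < 0.1, 1.0, 0.0),  # mostly zero
}

for name, e in effects.items():
    print(f"{name:25s}  mean = {e.mean():.3f}   sd = {e.std():.3f}")
```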
These plots correspond to four different sorts of real-world situations, and we conjecture that some misunderstanding about effect sizes comes from the habit of thinking about the average effect without considering what that means in the context of variation. We discuss this in the context of some examples in Section 3.
Figure 2 presents another quartet, this time showing different forms of interaction in which the effect varies systematically as a function of a pre-treatment predictor. Figure 2e shows a linear interaction, which is typically the first model that is fit when researchers go beyond the model of constant effects. Figure 2f illustrates a treatment that is only effective when the pre-treatment variable exceeds some threshold; this could be a training program that requires some minimum level of preparedness of the trainee. Figure 2g adds a plateau, corresponding to the realistic constraint
Figure 2: _Four graphs showing patterns of causal effects, each with average effect of 0.1, but varying in different ways as a function of a pre-treatment predictor: (e) linear interaction, (f) no effect then steady increase, (g) plateau, (h) intermediate zone with large effects._
Figure 1: _Four graphs showing different patterns of causal effects, each with average effect of 0.1: (a) constant effect, (b) low variation, (c) high variation, (d) occasional large effects._
of some maximum effectiveness. Finally, Figure 2h shows a non-monotonic pattern with a "sweet spot," which could arise in a medical treatment that has no effect for the healthiest patients (because they do not need the treatment) or for the sickest (for whom the treatment is too late).
As with Figure 1, these are not intended to represent an exhaustive list of possibilities; Figure 2e shows the typical assumption made in modeling an interaction, while Figures 2f, g, and h represent different sorts of patterns that go beyond what would usually be included in a statistical model. The first quartet shows different levels and distributions of unpredictable variation; the second represents variation that depends on pre-treatment information. A realistic setting would include a mix of both.
Our Figures 1 and 2 are modeled on the famous correlation quartet of Anscombe (1973): four scatterplots with the same first and second moments but with much different bivariate patterns. This quartet is useful for teaching the limitations of the correlation statistic and also stimulating students and researchers to consider alternative models for data. Later work has explored general approaches to constructing such plots; see Chatterjee and Firat (2007) and Matejka and Fitzmaurice (2017).
### Plots of observable data
The big difference between our causal quartets and these earlier correlation quartets is that this earlier work concerned plots of _data_, so that departures from the assumed model could be seen directly--hence the title of Anscombe (1973), "Graphs in statistical analysis"--whereas Figures 1 and 2 graph latent _causal effects_, which in general cannot directly be observed. Thus, our plots are conceptual, and their utility to students and researchers is conceptual. Figures 1 and 2 should help in design and analysis of causal studies, both by suggesting ideas for models of treatment effects and as reminders of the limitations of the average causal effect, in the same way that the quartet of Anscombe (1973) dramatized the limitations of the correlation and regression coefficients in descriptive statistics.
To better understand these patterns, it can be helpful to visualize them in terms of observable data. In Figure 3, we display graphs of data that are consistent with the effects shown in Figures 1 and 2. Each of the new plots shows outcomes under treatment and control for the same hypothetical eleven units, with the differences representing the causal effects.
Figure 3: _Two quartets showing different patterns of observable data consistent with the causal effects displayed in Figures 1 and 2. In each plot, the crosses and circles represent treated and control units, respectively, and the difference between the two is the treatment effect._
In general, we cannot observe causal effects directly from data: even within-person designs that expose participants to both a control and treatment condition will be affected by factors such as order effects. However, the examples in Figure 3 give a sense of what data might look like under different patterns of causal effects in the absence of such factors.
## 3 Some practical implications of the causal quartets
We describe some scenarios where causal quartets can help expose problems with assumptions made about the sizes of effects. This can arise before or after data are collected.
### Before running a study: Anticipating an effect size
To design a study one must account for uncertainty in effect sizes. Researchers designing clinical trials often make optimistic assumptions corresponding to high power. Once we consider variation in the treatment effect, it becomes clear that average effects can be much lower than originally imagined.
We illustrate with an example from Zelner et al. (2021) of a doctor designing a trial for an existing drug that he thought could effectively treat high-risk coronavirus patients. He solicited our help to check his sample size calculation that a sample size of \(n=126\) would assure \(80\%\) power under an assumption that the drug increased survival rate by \(25\) percentage points. (With \(126\) people split evenly between two groups, the standard error of the difference in proportions is bounded above by \(\sqrt{0.5\cdot 0.5/63+0.5\cdot 0.5/63}=0.089\). To achieve \(80\%\) power requires the value of the effect to be at least \(2.8\) standard errors from the comparison point of \(0\); hence, an effect of \(0.25\) achieves the desired power with \(n=126\).) When asked how confident he felt about his guess of the effect size, the doctor replied that he thought the effect on these patients would be higher, such that \(25\) percentage points was a conservative estimate. At the same time, he recognized that the drug might not work. But when asked what he thought about increasing his sample size so he could detect, for example, a \(10\) percentage point increase in survival, he replied that this would not be necessary: he felt confident that if the drug worked, its effect would be large.
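As a quick numerical check of the parenthetical calculation above, the following sketch uses a normal approximation for the difference in proportions (the 0.5 rates in the bound are the worst case for the variance; scipy is assumed available):

```python
import math
from scipy.stats import norm

n_per_arm = 63                      # 126 patients split evenly between two arms
se_bound = math.sqrt(0.5 * 0.5 / n_per_arm + 0.5 * 0.5 / n_per_arm)
effect = 0.25                       # assumed gain in survival rate

z_crit = norm.ppf(0.975)            # two-sided test at the 5% level
power = 1 - norm.cdf(z_crit - effect / se_bound)

print(round(se_bound, 3), round(effect / se_bound, 2), round(power, 2))
# -> 0.089  2.81  0.8
```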
It might seem reasonable to suppose that a drug might not be effective but would have a large effect in case of success. But to stop at this assumption implies a problematic vision of uncertainty. Suppose, for example, that the survival rate was \(30\%\) among the patients who do not receive this new drug and \(55\%\) among the treatment group. Here is a hypothetical scenario of what we might expect given \(1000\) people:
* \(300\) would live either way,
* \(450\) would die either way,
* \(250\) would be saved by the treatment.
There are other possibilities consistent with a \(25\) percentage point average benefit--for example the drug could save \(350\) people while killing \(100\)--but the point is that once we assume a scenario as we did above, the posited benefit of the drug is not a \(25\) percentage point benefit for each patient; rather, it's a \(100\%\) benefit for \(25\%\) of the patients.
From that perspective, once we accept the idea that the drug works on some people and not others--or in some comorbidity scenarios and not others--we realize that "the treatment effect" in any given study will depend entirely on the patient mix. There is no underlying number representing the effect of the drug. Ideally one would like to know what sorts of patients the treatment would help, but in a clinical trial it is enough to show that there is some clear average effect.
Once we consider the treatment effect in the context of variation among patients, as in Figure 1c, this can be the first step in a more grounded understanding of effect size.
### After running a study: Interpreting results
#### Downgrading an apparently huge effect
Gertler et al. (2013) performed a randomized evaluation of an early-childhood intervention program, yielding an estimate that the program increased adult earnings by 42%. This sounds a bit too good to be true, even more so when considering it as an _average_ effect, given that the actual effect must surely vary a lot by person, considering the tortuous path from an intervention at age 4 to earnings at age 24. A realistic scenario might be some mix of Figure 1b and d--effects that are often negligible and can follow a wide range when positive--and Figure 2h--an effect that is larger in some intermediate zone. In any of these cases, we would argue that an average effect of 42% is hard to believe, given that it would reflect some combination of many effects near zero and some increases in earnings of 100% or more.
The implication of this reasoning is that the claimed effect is likely to be a wild overestimate--a point that we earlier made on inferential grounds (Gelman, 2018) but without reference to varying effects. Combining a realistic sense of the average effect size with an understanding of selection on statistical significance makes it clear that the study had low power and will yield a positively biased estimate (Button et al., 2013). The framework of nonconstant treatment effects gives us another reason to be skeptical about the claims made for this particular class of interventions.
#### Recognizing that an apparently large effect can be explained as an artifact of noise
Beall and Tracy (2013) performed two small surveys and found that women were three times as likely to wear red or pink during certain days of their monthly cycle. This result achieved conventional levels of statistical significance, but this could easily be explained by uncontrolled researcher degrees of freedom; see Simmons, Nelson, and Simonsohn (2011) for a general discussion of this issue and Gelman (2013) in the context of this particular study.
Here, however, we want to focus not on statistical significance but on the reported effect size, which is implausible on its face and even more outlandish when considered as an average effect, once we reflect that the effect will be zero for many people, for example, those who never wear red clothing or those whose clothing choices are restricted because of work. Even if we think it's reasonable to expect a factor-of-3 effect for some women in the study, the average effect including those with no effect would have to be much lower, indeed in this case lower than the uncertainty in the estimated effect. This implies that the published result, despite its apparent statistical significance, could be explained by a combination of chance and unintentional selection bias. Indeed, followup studies by these authors and others did not replicate the finding (see, for example, Hone and McCullough, 2020).
Beyond all this, if time of the month does influence clothing choices, we would expect this effect to vary greatly across people and scenarios. There is no theoretical reason to expect a common direction, hence a pattern such as Figure 1c seems likely, to the extent there are effects at all. Such variation makes it even more difficult to estimate an average treatment effect, as well as implying that any realistic average would be close to zero.
We have used the day-of-cycle and clothing study as an example of the perils of naive interpretation of statistics. Thinking about varying effects helps us understand why estimating an average effect here is not well motivated: the problem is not just the lack of successful replication but
rather the conceptual framework under which the effect is characterized by a single number or even a single direction.
#### Anticipating the decline effect: Treatments that are less effective in real life
When designing a medical trial, the first goal is to maximize statistical power. We say this not cynically but out of a realistic understanding that success--in the form of statistical significance at the conventional level--can be necessary for approval of a drug or procedure, so if you believe your idea is a good one, you want to design your experiment to have a high chance of demonstrating that it works.
Methods of increasing statistical power in an experiment include: (1) increasing the sample size, (2) improving the accuracy of measurements, (3) including additional pre-treatment predictors, (4) performing within-person comparisons, and (5) increasing the magnitude of the average treatment effect. Assuming the first four of these steps have been done to the extent possible, one way to achieve the fifth step is to restrict the participants of the study to those for whom the expected effect is as large as possible.
There is nothing wrong with performing this sort of restriction when designing a study--indeed, it makes a lot of sense in any experiment to focus on scenarios where the signal is highest--and the result should be a higher average treatment effect among participants in the experiment. When generalizing to a larger population, however, some modeling is necessary conditional on any information used in patient selection. Thinking about variation in treatment effects makes this clear: the average effect is not a general parameter; it depends on who is being averaged over.
## 4 causalQuartet: An R package for generating causal quartets
To facilitate generating causal quartets, we created an R package called causalQuartet.1 The package takes as input an average treatment effect and a set of observations \(x\) (for latent quartets as in Figures 1 and 2), as well as a set of hypothetical control condition values (for observable quartets as in Figure 3). The user has the option of specifying additional parameters to control the presentation of the quartets.
Footnote 1: [https://github.com/jhullman/causalQuartet](https://github.com/jhullman/causalQuartet)
We envision researchers and consumers of research looking at these quartets of hypothetical treatment effects both before and after a study has been run.
For example, in light of well-known problems with overestimated effects when null hypothesis significance testing is applied to low powered studies (e.g., Button et al., 2013), journalists and other consumers of research might benefit from generating quartets to help explore the implications of a claimed effect size. Researchers who wish to interrogate an effect that they have found or that has been reported by their peers, such as when reviewing a paper, might generate a few quartets to support an argument about why an average treatment effect is likely to be an overestimate.
Before running a study, a researcher can use the quartets in deciding what size effect makes sense to target in sample size calculations. The quartets may also be useful in modeling, where they could help promote thinking about how consistent different patterns would be with prior knowledge. For example, a quartet like Figure 2 could be useful before or after one estimates interactions in a model, to stimulate reflection on the linear assumption. By aiming to prompt people to reflect on the sorts of informal expectations they bring to data analysis, the package is similar to prior work by Kim, Reinecke, and Hullman (2017) and Hullman et al. (2018), finding that asking users of graphical displays to make predictions about effects before seeing observed
data can improve their recall of the data later and their ability to make accurate predictions about new settings. For within-person or pre-post treatment designs, a researcher may even want to compare plots of observed subject-specific differences between a treatment and control to a plot like Figure 3. However, this should be done with acknowledgment that the observables in Figure 3 are hypothetical data that are not subject to factors such as order effects that are generally unavoidable in within-person designs.
Another scenario is one we see far too rarely in intervention-oriented empirical research: reflecting on the utility of putting an intervention into practice given an estimated effect. For example, researchers in disciplines ranging from psychology to medicine to economics to computer science often end their interpretation of estimated effects at the average treatment effect. With the help of causal quartets, researchers can instead use the estimate as a jumping off point for discussing the relative utility to be gained from implementing the new intervention under different assumptions about heterogeneity and varying stakes.
## 5 Discussion
As has been discussed in the judgment and decision making literature, quantities are generally understood comparatively. Hofman et al. (2020) and Kim et al. (2022) discuss comparisons of effect sizes to inferential or predictive uncertainty. In the present paper we compared the average causal effect to its variation.
### Different sources of variation in causal effects
Figures 1 and 2 present this potential variation in an abstract way; in particular applications these can represent variation across experimental units, across situations, and over time, and Figure 3 can be used to imagine data consistent with such types of variation. Each type of variation can have applied importance:
* Variation among people is relevant to policy (for example, personalized medicine) and understanding (for example in psychology, as discussed in Gelman, 2014).
* Variation across situations is relevant when deciding what "flavor" of treatment to do, for example with dosing in pharmacology or treatment levels in traditional agricultural experiments.
* Variation over time is crucial in settings such as A/B testing where an innovation that has been tested on past data is intended to be applied in the future in an evolving business environment.
Variation in effects is itself important, even setting aside inferential and predictive uncertainty in outcomes, that is, even if the true causal effects are known. That is the point of Figures 1 and 2 and the connection to the quartet of Anscombe (1973): Just as a single number of correlation can represent many sorts of bivariate relationships, so can a single number of average causal effect represent many sorts of causal patterns, even within the simplest setting of a single treatment, a single outcome, and no intermediate variables.
### Why the causal framework?
Nothing in this paper so far requires a causal connection. Instead of talking about heterogeneous treatment effects, we could just as well have referred to variation more generally. Why, then, are we putting this in a causal framework? Why "causal quartets" rather than "heterogeneity quartets"?
Most directly, we have seen the problem of unrecognized heterogeneity come up all the time in causal contexts, as in the examples in Section 3, and not so much elsewhere. We think a key reason is that the individual treatment effect is latent. So it's not possible to make the "quartet" plots with raw data. Instead, it's easy for researchers to simply assume the causal effect is constant, or to not think at all about heterogeneity of causal effects, in a way that's harder to do with observable outcomes. It is the very impossibility of directly drawing the quartets that makes them valuable as conceptual tools.
### The replication crisis
The ideas of this paper have several points of connection to the replication crisis in science:
* Most immediately, in a world of varying effects, there is no particular interest in testing a null hypothesis of exactly zero effect, and we should be able to move away from the idea that a "statistically significant" finding represents something that should replicate; see, e.g., McShane and Gelman (2017).
* As illustrated in some of the examples in Section 3, when we think about how an effect can vary, we often lower our expectations of its average effect, which in turn can make us aware of problems of low power. For example, if a study is designed under the naive expectation of an effect size of 0.5, but then on reflection we think that an average effect of 0.1 is more plausible, then the study would require 25 times the sample size (or measurements that are 5 times as accurate) in order to maintain the desired power. (A short calculation after this list makes the scaling explicit.)
* Moving away from the framing of "the" treatment effect helps us think about variation. Instead of classifying a new study as an exact replication (with the implication that the effect should be the same as in the original study) or a conceptual replication (with the hope that the effect should have the same sign), we can think of the first study and the replication as representing two different collections of participants and situations.
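The back-of-the-envelope scaling referenced in the second bullet follows from a standard normal-approximation argument: with outcome standard deviation \(\sigma\), total sample size \(n\), and assumed effect \(\delta\), power depends on the effect only through

\[\frac{\delta}{\mathrm{se}(\hat{\delta})}\propto\frac{\delta\sqrt{n}}{\sigma},\]

so holding power fixed while the assumed effect shrinks from \(0.5\) to \(0.1\) (a factor of 5) requires either \(n\to 25n\) or measurements with \(\sigma\to\sigma/5\).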
As we have argued elsewhere (Gelman, 2015), "once we accept that treatment effects vary, we move away from the goal of establishing a general scientific truth from a small experiment, and we move toward modeling variation (what Rubin, 1989, calls response surfaces in meta-analysis), situation-dependent traits (as discussed in psychology by Mischel, 1968), and dynamic relations (Emirbayer, 1997). We move away from is-it-there-or-is-it-not-there to a more helpful, contextually informed perspective."
For example, consider a hypothetical experiment yielding an estimated treatment effect of 0.003 with standard error 0.001, in a setting in which an effect size of 0.1 would be large. One might first want to dismiss the result as "statistically significant but not practically significant"--but there are various scenarios under which even a small effect would be notable if its sign is well identified. In an A/B testing setting in a large company, even an effect of 0.003 could represent many dollars, and in social science we might be interested in the direction of an effect (for example, knowing whether people under stress performed better or worse on a certain task) more than its magnitude. In such an example, our concern would be that, even if the effect is accurately estimated at 0.003 for this particular experiment, it could easily differ for a new group of people in a different environment. Perhaps the effect would be \(-0.004\) tomorrow, \(+0.001\) the next day, and \(-0.002\) the day after that. The relevant comparison is not to the standard error--although that does give us a baseline level of uncertainty--but to changes among people, across scenarios, and over time. Some of this can be learned from data, other aspects of this variation need to be assumed--but there is generally no good reason to assume that the variation in the treatment effect is zero.
A slightly different argument is that in some applications we really only care about the existence and sign of an effect, not its magnitude: knowing that an intervention works, even a small amount, could give insight and be relevant for future developments. But the same problem arises here as before: there is not necessarily any good reason to believe that a small positive effect in one study will apply elsewhere. It is not clear how to interpret an average treatment effect, even in a clean randomized experiment, without considering how the effect could vary across people and scenarios and over time.
### Recommendations for design and analysis
Looking forward, how should this affect applied research?
To start, with smaller average effect sizes than previously imagined, better designs are needed: more accurate measurements, better pre-treatment predictors, larger sample size, and within-unit comparisons.
When moving to analysis, interactions are important but hard to estimate with precision. So when we do include interactions in our model, we should estimate them using regularization and not demand that they attain statistical significance or any other threshold representing near-certainty.
Conversely, when we fit simple models without interactions, we should not expect the local average treatment effects being estimated to immediately generalize. Instead, when generalizing we should allow for both predictable and unpredictable variation in effects, even if in doing so we need to hypothesize scales of variation without direct evidence from the data at hand.
When generalizing beyond the observed sample, it is important to account for changes, which can be done by fitting a model accounting for key pre-treatment variables and then poststratifying to estimate the average treatment effect in the new setting (Kennedy and Gelman, 2021).
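As a minimal illustration of the poststratification step (hypothetical strata, effect estimates, and population shares; not the procedure or numbers of Kennedy and Gelman, 2021):

```python
import numpy as np

# Stratum-specific effect estimates from the study, and each stratum's share
# of the new target population to which we want to generalize (all made up).
effect_by_stratum = np.array([0.05, 0.12, 0.20])   # e.g. young / middle / old
share_in_study    = np.array([0.20, 0.30, 0.50])
share_in_target   = np.array([0.50, 0.30, 0.20])

ate_study  = float(np.sum(effect_by_stratum * share_in_study))   # 0.146
ate_target = float(np.sum(effect_by_stratum * share_in_target))  # 0.101

print(ate_study, ate_target)   # the same treatment can look quite different
```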
It is said that in the modern big-data world we should embrace variation and accept uncertainty. These two steps go together: modeling of variation is essential for making sense of a world of non-constant treatment effects, but this variation can be difficult to estimate precisely and is sometimes not even identifiable from data, hence the need to accept uncertainty. Just as the quartet of Anscombe (1973) is a reminder of the limits of correlation that is helpful even when our only readily available analytical tool is linear regression, so we hope the quartets in the present paper can help guide us when thinking about generalizing from local causal identification to future prediction and decision making.
|
2307.10669 | Unveiling the intrinsic dynamics of biological and artificial neural
networks: from criticality to optimal representations | Deciphering the underpinnings of the dynamical processes leading to
information transmission, processing, and storing in the brain is a crucial
challenge in neuroscience. An inspiring but speculative theoretical idea is
that such dynamics should operate at the brink of a phase transition, i.e., at
the edge between different collective phases, to entail a rich dynamical
repertoire and optimize functional capabilities. In recent years, research
guided by the advent of high-throughput data and new theoretical developments
has contributed to making a quantitative validation of such a hypothesis. Here
we review recent advances in this field, stressing our contributions. In
particular, we use data from thousands of individually recorded neurons in the
mouse brain and tools such as a phenomenological renormalization group
analysis, theory of disordered systems, and random matrix theory. These
combined approaches provide novel evidence of quasi-universal scaling and
near-critical behavior emerging in different brain regions. Moreover, we design
artificial neural networks under the reservoir-computing paradigm and show that
their internal dynamical states become near critical when we tune the networks
for optimal performance. These results not only open new perspectives for
understanding the ultimate principles guiding brain function but also towards
the development of brain-inspired, neuromorphic computation. | Guillermo B. Morales, Serena Di Santo, Miguel A. Muñoz | 2023-07-20T07:48:55Z | http://arxiv.org/abs/2307.10669v1 | # Unveiling the intrinsic dynamics of biological and artificial neural networks: from criticality to optimal representations
###### Abstract
Deciphering the underpinnings of the dynamical processes leading to information transmission, processing, and storing in the brain is a crucial challenge in neuroscience. An inspiring but speculative theoretical idea is that such dynamics should operate at the brink of a phase transition, i.e., at the edge between different collective phases, to entail a rich dynamical repertoire and optimize functional capabilities. In recent years, research guided by the advent of high-throughput data and new theoretical developments has contributed to making a _quantitative_ validation of such a hypothesis. Here we review recent advances in this field, stressing our contributions. In particular, we use data from thousands of individually recorded neurons in the mouse brain and tools such as a phenomenological renormalization group analysis, theory of disordered systems, and random matrix theory. These combined approaches provide novel evidence of quasi-universal scaling and near-critical behavior emerging in different brain regions. Moreover, we design artificial neural networks under the reservoir-computing paradigm and show that their internal dynamical states become near critical when we tune the networks for optimal performance. These results not only open new perspectives for understanding the ultimate principles guiding brain function but also towards the development of brain-inspired, neuromorphic computation. |
2304.09668 | On lines of constant polarisation in structured light beams | We show that Skyrmion field lines, constructed from the local Stokes
parameters, trace out lines of constant optical polarisation. | Stephen M. Barnett, Fiona C. Speirits, Joerg B. Goette | 2023-04-19T13:53:55Z | http://arxiv.org/abs/2304.09668v2 | # On lines of constant polarisation in structured light beams
###### Abstract
We show that Skyrmion field lines, constructed from the local Stokes parameters, trace out lines of constant optical polarisation.
Structured light beams are characterised by an engineered spatial variation of amplitude, phase and polarisation [1; 2; 3; 4]. Important examples of these include beams carrying orbital angular momentum [5; 6; 7; 8; 9], helicity lattices [10; 11; 12; 13; 14; 15], and the vector vortex beams [16; 17; 18; 19]. Some of these beams, in particular those with spatially varying polarisation, have been shown to exhibit Skyrmionic structure [20; 21]. Typically, these have a polarisation pattern in the transverse plane that, at its centre, has one polarisation but, at the outer reaches of the plane, has the orthogonal polarisation. In general, all possible polarisations appear at some point in this transverse plane. There exist numerous variations on this theme [22; 23; 24; 25; 26]. What has not yet been identified, however, is the physical significance of the Skyrmion field itself: we rectify that in this letter.
We present an unexpected property of Skyrmion field lines that has application whether or not a structured light beam has Skyrmionic form. Put simply, it is that Skyrmion field lines in any paraxial beam trace out contours of constant polarisation. Moreover, all such lines of constant polarisation are Skyrmion field lines. Several important properties of structured light beams then follow from the mathematical properties of the Skyrmion field. The central theme of our paper is the application of these ideas to paraxial light beams, but we conclude with a brief discussion of these ideas in other fields of physics, including electron [27; 28; 29] and neutron [30] optics and also gravitational waves [31].
Skyrmion field lines for paraxial light beams are defined in terms of the normalised Stokes parameters, \(S_{1}\), \(S_{2}\) and \(S_{3}\)[32]. The \(i^{th}\) component of the Skyrmion field is
\[\Sigma_{i}=\frac{1}{2}\varepsilon_{ijk}\varepsilon_{pqr}S_{p}\frac{\partial S_ {q}}{\partial x_{j}}\frac{\partial S_{r}}{\partial x_{k}}\,, \tag{1}\]
where \(\varepsilon_{ijk}\) is the alternating or Levi-Civita symbol and we employ the summation convention in which a summation is implied over repeated indices. The specific form of \(\Sigma_{i}\) is crucial to an appreciation of the link with lines of constant polarisation. For this reason it is worth writing explicitly one of the Cartesian components of \(\mathbf{\Sigma}\)
\[\Sigma_{z} =\frac{1}{2}\varepsilon_{pqr}S_{p}\left(\frac{\partial S_{q}}{ \partial x}\frac{\partial S_{r}}{\partial y}-\frac{\partial S_{r}}{\partial x }\frac{\partial S_{q}}{\partial y}\right)\] \[=S_{1}\left(\frac{\partial S_{2}}{\partial x}\frac{\partial S_{3 }}{\partial y}-\frac{\partial S_{3}}{\partial x}\frac{\partial S_{2}}{ \partial y}\right)\] \[\quad+S_{2}\left(\frac{\partial S_{3}}{\partial x}\frac{\partial S _{1}}{\partial y}-\frac{\partial S_{1}}{\partial x}\frac{\partial S_{3}}{ \partial y}\right)\] \[\quad+S_{3}\left(\frac{\partial S_{1}}{\partial x}\frac{\partial S _{2}}{\partial y}-\frac{\partial S_{2}}{\partial x}\frac{\partial S_{1}}{ \partial y}\right)\,. \tag{2}\]
Note that this \(z\)-component depends on the variation of the Stokes parameters, and therefore of the polarisations, only in the \(x\)- and \(y\)-directions. Each term, moreover, depends on all three Stokes parameters.
The Skyrmion number associated with our structured beam is readily obtained by integration over the plane transverse to the direction of propagation. If we take this direction to define our \(z\)-axis, then the Skyrmion number is
\[n=\frac{1}{4\pi}\int\Sigma_{z}\,dxdy\,, \tag{3}\]
where the integral runs over the whole transverse plane. This value is typically an integer, although structures with non-integer Skyrmion number can be constructed [20]. Our present concerns do not involve Skyrmions or the Skyrmion number explicitly, but rather focus on the Skyrmion field. This exists wherever there is a spatially varying polarisation apart from a few special cases, such as where there is a polarisation variation only in one direction. We note that the Skyrmion field is a transverse field in that
\[\mathbf{\nabla}\cdot\mathbf{\Sigma}=0\,, \tag{4}\]
and therefore the integral over any closed surface is zero \(\oint\mathbf{\Sigma}\cdot d\,\mathbf{\mathrm{S}}=0\). The only exception to this condition will occur if there are lines along which the polarisation is undefined.
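As a purely illustrative numerical check of Eqs. (1)-(3), the following Python sketch evaluates \(\Sigma_{z}\) by finite differences for a hypothetical Néel-type Stokes texture (chosen only because its Skyrmion number is known to have magnitude one; it is not a texture from this work) and integrates it over the transverse plane:

```python
import numpy as np

# Hypothetical Neel-type Stokes texture on a transverse grid: S points "down"
# at the beam centre and "up" far away, covering the Poincare sphere once.
N, L = 400, 6.0
x = np.linspace(-L, L, N)
X, Y = np.meshgrid(x, x, indexing="ij")
r, phi = np.hypot(X, Y), np.arctan2(Y, X)
theta = 2.0 * np.arctan(np.exp((2.0 - r) / 0.5))   # ~pi at r=0, ~0 for large r

S = np.stack([np.sin(theta) * np.cos(phi),          # S1
              np.sin(theta) * np.sin(phi),          # S2
              np.cos(theta)])                        # S3 (normalised Stokes field)

dx = x[1] - x[0]
dSdx = np.gradient(S, dx, axis=1)
dSdy = np.gradient(S, dx, axis=2)

# Sigma_z = S . (dS/dx x dS/dy), Eq. (2); then integrate as in Eq. (3)
sigma_z = np.einsum("iab,iab->ab", S, np.cross(dSdx, dSdy, axis=0))
n_skyrmion = sigma_z.sum() * dx * dx / (4.0 * np.pi)
print(n_skyrmion)   # magnitude ~1; the sign depends on the orientation convention
```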
Let us turn to the properties of lines of constant polarisation. As is well known, structured light beams are threaded by lines of constant polarisation. The most studied examples are the C-lines, along which the polarisation is purely left- or right-handed circularly polarised [1; 33]. There is nothing in this context, however, that
is specific to circular polarisation, and we can trace such contours of constant polarisation for any chosen polarisation. Consider a point \(p\) in a structured paraxial light beam, as depicted in Fig. 1. From this point there extends a line (in two directions) along which the polarisation is the same as at \(p\). Note that the amplitude and phase will not, in general, remain the same along this line. Let us introduce a local right-handed Cartesian coordinate system \((u,v,w)\) at \(p\), in which the line of constant polarisation extends in the direction \(\mathbf{u}\). As the polarisation in the direction \(\mathbf{u}\) (and \(-\mathbf{u}\)) is unchanged, it follows that the direction of the Stokes vector \(\mathbf{S}\) is also unchanged:
\[(\mathbf{u}\cdot\mathbf{\nabla})\mathbf{S}=\frac{\partial}{\partial u}\mathbf{S}= 0\,, \tag{5}\]
where \(\mathbf{u}\) is a unit vector in the direction of the coordinate \(u\). We can write the components of the Skyrmion field at \(p\) in the \(u,v,w\) basis and find
\[\Sigma_{u} =\frac{1}{2}\varepsilon_{pqr}S_{p}\left(\frac{\partial S_{q}}{ \partial v}\frac{\partial S_{r}}{\partial w}-\frac{\partial S_{r}}{\partial v }\frac{\partial S_{q}}{\partial w}\right)\] \[\Sigma_{v} =\frac{1}{2}\varepsilon_{pqr}S_{p}\left(\frac{\partial S_{q}}{ \partial w}\frac{\partial S_{r}}{\partial u}-\frac{\partial S_{r}}{\partial w }\frac{\partial S_{q}}{\partial u}\right)\] \[\Sigma_{w} =\frac{1}{2}\varepsilon_{pqr}S_{p}\left(\frac{\partial S_{q}}{ \partial u}\frac{\partial S_{r}}{\partial v}-\frac{\partial S_{r}}{\partial u }\frac{\partial S_{q}}{\partial v}\right)\,. \tag{6}\]
The derivatives of the Stokes parameters with respect to \(u\) are zero and it follows that \(\Sigma_{v}=0=\Sigma_{w}\), and therefore that the Skyrmion field line points in the direction of constant polarisation. This is our principal result.
It is straightforward to confirm that the Skyrmion field is independent of the basis used to denote the Stokes vectors and hence the identification of the Skyrmion field lines with lines of constant polarisation holds for every possible polarisation. Such a global transformation changes the polarisation at every point in the field but does not alter the Skyrmion field, which is associated with lines of constant polarisation, but not with the specific polarisation along these lines. Identifying a Skyrmion field line does not determine the polarisation along the field line, merely the line itself. More formally, the Skyrmion field is invariant under any unitary transformation of the Poincaré sphere and so is not dependent on the basis used to express \(S_{1}\), \(S_{2}\) and \(S_{3}\). In this way, the Skyrmion field extends the characterisation of lines of constant circular or linear polarisation [34] to every polarisation.
The mathematical properties of the Skyrmion field allow us to make general statements about lines of constant polarisation. The simplest and most important among these follows from the transverse nature of the Skyrmion field, \(\mathbf{\nabla}\cdot\mathbf{\Sigma}=0\). Like other transverse fields, such as the magnetic induction \(\mathbf{B}\) in electromagnetism, Skyrmion field lines cannot start or end (no monopoles), nor can they branch or coalesce. The identification of Skyrmion field lines with lines of constant polarisation means that the same properties must hold for lines of constant polarisation. The only exception to this rule occurs along lines at which the polarisation is undefined, where several lines of different polarisation can meet. At such lines, however, the transverse condition on \(\mathbf{\Sigma}\) will fail.
One remaining subtlety needs to be addressed. This is the fact that lines of constant polarisation do not have a preferred sense of direction: such a line is independent of whichever direction we choose to move along it. The Skyrmion field line, however, has a specific direction; if we change the sign of \(\mathbf{\Sigma}\), then it reverses its direction along the line of constant polarisation. In this sense, at least, there seems to be a significant difference between lines of constant polarisation and Skyrmion field lines and we should explain the origin of this difference.
We have seen that the Skyrmion field lines do not determine the local polarisation, merely the local direction along which the polarisation does not vary. For the structured light beam, however, there is a further class of symmetries that leave the pattern of lines of constant polarisation unchanged. This is to apply the operation of complex conjugation to the polarisations. To be specific, let \(\mathbf{e}_{\rm H}\) and \(\mathbf{e}_{\rm V}\) be the real unit vectors corresponding to horizontal and vertical polarisation, so that left- and right-circular polarisations are \(\mathbf{e}_{\rm L}=(\mathbf{e}_{\rm H}+i\mathbf{e}_{\rm V})/\sqrt{2}\) and \(\mathbf{e}_{\rm R}=(\mathbf{e}_{\rm H}-i\mathbf{e}_{\rm V})/\sqrt{2}\). If we apply the complex conjugation operation then the left- and right-circular polarisations switch, but horizontal and vertical are unchanged, as are all the other possible linear polarisations, and we arrive at an alternative (but physically allowed) polarisation pattern with the same lines of constant polarisation. This transformation is antiunitary in nature [35]. Such transformations are familiar from the study of time-reversal and CP symmetries in particle physics [36]. The complex conjugate transformation coupled with rotations provides a second set of symmetries under which the polarisation changes but the lines of constant polarisation do not. The Skyrmion field lines, however, switch direction under the antiunitary transformation as their value is based on a right-handed coordinate system for
Figure 1: Plot of a line of constant elliptical polarization and the local coordinate system \(\mathbf{u},\mathbf{v},\mathbf{w}\) at \(p\).
the Poincaré sphere. The complex conjugation operation applied to polarisation, however, changes a right-handed arrangement of the Stokes parameters into a left-handed one and, in doing so, flips the sign of the Skyrmion field.
The connection between the Skyrmion field and lines of constant polarisation has been established here for paraxial structured light beams, but we may expect it to have wider applications. For electrons and neutrons, a similar association will hold for the Skyrmion field lines and lines along which the particle spin does not change. For non-paraxial optical fields there exists a variety of features that can be associated with Skyrmions and, by extension, with a Skyrmion field. It will be interesting to see how these are related to the spatial arrangement of spin-related properties of the electromagnetic field. Finally, gravitational waves have two orthogonal polarisations and so we would expect Skyrmion field lines to be associated, also, with spatial variations of the polarisation of these fields.
In summary, we have shown that lines of constant polarisation in any structured paraxial light beam are identified with Skyrmion field lines. It follows that there is a more intimate relationship between structured light beams and Skyrmion fields than simply whether or not a particular beam has an associated Skyrmion number or Skyrmionic structures.
This work was supported by a Royal Society Research Professorship, grant number RP150122.
|
2301.05854 | CoRuVSi: A potential candidate for spin semimetal with promising
spintronic and thermoelectric properties | Based on our experimental and theoretical studies, we report the
identification of the quaternary Heusler alloy, CoRuVSi as a new member of the
recently discovered spin semimetals class. Spin polarised semimetals possess a
unique band structure in which one of the spin bands shows semimetallic nature,
while the other shows semiconducting/insulating nature. Our findings show that
CoRuVSi possesses interesting spintronic and thermoelectric properties.
Magnetization data reveal a weak ferri-/antiferro magnetic ordering at low
temperatures, with only a very small moment $\sim$ 0.13 $\mu_B$/f.u.,
attributed to the disorder. Transport results provide strong evidence of
semimetallicity dominated by two-band conduction, while magnetoresistance data
show a non-saturating, linear, positive, magnetoresistance. Spin polarization
measurements using point-contact Andreev reflection spectra reveal a reasonably
high spin polarization of $\sim$ 50\%, which matches fairly well with the
simulated result. Furthermore, CoRuVSi shows a high thermopower value of $0.7$
$m Watt/ m-K^{2}$ at room temperature with the dominant contribution from the
semimetallic bands, rendering it as a promising thermoelectric material as
well. Our ab-initio simulation not only confirms a unique semimetallic feature,
but also reveals that the band structure hosts a linear band crossing at $\sim$
-0.4 eV below the Fermi level incorporated by a band-inversion. In addition,
the observed topological non-trivial features of the band structure is
corroborated with the simulated Berry curvature, intrinsic anomalous Hall
conductivity and the Fermi surface. The coexistence of many interesting
properties relevant for spintronic, topological and thermoelectric applications
in a single material is extremely rare and hence this study could promote a
similar strategy to identify other potential materials belonging to same class. | Jadupati Nag, R. Venkatesh, Ajay Jha, Plamen Stamenov, P. D. Babu, Aftab Alam, K. G. Suresh | 2023-01-14T08:15:11Z | http://arxiv.org/abs/2301.05854v1 | # CoRuVSi: A potential candidate for spin semimetal with promising spintronic and thermoelectric properties
###### Abstract
Based on our experimental and theoretical studies, we report the identification of the quaternary Heusler alloy CoRuVSi as a new member of the recently discovered spin semimetal class. Spin polarised semimetals possess a unique band structure in which one of the spin bands shows semimetallic nature, while the other shows semiconducting/insulating nature. Our findings show that CoRuVSi possesses interesting spintronic and thermoelectric properties. It crystallizes in a perfect cubic structure with a partial L2\({}_{1}\)-type disorder at room temperature. Magnetization data reveal a weak ferri-/antiferromagnetic ordering at low temperatures, with only a very small moment \(\sim\) 0.13 \(\mu_{B}\)/f.u., attributed to the disorder. Transport results provide strong evidence of semimetallicity dominated by two-band conduction, while magnetoresistance data show a non-saturating, linear, positive magnetoresistance. Spin polarization measurements using point-contact Andreev reflection spectra reveal a reasonably high spin polarization of \(\sim\) 50%, which matches fairly well with the simulated result. Furthermore, CoRuVSi shows a high thermopower value of 0.7 \(mWatt/m-K^{2}\) at room temperature with the dominant contribution from the semimetallic bands, rendering it a promising thermoelectric material as well. Our ab-initio simulation not only confirms a unique semimetallic feature, but also reveals that the band structure hosts a linear band crossing at \(\sim\) -0.4 eV below the Fermi level incorporated by a band-inversion. In addition, the observed non-trivial topological features of the band structure are corroborated by the simulated Berry curvature, intrinsic anomalous Hall conductivity and the Fermi surface. The partial L2\({}_{1}\) disorder is simulated using a special quasi-random structure, which plays a crucial role in correctly explaining the magnetism and anomalous Hall effect. The simulated anomalous Hall conductivity for the ordered and L2\({}_{1}\)-disordered phases of CoRuVSi is found to be 102 and 52 S/cm, respectively; the latter agrees fairly well with the experimentally measured value (45 S/cm). The coexistence of many interesting properties relevant for spintronic, topological and thermoelectric applications in a single material is extremely rare and hence this study could promote a similar strategy to identify other potential materials belonging to the same class.
## I Introduction
Heusler alloys are known to exhibit exotic phenomena as well as novel potential applications, which have stimulated a tremendous interest in physics and materials technology. Many systems from this family are reported to be promising spintronic materials such as half-metals (HM) [1] spin gapless semiconductors (SGS),[2] bipolar magnetic semiconductors (BMS),[3] spin-valve [4] etc. Most of the reported Heusler materials are superior to other materials from the application point of view because of their stable structure and high spin-polarization. The coexistence of different and interesting properties in these systems gives rise to new avenues for multifunctional materials suitable for technological applications such as spintronics. Recently, tuning the electronic structure by defects/impurities has become a major focus by various researchers to achieve the desired properties suitable for applications.[5] As Heusler alloys are prone to anti-site disorder, complex magnetic/electronic structures can be realized in these materials, with a wide tuning capability. One of the main motives of this work is to understand the role of anti-site disorder in band engineering and hence in the tuning of the magneto-electronic properties.
In this article, we report the addition of a new member to the recently identified magnetic quantum material class namely spin semi-metals (SSM), with several complementary properties. This is a combined theoretical and experimental study where SSM nature is confirmed in a new quaternary Heusler alloy (QHA) CoRuVSi. The objective of this work is two-fold: (1) better understanding the key features of this relatively new class both from physics and materials perspectives, (2) highlight the importance of this class of materials for potential spintronic and thermoelectric applications. In HMs, one of the spin bands shows metallic nature, while the other shows semiconducting/insulating behavior. SSM, on the other hand, is an unconventional class of spintronic materials in which one of the spin bands possesses semimetallic nature, while the other possesses a small gap near the Fermi level (E\({}_{F}\)). Thus, electronic states of such materials can be easily controlled by an external perturbation (magnetic field, temperature etc.) and hence are advantageous for spin-transport based applications. This advantage is missing
in the conventional spintronic systems such as HM and SGS. A schematic representation of the density of states (DoS) and overlap of conduction and valence bands for HM and SSM are shown in Fig. 1.
CoRuVSi is found to crystallize in the perfect cubic structure (space group \(F\bar{4}3m\)) with a partial L2\({}_{1}\)-type disorder. The magnetization data indicate a weak ferri-/antiferro-magnetic ordering at very low temperature, with a very small saturation magnetization \(\sim 0.13\)\(\mu_{B}\)/f.u. The magnetization data indicate quenching of moment, attributed to the atomic disorder, a prediction also supported by our ab-initio disorder calculations. Theoretical studies reveal a fully compensated ferrimagnetic nature for CoRuVSi. Transport results provide strong evidence of semimetallic behavior dominated by two-band conduction, while low-T magnetoresistance data indicate a non-saturating, linear, positive magnetoresistance (LPMR), which becomes quadratic in field at higher T. Close analysis of MR data hints toward the small-gap electronic structure near the E\({}_{F}\) as the origin of quantum LPMR, which indirectly hints toward the SSM nature present in this system. Point contact Andreev reflection (PCAR) measurements reveal a reasonably high spin polarization of \(\sim 50\%\). This matches fairly well with the theoretical calculations, again providing indirect evidence of the SSM feature in this system. CoRuVSi also shows a reasonably high thermopower value of 0.7 \(mWatt/m-K^{2}\) at room temperature and hence can be further explored for its potential as a promising thermoelectric material. Our ab-initio simulation confirms the spin semimetallic feature in this alloy with a high spin polarization. Overall, the present study introduces a new member, namely CoRuVSi, to this magnetic quantum material class, having the potential for multifunctional applications, and gives a comprehensive analysis of the interplay of the non-trivial electronic states with magnetism and anti-site disorder. Such a combined theoretical and experimental study gives a unique platform to explore new exotic states of quantum matter.
## II Experimental details
Polycrystalline samples of CoRuVSi were prepared using an arc melting system in a high purity Argon atmosphere using stoichiometric constituent elements having a purity of 99.99%. To accomplish perfect homogeneity, the samples were melted several times and a very small weight loss (\(<0.15\) %) was observed after the final melting. To study the crystal structure, at room temperature (RT) X-ray diffraction (XRD) patterns were taken using Cu-K\(\alpha\) radiation with the help of Panalytical X-pert diffractometer. For the crystal structure analysis, FullProf Suite software [6] was used. Magnetization measurements at various temperatures were carried out using a vibrating sample magnetometer (VSM) attached to a physical property measurement system (PPMS) (Quantum Design) for fields up to 70 kOe. Temperature and field-dependent resistivity along with the MR measurements were carried out employing a physical property measurement system (PPMS-DynaCool; Quantum Design) using the electrical transport option (ETO) in a traditional four-probe method, applying a 10 mA current at a 15 Hz frequency. Hall measurements were carried out using PPMS with the van der Pauw method by applying a 5 mA current at 21 Hz frequency. Specific heat (C\({}_{p}\)) measurements were done in a 14T/2 K PPMS. A small piece of the sample (18 mg) was used to measure C\({}_{p}\), down to 2 K in zero field and in 5T applied magnetic field using a relaxation calorimetry technique. Thermoelectric power (TEP) in zero magnetic fields was measured using the differential dc sandwich method in a homemade setup in the temperature range of 4-300 K. Point contact Andreev reflection (PCAR) measurements were performed in PPMS using a superconductive Nb tip. The landing of the tip on the sample is carefully controlled by a fully automated vertical Attocube piezo-stepper. Two additional horizontal Attocube piezo steppers are used to move the sample in horizontal directions, in order to probe the pristine area of the sample. The differential conductance spectra were fitted using the modified Blonder-Tinkham-Klapwijk (m-BTK) model, as detailed elsewhere.[7; 8]
## III Computational details
To study the electronic/magnetic structure of CoRuVSi, _abinitio_ calculations were performed using spin-resolved density functional theory (DFT) [9] implemented within Vienna ab initio simulation package (VASP) [10; 11; 12] with a projected augmented-wave (PAW) basis.[13] We used the electronic exchange-correlation potential due to Perdew, Burke, and Ernzerhof (PBE) [14] within the generalized gradient approximation (GGA) scheme. For the Brillouin zone integration within the tetrahedron method, a \(24\times 24\times 24\) k-mesh was used. A plane wave energy cut-off of 420 eV was used for all the calculations. All the structures were fully relaxed with total energies (forces) converged to values less than \(10^{-6}\)
Figure 1: Schematic representation of spin polarized (a) density of states (DoS) for a conventional half metal (HM) (b) bands (left and right panels) and DoS (middle) for spin semi-metal (SSM)
eV (0.01 eV/Å). The Wannier90 [15; 16; 17] simulation tool was used to compute the tight-binding Hamiltonian. A total of 62 bands were wannierized by taking projections on atomic sites as: Co (s, p, d), Ru (s, p, d), V (s, p, d), Si (s,p) etc. Further, Berry curvature, Fermi surface and anomalous Hall conductivity were calculated to investigate the semimetallic nature. The intrinsic anomalous Hall conductivity (\(\sigma_{int}^{AHE}\)) was estimated by integrating the Berry curvature (-\(\Omega_{z}(\textbf{k})\)) over the entire Brillouin zone considering a k-grid of \(40\times 40\times 40\) with an adaptive refinement k-mesh size of \(5\times 5\times 5\). To capture the effect of disorder in the L2\({}_{1}\) structure, a 64-atom special quasi-random structure (SQS)[18] was generated. SQS is a carefully generated ordered structure, which mimics the random correlations up to a certain neighboring distance in disordered compounds. To generate the SQSs, the Alloy Theoretic Automated Toolkit (ATAT)[19] was used. Our generated SQSs mimic the random pair correlation functions accurately up to third-nearest neighbors.
## IV Experimental Results
### Crystal Structure
CoRuVSi crystallizes in the LiMgPdSn prototype structure (space group \(F\bar{4}3m\)) with a measured lattice parameter of 5.80 Å as found from the Rietveld refinement. The crystal structure can be viewed as four interpenetrating fcc sub-lattices with Wyckoff positions \(4a(0,0,0)\), \(4b(0.5,0.5,0.5)\), \(4c(0.25,0.25,0.25)\), and \(4d(0.75,0.75,0.75)\). In general, for a QHA XX\({}^{\prime}\)YZ, there exist three possible energetically non-degenerate structural configurations[4] (keeping Z-atom at \(4a\)-site) as follows:
* (I) X at \(4d\), X\({}^{\prime}\) at \(4c\) and Y at \(4b\) site,
* (II) X at \(4b\), X\({}^{\prime}\) at \(4d\) and Y at \(4c\) site,
* (III) X at \(4d\), X\({}^{\prime}\) at \(4b\) and Y at \(4c\) site.
For a detailed structural analysis, we consider the structure factor for configuration-I, which can be expressed as,
\[F_{hkl}=4(f_{Z}+f_{Y}e^{\pi i(h+k+l)}+f_{X}e^{\frac{\pi i}{2}(h+k+l)}+f_{X^{ \prime}}e^{-\frac{\pi i}{2}(h+k+l)}). \tag{1}\]
where \((h,k,l)\) are the Miller indices. \(f_{X}\), \(f_{X^{\prime}}\), \(f_{Y}\), and \(f_{Z}\) are the atomic scattering factors. The structure factor for the superlattice reflections (111) and (200) can be written as:
\[F_{111} =4[(f_{Z}-f_{Y})-i(f_{X}-f_{X^{\prime}})]\] \[F_{200} =4[(f_{Z}+f_{Y})-(f_{X}+f_{X^{\prime}})]\]
Figure 2 shows the room temperature XRD pattern of CoRuVSi along with the Rietveld refinement for configuration-I with 50% disorder between tetrahedral site atoms i.e. Co/Ru (X/X\({}^{\prime}\)). This is the best fit we got after carrying out rigorous refinement considering all possible disorders in all the configurations. Clearly, the low intensity of the superlattice peak (111) indicates the possibility of disorder in the octahedral/tetrahedral sites. For L2\({}_{1}\)-type refinement, we have also considered 50% anti-site disorder between octahedral site atoms for configuration-I which did not fit well. The best fit with the lowest \(\chi^{2}\) (1.80) was found in configuration-I with the L2\({}_{1}\) order (also see inset of Fig. 2) in comparison with other refinements considering all possible other disorders like \(B2\) (\(\chi^{2}\)= 5.24), \(A2(\chi^{2}\) =10.5), DO\({}_{3}\) (\(\chi^{2}\) =8.1) and perfectly ordered Y-type (\(\chi^{2}\)=4.35). As such, we conclude that CoRuVSi crystallizes in the L2\({}_{1}\) structure. The crystal structure corresponding to the Y-type order and the best fit with the L2\({}_{1}\) order are shown in the right insets of Fig. 2.
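To make explicit why the partial Co/Ru disorder suppresses the (111) superlattice reflection, note that with a 50% Co/Ru exchange between the tetrahedral \(4c\) and \(4d\) sites both sites carry the average scattering factor \((f_{\mathrm{Co}}+f_{\mathrm{Ru}})/2\), so that

\[F_{111}^{\mathrm{L2_{1}}}=4(f_{Z}-f_{Y}),\qquad F_{200}^{\mathrm{L2_{1}}}=4[(f_{Z}+f_{Y})-(f_{\mathrm{Co}}+f_{\mathrm{Ru}})],\]

i.e. the \(i(f_{X}-f_{X^{\prime}})\) term drops out of \(F_{111}\) while \(F_{200}\) is unaffected, consistent with the weak (111) and regular (200) intensities seen in Fig. 2.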
### Magnetic properties
Figure 3(a) shows magnetization (M) vs. temperature (T) for CoRuVSi measured at H = 500 Oe. The field cooled warming (FCW) curve taken at H= 500 Oe shows a rapid increase in M below 25 K, which hints toward the possibility of magnetic ordering at very low T. Inset of Fig. 3(a) shows the Curie-Weiss (C-W) law fitting of the susceptibility data in high T range at H=500 Oe.
The magnetic moment (\(m\)) can be calculated considering the Slater-Pauling (S-P) rule using the total number of valence electrons (\(n_{v}\)) of the constituent elements.[20] The total moment (\(m\)) per formula unit can be expressed as:[21; 22]\(m=(n_{v}-24)\)\(\mu_{B}/f.u.\) For CoRuVSi, the
Figure 2: For CoRuVSi, room temperature powder XRD pattern including the Rietveld refined data for configuration-I with 50% disorder between tetrahedral site Co/Ru atoms. Left inset shows a zoomed in view near super-lattice peaks (111) and (200) with L2\({}_{1}\) structure. Right insets show primitive unit cell structures corresponding to the Y-type order (top) and L2\({}_{1}\)-type disorder (bottom).
S-P rule predicts \(m=\)2.0 \(\mu_{B}\)/f.u. in the fully ordered state, but interestingly, the M-H curve shows a very small saturation magnetization (0.13 \(\mu_{B}\)/f.u.) even at 3 K, which is a complete deviation from the S-P rule. The presence of L2\({}_{1}\) disorder can be a plausible reason for the quenching of moment in this system. To get an idea about the magnetic interactions present in this system, the inverse-susceptibility data (H=500 Oe) has been fitted (solid red line) above 150 K using the C-W law (\(\chi^{-1}=\frac{1}{\chi_{0}+C/(T-\theta_{P})}\)) and from the fitting, we obtained an effective moment \(m=\)0.2 \(\mu_{B}\)/f.u., \(\chi_{0}\)=0.40 \(emu/mole-\)Oe and Weiss temperature, \(\theta_{P}\)= \(-\)93 K, which indicates the presence of antiferromagnetic interactions in the system. The sharp upturn below 25 K may arise because the moments of ferri-/antiferromagnetic clusters can easily prevail over the paramagnetic contribution at low T. Additionally, the non-saturating behavior (up to a 70 kOe field) of the low-T M-H curve (Fig. 3(b)), along with the absence of hysteresis, indicates superparamagnetic-like behavior in this system, attributable to the L2\({}_{1}\) disorder, which gives rise to the moment quenching. Thus, magnetization data reveal the possibility of small magnetic clusters, formed by weakly interacting moments, with no spontaneous magnetization. This confirms the absence of coherent long-range ordering, mediated by the atomic disorder, giving rise to a complex magnetic nature in CoRuVSi.[23]
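For completeness, the valence-electron count behind this estimate is \(n_{v}=9\,(\mathrm{Co})+8\,(\mathrm{Ru})+5\,(\mathrm{V})+4\,(\mathrm{Si})=26\), so that the S-P rule gives \(m=(26-24)\)\(\mu_{B}\)/f.u. \(=2\)\(\mu_{B}\)/f.u., more than an order of magnitude above the measured saturation value.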
### Transport properties
#### iii.3.1 Pcar
The electronic spin polarization \(P\) at the Fermi level (\(E_{F}\)) is defined as:
\[P=\frac{n_{\uparrow}(E_{F})-n_{\downarrow}(E_{F})}{n_{\uparrow}(E_{F})+n_{ \downarrow}(E_{F})} \tag{2}\]
where \(n_{\uparrow}(E_{F})\) and \(n_{\downarrow}(E_{F})\) are the spin-projected density of states at \(E_{F}\) for spin-up and -down channels respectively. Figure 4 summarizes the spin polarization data as obtained by the PCAR measurement. It shows a maximum spin polarization of \(\sim\) 50% at the Fermi level, which is reasonably high for a potential spintronic material.[2] The reduction in spin polarization value as compared to the theoretical value corresponds to a narrowing of the spin gap in the density of states, which is possibly due to the presence of a small density of states attributed to the disorder in the real system.
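As a concrete reading of Eq. (2), the measured \(P\simeq 50\%\) corresponds to \(n_{\uparrow}(E_{F})\simeq 3\,n_{\downarrow}(E_{F})\), i.e. roughly three quarters of the states at the Fermi level reside in the majority-spin channel.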
#### iii.3.2 Resistivity
Figure 5(a) shows the T-dependence of resistivity (\(\rho_{xx}\)) at different applied fields. It reflects a semi-metallic behavior (also revealed by the electronic structure calculations shown later). To gain further understanding, we have fitted the zero-field resistivity data considering various scattering mechanisms in various T-ranges. A dip-like feature below 5 K in the zero-field data indicates the possibility of weak localization arising from disorder in this system. Resistivity follows a power law behaviour (\(\rho(T)=\rho_{0}+AT^{n}\)) in the T-range of 5 K\(<\)T\(<\)30 K, as shown in Fig. 5 (b). Above 30 K, resistivity data fit well with the two-carrier model which supports the semi-metallic behavior in this system (also supported by the carrier concentration from Hall data, shown later). For
Figure 3: For CoRuVSi, (a) M vs. T in field cooled warming (FCW) mode in \(H=500\) Oe. Inset shows \(T\)-variation of inverse susceptibility (\(1/\chi\)) along with Curie-Weiss fitting in high \(T\)-regime. (b) M-H curves at 3K and 300K.
Figure 4: For CoRuVSi, point-contact Andreev reflection (PCAR) spectra, along with fit, and extracted parameters. The extracted m-BTK-model parameters are provided inside the box. The measured spin polarization was found to be \(P\)= 48(5)%.
further investigation, we have fitted the conductivity (\(\sigma\)) data (see Fig. 5(c)) with a modified two-carrier model (Eq. 4),[24; 25] in the T-range \(30-310\) K. A two-carrier model for \(\sigma\) can be written as,
\[\sigma(T)=e(n_{e}\mu_{e}+n_{h}\mu_{h}) \tag{3}\]
where, \(n_{i}=n_{i0}\;e^{-\Delta E_{i}/k_{\rm B}T}(i=e,h)\) are the electron/hole carrier concentrations with mobilities \(\mu_{i}\) and pseudo-energy gaps \(\Delta E_{i}\). Eq.(3) can be further expressed as,
\[\sigma(T)=[A_{e}(T)\;e^{-\Delta E_{e}/k_{\rm B}T}+A_{h}(T)\;e^{-\Delta E_{h}/ k_{\rm B}T}]. \tag{4}\]
After fitting the conductivity data with the above equation, we obtained the pseudo-energy gaps for electrons and holes to be 0.11 meV and 15.9 meV, which are quite small and resemble those of a narrow band gap semiconductor. It appears that the atomic disorder plays a crucial role in significantly reducing the pseudo-gaps, especially for electrons which have an extremely small gap and are likely to become metallic with small perturbations (e.g. applied field, thermal fluctuation etc.).
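As a concrete illustration of how such a fit can be set up, a minimal Python sketch is shown below; the temperature and conductivity arrays are placeholders rather than the measured data, and the prefactors \(A_{e}(T)\), \(A_{h}(T)\) of Eq. (4) are treated as constants for simplicity.

```python
import numpy as np
from scipy.optimize import curve_fit

kB = 8.617333e-5  # Boltzmann constant in eV/K

def two_carrier_sigma(T, A_e, dE_e, A_h, dE_h):
    # Eq. (4) with constant prefactors: thermally activated electron
    # and hole channels with pseudo-gaps dE_e, dE_h (in eV).
    return A_e * np.exp(-dE_e / (kB * T)) + A_h * np.exp(-dE_h / (kB * T))

# Placeholders: replace with the measured T (K) and sigma (S/cm) between 30-310 K.
T_data = np.linspace(30.0, 310.0, 60)
sigma_data = two_carrier_sigma(T_data, 1.0e4, 0.11e-3, 5.0e3, 15.9e-3)

popt, _ = curve_fit(two_carrier_sigma, T_data, sigma_data,
                    p0=[1e4, 1e-4, 5e3, 1e-2])
A_e, dE_e, A_h, dE_h = popt
print(f"Delta E_e = {dE_e*1e3:.2f} meV, Delta E_h = {dE_h*1e3:.2f} meV")
```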
#### iii.1.3 Magnetoresistance
Figure 5(d) shows the field dependence of MR at different T, where MR is defined as MR(H)=\(\left[\rho(H)-\rho(0)\right]/\rho(0)\)\(\times\)100%. At 5 K, non-saturating linear positive magnetoresistance (LPMR) is observed, which is also confirmed by the linear fitting of MR vs. H (see Fig. 5 (e)). But with increasing T, field-dependent MR(H) becomes almost quadratic in nature and at 25K an unsaturated quadratic MR is observed. The origin of LPMR at the lowest T (5 K) is possibly due to the zero/small-gap electronic structure near the E\({}_{F}\).[26] MR magnitude decreases gradually with T and at 100 K it becomes almost zero. To characterize the type of carriers, we have further performed the Hall measurement (as described below).
#### iii.1.4 Hall Measurements
Figure 6(a) shows the field-dependence of Hall resistivity \(\rho_{xy}\) at various T. Generally, Hall resistivity for a magnetic material has two contributions expressed as,
\[\rho_{xy}(T)=\rho_{xy}^{O}+\rho_{xy}^{A}=R_{0}H+R_{A}M, \tag{5}\]
where \(\rho_{xy}^{O}\) and \(\rho_{xy}^{A}\) are the ordinary and anomalous contributions to \(\rho_{xy}\), and \(R_{0}\) and \(R_{A}\) denote the ordinary and anomalous Hall coefficients respectively. At 5 K and 10 K, both the contributions are observed, but at 50 K, the amplitude of \(\rho_{xy}\) drops abruptly to zero (see Figs. 6(b-c)), as the anomalous contribution dies out with increasing T and only the ordinary contribution remains. We have extracted \(\rho_{xy}^{O}\) and \(\rho_{xy}^{A}\) contributions at 5 K and 10 K, as shown in Fig. 6(b-c). To scale the anomalous Hall effect (AHE) contribution, we have fitted a linear curve to \(\rho_{xy}\) data at large H, and extracted the \(\rho_{xy}^{A}\) contribution. From the AHE data, a small AHE contribution (\(\sim\) 0.15\(\mu\Omega\)-cm) is observed. Figure 6(d) shows the field-dependence of anomalous Hall conductivity (\(|\sigma_{xy}^{A}|\sim\frac{\rho_{xy}^{A}}{\rho_{xx}^{2}}\)) at 5 K and 10 K. \(\sigma_{xy}^{A}\) reaches a maximum \(\sigma_{xy0}^{A}\) =45 S cm\({}^{-1}\) at 5 K, confirming a non-saturating and non-linear behaviour. The measured value of the carrier concentration (\(n\)) at 5 K is 7.4\(\times\)10\({}^{18}\) cm\({}^{-3}\), which falls well within the range of carrier densities for semimetals/semiconductors, again indicating the semi-metallic nature of CoRuVSi.[27] The positive slope of \(R_{0}\) reveals holes as majority charge carriers. The origin of this behavior may be attributed to the change in electronic structure brought about by the atomic disorder.
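A minimal sketch of the high-field decomposition of Eq. (5) is given below; the field and Hall-resistivity arrays are hypothetical, and a single dominant carrier type is assumed when converting \(R_{0}\) into a carrier density.

```python
import numpy as np

e_charge = 1.602176634e-19  # elementary charge in C

# Placeholders: replace with the measured field (T) and Hall resistivity (ohm*cm).
H = np.linspace(0.0, 9.0, 50)
rho_xy = 2.0e-7 * H + 1.5e-7 * np.tanh(2.0 * H)   # synthetic example only

# Eq. (5): at large H the anomalous part saturates, so a linear fit there
# gives the ordinary coefficient R_0 (slope) and rho_xy^A (intercept).
mask = H > 5.0
R0, rho_xy_A = np.polyfit(H[mask], rho_xy[mask], 1)

# Single-band estimate n = 1/(e*R_0); convert ohm*cm/T to SI (m^3/C) first.
n = 1.0 / (e_charge * R0 * 1.0e-2)
print(f"R0 = {R0:.3e} ohm.cm/T, rho_xy^A = {rho_xy_A:.3e} ohm.cm, n = {n*1e-6:.3e} cm^-3")
```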
#### iii.1.5 Thermoelectric power
Figure 7 shows the T-dependence of the Seebeck coefficient (S) (left y-scale) along with the power factor (\(S^{2}\sigma\)) (right y-scale). S shows a sub-linear variation with T, which is typically seen in semimetals.[28; 29] The negative slope of S with T corresponds to electron-driven thermopower, which again reveals two-carrier conduction in this system. The linear behavior of S suggests the dominance of diffusion thermopower. \(|\)S\(|\) attains a value of 23
Figure 5: For CoRuVSi, (a) Longitudinal resistivity (\(\rho_{xx}\)) vs. T in three different fields, 0, 50 and 90 kOe. (b) \(\rho(T)=\rho_{0}+AT^{n}\) fitting in the T-range 5-30 K. Inset shows a zoomed in view of the \(\rho_{xx}\) data in the low-\(T\) range. (c) longitudinal conductivity (\(\sigma_{xx}\)) vs. \(T\) in zero field along with a two-carrier model fit between 30-310 K. (d) MR vs. \(H\) at four different temperatures 5, 25, 50, and 100K. (e) linear and quadratic fitting of MR vs. H at 5 K and 25 K respectively.
\(\mu V/K\) at 300 K, which is comparable to that of other potential thermoelectric (TE) materials at RT.[30; 31; 32] To further evaluate the potential of CoRuVSi for thermoelectric applications, we have calculated the power factor (PF=S\({}^{2}\sigma\)), a key parameter determining the efficiency of a thermoelectric material. PF varies linearly with T and attains a maximum value of 0.7 \(mWatt/m-K^{2}\) at RT, which is reasonably high as compared to many Heusler-based TE materials,[33] and also comparable with other reported promising TE materials.[4; 5; 34; 35; 36] To get an idea about the carrier density (n) and E\({}_{F}\), the S data are fitted with the equation S\({}_{d}\)=S\({}_{0}\)+\(sT\) in the high-T regime, where S\({}_{d}\) is the diffusion thermopower, S\({}_{0}\) is a constant and \(s=\frac{\pi^{2}k_{B}^{2}}{3eE_{F}}\). From this fitting, we obtained E\({}_{F}\) = 1.41 eV and n=\(7.2\times 10^{18}\) cm\({}^{-3}\), which is comparable with the Hall data and falls well within the range of carrier densities of promising TE materials. Further investigation of high-T measurements and thermal transport can help in determining the potential of CoRuVSi as a promising thermoelectric material at high T.
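The diffusion fit quoted above can be reproduced with a short script of the following form; the data arrays are placeholders, and the Fermi energy follows from the fitted slope via \(s=\pi^{2}k_{B}^{2}/3eE_{F}\).

```python
import numpy as np

kB = 1.380649e-23           # Boltzmann constant in J/K
e_charge = 1.602176634e-19  # elementary charge in C

# Placeholders: replace with the measured T (K) and Seebeck coefficient S (V/K)
# in the diffusion-dominated regime, roughly 100-300 K.
T = np.linspace(100.0, 300.0, 40)
S = -(2.0e-6 + 7.0e-8 * T)          # synthetic example of S_d = S_0 + s*T

s_slope, S0 = np.polyfit(T, S, 1)   # linear diffusion-thermopower fit

# s = pi^2 kB^2 / (3 e E_F)  =>  E_F = pi^2 kB^2 / (3 e |s|)
E_F = np.pi**2 * kB**2 / (3.0 * e_charge * abs(s_slope))
print(f"S0 = {S0*1e6:.2f} uV/K, s = {s_slope*1e9:.2f} nV/K^2, E_F = {E_F/e_charge:.2f} eV")
```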
#### iii.2.6 Specific heat
Figure 8 shows the T-dependence of specific heat for 0 and 50 kOe. The low-T C\({}_{p}\) data is fitted with the equation \(C(T)=\gamma T\)+\(\beta\)T\({}^{3}\), where the first term is electronic contribution to C\({}_{p}\) while the second term is the low-T phonon contribution. The inset of Fig. 8 shows C\({}_{p}\)/T vs. T\({}^{2}\) plot along with the linear fit. From this fitting, we obtained \(\gamma\)=0.05 \(J/mole-K^{2}\) (Sommerfeld coefficient), which in turn gives the density of states at E\({}_{F}\) i.e. \(n(E_{F})\)=3\(\gamma\)/(\(\pi^{2}k_{B}^{2}\)) \(\sim\) 4.5 states/eV f.u.[37] This value matches quite well with the theoretical results (see next section) and is in good agreement with small DoS near E\({}_{F}\) for semimetals. This is another indication of expected semimetallic feature in CoRuVSi. From the fitting, we also extracted the Debye temperature, \(\theta_{D}\)= 383 K using the value of \(\beta\)=1.3795\(\times\) 10\({}^{-4}\)\(J/mole-K^{4}\). Interestingly C\({}_{p}\)/T vs. T\({}^{2}\) plot shows a shallow minimum, which may be related to the AFM-like interaction present in CoRuVSi.
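For reference, the quoted Debye temperature follows from \(\beta\) through the standard low-temperature Debye relation

\[\beta=\frac{12\pi^{4}pR}{5\theta_{D}^{3}}\quad\Longrightarrow\quad\theta_{D}=\left(\frac{12\pi^{4}pR}{5\beta}\right)^{1/3},\]

with \(p=4\) atoms per formula unit and \(R\) the molar gas constant, which indeed reproduces \(\theta_{D}\approx 383\) K for the quoted \(\beta=1.3795\times 10^{-4}\)\(J/mole-K^{4}\).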
Figure 8: For CoRuVSi, specific heat (C\({}_{p}\)) vs. \(T\) for 0 and 50 kOe fields. Inset shows C\({}_{p}\)/T vs. \(T^{2}\) along with the linear fit (solid red line) for zero field.
Figure 6: For CoRuVSi, (a) Hall resistivity (\(\rho_{xy}\)) vs. applied field (H) at 5, 10 and 50 K. (b-c) Ordinary and anomalous contributions of \(\rho_{xy}\) at 5 and 10 K respectively. (d) Anomalous Hall conductivity (\(\sigma_{xy}^{A}\)) vs. H at 5 and 10K.
Figure 7: For CoRuVSi, \(T\)-dependence of thermoelectric power (S) and power factor (\(S^{2}\sigma\)) along with the fitting of diffusion thermopower (S\({}_{d}\)=S\({}_{0}\)+\(sT\)) between 100-300 K.
## V Theoretical results
We have used _ab-initio_ simulation to investigate various magnetic states including para-, ferro-, antiferro-, and ferri-magnetic configurations in the ordered and L2\({}_{1}\)-disordered phases for CoRuVSi. Out of all the configurations, type-I configuration (see Sec. IV(A)) with ferrimagnetic ordering turned out to be energetically the most favorable one. Table 1 shows the optimized lattice parameters, total and atom-projected moments and relative energies of three different ordered structures (type-I, II and III) in their respective lowest energy magnetic ground state. Figure 9 shows the spin-resolved density of states and band structure for the lowest energy type-I configuration, which indicates a nearly half-metallic ground state with a net magnetization of \(\simeq\)2 \(\mu_{B}/f.u.\).
In order to further explore the half metallic/semimetallic nature, we have simulated the band structure of the ordered CoRuVSi (type-I configuration) including the spin-orbit coupling (SOC), as shown in Fig. 10(a). The corresponding band crossings near the Fermi level are shown in Fig. 10(b). This clearly illustrates the semimetallic nature with the bands 25, 26 and 27 crossing E\({}_{F}\). The Fermi surfaces corresponding to these three individual bands as well as the net combined Fermi surface are shown in Fig. 10(c-f). This confirms the emergence of hole pockets from bands 25 and 26, while band 27 gives rise to an electron pocket. Figure 11(a) shows the z-component of the Berry curvature (\(\Omega_{z}(\mathbf{k})\)) along the high symmetry \(\vec{k}\)-points. The corresponding 2D projection of \(\Omega_{z}(\mathbf{k})\) in the k\({}_{x}\)-k\({}_{y}\) plane is shown in Fig. 11(b). Here, black solid lines show intersections of the Fermi surface with this plane. The large spike in the Berry curvature in the vicinity of the \(L\) point is attributed to the two spin-semimetallic bands (25 and 26), one of which is unoccupied (band 25) and the other (band 26) is occupied in a small k-interval. Due to spin-orbit coupling, a small energy gap opens up, giving rise to a small energy denominator in the definition of Berry curvature (i.e. \(\Omega_{n}\sim 1/(\Delta{\varepsilon_{n}}^{2})\) from the Kubo-formula)[38]. So, these topological spin-semimetallic bands induce an appreciable Berry curvature, which is purely intrinsic in nature. The intrinsic anomalous Hall conductivity is calculated by integrating \(\Omega_{z}(\mathbf{k})\) over the entire Brillouin zone (BZ), using the following expression,[38]
\[\sigma_{int}^{AHE}=-\frac{e^{2}}{8\pi^{3}\hbar}\int_{BZ}d^{3}k\ \Omega_{z}(\mathbf{k}), \tag{6}\]
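For completeness, the band-resolved Berry curvature entering this integral reads, in the standard Kubo form,

\[\Omega^{z}_{n}(\mathbf{k})=-2\hbar^{2}\,\mathrm{Im}\sum_{m\neq n}\frac{\langle u_{n\mathbf{k}}|\hat{v}_{x}|u_{m\mathbf{k}}\rangle\langle u_{m\mathbf{k}}|\hat{v}_{y}|u_{n\mathbf{k}}\rangle}{(\varepsilon_{n\mathbf{k}}-\varepsilon_{m\mathbf{k}})^{2}},\]

which makes the \(1/(\Delta\varepsilon_{n})^{2}\) enhancement near the SOC-gapped band crossings explicit.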
The simulated AHE for ordered CoRuVSi (type I configuration) is \(|\sigma_{int}^{AHE}|\)=102 \(S/cm\), a reasonably high value. The calculated \(|\sigma_{int}^{AHE}|\) is almost double the experimentally measured value (45 S/cm). It is important to note that the simulated net magnetization (\(\simeq\)2 \(\mu_{B}/f.u.\)) of the completely ordered phase is quite different from the measured value (\(\simeq\) 0.13 \(\mu_{B}/f.u.\)). To unveil the possible reason for these discrepancies between theory and experiment, we have simulated the band structure including Berry curvature, anomalous Hall conductivity and Fermi surface of the L2\({}_{1}\)-disordered phase (50% disorder between tetrahedral site atoms Co and Ru, as confirmed by XRD-refinement) of CoRuVSi using a 64 atom SQS cell. This disordered structure gives a reduced magnetization of 0.29 \(\mu_{B}/f.u.\) with a nearly compensated ferrimagnetic structure (see Table 2 for the optimized lattice parameter and the atom projected and total moments). This value of net moment
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Type & \(a_{0}\) (Å) & \(m^{\text{Co}}\) & \(m^{\text{Ru}}\) & \(m^{\text{V}}\) & \(m^{\text{Total}}\) & \(\Delta E\)(eV/f.u.) \\ \hline I & 5.82 & 1.68 & -0.13 & 0.53 & 2.07 & 0 \\ II & 5.85 & 1.32 & 0.73 & -0.21 & 1.84 & 0.32 \\ III & 5.83 & 1.35 & 0.28 & -0.5 & 1.16 & 0.22 \\ \hline \end{tabular}
\end{table}
Table 1: For ordered CoRuVSi, theoretically optimized lattice parameter (\(a_{0}\)), total and atom-projected magnetic moments (\(\mu_{B}\)), and relative energy (\(\Delta E\)) for type I, II and III configurations with respect to the energy of type-I configuration.
Figure 9: For ordered CoRuVSi (in type-I configuration), spin resolved band structure and density of states (DoS) at the optimized lattice parameter (\(a_{0}\)). A few electron pockets at/around the E\({}_{F}\) are observed for minority spin-channel.
agrees fairly well with our experimental finding. Interestingly, in the L2\({}_{1}\) disordered structure, CoRuVSi shows a spin-semimetal behavior (see Fig. 12(a)). This is due to a small overlap between the conduction and valence bands (CB and VB) close to the E\({}_{F}\), which in turn is tunable by the influence of impurity/disorder or external field, and hence plays a crucial role in the overall electronic structure of the material. To crosscheck the effect of SOC on SSM behavior, we have also simulated the electronic structure including SOC effect. This is shown in Fig. 12(b,c). A close inspection of this band structure in the disordered phase reveals the existence of a linear band crossing at/around -0.4 eV below E\({}_{F}\), supported by a band inversion near the \(X\) point (see the inset of Fig. 12(c)). The simulated Berry curvature, Fermi surface (for band #25, 26) and the band positions for the L2\({}_{1}\) disordered phase of CoRuVSi are shown in Fig. 13. Interestingly, we also found remarkable agreement between the calculated AHE (\(|\sigma_{int}^{AHE}|\)=52.2 \(S/cm\)) of the L2\({}_{1}\) partially disordered phase and the corresponding experimental value (\(|\sigma_{int}^{AHE}|\)=45 \(S/cm\)). The semimetallic bands, which give rise to the topological non-trivial features are found to be robust against the disorder as indicated by the Berry curvature and Fermi surface calculations.
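A schematic of the Brillouin-zone integration entering Eq. (6) is sketched below; it assumes \(\Omega_{z}(\mathbf{k})\) (in Å\({}^{2}\)) is already available on a uniform grid spanning a cubic reciprocal cell of volume \((2\pi/a)^{3}\) (e.g. from a Wannier interpolation), which is a simplification of the adaptive-mesh scheme actually used.

```python
import numpy as np

hbar = 1.054571817e-34      # J*s
e_charge = 1.602176634e-19  # C

def ahc_from_berry(omega_z, a_lat):
    """Riemann-sum estimate of Eq. (6):
    sigma = -(e^2/hbar) * (1/(2*pi)^3) * Int_BZ Omega_z(k) d^3k.
    omega_z : Omega_z(k) in Angstrom^2 on a uniform (N, N, N) grid.
    a_lat   : cubic lattice constant in Angstrom.  Returns sigma in S/cm."""
    bz_volume = (2.0 * np.pi / a_lat) ** 3                    # Angstrom^-3
    integral = omega_z.mean() * bz_volume                     # Angstrom^-1
    sigma_SI = -(e_charge**2 / hbar) * (integral * 1.0e10) / (2.0 * np.pi) ** 3  # S/m
    return sigma_SI / 100.0                                   # S/cm

# Toy call with random numbers, only to show the intended array layout.
rng = np.random.default_rng(0)
print(ahc_from_berry(rng.normal(size=(8, 8, 8)), a_lat=5.85))
```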
## VI Summary and Conclusion
In summary, we report the identification of a new member, namely CoRuVSi, to the quantum material class namely _spin semi-metals_ which can be quite promising for future spintronic and thermoelectric applications. Using a combined theoretical and experimental study, we have investigated the structural, magnetic, transport, and electronic properties of CoRuVSi. It crystallizes in the cubic structure (space group \(F\bar{4}3m\)) with a partial L2\({}_{1}\)-type disorder in the tetrahedral site atoms Co/Ru, as confirmed from our XRD measurement. The magnetization data indicate a weak ferrimagnetic ordering at low T, with a very small moment \(\sim 0.12\)\(\mu_{B}\)/f.u. caused by the anti-site disorder. Resistivity results provide a strong evidence of semimetallic nature dominated by two-band conduction, while low-T magnetoresistance data in
Figure 11: For ordered CoRuVSi (in type-I configuration), simulated Berry curvature (-\(\Omega_{z}\)(k)) (a) along the high symmetry paths and (b) in the k\({}_{x}\)-k\({}_{y}\) plane at E\({}_{F}\). Black solid lines show intersections of the Fermi surface with this plane.
Figure 12: For L2\({}_{1}\) partially ordered CoRuVSi (SQS structure) (a) spin resolved density of states without spin orbit coupling (SOC) at the optimized lattice parameter(\(a_{0}\)) (b-c) DoS and band structure including the effect of SOC. Red circle in Fig. (c) highlights the linear band crossing at/near \(X\) point with the band inversion, again confirming semimetallic nature. Inset shows a zoomed in view of this band crossing.
Figure 13: For L2\({}_{1}\) partially ordered CoRuVSi (SQS structure), simulated Berry curvature (-\(\Omega_{z}\)(k)) (a) along the high symmetry paths and (b) in the k\({}_{x}\)-k\({}_{y}\) plane at E\({}_{F}\). Black solid lines show intersections of the Fermi surface with this plane. Fermi surfaces attributed to (c) band 25 and (d) band 26, illustrate the emerging electron and hole pockets again confirming the semimetallic feature. (e) widths of various bands, illustrating the band-crossing at/near \(E_{F}\).
\begin{table}
\begin{tabular}{l l l l l} \hline \hline \(a_{0}\) (Å) & \(m^{\text{Co}}\) & \(m^{\text{Ru}}\) & \(m^{\text{V}}\) & \(m^{\text{Total}}\) \\ \hline
5.85 & 0.29 & -0.10 & 0.10 & 0.29 \\ \hline \hline \end{tabular}
\end{table}
Table 2: For L2\({}_{1}\) partially ordered CoRuVSi (SQS structure), optimized lattice parameter (\(a_{0}\)), total and atom-projected average moments (\(\mu_{B}\)).
dicate the non-saturating, linear positive magnetoresistance. The latter hints toward the small-gap electronic structure near the Fermi level, indirectly supporting the prediction of semimetallic nature. Specific heat data confirm a low value of density of states at/near E\({}_{F}\), supporting our theoretical findings about the semimetallic nature. PCAR measurements reveal a high spin polarization of \(\sim 50\%\). CoRuVSi also shows a high thermopower value of 0.7 \(mWatt/m-K^{2}\) at room temperature, rendering it as a promising thermoelectric material as well. _Ab-initio_ simulation of CoRuVSi with L2\({}_{1}\) disorder reveals a spin semimetal feature with nearly compensated ferrimagnetic configuration having a small net magnetization, as observed experimentally. Interestingly, the band structure hosts a linear band crossing at \(\sim\)-0.4 eV below the Fermi level, along with a band inversion, confirming the topological non-trivial nature of CoRuVSi. This was further assessed from the simulated Berry curvature, anomalous Hall conductivity and Fermi surface. The simulated anomalous Hall conductivity for L2\({}_{1}\) partially ordered CoRuVSi is 52 S/cm which agrees fairly well with experimentally measured value of 45 S/cm. The coexistence of many promising features in a single material is rare and hence it opens up new opportunities to search for other novel materials with multifunctional properties.
**Acknowledgments:** JN acknowledges the financial help provided by IIT Bombay. JN also thanks Mr. Vinay Kaushik UGC-DAE-CSR Indore for setting up HC measurements. The authors thank Dr. Durgesh Singh for setting up TEP measurements. KGS acknowledges the funding from the Indo-Russian Project-TPN- 64868. A.A. acknowledges DST-SERB (Grant No. CRG/2019/002050) for funding to support this research.
|
2308.06856 | On the large time asymptotics of bi-laplacian Schrödinger equation
with general data | We study the bi-laplacian Schr\"odinger equation with a general interaction
term, which can be either linear or nonlinear, and is time-dependent. We prove
that the global solutions for this equation are asymptotically given by a free
wave and a weakly localized part. The proof relies on constructing the Free
Channel Wave Operator in a new way, based on the method developed from recent
studies \cite{SW20221}. | Avy Soffer, Jiayan Wu, Xiaoxu Wu, Ting Zhang | 2023-08-13T22:45:57Z | http://arxiv.org/abs/2308.06856v1 | # On the large time asymptotics of bi-Laplacian Schrodinger equation with general data
###### Abstract.
We study the bi-laplacian Schrodinger equation with a general interaction term, which can be either linear or nonlinear, and is time-dependent. We prove that the global solutions for this equation are asymptotically given by a free wave and a weakly localized part. The proof relies on constructing the Free Channel Wave Operator in a new way, based on the method developed from recent studies [24].
Key words and phrases: fourth-order Schrodinger operator, free channel wave operator 2010 Mathematics Subject Classification: 35R11, 35B33
## 1. Introduction
### Background
The analysis of scattering, and in particular proving Asymptotic Completeness (AC for short), goes back to the 1920's. The rigorous works were focused on the perturbations of the Laplacian, which are localized or small, and are time independent, please see [14, 15]. Functional Analytic techniques were used in the study of the spectral properties of such operators. This was later extended to N-body hamiltonians, beginning with the classical work of Faddeev [4, 5, 6] on the three body problem from 1963. These methods could not be applied to time dependent potentials or nonlinear equations. In 1978 Enss [3] introduced a new approach, a time dependent one, which offered new avenues for scattering theory, without the need to analyze the Green's function of the Hamiltonian. Building on this intuition led to the new spectral theory methods developed by Mourre (1979-1983) [10, 11, 12]. This also led to new and comprehensive proofs of AC for the three-body short range and long range scattering. In 1986-89, a new approach to scattering theory was developed by Sigal and Soffer [17, 18, 19, 20, 21]. Utilizing the general estimates that follow from Mourre's method, combined with phase-space analysis to derive a priori propagation estimates for the full dynamics, led to the proof of \(N\)-body AC, and other propagation estimates of interest for \(N\)-body hamiltonians. None of these works, though, led to major progress on the case of time dependent perturbations.
For nonlinear equations that remain spatially localized for all times, we believe that the solution will evolve like a linear solution plus coherent structures, which include solitons, breathers, kinks, black-holes, vortices, etc. In many cases, we believe that these coherent structures, together with a free radiation part, describe the long-time behavior of the solution.
We consider the general class of NLSE with bi-laplacian dynamics:

\[i\partial_{t}\psi(x,t)=H_{0}\psi(x,t)+\mathcal{N}(x,t,|\psi(x,t)|)\psi,\quad(x,t)\in\mathbb{R}^{n+1},n\geq 1, \tag{1.2}\]

with initial data \(\psi(x,0)=\psi_{0}(x)\in L^{2}_{x}(\mathbb{R}^{n})\), where \(H_{0}:=\Delta^{2}\) denotes the bi-laplacian. Here \(\mathcal{N}\) can be either linear or nonlinear, and is time-dependent:
\[\mathcal{N}(x,t,|\psi|)=V(x,t),N(|\psi(t)|),\text{ or }V(x,t)+N(|\psi(t)|). \tag{1.3}\]
Note that \(\mathcal{N}\) does not necessarily have to be real. Throughout this paper, we assume that the solution to system (1.2) exists in \(L^{2}_{x}(\mathbb{R}^{n})\) for all \(t\in\mathbb{R}\):
**Assumption 1.1**.: _We assume that the initial data \(\psi(x,0)=\psi_{0}(x)\) leads to a global solution to system (1.2): \(\psi(t)\) exists in \(L^{2}_{x}(\mathbb{R}^{n})\) for all \(t\in\mathbb{R}\)._
Then \(\mathcal{N}(x,t,|\psi|)\) is well-defined for all \(t\in\mathbb{R}\). As we can treat the nonlinearity as a time-dependent perturbation, we can write, without loss of generality, that \(\mathcal{N}=V(x,t)\) when there is no ambiguity. To be precise, let \(\langle x\rangle=\sqrt{1+x^{2}}\) and let \(L^{p}_{x,\sigma}(\mathbb{R}^{n})\) denote the weighted \(L^{p}_{x}(\mathbb{R}^{n})\) space
\[L^{p}_{x,\sigma}(\mathbb{R}^{n}):=\{f(x):\langle x\rangle^{\sigma}f(x)\in L^{p }_{x}(\mathbb{R}^{n})\} \tag{1.4}\]
for \(1\leq p\leq\infty\). We assume that the interaction, \(\mathcal{N}\), meets one of the conditions outlined below:
1. When \(n\geq 1\),
   * (a) (linear localized potentials) \(\mathcal{N}(x,t,|\psi(t)|)=V(x,t)\) is a linear potential with
   \[\langle x\rangle^{\sigma}V(x,t)\in L^{\infty}_{t}L^{\infty}_{x}(\mathbb{R}^{n}\times\mathbb{R}) \tag{1.5}\]
   for some \(\sigma>1\).
   * (b) (nonlinear localized potentials) \(\mathcal{N}(x,t,|\psi(t)|)\) is nonlinear and satisfies
   \[\langle x\rangle^{\sigma}\mathcal{N}(x,t,|\psi|)\in L^{\infty}_{t}L^{\infty}_{x}(\mathbb{R}^{n}\times\mathbb{R}) \tag{1.6}\]
   for some \(\sigma>1\).
   Under Assumption 1.1, \(\mathcal{N}(x,t,|\psi(t)|)\) is well-defined for all \(t\in\mathbb{R}\) and we set \(\mathcal{N}(x,t,|\psi|)=V(x,t)\), treating the interaction as a time-dependent linear perturbation.
2. When \(n\geq 5\),
   * (a) (linear localized potentials) \(\mathcal{N}(x,t,|\psi(t)|)=V(x,t)\) is a linear potential with
   \[V(x,t)\in L^{\infty}_{t}L^{2}_{x}(\mathbb{R}^{n}\times\mathbb{R}). \tag{1.7}\]
   * (b) (nonlinear localized potentials) \(\mathcal{N}(x,t,|\psi(t)|)\) is nonlinear and satisfies
   \[\mathcal{N}(x,t,|\psi|)\in L^{\infty}_{t}L^{2}_{x}(\mathbb{R}^{n}\times\mathbb{R}). \tag{1.8}\]
   Under Assumption 1.1, \(\mathcal{N}(x,t,|\psi(t)|)\) is well-defined for all \(t\in\mathbb{R}\) and we set \(\mathcal{N}(x,t,|\psi|)=V(x,t)\), treating the interaction as a time-dependent linear perturbation.
In a word, we regard \(\mathcal{N}(x,t,|\psi|)\) as a time-dependent perturbation, \(V(x,t)\). This interpretation holds regardless of whether \(\mathcal{N}(x,t,|\psi|)\) is linear or nonlinear. Under Assumption 1.1, \(\mathcal{N}(x,t,|\psi|)=V(x,t)\) is well-defined for all \(t\in\mathbb{R}\).
This note focuses on the scattering theory of system (1.2). We refer to \(\psi_{0}(x)\) as a scattering state of system (1.2) if its solution \(\psi(t)\) will spread in space as \(t\) goes to plus or minus infinity. Scattering theory is the study of the long-time behavior of the solutions of scattering states.
The central task of scattering theory is to build AC (asymptotic completeness): if we start with a scattering state \(\psi(0)=\psi_{0}(x)\), then
\[\|\psi(t)-\sum_{a}\psi_{a,\pm}(t)\|_{L^{2}_{x}(\mathbb{R}^{n})}\to 0, \tag{1.9}\]
as \(t\to\pm\infty\), where \(\psi_{a,\pm}\) denote solutions to simpler systems. For example, such a simple system can be a free system, which is well understood, and \(\psi_{a,\pm}\) are free waves. The celebrated soliton resolution conjecture is a similar version of AC for pure nonlinear Schrodinger equations. Specifically, for any initial data \(\psi(x,0)\in L^{2}_{x}(\mathbb{R}^{n})\) that leads to a global solution, we believe that
\[\|\psi(t)-e^{-itH_{0}}\psi_{\pm}-\psi_{sol}(t)\|_{L^{2}_{x}(\mathbb{R}^{n})}\to 0 \tag{1.10}\]
as \(t\to\pm\infty\) for some \(\psi_{\pm}\in L^{2}_{x}(\mathbb{R}^{n})\), where \(\psi_{sol}(t)\) denotes a soliton. Here, a soliton is a stable solution satisfying:
\[(H_{0}+\mathcal{N}(|\psi_{sol}|))\psi_{sol}=\lambda\psi_{sol} \tag{1.11}\]
for some \(\lambda\in\mathbb{R}\). Here, the initial data doesn't necessarily have to be a scattering state. To address the soliton resolution conjecture, one has to identify the non-free component as a soliton as well.
Proving the soliton resolution conjecture or AC is challenging in the presence of solitons. It is not clear what the space of all scattering states is for nonlinear/time-dependent systems. It is known that for the standard nonlinear Schrodinger equations, the solution may spread in a non-free pattern, see [22]. By using the method initiated by [24], this work gives a characterization of the space of all scattering states, which is considered the first step towards proving the soliton resolution conjecture.
### Notations
Let \(\langle x\rangle=\sqrt{1+x^{2}}\) and let \(L^{p}_{x,\sigma}(\mathbb{R}^{n})\) denote the weighted \(L^{p}_{x}(\mathbb{R}^{n})\) space
\[L^{p}_{x,\sigma}(\mathbb{R}^{n}):=\{f(x):\langle x\rangle^{\sigma}f(x)\in L^{p }_{x}(\mathbb{R}^{n})\} \tag{1.12}\]
for \(1\leq p\leq\infty\). Let \(\bar{\mathcal{F}}_{c}(\lambda)\), \(\mathcal{F}_{j}(\lambda)(j=1,2)\) denote smooth cut-off functions of the interval \([1,+\infty)\):
\[\mathcal{F}_{c}(\lambda\leq a):=1-\bar{\mathcal{F}}_{c}(\lambda/a),\quad \mathcal{F}_{j}(\lambda>a):=\mathcal{F}_{j}(\lambda/a),\quad j=1,2, \tag{1.13a}\]
\[\bar{\mathcal{F}}_{c}(\lambda>a):=\bar{\mathcal{F}}_{c}(\lambda/a),\quad\bar {\mathcal{F}}_{j}(\lambda\leq a):=1-\mathcal{F}_{j}(\lambda/a),\quad j=1,2, \tag{1.13b}\]
where \(\mathcal{F}_{c}(k)\) and \(\mathcal{F}_{j}(k),j=1,2\), satisfy:
\[\mathcal{F}_{j}(k)=\begin{cases}1&\text{ when }k\geq 1\\ 0&\text{ when }k\leq 1/2\end{cases},\quad j=c,1,2. \tag{1.13c}\]
Throughout the paper, we use the notation \(A\lesssim_{s}B\) and \(A\gtrsim_{s}B\) to indicate that there exists a constant \(C=C(s)>0\) such that \(A\leq CB\) and \(A\geq CB\), respectively.
### New free channel wave operators
Denote the evolution operator of the above system, from time \(0\) to time \(t\), as \(U(t,0)\). The free channel wave operator associated with \(H_{0}\) and \(H:=H_{0}+V(x,t)\) is defined by
\[W_{\pm}:=s\text{-}\lim_{t\to\pm\infty}U(0,t)e^{-itH_{0}}\quad\text{ on }L_{x}^{2}(\mathbb{R}^{n}) \tag{1.14}\]
and its adjoint, by
\[W_{\pm}^{*}:=s\text{-}\lim_{t\to\pm\infty}e^{itH_{0}}U(t,0)P_{c}\quad\text{ on }L_{x}^{2}(\mathbb{R}^{n}), \tag{1.15}\]
where \(P_{c}\) denotes the projection on the space of all free scattering states: \(f(x)\in L_{x}^{2}(\mathbb{R}^{n})\) is a free scattering state if there exists \(f_{+}\in L_{x}^{2}(\mathbb{R}^{n})\) such that
\[\|U(t,0)f-e^{-itH_{0}}f_{+}\|_{L_{x}^{2}(\mathbb{R}^{n})}\to 0 \tag{1.16}\]
as \(t\to\infty\). Here, both the name free channel wave operator and the starting point (1.23) come from multi-channel scattering theory.
**Remark 1.1**.: _When \(V(x,t)\) is localized in \(x\), the existence of \(W_{\pm}\) is a classical fact, see Kato [9] for Schrodinger equations. For fourth-order Schrodinger equations, the existence of \(W_{\pm}\) can be inferred using a similar argument (by using unitarity of \(U(0,t)\) and \(e^{-itH_{0}}\) and local decay estimates of the free flow)._
**Remark 1.2**.: _The existence of \(W_{\pm}^{*}\) can be deduced from the definition of \(P_{c}\). Yet, in time-dependent or nonlinear scenarios, we do not know the space of all free scattering states. In this paper, our aim is to devise the free channel wave operator using the approach introduced in [24], bypassing the need for prior knowledge of \(P_{c}\). Additionally, with these new wave operators, we can subsequently define \(P_{c}\)._
The free channel wave operators play a crucial role in scattering theory in that they connect the full dynamics to the free one via the intertwining property: when \(V(x,t)=V(x)\) is time-independent, we have
\[W_{\pm}e^{-itH_{0}}=e^{-itH}W_{\pm},\quad\text{ on }L_{x}^{2}(\mathbb{R}^{n}). \tag{1.17}\]
For time-dependent potential \(V(x,t)\), the intertwining property
\[W_{\pm}e^{-itH_{0}}=U(t,0)W_{\pm},\quad\text{ on }L^{2}_{x}(\mathbb{R}^{n}) \tag{1.18}\]
may not hold, see page 62 in [13].
When it comes to non-linear problems, only \(W^{*}_{\pm}\) makes sense.
### Main results
In this paper, we stick to \(W^{*}_{+}\). We employ the technique developed in [24] to construct a new free channel wave operator. This new wave operator is equal to \(W^{*}_{\pm}\), defined in (1.15). This equality is based on the observation that
\[w\text{-}\lim_{t\to\infty}e^{itH_{0}}U(t,0)\quad\text{ on }L^{2}_{x}(\mathbb{R}^{n}) \tag{1.19}\]
exists when \(V(x,t)\) meets our assumptions.
To be precise, we let \(P:=-i\nabla_{x}\). The key observation is that it is sufficient to prove
\[s\text{-}\lim_{t\to\infty}e^{itH_{0}}J(t,x,P)U(t,0),\quad\text{ on }L^{2}(\mathbb{R}^{n}) \tag{1.20}\]
exists for some smooth cut-off function \(J\) which ensures that
\[w\text{-}\lim_{t\to\infty}e^{itH_{0}}(1-J(t,x,P))U(t,0)=0\quad\text{ on }L^{2}_{x}(\mathbb{R}^{n}). \tag{1.21}\]
Then \(W^{*}_{+}\), defined in (1.15), is equal to
\[s\text{-}\lim_{t\to+\infty}e^{itH_{0}}J(t,x,P)U(t,0),\quad\text{ on }L^{2}(\mathbb{R}^{n}), \tag{1.22}\]
and \(P_{c}\) is given by
\[P_{c}=s\text{-}\lim_{t\to\infty}U(0,t)J(t,x,P)U(t,0)\quad\text{ on }L^{2}_{x}(\mathbb{R}^{n}). \tag{1.23a}\]
In [24], Soffer and Wu proposed using \(J=\mathcal{F}_{c}(\frac{|x-2tP|}{t^{\alpha}}\leq 1),\alpha\in(0,1-2/n)\) for cases where \(V(x,t)\in L^{\infty}_{t}L^{2}_{x}(\mathbb{R}^{n+1})\), \(n\geq 3\). For scenarios where \(\langle x\rangle^{\sigma}V(x,t)\in L^{\infty}_{x,t}(\mathbb{R}^{n+1})\) for some \(\sigma>1\) and \(n\geq 1\), they recommended the use of \(\mathcal{F}_{c}(\frac{|x-2tP|}{t^{\alpha}}\leq 1)\mathcal{F}_{1}(t^{\beta}|P|>1)\) for \(\beta\in(0,1-\frac{1+1/\sigma}{2})\) and \(\alpha\in(\beta,1-\beta)\).
In our setting, when \(n\geq 1\), if \(\langle x\rangle^{\sigma}V(x,t)\in L^{\infty}_{t}L^{\infty}_{x}(\mathbb{R}^{n }\times\mathbb{R})\) for some \(\sigma>1\), we take
\[J(t,x,P)=e^{-itH_{0}}\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)\mathcal{F}_{1} (t^{b}|P|>1)e^{itH_{0}} \tag{1.24}\]
for some \(\alpha\in(0,\min\{\sigma-1,1/2\}),b\in(0,\alpha)\). For \(\mathcal{F}_{c}\) and \(\mathcal{F}_{1}\), see (1.13). The new free channel wave operator, \(W^{*}_{\alpha,b}\), is defined as
\[W^{*}_{\alpha,b}=s\text{-}\lim_{t\to\infty}W^{*}_{\alpha,b}(t),\quad\text{ on }L^{2}_{x}(\mathbb{R}^{n}) \tag{1.25}\]
with
\[W^{*}_{\alpha,b}(t):= e^{itH_{0}}J(t,x,P)U(t,0)\] \[= \mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)\mathcal{F}_{1}(t^{b}| P|>1)e^{itH_{0}}U(t,0). \tag{1.26}\]
**Theorem 1.3**.: _For \(n\geq 1\), let \(\sigma\) be as in (1.5) and assume that \(V(x,t)\) satisfies the condition (1.5). If Assumption 1.1 holds, then_
1. _for all_ \(\alpha\in(0,\min\{\sigma-1,1/4\}),b\in(0,\alpha),\) _the new free channel wave operator acting on_ \(\psi_{0}\)_, given by_ (1.27) \[W^{*}_{\alpha,b}\psi_{0}:=s\mbox{-}\lim_{t\to\infty}W^{*}_{\alpha,b}(t)\psi_{0},\] _exists in_ \(L^{2}_{x}(\mathbb{R}^{n})\)_, and_ (1.28) \[w\mbox{-}\lim_{t\to\infty}\left(1-\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1 )\mathcal{F}_{1}(t^{b}|P|>1)\right)e^{itH_{0}}\psi(t)=0\] _in_ \(L^{2}_{x}(\mathbb{R}^{n})\)_._
2. _Furthermore, if_ \(\sigma>4\)_, we have then following asymptotic decomposition holds: for all_ \(\epsilon\in(0,1/4)\)_,_ (1.29) \[\lim_{t\to\infty}\|\psi(t)-e^{-itH_{0}}\phi_{+}-\psi_{w,\epsilon}(t)\|_{L^{2 }_{x}(\mathbb{R}^{n})}=0\] _where_ \(\phi_{+}\in L^{2}_{x}\) _and_ \(\psi_{w,\epsilon}\) _is the weakly localized part of the solution, with the following property: It is weakly localized in the region_ \(|x|\leq t^{1/4+\epsilon}\) _when_ \(t\geq 1,\) _in the following sense_ (1.30) \[(\psi_{w,\epsilon}(t),|x|\psi_{w,\epsilon}(t))_{L^{2}_{x}}\lesssim_{\epsilon}t ^{1/4+\epsilon}.\]
**Remark 1.4**.: _Separating the free wave from the solution is the first step of solving the soliton resolution conjecture. In order to solve the soliton resolution conjecture, knowing some properties of the "non-free" part would be helpful. Theorem 1.3 shows how to separate the free wave by constructing the free channel wave operator, and suggests that the weakly localized part, the "non-free" part, cannot spread faster than \(t^{1/4+\epsilon}\) provided the interaction \(V(x,t)\) is well-localized in \(x\)._
When \(n\geq 5\), we let
\[W^{*}_{\alpha}(t):=\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)e^{itH_{0}}U(t,0). \tag{1.31}\]
Here, we choose
\[J(t,x,P)=e^{-itH_{0}}\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)e^{itH_{0}} \tag{1.32}\]
in (1.20). If Assumption 1.1 holds and if \(\mathcal{N}(x,t,|\psi|)\in L^{\infty}_{t}L^{2}_{x}(\mathbb{R}^{n}\times \mathbb{R})\), then we have the following result:
**Theorem 1.5**.: _If Assumption 1.1 holds and if \(\mathcal{N}(x,t,|\psi|)\in L^{\infty}_{t}L^{2}_{x}(\mathbb{R}^{n}\times\mathbb{R})\), then when \(n\geq 5\), for all \(\alpha\in(0,\frac{1}{2}-\frac{2}{n}),\) the free channel wave operator acting on \(\psi_{0}\), given by_
\[W^{*}_{\alpha}\psi_{0}:=s\mbox{-}\lim_{t\to\infty}W^{*}_{\alpha}(t)\psi_{0}, \tag{1.33}\]
_exists in \(L^{2}_{x}(\mathbb{R}^{n}).\)_
### Outline of the proof
Throughout the paper, we need the following tools introduced in [24].
* (**Propagation Estimate**) Given an operator \(B\), we denote (1.34) \[\langle B(t),\psi(t)\rangle_{t}:=(\psi(t),B(t)\psi(t))_{L^{2}_{x}(\mathbb{R}^{n})}=\int_{\mathbb{R}^{n}}\psi(t)^{*}B(t)\psi(t)d^{n}x\] where \(\psi(t)\) denotes the solution to (1.2). We say the family \(B(t)\) is a **Propagation Observable** if \(B(t)\) satisfies the following: suppose a family of self-adjoint operators \(B(t)\) satisfies the estimate (1.35) \[\partial_{t}(\psi(t),B(t)\psi(t))=\langle\psi(t),C^{*}C\psi(t)\rangle+g(t)\] \[g(t)\in L^{1}_{t}[1,\infty),\quad C^{*}C\geq 0.\] Upon integration over time, we obtain the following **Propagation Estimate**: (1.36) \[\int_{t_{0}}^{T}\|C(t)\psi(t)\|^{2}_{L^{2}_{x}(\mathbb{R}^{n})}dt=(\psi(T),B(T)\psi(T))_{L^{2}_{x}(\mathbb{R}^{n})}-(\psi(t_{0}),B(t_{0})\psi(t_{0}))_{L^{2}_{x}(\mathbb{R}^{n})}-\int_{t_{0}}^{T}g(s)ds\] \[\leq\sup_{t\in[t_{0},T]}\left|(\psi(t),B(t)\psi(t))_{L^{2}_{x}(\mathbb{R}^{n})}\right|+C_{g},\] where \(C_{g}:=\|g(t)\|_{L^{1}_{t}(\mathbb{R})}\).
* (**Relative Propagation Estimate**) Given an operator \(\tilde{B}\), we define its time-dependent expectation value as (1.37) \[\langle\tilde{B}:\phi(t)\rangle_{t}:=(\phi(t),\tilde{B}(t)\phi(t))_{L^{2}_{x} (\mathbb{R}^{n})}=\int_{\mathbb{R}^{n}}\phi(t)^{*}\tilde{B}(t)\phi(t)d^{n}x,\] where \(\phi(t)\) does not need to be the solution to (1.2), but it satisfies (1.38) \[\sup_{t\geq 0}\langle\tilde{B}:\phi(t)\rangle_{t}<\infty.\] Suppose (1.38) is satisfied, and \(\partial_{t}\langle\tilde{B}:\phi(t)\rangle_{t}\) satisfies the following estimate: (1.39) \[\partial_{t}\langle\tilde{B}:\phi(t)\rangle_{t}=\pm\langle\phi(t ),C^{*}C\phi(t)\rangle+g(t)\] \[g(t)\in L^{1}(dt),\quad C^{*}C\geq 0.\]
We then refer to the family \(\tilde{B}(t)\) as a **Relative Propagation Observable** with respect to \(\phi(t)\). Upon integration over time, we obtain the bound:
\[\int_{t_{0}}^{T}\|C(t)\phi(t)\|_{L^{2}_{x}(\mathbb{R}^{n})}^{2}dt\leq\sup_{t\in[t _{0},T]}\left|(\phi(t),\tilde{B}(t)\phi(t))_{L^{2}_{x}(\mathbb{R}^{n})}\right|+C_ {g},\quad C_{g}:=\|g(t)\|_{L^{1}_{t}(\mathbb{R})}. \tag{1.40}\]
We call this estimate **Relative Propagation Estimate**.
The key ingredient in the proof of existence of the new free channel wave operators is, for example when \(n\geq 5\), the following: we take
\[\phi(t)=e^{itH_{0}}U(t,0)\psi(x,0) \tag{1.41a}\]
and choose
\[\tilde{B}(t)=\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1), \tag{1.41b}\]
\[g(t)=(-i)(\phi(t),\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)e^{itH_{0}}V(x,t)U(t,0)\psi(x,0))_{L^{2}_{x}(\mathbb{R}^{n})}+i(e^{itH_{0}}V(x,t)U(t,0)\psi(x,0),\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)\phi(t))_{L^{2}_{x}(\mathbb{R}^{n})}. \tag{1.41c}\]
Then
\[\sup_{t\geq 0}\langle\tilde{B}:\phi(t)\rangle_{t}\leq\|\psi(x,0)\|_{L^{2}_{x}( \mathbb{R}^{5})}^{2}, \tag{1.41d}\]
and
\[\frac{d}{dt}\left[\langle\tilde{B}:\phi(t)\rangle_{t}\right]=(\phi(t),\partial _{t}[\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)]\phi(t))_{L^{2}_{x}(\mathbb{R}^{n}) }+g(t), \tag{1.41e}\]
where
\[(\phi(t),\partial_{t}[\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)]\phi(t))_{L^{ 2}_{x}(\mathbb{R}^{n})}\geq 0, \tag{1.41f}\]
and \(g(t)\in L^{1}_{t}[1,\infty)\) since
\[\|\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)e^{itH_{0}}V(x,t)U(t,0 )\psi(x,0)\|_{L^{2}_{x}(\mathbb{R}^{n})}\] \[\lesssim \frac{1}{t^{\frac{n(1-2\alpha)}{4}}}\|V(x,t)\|_{L^{\infty}_{t}L^ {2}_{x}(\mathbb{R}^{n+1})}\|\psi(x,0)\|_{L^{2}_{x}(\mathbb{R}^{n})} \tag{1.41g}\] \[\in L^{1}_{t}[1,\infty).\]
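One way to arrive at the bound (1.41g) is to combine the volume of the region \(\{|x|\lesssim t^{\alpha}\}\) supporting \(\mathcal{F}_{c}\) with the \(L^{1}_{x}\to L^{\infty}_{x}\) decay estimate (2.4) recalled in Section 2 (together with the control of \(\|U(t,0)\psi(x,0)\|_{L^{2}_{x}}\) by \(\|\psi(x,0)\|_{L^{2}_{x}}\) implicit in (1.41g)):

\[\|\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)e^{itH_{0}}V(x,t)U(t,0)\psi(x,0)\|_{L^{2}_{x}(\mathbb{R}^{n})}\leq\|\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)\|_{L^{2}_{x}}\|e^{itH_{0}}V(x,t)U(t,0)\psi(x,0)\|_{L^{\infty}_{x}}\lesssim t^{\frac{n\alpha}{2}}\cdot\frac{1}{t^{\frac{n}{4}}}\|V(x,t)\|_{L^{2}_{x}}\|\psi(x,0)\|_{L^{2}_{x}},\]

and \(\frac{n}{4}-\frac{n\alpha}{2}=\frac{n(1-2\alpha)}{4}>1\) precisely when \(\alpha<\frac{1}{2}-\frac{2}{n}\), which is the range assumed in Theorem 1.5.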
Then by using **Relative Propagation Estimate**, we get
\[a(t):=(\phi(t),\partial_{t}[\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)] \phi(t))_{L^{2}_{x}(\mathbb{R}^{n})}\in L^{1}_{t}[1,\infty), \tag{1.41h}\]
which implies the existence of \(W^{*}_{\alpha}\psi(x,0)\) in \(L^{2}_{x}(\mathbb{R}^{n})\). When \(n\geq 1\), we use **Relative Propagation Estimate** twice and get the existence of \(W^{*}_{\alpha,b}\psi(x,0)\) in \(L^{2}_{x}(\mathbb{R}^{n})\).
## 2. Estimates for interaction terms and Commutator Estimates
In this section, we present some key estimates for the interaction term in bi-laplacian Schrodinger equations. For the free bi-laplacian Schrodinger equation, the free propagator \(e^{-it\Delta^{2}}\) can be expressed as
\[e^{-it\Delta^{2}}f=\mathcal{F}^{-1}(e^{-it|q|^{4}}\hat{f})=\int_{\mathbb{R}^{n} }I_{0}(t,x-y)f(y)dy, \tag{2.1}\]
where \(I_{0}(t,x)=\mathcal{F}^{-1}(e^{-it|q|^{4}})(x)\) is the kernel of \(e^{-it\Delta^{2}}\). For the biharmonic operator \((-\Delta)^{2}\), Ben-Artzi, Koch and Saut [1] proved the following sharp kernel estimate
\[|D^{\gamma}I_{0}(t,x)|\leq C|t|^{-(n+|\gamma|)/4}(1+|t|^{-1/4}|x|)^{(|\gamma|- n)/3},\quad t\neq 0,x\in\mathbb{R}^{n}, \tag{2.2}\]
which implies
\[\|D^{\gamma}e^{-itH_{0}}\|_{L^{1}_{x}\to L^{\infty}_{x}}\lesssim|t|^{- \frac{n+|\gamma|}{4}},\quad|\gamma|\leq n. \tag{2.3}\]
Here \(D=(\partial_{x_{1}},\partial_{x_{2}},\cdots,\partial_{x_{n}})\). From (2.3) with \(\gamma=0\), we will use the \(L^{1}_{x}\to L^{\infty}_{x}\) decay estimate for the free fourth-order Schrodinger equation
\[\|e^{-itH_{0}}\|_{L^{1}_{x}\to L^{\infty}_{x}}\lesssim|t|^{-\frac{n}{4}}. \tag{2.4}\]
For notational simplicity, we suppress the input of \(\mathcal{F}_{c}\) and \(\mathcal{F}_{1}\), denoting \(\mathcal{F}_{c}\equiv\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1), \mathcal{F}_{1}\equiv\mathcal{F}_{1}(t^{b}|P|>1)\) when the context is clear. We need the following estimates for the interaction term.
For simplicity of the proof, we let \(e=\frac{1}{4}+\frac{3}{2\delta}\) with some \(\delta>2\). We remark that \(\epsilon=\frac{1}{2\delta}\in(0,\frac{1}{4})\).
**Proposition 2.1**.: _For \(n\geq 1\), let \(\sigma\) be as in (1.5) and assume that \(V(x,t)\) satisfies the condition (1.5). Let \(e=\frac{1}{4}+\frac{3}{2\delta}\) with \(\delta>2\), \(b\in(0,\frac{1}{3}(1-e)),\)\(\alpha\in(b,1-3b)\), we have that for \(t\geq 1,\)_
\[\|\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)\mathcal{F}_{1}(t^{b}|P|>1)e^{itH _{0}}V(x,t)\psi(t)\|_{L^{2}_{x}(\mathbb{R}^{n})}\lesssim_{e,b,\alpha}\frac{1} {t^{e\delta-1}}\|V(x,t)\|_{L^{\infty}_{t}L^{\infty}_{x,\sigma}(\mathbb{R}^{n+1 })}\|\psi_{0}\|_{L^{2}_{x}(\mathbb{R}^{n})} \tag{2.5}\]
_with \(\sigma=\delta-1/e>1\) and_
\[e(\delta-\frac{1}{e})=\frac{\delta}{4}+\frac{1}{2}>1.\]
Proof.: Let
\[a_{\psi}(t):=(-i)\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)\mathcal{F}_{1}(t ^{b}|P|>1)e^{itH_{0}}V(x,t)\psi(t). \tag{2.6}\]
We break it into two pieces
\[a_{\psi}(t)=(-i)\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)\mathcal{F}_{1}(t ^{b}|P|>1)e^{itH_{0}}\chi(|x|>t^{e})V(x,t)\psi(t)\]
\[+(-i)\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)\mathcal{F}_{1}(t^{b}|P|>1)e ^{itH_{0}}\chi(|x|\leq t^{e})V(x,t)\psi(t)=:a_{\psi,1}(t)+a_{\psi,2}(t). \tag{2.7}\]
We use localization of \(V(x,t)\) to obtain decay in \(t\) for \(a_{\psi,1}(t),\) that is,
\[\|a_{\psi,1}(t)\|_{L^{2}_{x}} \lesssim\frac{1}{t^{e\delta-1}}\|V(x,t)\psi(t)\|_{L^{2}_{x,\sigma}(\mathbb{R}^{n})}\] \[\lesssim\frac{1}{t^{e\delta-1}}\|V(x,t)\|_{L^{\infty}_{t}L^{\infty}_{x,\sigma}(\mathbb{R}^{n+1})}\|\psi(t)\|_{L^{2}_{x}(\mathbb{R}^{n})} \tag{2.8}\] \[\lesssim\frac{1}{t^{e\delta-1}}\|V(x,t)\|_{L^{\infty}_{t}L^{\infty}_{x,\sigma}(\mathbb{R}^{n+1})}\|\psi_{0}\|_{L^{2}_{x}(\mathbb{R}^{n})}.\]
Here we choose \(\sigma=\delta-1/e>1,\) so we can get
\[e\sigma=e(\delta-\frac{1}{e})=\frac{\delta}{4}+\frac{1}{2}>1. \tag{2.9}\]
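Indeed, substituting \(e=\frac{1}{4}+\frac{3}{2\delta}\) (a direct computation, spelled out here for the reader's convenience),
\[e\left(\delta-\frac{1}{e}\right)=e\delta-1=\left(\frac{1}{4}+\frac{3}{2\delta}\right)\delta-1=\frac{\delta}{4}+\frac{3}{2}-1=\frac{\delta}{4}+\frac{1}{2}>1,\]
since \(\delta>2\).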
For \(a_{\psi,2}(t),\) we use the method of non-stationary phase to get
\[\|a_{\psi,2}(t)\|_{L^{2}_{x}}\lesssim\frac{1}{t^{N}}\|V(x,t)\|_{L^{\infty}_{t} L^{\infty}_{x,\sigma}(\mathbb{R}^{n+1})}\|\psi_{0}\|_{L^{2}_{x}(\mathbb{R}^{n})} \tag{2.10}\]
since
\[\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)e^{i(x-y)\cdot q}e^{ itq^{4}}\chi(|x|\leq t^{e})=\] \[\frac{1}{i[(x-y)\cdot\hat{q}+4t|q|^{3}]}\partial_{|q|}[\mathcal{ F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)e^{i(x-y)\cdot q}e^{itq^{4}}\chi(|y|\leq t^{e})] \tag{2.11}\]
with
\[|(x-y)\cdot\hat{q}+4t|q|^{3}|\gtrsim t^{1-3b} \tag{2.12}\]
when \(b<1/3(1-e)<1/4,\)\(\alpha<1-3b.\) Choose \(N=2,\) then combining (2.8) with (2.10), we obtain
\[\|a_{\psi}(t)\|_{L^{2}_{x}(\mathbb{R}^{n})}\lesssim_{e,b,\alpha}\frac{1}{t^{e\delta-1}}\|V(x,t)\|_{L^{\infty}_{t}L^{\infty}_{x,\sigma}(\mathbb{R}^{n+1})}\|\psi_{0}\|_{L^{2}_{x}(\mathbb{R}^{n})}. \tag{2.13}\]
We finish the proof.
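Let us also record, as a heuristic count behind (2.12) (not needed elsewhere), why the phase derivative is bounded from below on the relevant support: the cut-off \(\mathcal{F}_{1}(t^{b}|P|>1)\) forces \(|q|\gtrsim t^{-b}\), while \(\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)\) and \(\chi(|y|\leq t^{e})\) restrict \(|x|\lesssim t^{\alpha}\) and \(|y|\leq t^{e}\), so that
\[|(x-y)\cdot\hat{q}+4t|q|^{3}|\geq 4t|q|^{3}-|x|-|y|\gtrsim t^{1-3b}-t^{\alpha}-t^{e}\gtrsim t^{1-3b}\]
for \(t\geq 1\), since both \(\alpha<1-3b\) and \(e<1-3b\).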
**Proposition 2.2**.: _For \(n\geq 1\), let \(\sigma\) be as in (1.5) and assume that \(V(x,t)\) satisfies the condition (1.5). Let \(e=\frac{1}{4}+\frac{3}{2\delta}\) with \(\delta>2\), \(b\in(0,\frac{1}{3}(1-e)),\)\(\alpha\in(b,1-3b)\), we have that for \(t\geq 1,\)_
\[|(\mathcal{F}_{1}e^{itH_{0}}\psi(t),\mathcal{F}_{c}\mathcal{F}_{1}e^{itH_{0}}V(x,t)\psi(t))_{L^{2}_{x}(\mathbb{R}^{n})}|\lesssim_{e,b,\alpha}\frac{1}{t^{e\delta-1}}\|V(x,t)\|_{L^{\infty}_{t}L^{\infty}_{x,\sigma}(\mathbb{R}^{n+1})}\|\psi_{0}\|^{2}_{L^{2}_{x}(\mathbb{R}^{n})}, \tag{2.14}\]
\[|(\mathcal{F}_{1}e^{itH_{0}}V(x,t)\psi(t),\mathcal{F}_{c}\mathcal{F}_{1}\mathcal{F}_{c}e^{itH_{0}}\psi(t))_{L^{2}_{x}(\mathbb{R}^{n})}|\lesssim_{e,b,\alpha}\frac{1}{t^{e\delta-1}}\|V(x,t)\|_{L^{\infty}_{t}L^{\infty}_{x,\sigma}(\mathbb{R}^{n+1})}\|\psi_{0}\|^{2}_{L^{2}_{x}(\mathbb{R}^{n})}, \tag{2.15}\]
_with \(\sigma=\delta-1/e>1\) and_
\[e(\delta-\frac{1}{e})=\frac{\delta}{4}+\frac{1}{2}>1.\]
Proof.: Since
\[|(\mathcal{F}_{1}e^{itH_{0}}\psi(t), \mathcal{F}_{c}\mathcal{F}_{1}e^{itH_{0}}V(x,t)\psi(t))_{L^{2}_{x}}|\] \[\leq\|\mathcal{F}_{1}e^{itH_{0}}\psi(t)\|_{L^{2}_{x}(\mathbb{R}^ {n})}\|\mathcal{F}_{c}\mathcal{F}_{1}e^{itH_{0}}V(x,t)\psi(t)\|_{L^{2}_{x}( \mathbb{R}^{n})} \tag{2.16}\] \[\lesssim_{e,b,\alpha}\frac{1}{t^{e\delta-1}}\|V(x,t)\|_{L^{\infty }_{t}L^{\infty}_{x,\sigma}(\mathbb{R}^{n+1})}\|\psi_{0}\|^{2}_{L^{2}_{x}},\]
where we use Proposition 2.1. Using the same argument for (2.15), we finish the proof.
Remark: If we set \(\sigma=0\), then we need the dimension \(n\) to be greater than or equal to \(5\); see Proposition 2.3. In this paper, we want to consider the case \(n\geq 1\), so we need to assume that the potential \(V(x,t)\) is localized in \(x\).
**Proposition 2.3**.: _For \(n\geq 5\), assume that \(V(x,t)\) satisfies the condition (1.7), \(\alpha\in(0,\frac{1}{2}-\frac{2}{n})\), we have that for \(t\geq 1\),_
\[\|\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)e^{itH_{0}}V(x,t)\psi(t)\|_{L^{ 2}_{x}(\mathbb{R}^{n})}\lesssim\frac{1}{t^{\frac{n}{4}(1-2\alpha)}}\|V(x,t)\| _{L^{\infty}_{t}L^{2}_{x}(\mathbb{R}^{n+1})}\|\psi(t)\|_{L^{2}_{x}(\mathbb{R}^ {n})}. \tag{2.17}\]
Proof.: Let
\[a_{\psi}(t):=\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)e^{itH_{0}}V(x,t) \psi(t) \tag{2.18}\]
Then, using Holder's inequality and the \(L^{1}_{x}\to L^{\infty}_{x}\) decay estimate (2.4) for the free fourth-order Schrodinger equation, we obtain
\[\|a_{\psi}(t)\|_{L^{2}_{x}(\mathbb{R}^{n})} \lesssim\|\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)\|_{L^{2}_{x} (\mathbb{R}^{n})}\|e^{itH_{0}}\|_{L^{1}_{x}\to L^{\infty}_{x}}\|V(x,t)\psi(t) \|_{L^{1}} \tag{2.20}\] \[\lesssim\frac{1}{t^{\frac{n}{4}(1-2\alpha)}}\|V(x,t)\|_{L^{\infty }_{t}L^{2}_{x}(\mathbb{R}^{n+1})}\|\psi(t)\|_{L^{2}_{x}(\mathbb{R}^{n})}. \tag{2.19}\]
Thus we get (2.17) and finish the proof.
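As a heuristic count (consistent with the Remark before Proposition 2.3), the decay rate in (2.17) and the dimensional restriction \(n\geq 5\) can be read off as follows: since \(\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)\) is supported on \(\{|x|\lesssim t^{\alpha}\}\),
\[\|\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)\|_{L^{2}_{x}(\mathbb{R}^{n})}\lesssim t^{\frac{n\alpha}{2}},\qquad t^{\frac{n\alpha}{2}}\cdot|t|^{-\frac{n}{4}}=\frac{1}{t^{\frac{n}{4}(1-2\alpha)}},\]
and \(\frac{n}{4}(1-2\alpha)>1\) exactly when \(\alpha<\frac{1}{2}-\frac{2}{n}\), which is compatible with some \(\alpha>0\) only for \(n\geq 5\).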
Before using propagation estimation, we need to construct a reasonable Propagation Observable \(B(t)\), which will result in an explicit expression for some remainder terms. Since in this case the commutators can be explicitly computed using Fourier transform, we also give a direct estimate below.
Let
\[\langle\tilde{r}_{1}(t)\rangle:=(e^{itH_{0}}\psi(t),\partial_{t}[ \mathcal{F}_{1}]\mathcal{F}_{c}\mathcal{F}_{1}e^{itH_{0}}\psi(t))_{L^{2}}-(e^{ itH_{0}}\psi(t),\sqrt{\mathcal{F}_{c}}\mathcal{F}_{1}\partial_{t}[ \mathcal{F}_{1}]\sqrt{\mathcal{F}_{c}}e^{itH_{0}}\psi(t))_{L^{2}(\mathbb{R}^{n})}\] \[=(\left[\sqrt{\mathcal{F}_{c}}(\frac{|x|}{t^{\alpha}}\leq 1), \partial_{t}[\mathcal{F}_{1}(t^{b}|P|>1)]\right]e^{itH_{0}}\psi(t),\sqrt{ \mathcal{F}_{c}}(\frac{|x|}{t^{\alpha}}\leq 1)\mathcal{F}_{1}(t^{b}|P|>1)e^{itH_{0}} \psi(t))_{L^{2}_{x}(\mathbb{R}^{n})} \tag{2.21}\]
\[+(\sqrt{\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)}e^{itH_{0}}\psi(t), \partial_{t}[\mathcal{F}_{1}(t^{b}|P|>1)]\left[\sqrt{\mathcal{F}_{c}(\frac{|x|}{t ^{\alpha}}\leq 1)},\mathcal{F}_{1}(t^{b}|P|>1)\right]e^{itH_{0}}\psi(t))_{L^{2}_{x} (\mathbb{R}^{n})}\]
and
\[\langle\tilde{r}_{2}(t)\rangle:=(e^{itH_{0}}\psi(t),\mathcal{F}_{1} \mathcal{F}_{c}\partial_{t}[\mathcal{F}_{1}]e^{itH_{0}}\psi(t))_{L^{2}}-(e^{ itH_{0}}\psi(t),\sqrt{\mathcal{F}_{c}}\mathcal{F}_{1}\partial_{t}[ \mathcal{F}_{1}]\sqrt{\mathcal{F}_{c}}e^{itH_{0}}\psi(t))_{L^{2}(\mathbb{R}^{ n})}\] \[=(\mathcal{F}_{1}(t^{b}|P|>1)e^{itH_{0}}\psi(t),\sqrt{\mathcal{F}_ {c}(\frac{|x|}{t^{\alpha}}\leq 1)}\left[\sqrt{\mathcal{F}_{c}(\frac{|x|}{t^{ \alpha}}\leq 1)},\partial_{t}[\mathcal{F}_{1}(t^{b}|P|>1)]\right]e^{itH_{0}} \psi(t))_{L^{2}_{x}(\mathbb{R}^{n})}\] \[+(e^{itH_{0}}\psi(t),\left[\mathcal{F}_{1}(t^{b}|P|>1),\sqrt{ \mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)}\right]\partial_{t}[\mathcal{F}_{ 1}(t^{b}|P|>1)]\sqrt{\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)}e^{itH_{0}} \psi(t))_{L^{2}_{x}(\mathbb{R}^{n})}. \tag{2.22}\]
And let
\[\langle\tilde{\tau}_{1}(t)\rangle:=(e^{itH_{0}}\psi(t),\partial_{t }[\mathcal{F}_{c}]\mathcal{F}_{1}\mathcal{F}_{c}e^{itH_{0}}\psi(t))_{L^{2}}-( e^{itH_{0}}\psi(t),\sqrt{\mathcal{F}_{1}}\mathcal{F}_{c}\partial_{t}[ \mathcal{F}_{c}]\sqrt{\mathcal{F}_{1}}e^{itH_{0}}\psi(t))_{L^{2}(\mathbb{R}^{ n})}\] \[=(\left[\sqrt{\mathcal{F}_{1}(t^{b}|P|>1)},\partial_{t}[ \mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)]\right]e^{itH_{0}}\psi(t), \sqrt{\mathcal{F}_{1}(t^{b}|P|>1)}\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1 )e^{itH_{0}}\psi(t))_{L^{2}_{x}(\mathbb{R}^{n})}\] \[+(\sqrt{\mathcal{F}_{1}(t^{b}|P|>1)}e^{itH_{0}}\psi(t),\partial_{ t}[\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)]\left[\sqrt{\mathcal{F}_{1}(t^{b}|P|>1)}, \mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)\right]e^{itH_{0}}\psi(t))_{L^{2}_{x}( \mathbb{R}^{n})} \tag{2.23}\]
and
\[\langle\tilde{\tau}_{2}(t)\rangle:=(e^{itH_{0}}\psi(t),\mathcal{F} _{c}\mathcal{F}_{1}\partial_{t}[\mathcal{F}_{c}]e^{itH_{0}}\psi(t))_{L^{2}}-( e^{itH_{0}}\psi(t),\sqrt{\mathcal{F}_{1}}\mathcal{F}_{c}\partial_{t}[ \mathcal{F}_{c}]\sqrt{\mathcal{F}_{1}}e^{itH_{0}}\psi(t))_{L^{2}(\mathbb{R}^{ n})}\] \[=(\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)e^{itH_{0}}\psi(t), \sqrt{\mathcal{F}_{1}(t^{b}|P|>1)}\left[\sqrt{\mathcal{F}_{1}(t^{b}|P|>1)}, \partial_{t}[\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)]\right]e^{itH_{0}} \psi(t))_{L^{2}_{x}(\mathbb{R}^{n})}\] \[+(e^{itH_{0}}\psi(t),\left[\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}} \leq 1),\sqrt{\mathcal{F}_{1}(t^{b}|P|>1)}\right]\partial_{t}[\mathcal{F}_{c}( \frac{|x|}{t^{\alpha}}\leq 1)]\sqrt{\mathcal{F}_{1}(t^{b}|P|>1)}e^{itH_{0}} \psi(t))_{L^{2}_{x}(\mathbb{R}^{n})}. \tag{2.24}\]
We give the following commutator estimate.
**Lemma 2.4**.: _For \(t\geq 1,\,b\in(0,\frac{1}{3}(1-e)),\,b<\alpha\leq 1\), \(l=0,1\) we have_
\[\|[\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1),\mathcal{F}_{1}^{(l)}(t^{b}|P|>1)] \|_{L^{2}_{x}(\mathbb{R}^{n})\to L^{2}_{x}(\mathbb{R}^{n})}\lesssim\frac{1}{t^{ \alpha-b}}, \tag{2.25}\]
\[\|[\mathcal{F}_{c}^{(l)}(\frac{|x|}{t^{\alpha}}\leq 1),\mathcal{F}_{1}(t^{b}|P|>1)] \|_{L^{2}_{x}(\mathbb{R}^{n})\to L^{2}_{x}(\mathbb{R}^{n})}\lesssim\frac{1}{t^{ \alpha-b}}, \tag{2.26}\]
_where_
\[\mathcal{F}_{1}^{(l)}(k):=\frac{d^{l}}{dk^{l}}[\mathcal{F}_{1}]\quad\text{and}\quad\mathcal{F}_{c}^{(l)}(k):=\frac{d^{l}}{dk^{l}}[\mathcal{F}_{c}]. \tag{2.27}\]
Proof.: Let
\[\tilde{\mathcal{F}}:=\mathcal{F}_{1}^{(l)}. \tag{2.28}\]
Then we write \([\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1),\tilde{\mathcal{F}}]\) as
\[[\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1),\tilde{\mathcal{F}}]=c_{n} \int_{\mathbb{R}^{n}}\mathring{\tilde{\mathcal{F}}}(q)\mathcal{F}_{c}e^{it^{b}P \cdot q}-\mathring{\tilde{\mathcal{F}}}(q)e^{it^{b}P\cdot q}\mathcal{F}_{c}d^{n}q \tag{2.30}\] \[=c_{n}\int_{\mathbb{R}^{n}}\mathring{\tilde{\mathcal{F}}}(q)e^{ it^{b}P\cdot q}\times\left[e^{-it^{b}P\cdot q}\mathcal{F}_{c}(\frac{|x|}{t^{ \alpha}}\leq 1)e^{it^{b}P\cdot q}-\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1) \right]d^{n}q\] (2.31) \[=c_{n}\int_{\mathbb{R}^{n}}\mathring{\tilde{\mathcal{F}}}(q)e^{ it^{b}P\cdot q}\times\left[\mathcal{F}_{c}(\frac{|x-t^{b}q|}{t^{\alpha}}\leq 1)- \mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)\right]d^{n}q. \tag{2.29}\]
Since we have
\[\frac{\mathcal{F}_{c}(\frac{|x-t^{b}q|}{t^{\alpha}}\leq 1)-\mathcal{F}_{c}( \frac{|x|}{t^{\alpha}}\leq 1)}{t^{b-\alpha}|q|}\lesssim\sup_{x\in\mathbb{R}^{n}}| \mathcal{F}_{c}^{{}^{\prime}}(|x|\leq 1)|\lesssim 1, \tag{2.32}\]
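Indeed (a routine step, included here only for completeness), by the fundamental theorem of calculus,
\[\mathcal{F}_{c}(\frac{|x-t^{b}q|}{t^{\alpha}}\leq 1)-\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)=\int_{0}^{1}\frac{d}{d\theta}\Big[\mathcal{F}_{c}(\frac{|x-\theta t^{b}q|}{t^{\alpha}}\leq 1)\Big]d\theta,\]
and the integrand is bounded in absolute value by \(t^{b-\alpha}|q|\sup_{k}|\mathcal{F}_{c}^{\prime}(k)|\).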
Therefore, for each \(\psi\in L^{2}_{x}(\mathbb{R}^{n}),\)
\[\|[\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1),\mathcal{F}_{1}^{(l)}(t^{b}|P|>1)]\psi(x)\|_{L^{2}_{x}(\mathbb{R}^{n})}\lesssim_{n}\frac{1}{t^{\alpha-b}}\int_{\mathbb{R}^{n}}d^{n}q\,|q|\,|\mathring{\tilde{\mathcal{F}}}(q)|\,\|\psi(x)\|_{L^{2}_{x}(\mathbb{R}^{n})}\lesssim_{n}\frac{1}{t^{\alpha-b}}\|\psi(x)\|_{L^{2}_{x}(\mathbb{R}^{n})}. \tag{2.33}\]
This implies (2.25). With the same argument, we derive (2.26).
## 3. Proofs of Theorem 1.3 and Theorem 1.5
In this section, we prove Theorem 1.3 and 1.5.
Proof of Theorem 1.3.: When \(n\geq 5,\) we define
\[W^{*}_{\alpha}(t)\psi_{0}=\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)e^{ itH_{0}}\psi(t), \tag{3.1}\]
in \(L^{2}_{x}(\mathbb{R}^{n})\) with \(\psi_{0}\in L^{2}_{x}(\mathbb{R}^{n}).\) Here, we have \(W^{*}_{\alpha}(t)\psi_{0}\in L^{2}_{x}(\mathbb{R}^{n})\) due to Assumption 1.1. And we apply Cook's method to expand \(W^{*}_{\alpha}(t)\psi_{0}\)
\[W^{*}_{\alpha}(t)\psi_{0}= W^{*}_{\alpha}(1)\psi_{0}+\int_{1}^{t}ds\partial_{s}[ \mathcal{F}_{c}(\frac{|x|}{s^{\alpha}}\leq 1)]e^{isH_{0}}\psi(s)\] \[+(-i)\int_{1}^{t}ds\mathcal{F}_{c}(\frac{|x|}{s^{\alpha}}\leq 1 )e^{isH_{0}}V(x,s)\psi(s) \tag{3.2}\] \[=: W^{*}_{\alpha}(1)\psi_{0}+W_{\alpha,1}(t)+W_{\alpha,2}(t).\]
Thanks to Assumption (1.1),
\[\|W^{*}_{\alpha}(1)\psi_{0}\|_{L^{2}_{x}(\mathbb{R}^{n})}\lesssim\sup_{t\in \mathbb{R}}\|\psi(t)\|_{L^{2}_{x}(\mathbb{R}^{n})}\lesssim\|\psi_{0}\|_{L^{2}_ {x}(\mathbb{R}^{n})}. \tag{3.3}\]
For the term \(W_{\alpha,1}(t),\) by taking
\[\begin{cases}B(t)=\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)\\ \phi(t)=e^{itH_{0}}\psi(t)\end{cases},\]
we can get
\[\langle B(t):\phi(t)\rangle_{t}\leq\left(\sup_{t\in\mathbb{R}}\|\psi(t)\|_{L^{ 2}_{x}(\mathbb{R}^{n})}\right)^{2} \tag{3.4}\]
and
\[\partial_{t}\langle B(t),\phi(t)\rangle_{t}=(e^{itH_{0}}\psi(t), \partial_{t}\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)e^{itH_{0}}\psi(t))+\] \[(-i)(e^{itH_{0}}\psi(t),\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}} \leq 1)e^{itH_{0}}V(x,t)\psi(t))+i(e^{itH_{0}}V(x,t)\psi(t),\mathcal{F}_{c}( \frac{|x|}{t^{\alpha}}\leq 1)e^{itH_{0}}\psi(t)) \tag{3.5}\] \[=:B_{1}(t)+B_{2}(t)+B_{3}(t)\]
with \(B_{1}(t)\geq 0.\) Then using Holder's inequality and Proposition 2.3, we get
\[\|B_{2}(t)+B_{3}(t)\|_{L^{2}_{x}(\mathbb{R}^{n})}\lesssim\frac{1}{t^{\frac{n} {4}(1-2\alpha)}}\|V(x,t)\|_{L^{\infty}_{t}L^{2}_{x}(\mathbb{R}^{n+1})}\|\psi(t )\|_{L^{2}_{x}(\mathbb{R}^{n})}^{2} \tag{3.6}\]
that is, \(B_{2}(t)+B_{3}(t)\in L^{1}_{t}([1,\infty)).\) Applying the **Relative Propagation Estimate**, we obtain, for all \(T\geq 1,\)
\[\int_{1}^{T}B_{1}(t)dt \leq\langle B(t):\phi(t)\rangle_{t}|_{t=T}-\langle B(t):\phi(t) \rangle_{t}|_{t=1}+\|B_{2}(t)+B_{3}(t)\|_{L^{1}_{t}([1,\infty))} \tag{3.7}\] \[\leq\left(\sup_{t\in\mathbb{R}}\|\psi(t)\|_{L^{2}_{x}(\mathbb{R}^ {n})}\right)^{2}+\|B_{2}(t)+B_{3}(t)\|_{L^{1}_{t}([1,\infty))}.\]
Hence,
\[\int_{1}^{\infty}B_{1}(t)dt<\infty, \tag{3.8}\]
as \(T\to\infty.\) And for all \(t_{2}\geq t_{1}>1,\) by using Holder's inequality in \(s\) variable, one has
\[\|W_{\alpha,1}(t_{2})-W_{\alpha,1}(t_{1})\|_{L^{2}_{x}(\mathbb{R}^{n})}=\|\int_{t_{1}}^{t_{2}}\partial_{s}[\mathcal{F}_{c}(\frac{|x|}{s^{\alpha}}\leq 1)]e^{isH_{0}}\psi(s)ds\|_{L^{2}_{x}(\mathbb{R}^{n})} \tag{3.9}\] \[\leq\|\left(\int_{t_{1}}^{t_{2}}\partial_{s}[\mathcal{F}_{c}(\frac{|x|}{s^{\alpha}}\leq 1)]ds\right)^{1/2}\left(\int_{t_{1}}^{t_{2}}\partial_{s}[\mathcal{F}_{c}(\frac{|x|}{s^{\alpha}}\leq 1)]|e^{isH_{0}}\psi(s)|^{2}ds\right)^{1/2}\|_{L^{2}_{x}(\mathbb{R}^{n})} \tag{3.10}\] \[\leq\left(\int_{t_{1}}^{\infty}B_{1}(s)ds\right)^{1/2}\to 0, \tag{3.11}\]
as \(t_{1}\to\infty.\) So \(\{W_{\alpha,1}(t)\}_{t\geq 1}\) is Cauchy in \(L^{2}_{x}(\mathbb{R}^{n})\) and we obtain
\[\lim_{t\to\infty}W_{\alpha,1}(t) \tag{3.12}\]
exists in \(L^{2}_{x}(\mathbb{R}^{n})\).
For the interaction term \(W_{\alpha,2}(t)\), we use Proposition 2.2 to get
\[\lim_{t\to\infty}W_{\alpha,2}(t) \tag{3.13}\]
exists in \(L^{2}_{x}(\mathbb{R}^{n})\).
Proof of Theorem 1.5 for part (i).: For lower space dimensions with \(n\geq 1\), a similar argument applies. We need to estimate the interaction term and use propagation estimates twice with the commutator argument to control the term which has a "sign". To be precise, when the space dimension \(n\geq 1\), we define
\[W^{*}_{\alpha,b}(t)\psi_{0}:=\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1) \mathcal{F}_{1}(t^{b}|P|>1)e^{itH_{0}}\psi(t) \tag{3.14}\]
in \(L^{2}_{x}(\mathbb{R}^{n})\) with \(\psi_{0}\in L^{2}_{x}(\mathbb{R}^{n})\). Here, we have \(W^{*}_{\alpha,b}(t)\psi_{0}\in L^{2}_{x}(\mathbb{R}^{n})\) due to Assumption 1.1. And we apply Cook's method to expand \(W^{*}_{\alpha,b}(t)\psi_{0}\)
\[W^{*}_{\alpha,b}(t)\psi_{0}=W^{*}_{\alpha,b}(1)\psi_{0}+\] \[\int_{1}^{t}\partial_{s}[\mathcal{F}_{c}(\frac{|x|}{s^{\alpha}} \leq 1)]\mathcal{F}_{1}(s^{b}|P|>1)e^{isH_{0}}\psi(s)ds+\int_{1}^{t} \mathcal{F}_{c}(\frac{|x|}{s^{\alpha}}\leq 1)\partial_{s}[\mathcal{F}_{1}(s^{b}|P |>1)]e^{isH_{0}}\psi(s)ds\] \[\qquad+(-i)\int_{1}^{t}\mathcal{F}_{c}(\frac{|x|}{s^{\alpha}} \leq 1)\mathcal{F}_{1}(s^{b}|P|>1)e^{isH_{0}}V(x,s)\psi(s)ds\] \[\qquad=:W^{*}_{\alpha,b}(1)\psi_{0}+\psi_{W,1}(t)+\psi_{W,2}(t)+ \psi_{W,3}(t). \tag{3.15}\]
Thanks to Assumption 1.1, we have \(W^{*}_{\alpha,b}(1)\psi_{0}\in L^{2}_{x}(\mathbb{R}^{n})\).
For the term \(\psi_{W,1}(t)\), by taking
\[\begin{cases}B(t)=\mathcal{F}_{1}(t^{b}|P|>1)\mathcal{F}_{c}(\frac{|x|}{t^{ \alpha}}\leq 1)\mathcal{F}_{1}(t^{b}|P|>1)\\ \phi(t)=e^{itH_{0}}\psi(t)\end{cases},\]
we can get
\[\langle B(t):\phi(t)\rangle_{t}\leq\left(\sup_{t\in\mathbb{R}}\|\psi(t)\|_{L^ {2}_{x}(\mathbb{R}^{n})}\right)^{2} \tag{3.16}\]
and \(\partial_{t}[\langle B(t):\phi(t)\rangle_{t}]\) reads
\[\partial_{t}[\langle B(t):\phi(t)\rangle_{t}]=c_{1}(t)+c_{2}(t)+g_{1}(t), \tag{3.17}\]
with
\[\partial_{t}[\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)]\geq 0,\quad\forall t \geq 1,\]
\[c_{1}(t):= (\mathcal{F}_{1}(t^{b}|P|>1)e^{itH_{0}}\psi(t),\partial_{t}[ \mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)]\mathcal{F}_{1}(t^{b}|P|>1)e^{ itH_{0}}\psi(t))_{L^{2}_{x}(\mathbb{R}^{n})}\]
\[\geq 0,\quad\forall t\geq 1,\]
\[c_{2}(t):= 2(\sqrt{\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)}e^{itH_{0}} \psi(t),\] \[\mathcal{F}_{1}(t^{b}|P|>1)\partial_{t}[\mathcal{F}_{1}(t^{b}|P|>1 )]\sqrt{\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)}e^{itH_{0}}\psi(t))_{L^{2}_{x} (\mathbb{R}^{n})}\geq 0,\quad\forall t\geq 1,\]
and
\[g_{1}(t) :=(-i)(\mathcal{F}_{1}(t^{b}|P|>1)e^{itH_{0}}\psi(t),\mathcal{F}_ {c}(\frac{|x|}{t^{\alpha}}\leq 1)\mathcal{F}_{1}(t^{b}|P|>1)e^{itH_{0}}V(x,t) \psi(t))_{L^{2}_{x}(\mathbb{R}^{n})}\] \[+i(\mathcal{F}_{1}(t^{b}|P|>1)e^{itH_{0}}V(x,t)\psi(t),\mathcal{F }_{c}(\frac{|x|}{t^{\alpha}}\leq 1)\mathcal{F}_{1}(t^{b}|P|>1)e^{itH_{0}} \psi(t))_{L^{2}_{x}(\mathbb{R}^{n})}\] \[\qquad+\langle\tilde{r}_{1}(t)\rangle+\langle\tilde{r}_{2}(t)\rangle,\]
where \(\langle\tilde{r}_{1}(t)\rangle\) and \(\langle\tilde{r}_{2}(t)\rangle\) are defined in (2.21) and (2.22). Then using Holder's inequality and Proposition 2.2 and Lemma 2.4, we get
\[\|g_{1}(t)\|_{L^{2}_{x}}\lesssim\|\psi_{0}\|^{2}_{L^{2}_{x}}. \tag{3.18}\]
Applying the **Relative Propagation Estimate**, we obtain, for all \(T\geq 1\),
\[\int_{1}^{T}c_{1}(t)dt\leq \int_{1}^{T}\left(c_{1}(t)+c_{2}(t)\right)dt\] \[\leq \langle B(t):\phi(t)\rangle_{t}|_{t=T}-\langle B(t):\phi(t) \rangle_{t}|_{t=1}+\|g_{1}(t)\|_{L^{1}_{t}[1,\infty)} \tag{3.20}\] \[\lesssim \|\psi(0)\|^{2}_{L^{2}_{x}(\mathbb{R}^{n})}<\infty. \tag{3.19}\]
Hence,
\[\int_{1}^{\infty}c_{1}(t)dt=\lim_{T\to\infty}\int_{1}^{T}c_{1}(t)dt\quad\text { exists in }\mathbb{R}. \tag{3.21}\]
For \(t_{2}\geq t_{1}>1\), by using Holder's inequality in \(s\) variable, one has
\[\psi_{W,1}(t_{2}) -\psi_{W,1}(t_{1})=\int_{t_{1}}^{t_{2}}\partial_{s}[\mathcal{F}_ {c}(\frac{|x|}{s^{\alpha}}\leq 1)]\mathcal{F}_{1}(s^{b}|P|>1)e^{isH_{0}} \psi(s)ds\] \[\leq\left(\int_{t_{1}}^{t_{2}}\partial_{s}[\mathcal{F}_{c}(\frac {|x|}{s^{\alpha}}\leq 1)]ds\right)^{1/2}\left(\int_{t_{1}}^{t_{2}}\partial_{s}[ \mathcal{F}_{c}(\frac{|x|}{s^{\alpha}}\leq 1)](\mathcal{F}_{1}(s^{b}|P|>1)e^{ isH_{0}}\psi(s))^{2}ds\right)^{1/2}\] \[\leq\left(\int_{t_{1}}^{t_{2}}\partial_{s}[\mathcal{F}_{c}(\frac {|x|}{s^{\alpha}}\leq 1)](\mathcal{F}_{1}(s^{b}|P|>1)e^{isH_{0}}\psi(s))^{2}ds \right)^{1/2}, \tag{3.22}\]
then taking \(L^{2}_{x}\) norm and applying Fubini's theorem, we have that
\[\|\psi_{W,1}(t_{2})-\psi_{W,1}(t_{1})\|_{L^{2}_{x}}\leq\left(\int_{t_{1}}^{t_{2}}\|\sqrt{|\partial_{s}[\mathcal{F}_{c}(\frac{|x|}{s^{\alpha}}\leq 1)]|}\,\mathcal{F}_{1}(s^{b}|P|>1)e^{isH_{0}}\psi(s)\|^{2}_{L^{2}_{x}(\mathbb{R}^{n})}ds\right)^{1/2}\to 0, \tag{3.23}\]
as \(t_{1}\rightarrow\infty.\) So \(\{\psi_{W,1}(t)\}_{t\geq 1}\) is Cauchy in \(L^{2}_{x}(\mathbb{R}^{n}).\) Therefore,
\[\psi_{W,1}(\infty):=\lim_{t\rightarrow\infty}\psi_{W,1}(t)\text{ exists in }\ L^{2}_{x}(\mathbb{R}^{n}). \tag{3.24}\]
For the term \(\psi_{W,2}(t),\) we write \(\psi_{W,2}(t)\) as
\[\psi_{W,2}(t)=\int_{1}^{t}\partial_{s}[\mathcal{F}_{1}(s^{b}|P|>1 )]\mathcal{F}_{c}(\frac{|x|}{s^{\alpha}}\leq 1)e^{isH_{0}}\psi(s)ds-\] \[\int_{1}^{t}[\partial_{s}[\mathcal{F}_{1}(s^{b}|P|>1),\mathcal{F} _{c}(\frac{|x|}{s^{\alpha}}\leq 1)]e^{isH_{0}}\psi(s)ds:=\psi_{W,21}(t)+\psi_{W,2 2}(t). \tag{3.25}\]
For \(\psi_{W,22}(t),\) we conclude that
\[\psi_{W,22}(\infty):=\lim_{t\rightarrow\infty}\psi_{W,22}(t) \tag{3.26}\]
exists in \(L^{2}_{x}(\mathbb{R}^{n})\) by Lemma 2.4. For \(\psi_{W,21}(t),\) we use the same argument as for the estimate of \(\psi_{W,1}(t).\) By taking
\[\begin{cases}B(t)=\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)\mathcal{F}_{1} (t^{b}|P|>1)\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)\\ \phi(t)=e^{itH_{0}}\psi(t)\end{cases},\]
we can get
\[\langle B(t):\phi(t)\rangle_{t}\leq\left(\sup_{t\in\mathbb{R}}\|\psi(t)\|_{L^ {2}_{x}(\mathbb{R}^{n})}\right)^{2} \tag{3.27}\]
and \(\partial_{t}[\langle B(t):\phi(t)\rangle_{t}]\) reads
\[\partial_{t}[\langle B(t):\phi(t)\rangle_{t}]=\tilde{c}_{1}(t)+\tilde{c}_{2}( t)+\tilde{g}_{1}(t), \tag{3.28}\]
with
\[\partial_{t}[\mathcal{F}_{1}(t^{b}|P|>1)]\geq 0,\quad\forall t\geq 1,\]
\[\tilde{c}_{1}(t):= (\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)e^{itH_{0}}\psi(t), \partial_{t}[\mathcal{F}_{1}(t^{b}|P|>1)]\mathcal{F}_{c}(\frac{|x|}{t^{\alpha }}\leq 1)e^{itH_{0}}\psi(t))_{L^{2}_{x}(\mathbb{R}^{n})}\] \[\geq 0,\quad\forall t\geq 1,\]
\[\tilde{c}_{2}(t):= 2(\sqrt{\mathcal{F}_{1}(t^{b}|P|>1)}e^{itH_{0}}\psi(t),\] \[\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)\partial_{t}[ \mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)]\sqrt{\mathcal{F}_{1}(t^{b}|P|>1)}e^{ itH_{0}}\psi(t))_{L^{2}_{x}(\mathbb{R}^{n})}\geq 0,\quad\forall t\geq 1,\]
and
\[\tilde{g}_{1}(t):=(-i)(\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)e^{itH_{0}}\psi(t),\mathcal{F}_{1}(t^{b}|P|>1)\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)e^{itH_{0}}V(x,t)\psi(t))_{L^{2}_{x}(\mathbb{R}^{n})}\] \[+i(\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)e^{itH_{0}}V(x,t)\psi(t),\mathcal{F}_{1}(t^{b}|P|>1)\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)e^{itH_{0}}\psi(t))_{L^{2}_{x}(\mathbb{R}^{n})}\] \[\qquad+\langle\tilde{\tau}_{1}(t)\rangle+\langle\tilde{\tau}_{2}(t)\rangle,\]
where \(\langle\tilde{\tau}_{1}(t)\rangle\) and \(\langle\tilde{\tau}_{2}(t)\rangle\) are defined in (2.23) and (2.24). Then using Holder's inequality and Proposition 2.2 and Lemma 2.4, we get
\[\|\tilde{g}_{1}(t)\|_{L^{2}_{x}(\mathbb{R}^{n})}\lesssim\|\psi_{0}\|^{2}_{L^{2} _{x}(\mathbb{R}^{n})}. \tag{3.29}\]
Applying the **Relative Propagation Estimate** again, we obtain, for all \(T\geq 1\),
\[\int_{1}^{T}\tilde{c}_{1}(t)dt\leq \int_{1}^{T}\left(\tilde{c}_{1}(t)+\tilde{c}_{2}(t)\right)dt\] \[\leq \langle B(t):\phi(t)\rangle_{t}|_{t=T}-\langle B(t):\phi(t) \rangle_{t}|_{t=1}+\|\tilde{g}_{1}(t)\|_{L^{1}_{t}[1,\infty)} \tag{3.30}\] \[\lesssim \|\psi(0)\|^{2}_{L^{2}_{x}(\mathbb{R}^{n})}<\infty.\]
Hence,
\[\int_{1}^{\infty}\tilde{c}_{1}(t)dt=\lim_{T\to\infty}\int_{1}^{T}\tilde{c}_{1 }(t)dt\quad\text{ exists in }\mathbb{R}. \tag{3.31}\]
For \(t_{2}\geq t_{1}>1\), by using Holder's inequality in \(s\) variable, one has
\[\psi_{W,21}(t_{2})-\psi_{W,21}(t_{1})=\int_{t_{1}}^{t_{2}}\partial_{s}[\mathcal{F}_{1}(s^{b}|P|>1)]\mathcal{F}_{c}(\frac{|x|}{s^{\alpha}}\leq 1)e^{isH_{0}}\psi(s)ds\] \[\leq\left(\int_{t_{1}}^{t_{2}}\partial_{s}[\mathcal{F}_{1}(s^{b}|P|>1)]ds\right)^{1/2}\left(\int_{t_{1}}^{t_{2}}\partial_{s}[\mathcal{F}_{1}(s^{b}|P|>1)](\mathcal{F}_{c}(\frac{|x|}{s^{\alpha}}\leq 1)e^{isH_{0}}\psi(s))^{2}ds\right)^{1/2}\] \[\leq\left(\int_{t_{1}}^{t_{2}}\partial_{s}[\mathcal{F}_{1}(s^{b}|P|>1)](\mathcal{F}_{c}(\frac{|x|}{s^{\alpha}}\leq 1)e^{isH_{0}}\psi(s))^{2}ds\right)^{1/2}, \tag{3.32}\]
then taking \(L^{2}_{x}(\mathbb{R}^{n})\) norm and applying Fubini's theorem, we have that
\[\|\psi_{W,21}(t_{2})-\psi_{W,21}(t_{1})\|_{L^{2}_{x}(\mathbb{R}^{n})}\leq\left(\int_{t_{1}}^{t_{2}}\|\sqrt{|\partial_{s}[\mathcal{F}_{1}(s^{b}|P|>1)]|}\,\mathcal{F}_{c}(\frac{|x|}{s^{\alpha}}\leq 1)e^{isH_{0}}\psi(s)\|^{2}_{L^{2}_{x}(\mathbb{R}^{n})}ds\right)^{1/2}\to 0, \tag{3.33}\]
as \(t_{1}\to\infty\). So \(\{\psi_{W,21}(t)\}_{t\geq 1}\) is Cauchy in \(L^{2}_{x}(\mathbb{R}^{n})\). Therefore,
\[\psi_{W,21}(\infty):=\lim_{t\to\infty}\psi_{W,21}(t)\text{ exists in }\ L^{2}_{x}(\mathbb{R}^{n}). \tag{3.34}\]
For the last term \(\psi_{W,3}(t)\), by using Proposition 2.1, we have
\[\psi_{W,3}(\infty):=\lim_{t\to\infty}\psi_{W,3}(t)\text{ exists in }\ L^{2}_{x}(\mathbb{R}^{n}). \tag{3.35}\]
Then, combining (3.24), (3.26), (3.34) and (3.35), we obtain that \(W^{*}_{\alpha,b}(\infty)\psi_{0}\) exists in \(L^{2}_{x}(\mathbb{R}^{n})\). This finishes the proof.
Now we prove the second part of Theorem 1.5. For the part with \(\bar{\mathcal{F}}_{c}(\frac{|x|}{t^{\alpha}}>1)\), we have
**Lemma 3.1**.: _For \(\alpha>0\),_
\[w\text{-}\lim_{t\to\infty}\left(1-\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1) \mathcal{F}_{1}(t^{b}|P|>1)\right)e^{itH_{0}}\psi(t)=0. \tag{3.36}\]
Proof.: Choose \(\phi\in L^{2}_{x}(\mathbb{R}^{n}).\) For any \(\alpha>0,\) we have
\[|(\phi,\left(1-\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1) \mathcal{F}_{1}(t^{b}|P|>1)\right)e^{itH_{0}}\psi(t))_{L^{2}_{x}(\mathbb{R}^{n} )}|\] \[\leq\|\left(1-\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1) \mathcal{F}_{1}(t^{b}|P|>1)\right)\phi\|_{L^{2}_{x}(\mathbb{R}^{n})}\|\psi(t) \|_{L^{2}_{x}(\mathbb{R}^{n})} \tag{3.37}\] \[\leq\|\left(1-\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1) \mathcal{F}_{1}(t^{b}|P|>1)\right)\phi\|_{L^{2}_{x}(\mathbb{R}^{n})}\|\psi_{0 }\|_{L^{2}_{x}(\mathbb{R}^{n})}\to 0,\]
as \(t\to\infty.\) This finishes the proof.
Let
\[\psi_{\alpha,d}(t):=e^{-itH_{0}}\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1 )\mathcal{F}_{1}e^{itH_{0}}(\psi(t)-e^{-itH_{0}}\psi_{0}). \tag{3.38}\]
**Theorem 3.2**.: _Let \(\sigma\) be as in (1.5) and assume that \(V(x,t)\) satisfies the condition (1.5), then_
\[\psi_{+,\alpha,d}:=s\text{-}\lim_{t\to\infty}e^{itH_{0}}\psi_{\alpha,d}(t) \tag{3.39}\]
_exists in \(L^{2}_{x}\) for \(\alpha\in(b,1-3b).\)_
Proof.: We use Cook's method to expand \(e^{itH_{0}}\psi_{\alpha,d}(t)\)
\[e^{itH_{0}}\psi_{\alpha,d}(t) =e^{iH_{0}}\psi_{\alpha,d}(1)+\int_{1}^{t}\partial_{s}[\mathcal{F }_{c}(\frac{|x|}{s^{\alpha}}\leq 1)]\mathcal{F}_{1}(s^{b}|P|>1)e^{isH_{0}}( \psi(s)-e^{-isH_{0}}\psi_{0})ds\] \[+\int_{1}^{t}\mathcal{F}_{c}(\frac{|x|}{s^{\alpha}}\leq 1) \partial_{s}[\mathcal{F}_{1}(s^{b}|P|>1)]e^{isH_{0}}(\psi(s)-e^{-isH_{0}}\psi_ {0})ds\] \[\qquad\qquad+(-i)\int_{1}^{t}\mathcal{F}_{c}(\frac{|x|}{s^{ \alpha}}\leq 1)\mathcal{F}_{1}(s^{b}|P|>1)e^{isH_{0}}V(x,s)\psi(s)ds \tag{3.40}\] \[=:e^{iH_{0}}\psi_{\alpha,d}(1)+\psi_{1,d}(t)+\psi_{2,d}(t)+\psi_{ 3,d}(t).\]
One can follow a process similar to that in the proof of Theorem 1.5 to get
\[\lim_{t\to\infty}e^{itH_{0}}\psi_{\alpha,d}(t) \tag{3.41}\]
exists in \(L^{2}_{x}\) by using Proposition 2.1, Lemma 2.4, and the propagation estimates, via choosing
\[B(t):=\mathcal{F}_{1}(t^{b}|P|>1)\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1 )\mathcal{F}_{1}(t^{b}|P|>1) \tag{3.42}\]
and
\[C(t):=\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)\mathcal{F}_{1}(t^{b}|P|>1) \mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1) \tag{3.43}\]
for \(\psi_{1,d}(t)\) and \(\psi_{2,d}(t)\) respectively.
First, we give a general idea of how to construct the weakly localized part. Given \(\epsilon>0\), let \(\alpha\in[1/4,1/4+\epsilon)\) and choose \(b=1/4-\epsilon/3\). Let
\[\psi_{\epsilon,j,+}:=\mathcal{F}_{2,t}(x_{j}>t^{1/4+\epsilon})e^{-itH_{0}} \bar{\mathcal{F}}_{c}(\frac{|x|}{t^{\alpha}}>1)e^{itH_{0}}\psi_{d}(t) \tag{3.44}\]
and
\[\psi_{\epsilon,j,-}:=\mathcal{F}_{2,t}(-x_{j}>t^{1/4+\epsilon})e^{-itH_{0}} \bar{\mathcal{F}}_{c}(\frac{|x|}{t^{\alpha}}>1)e^{itH_{0}}\psi_{d}(t), \tag{3.45}\]
where
\[\psi_{d}(t):=\psi(t)-e^{-itH_{0}}\psi_{0}, \tag{3.46}\]
\[\mathcal{F}_{2,t}(x_{j}>t^{1/4+\epsilon}):=\left(\prod_{l=1}^{j-1}\bar{ \mathcal{F}}_{2}(|x_{l}|\leq t^{1/4+\epsilon})\right)\mathcal{F}_{2}(x_{j}>t ^{1/4+\epsilon}) \tag{3.47}\]
and
\[\mathcal{F}_{2,t}(-x_{j}>t^{1/4+\epsilon}):=\left(\prod_{l=1}^{j-1}\bar{ \mathcal{F}}_{2}(|x_{l}|\leq t^{1/4+\epsilon})\right)\mathcal{F}_{2}(-x_{j}>t ^{1/4+\epsilon}). \tag{3.48}\]
Then
\[e^{-itH_{0}}\bar{\mathcal{F}}_{c}(\frac{|x|}{t^{\alpha}}>1)e^{itH_{0}}\psi_{d }(t)=\psi_{w,b,\epsilon}(t)+\sum_{j=1}^{n}(\psi_{\epsilon,j,+}+\psi_{\epsilon, j,-}), \tag{3.49}\]
where we use
\[\prod_{l=1}^{n}\bar{\mathcal{F}}_{2}(|x_{l}|\leq t^{1/4+\epsilon})+\sum_{j=1} ^{n}\left(\prod_{l=1}^{j-1}\bar{\mathcal{F}}_{2}(|x_{l}|\leq t^{1/4+\epsilon} )\right)\mathcal{F}_{2}(|x_{j}|>t^{1/4+\epsilon})=1.\]
And we set
\[\psi_{w,\epsilon}(t):=\left(\prod_{l=1}^{n}\bar{\mathcal{F}}_{2}(|x_{l}|\leq t ^{1/4+\epsilon})\right)e^{-itH_{0}}\bar{\mathcal{F}}_{c}(\frac{|x|}{t^{\alpha }}>1)e^{itH_{0}}\psi_{d}(t). \tag{3.50}\]
**Lemma 3.3**.: _If \(V(x,t)\in L_{t}^{\infty}L_{\sigma,x}^{\infty}(\mathbb{R}^{n}\times\mathbb{R})\) for some \(\sigma>4\), then when \(\alpha\in[1/4,1/4+\epsilon)\), \(a\geq 0\),_
\[\|\psi_{\epsilon,j,\pm}\|_{\mathcal{H}_{x}^{a}(\mathbb{R}^{n})}\to 0,\text{ as }t \rightarrow\infty. \tag{3.51}\]
Proof.: It suffices to prove
\[\|\mathcal{F}_{2}(x_{j}>t^{1/4+\epsilon})e^{-itH_{0}}\bar{\mathcal{F}}_{c}( \frac{|x|}{t^{\alpha}}>1)e^{itH_{0}}\psi_{d}(t)\|_{L_{x}^{2}(\mathbb{R}^{n})}\to 0 \tag{3.52}\]
and
\[\|\mathcal{F}_{2}(-x_{j}>t^{1/4+\epsilon})e^{-itH_{0}}\bar{\mathcal{F}}_{c}( \frac{|x|}{t^{\alpha}}>1)e^{itH_{0}}\psi_{d}(t)\|_{L_{x}^{2}(\mathbb{R}^{n})} \to 0, \tag{3.53}\]
as \(t\to\infty\). The idea behind the estimate for this part of phase space is that when the position is large and positive (\(x_{j}>t^{1/4+\epsilon}\)) and the velocity is small or has the opposite sign (\(t^{1/4-\epsilon/3}P_{j}\leq 1/10\)), there is no propagation: the norm of the operator
\[\mathcal{F}_{2}(x_{j}>t^{1/4+\epsilon})\bar{\mathcal{F}}_{1}(t^{1/4-\epsilon/3 }P_{j}\leq 1/10)e^{-ibH_{0}}\langle x\rangle^{-\sigma} \tag{3.54}\]
decays in \(t\), with a decay rate that is absolutely integrable over \(t\) when \(b\in(0,t]\), \(\sigma>4\). It corresponds to the expectation that, as time goes to infinity, a particle starting from the origin with a small or negative velocity has a very small probability of moving to a positive position. These are the so-called maximum and minimum velocity bounds for the fourth-order Schrodinger operator; see the following Lemma. We obtain that this part vanishes as \(t\to\infty\),
\[\|\mathcal{F}_{2}(x_{j}>t^{1/4+\epsilon})\bar{\mathcal{F}}_{1}(t^{1/4- \epsilon/3}P_{j}\leq 1/10)e^{-itH_{0}}\bar{\mathcal{F}}_{c}(\frac{|x|}{t^{ \alpha}}>1)e^{itH_{0}}\psi_{d}(t)\|_{L^{2}_{x}(\mathbb{R}^{n})}\to 0, \tag{3.55}\]
as \(t\to\infty\).
When \(t^{1/4-\epsilon/3}P_{j}>1/10\), this part is an outgoing wave. We write
\[\mathcal{F}_{2}(x_{j}>t^{1/4+\epsilon})\mathcal{F}_{1}(t^{1/4- \epsilon/3}P_{j}>1/10))e^{-itH_{0}}\bar{\mathcal{F}}_{c}(\frac{|x|}{t^{\alpha }}>1)e^{itH_{0}}\psi_{d}(t)\] \[= \mathcal{F}_{2}(x_{j}>t^{1/4+\epsilon})\mathcal{F}_{1}(t^{1/4- \epsilon/3}P_{j}>1/10))e^{-itH_{0}}\tilde{\psi}_{+,d}\] \[+\mathcal{F}_{2}(x_{j}>t^{1/4+\epsilon})\mathcal{F}_{1}(t^{1/4- \epsilon/3}P_{j}>1/10))(\psi_{d}(t)-e^{-itH_{0}}\tilde{\psi}_{+,d})\] \[-\mathcal{F}_{2}(x_{j}>t^{1/4+\epsilon})\mathcal{F}_{1}(t^{1/4- \epsilon/3}P_{j}>1/10))e^{-itH_{0}}\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1 )e^{itH_{0}}\psi_{d}(t) \tag{3.56}\] \[=:\psi_{d}^{1}(t)+\psi_{d}^{2}(t)+\psi_{d}^{3}(t)\]
with
\[\tilde{\psi}_{+,d}:=\lim_{t\to\infty}e^{itH_{0}}(\psi_{d}(t)-e^{-itH_{0}}\psi _{0}) \tag{3.57}\]
in \(L^{2}_{x}(\mathbb{R}^{n})\).
For \(\psi_{d}^{2}(t)\), using Duhamel formula, we have
\[\psi_{d}^{2}(t)=(-i)\int_{t}^{\infty}\mathcal{F}_{2}(x_{j}>t^{1/4+\epsilon})\mathcal{F}_{1}(t^{1/4-\epsilon/3}P_{j}>1/10)e^{-i(t-s)H_{0}}V(x,s)\psi(s)ds. \tag{3.58}\]
Using (3.65) in Lemma 3.4, we obtain, for \(\sigma>4\),
\[\|\psi_{d}^{2}(t)\|_{L^{2}_{x}} \lesssim_{\epsilon}\int_{t}^{\infty}\frac{1}{|t^{1/4+\epsilon}+|s -t|^{1/4}|^{\sigma}}\|\langle x\rangle^{\sigma}V(x,t)\psi(t)\|_{L^{\infty}_{t} L^{2}_{x}(\mathbb{R}^{n+1})} \tag{3.59}\] \[\lesssim_{\epsilon}\frac{1}{|t|^{\sigma/4-1}}\|\langle x\rangle^{ \sigma}V(x,t)\|_{L^{\infty}_{t}L^{\infty}_{x}}\|\psi_{0}\|_{L^{2}_{x}(\mathbb{ R}^{n})}\to 0,\]
as \(t\to\infty\). The decay rate is absolutely integrable over \(t\) when \(s\geq t\), \(\sigma>4\). It corresponds to the observation that as time goes to negative infinity, a particle starting from the origin with
a positive velocity has a very small probability of moving to a positive position. Here we also use that \(V(x,s)\) is localized in space. We can use
\[\tilde{\psi}_{+,d}=\psi_{+,\alpha,d} \tag{3.60}\]
in the weak sense. In fact, \(\tilde{\psi}_{+,d}\) can be regarded as \(\psi_{+,\alpha,d}\). Thanks to Theorem 3.2, we have
\[\psi_{d}^{1}(t)=\mathcal{F}_{2}(x_{j}>t^{1/4+\epsilon})\mathcal{F}_{1}(t^{1/4 -\epsilon/3}P_{j}>1/10))e^{-itH_{0}}\psi_{+,\alpha,d}. \tag{3.61}\]
Due to (3.61), \(\psi_{d}^{1}(t)+\psi_{d}^{3}(t)\) can be rewritten as
\[\psi_{d}^{1}(t)+\psi_{d}^{3}(t)= \mathcal{F}_{2}(x_{j}>t^{1/4+\epsilon})\mathcal{F}_{1}(t^{1/4- \epsilon/3}P_{j}>1/10)) \tag{3.62}\] \[\times(e^{-itH_{0}}\psi_{+,\alpha,d}-e^{-itH_{0}}\mathcal{F}_{c} (\frac{|x|}{t^{\alpha}}\leq 1)e^{itH_{0}}\psi_{d}(t))\to 0\]
in \(L_{x}^{2}(\mathbb{R}^{n})\) by using Theorem 3.2. Based on (3.55),(3.58) and (3.62), we have
\[\|\mathcal{F}_{2}(x_{j}>t^{1/4+\epsilon})e^{-itH_{0}}\bar{\mathcal{F}}_{c}( \frac{|x|}{t^{\alpha}}>1)e^{itH_{0}}\psi_{d}(t)\|_{L_{x}^{2}(\mathbb{R}^{n})} \to 0, \tag{3.63}\]
as \(t\to\infty\). Similarly, we have
\[\|\mathcal{F}_{2}(-x_{j}>t^{1/4+\epsilon})e^{-itH_{0}}\bar{\mathcal{F}}_{c}( \frac{|x|}{t^{\alpha}}>1)e^{itH_{0}}\psi_{d}(t)\|_{L_{x}^{2}(\mathbb{R}^{n})} \to 0, \tag{3.64}\]
as \(t\to\infty\). This finishes the proof.
Now let us introduce Minimal and Maximal velocity bounds.
**Lemma 3.4**.: _For \(a>0,\,b\in(0,t]\), \(\alpha\in(0,1/4+\epsilon),\) we have_
\[\|\mathcal{F}_{2}(\frac{|x_{1}|}{t^{1/4+\epsilon}}>1)\mathcal{F}_{1}(t^{1/4-\epsilon/3}P_{1}>1/10)e^{iaH_{0}}\langle x_{1}\rangle^{-\sigma}\|_{L_{x}^{2}(\mathbb{R}^{n})\to L_{x}^{2}(\mathbb{R}^{n})}\lesssim_{\epsilon}\frac{1}{|t^{1/4+\epsilon}+|a|^{1/4}|^{\sigma}}, \tag{3.65}\]
\[\|\mathcal{F}_{2}(\frac{\pm x_{1}}{t^{1/4+\epsilon}}>1)\mathcal{F}_{1}(t^{1/4-\epsilon/3}P_{1}\leq 1/10)e^{-ibH_{0}}\langle x_{1}\rangle^{-\sigma}\|_{L_{x}^{2}(\mathbb{R}^{n})\to L_{x}^{2}(\mathbb{R}^{n})}\lesssim_{\epsilon}\frac{1}{|t^{1/4+\epsilon}+|b|^{1/4}|^{\sigma}} \tag{3.66}\]
_and_
\[\|\mathcal{F}_{2,t}(\pm x_{j}>t^{1/4+\epsilon})\bar{\mathcal{F}}_{1}(\pm t^{1/4-\epsilon/3}P_{j}\leq 1/10)e^{-itH_{0}}\bar{\mathcal{F}}_{c}(\frac{|x|}{t^{\alpha}}>1)e^{itH_{0}}e^{-ibH_{0}}\langle x_{1}\rangle^{-\sigma}\|_{L_{x}^{2}(\mathbb{R}^{n})\to L_{x}^{2}(\mathbb{R}^{n})}\] \[\lesssim_{\epsilon}\frac{1}{|t^{1/4+\epsilon}+|b|^{1/4}|^{\sigma}}. \tag{3.67}\]
Proof.: These estimates are proved by using the method of non-stationary phase for constant coefficients. Break the LHS of (3.65) into two parts:
\[\mathcal{F}_{2}(\frac{|x_{1}|}{t^{1/4+\epsilon}}>1)\mathcal{F}_{1}(t ^{1/4-\epsilon/3}P_{1}>1/10)e^{iaH_{0}}\langle x_{1}\rangle^{-\sigma}=\] \[\mathcal{F}_{2}(\frac{|x_{1}|}{t^{1/4+\epsilon}}>1)\mathcal{F}_{1} (t^{1/4-\epsilon/3}P_{1}>1/10)e^{iaH_{0}}\langle x_{1}\rangle^{-\sigma}\chi(|x_ {1}|\geq(t^{1/4+\epsilon}+|a|^{1/4})/1000)+\] \[\mathcal{F}_{2}(\frac{|x_{1}|}{t^{1/4+\epsilon}}>1)\mathcal{F}_{1} (t^{1/4-\epsilon/3}P_{1}>1/10)e^{iaH_{0}}\langle x_{1}\rangle^{-\sigma}\chi(|x _{1}|<(t^{1/4+\epsilon}+|a|^{1/4})/1000) \tag{3.68}\] \[:=J_{1}+J_{2}.\]
For \(J_{1}\),
\[\|J_{1}\|_{L^{2}_{x}\to L^{2}_{x}}\lesssim\|\mathcal{F}_{2}(\frac{|x_{1}|}{t^{1/4+\epsilon}}>1)\mathcal{F}_{1}(t^{1/4-\epsilon/3}P_{1}>1/10)e^{iaH_{0}}\|_{L^{2}_{x}(\mathbb{R}^{n})\to L^{2}_{x}(\mathbb{R}^{n})}\times\frac{1}{|t^{1/4+\epsilon}+|a|^{1/4}|^{\sigma}} \tag{3.69}\] \[\lesssim\frac{1}{|t^{1/4+\epsilon}+|a|^{1/4}|^{\sigma}}.\]
For \(J_{2}\), going to Fourier space, by using the factor \(\mathcal{F}_{2}(\frac{|x_{1}|}{t^{1/4+\epsilon}}>1)\mathcal{F}_{1}(t^{1/4- \epsilon/3}q_{1}>1/10)\) and the factor \(\chi(|y_{1}|<(t^{1/4+\epsilon}+|a|^{1/4})/1000)\),
\[e^{ix_{1}q_{1}}e^{iaq^{4}}e^{-iy_{1}q_{1}}=\frac{1}{i(x_{1}+4a|q|^{2}q_{1}-y_{1 })}\partial_{q_{1}}[e^{ix_{1}q_{1}}e^{iaq^{4}}e^{-iy_{1}q_{1}}] \tag{3.70}\]
with
\[|x_{1}+4a|q|^{2}q_{1}-y_{1}|\geq t^{1/4+\epsilon}\chi(|a|<t^{1+4\epsilon})+|a| ^{1/4}\chi(|a|\geq t^{1+4\epsilon}), \tag{3.71}\]
we have
\[\|J_{2}\|_{L^{2}_{x}(\mathbb{R}^{n})\to L^{2}_{x}(\mathbb{R}^{n})} \lesssim_{\epsilon}\frac{1}{|t^{1/4+\epsilon}+|a|^{1/4}|^{\sigma}} \tag{3.72}\]
via integration by parts in \(q_{1}\) sufficiently many times. Thus, one gets (3.65). This finishes the proof.
**Remark 3.5**.: _When we use Lemma 3.4 above, we need \(\sigma>4\) in order to get integrability in \(a\) or \(b\) when \(|a|,|b|\geq 1\)._
Proof of Theorem 1.5 for part (ii).: Since
\[\psi(t)=e^{-itH_{0}}\psi_{0}+\psi_{d}(t)\] \[=e^{-itH_{0}}(\psi_{0}+\psi_{+,\alpha,d})+e^{-itH_{0}}\mathcal{F} _{c}(\frac{|x|}{t^{\alpha}}\leq 1)e^{itH_{0}}\psi_{d}(t)+e^{-itH_{0}} \bar{\mathcal{F}}_{c}(\frac{|x|}{t^{\alpha}}>1)e^{itH_{0}}\psi_{d}(t)\] \[=e^{-itH_{0}}(\psi_{0}+\psi_{+,\alpha,d})-e^{-itH_{0}}\psi_{+, \alpha,d}+e^{-itH_{0}}\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)e^{itH_{0}} \psi_{d}(t)+\]
\[\psi_{w,b,\epsilon}(t)+\sum_{j=1}^{n}(\psi_{\epsilon,j,+}+\psi_{\epsilon,j,-}), \tag{3.73}\]
by using Theorem 3.2, we have
\[\|\psi(t)- e^{-itH_{0}}(\psi_{0}+\psi_{+,\alpha,d})-\psi_{w,\epsilon}(t)\|_{L^{ 2}_{x}(\mathbb{R}^{n})}= \tag{3.74}\] \[\|e^{-itH_{0}}\mathcal{F}_{c}(\frac{|x|}{t^{\alpha}}\leq 1)e^{itH_{0 }}\psi_{d}(t)-e^{-itH_{0}}\psi_{+,\alpha,d}+\sum_{j=1}^{n}(\psi_{\epsilon,j,+ }+\psi_{\epsilon,j,-})\|_{L^{2}_{x}(\mathbb{R}^{n})}\to 0,\]
as \(t\to\infty.\) By the definition of \(\psi_{w,\epsilon}(t)\), (1.30) follows, and we finish the proof.
**Acknowledgements**: A. S. was partially supported by Simons Foundation Grant number 851844 and NSF grants DMS-2205931. X. W. was partially supported by NSF Grant DMS-1802170, NSF Grant DMS-21-06255, NSF Grant DMS-2205931, NSF Grants DMS-1854453, DMS-2204795, DMS-2305523, Humboldt Fellowship, NSF CAREER DMS-2044626/DMS-2303146.
|
2306.10540 | Indirect Search for Lepton Flavour Violating Signals at the Future
Electron-Proton Colliders | The search for lepton flavor violation (LFV) is a powerful probe to look for
new physics beyond the Standard Model. We explored the possibility of searches
for LFV $Z$ boson couplings to electron and muon pair at the upcoming
electron-proton colliders, namely the Large Hadron Electron Collider (LHeC) and
the Future Circular lepton-hadron Collider (FCC-eh). We employed the study via
a single muon plus an associated jet channel to search for the LFV signal. We
used a multivariate technique to obtain an improved signal-background analysis.
By using the condition on nonobservation of any significant deviation of the
signal over the expected background, we provide an upper limit on the LFV $Z$
boson coupling and corresponding branching ratio. We find that an upper limit
of $2.0\times 10^{-7}$ and $1.5 \times 10^{-7}$ can be set on BR($Z\to e
\mu$) at 95\% C.L. with one year run of LHeC and FCC-eh, respectively, if the
LFV coupling is governed by vector or axial-vector coupling. For tensor or
axial-tensor coupling these limits can be improved to $2.9\times 10^{-8}$ and
$1.5\times 10^{-8}$ for LHeC and FCC-eh machines, respectively. The projected
numbers improve significantly over the existing limit of $2.6\times 10^{-7}$
set by ATLAS. | Anjan Kumar Barik, Atri Dey, Tousik Samui | 2023-06-18T12:09:09Z | http://arxiv.org/abs/2306.10540v1 | # Indirect Search for Lepton Flavour Violating Signals at the Future Electron-Proton Colliders
###### Abstract
The search for lepton flavor violation (LFV) is a powerful probe to look for new physics beyond the Standard Model. We explored the possibility of searches for LFV \(Z\) boson couplings to an electron-muon pair at the upcoming electron-proton colliders, namely the Large Hadron Electron Collider (LHeC) and the Future Circular lepton-hadron Collider (FCC-eh). We employed the study via a single muon plus an associated jet channel to search for the LFV signal. We used a multivariate technique to obtain an improved signal-background analysis. By using the condition of nonobservation of any significant deviation of the signal over the expected background, we provide an upper limit on the LFV \(Z\) boson coupling and the corresponding branching ratio. We find that upper limits of \(2.0\times 10^{-7}\) and \(1.5\times 10^{-7}\) can be set on BR(\(Z\to e\mu\)) at 95% C.L. with a one-year run of the LHeC and FCC-eh, respectively, if the LFV coupling is governed by a vector or axial-vector coupling. For a tensor or axial-tensor coupling, these limits can be improved to \(2.9\times 10^{-8}\) and \(1.5\times 10^{-8}\) for the LHeC and FCC-eh machines, respectively. The projected numbers improve significantly over the existing limit of \(2.6\times 10^{-7}\) set by ATLAS.
Introduction
The successful framework of the Standard Model (SM) of particle physics is equipped with the conservation of lepton flavors, although the observation of neutrino masses and mixing [1; 2] implies non-zero lepton flavor violation (LFV)1 via loops. The amount of such violation is far too small to be detected in an experiment. To date, no experimental measurement shows direct evidence in support of charged LFV [3; 4]. Nevertheless, the violation of lepton flavor remains a topic of great interest in the particle physics community, primarily because an experimental observation establishing lepton flavor violation would open up a plethora of avenues for new physics beyond the Standard Model (BSM). For example, various neutrino mass models, such as the Type-II, Type-III, and inverse seesaw models, which generate neutrino masses at the tree level, exhibit LFV. On the other hand, some models in which the neutrino mass is generated radiatively, _e.g._ the Zee-Babu model, Scotogenic models, etc., also show signatures of LFV [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15].
Footnote 1: The phrase ‘lepton flavor violating’ is also abbreviated as LFV in this article. The exact meaning of the abbreviation should be clear from the context.
A series of experiments, both dedicated and general-purpose, have been performed at various levels in order to search for LFV in the charged leptons, none of which shows any significant evidence for it. This, in turn, sets upper limits on the LFV branching ratios (BRs) and LFV couplings in the decays of various particles. The MEG experiment sets an upper limit on BR(\(\mu\to e\gamma\)) at \(4.2\times 10^{-13}\)[16], the SINDRUM experiment sets an upper limit on BR(\(\mu\to 3e\)) at \(1.0\times 10^{-12}\)[17], the BaBar experiment sets an upper limit on BR(\(\tau\to e\gamma\)) at \(3.3\times 10^{-8}\) and on BR(\(\tau\to\mu\gamma\)) at \(4.4\times 10^{-8}\)[18], and the Belle experiment sets limits of BR(\(\tau\to 3e\)) \(<2.7\times 10^{-8}\) and BR(\(\tau\to 3\mu\)) \(<4.4\times 10^{-8}\)[19]; all of these limits are at 90% C.L.
Another series of bounds is given on the LFV decays of heavy neutral bosons. These types of bounds primarily come from collider experiments because of their ability to produce on-shell heavy bosons. The LFV decays of such particles have been searched for at the Large Hadron Collider (LHC). The BRs of the Higgs boson in the \(e\mu\), \(e\tau\), and \(\tau\mu\) channels have been set to be no more than \(3.5\times 10^{-4}\)[20], \(6.1\times 10^{-3}\), and \(2.5\times 10^{-3}\)[21], respectively, at 95% C.L. by the CMS collaboration at the LHC. On the other hand, the searches for lepton flavor violating \(Z\) decays (LFVZD) date back to the era of the Large Electron-Positron (LEP) collider, where the neutral boson could be produced copiously. The LFV decays of the neutral boson \(Z\) have been searched for in the \(e\mu\), \(e\tau\), and \(\mu\tau\) channels at LEP by the OPAL and DELPHI collaborations [22; 23]. However, the recent search for LFVZD in the \(e\mu\) channel at the LHC by the ATLAS collaboration supersedes the previous bound from LEP.
Projections for such LFV decays have also been studied for the future electron-positron colliders [24; 25; 26; 27]. Similar studies have also been performed for the future electron-proton colliders [28; 29; 30; 31; 32]. However, these studies primarily focused on electron-tau LFV scenarios. In this work, we investigate the discovery potential of LFV at future electron-proton colliders, namely the Large Hadron Electron Collider (LHeC) and the Future Circular lepton-hadron Collider (FCC-eh). We have primarily looked at the possibility of measuring BR(\(Z\to e\mu\)) at these upcoming colliders. For this, we employed an indirect search strategy via the \(\mu+j\) channel, where a single \(\mu\) is produced due to the LFV coupling with the \(Z\) boson. This indirect channel is able to provide good sensitivity, depending on the type of coupling, namely vector, axial-vector, tensor, or axial-tensor, with the \(Z\) boson. Our study suggests that a result at least comparable to that of the LHC can be obtained at the LHeC, and an improved result can be expected at the FCC-eh machine.
The outline of this article is as follows. In Section II, we briefly discuss the generic Lagrangian providing LFV coupling of the \(Z\) boson. We describe our analysis strategy in Section II.1. We then discuss the methods and multivariate analysis in Section II.2. We present our result in Section III. After that, we summarize our findings in Section IV.
## II Prospect of LFV \(Z\) boson coupling at electron-proton collider
We study the LFVZD in a model-independent way. For this, we consider the following general-purpose BSM Lagrangian.
\[\mathcal{L}_{Z}^{\text{eff}}=\bar{\ell}_{i}\gamma^{\mu}(g_{v}^{ij}+g_{av}^{ij} \gamma_{5})\ell_{j}Z_{\mu}+\bar{\ell}_{i}\sigma^{\mu\nu}(g_{t}^{ij}+g_{at}^{ ij}\gamma_{5})\ell_{j}Z_{\mu\nu}+h.c., \tag{1}\]
where \(\ell_{i}\) represents the \(i^{\text{th}}\)-generation lepton and the \(g^{ij}\)'s are the coupling constants of the \(\ell_{i}\ell_{j}\) pair with the \(Z\) boson. The subscript of \(g^{ij}\) indicates the Lorentz structure, namely vector, axial-vector, tensor, or axial-tensor, of the coupling with the \(Z\) boson. Eq. (1) represents a general and model-independent Lagrangian and should not be identified with an ultraviolet-complete model. In popular BSM models, these types of interactions are usually generated when particles with non-diagonal flavor couplings to the lepton sector run in the loop [33; 34]. In flavor-conserving models, the flavor off-diagonal couplings of the \(Z\) boson with the leptons are absent in the tree-level Lagrangian, thereby avoiding ultraviolet divergences via the loops. However, such models necessarily impose relations between the different \(g\) couplings.
In terms of these couplings, the branching ratio of \(Z\) boson to \(\ell_{i}\ell_{j}\) pair becomes
\[\text{BR}(Z\to\ell_{i}\ell_{j})=\frac{M_{Z}}{12\,\pi\,\Gamma_{Z}^{\text{SM}}} \Big{[}2(|C_{v}^{\ell}\delta_{ij}+g_{v}^{ij}|^{2}+|C_{av}^{\ell}\delta_{ij}+g _{av}^{ij}|^{2})+M_{Z}^{2}(|g_{t}^{ij}|^{2}+|g_{at}^{ij}|^{2})\Big{]}, \tag{2}\]
where \(C_{v}^{\ell}\) and \(C_{av}^{\ell}\), respectively, are the SM vector and axial-vector couplings of the \(Z\) boson with the charged leptons. In Eq. (2), we assumed the total decay width of the \(Z\) boson to be that of the SM since the BSM contributions to the total decay width are negligible. The upper limit on the branching ratio of LFVZD can be translated into limits on the couplings of the Lagrangian in Eq. (1). In Table 1, we tabulate the current upper limits on various LFV \(Z\) boson branching ratios and the derived upper limits on the coupling constants, considering a single coupling at a time. The vector and axial-vector couplings give the same BR and, therefore, have the same derived limit. Likewise, the tensor and axial-tensor coupling constants have the same derived upper limits.
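As a quick numerical cross-check (not part of the derivation above), the derived limits in Table 1 can be reproduced by inverting Eq. (2) for a single non-zero coupling; the sketch below assumes the PDG values \(M_{Z}\simeq 91.19\) GeV and \(\Gamma_{Z}^{\rm SM}\simeq 2.495\) GeV as its only inputs.

```python
import math

M_Z = 91.1876      # GeV
GAMMA_Z = 2.4952   # GeV, SM total width of the Z boson

def g_vector_limit(br):
    # Invert Eq. (2) for i != j with only g_v (or g_av) non-zero:
    # BR = M_Z / (12 pi Gamma_Z) * 2 |g_v|^2
    return math.sqrt(br * 12.0 * math.pi * GAMMA_Z / (2.0 * M_Z))

def g_tensor_limit(br):
    # Invert Eq. (2) for i != j with only g_t (or g_at) non-zero:
    # BR = M_Z / (12 pi Gamma_Z) * M_Z^2 |g_t|^2
    return math.sqrt(br * 12.0 * math.pi * GAMMA_Z / M_Z**3)

for channel, br in [("Z -> e mu", 2.6e-7), ("Z -> e tau", 5.0e-6), ("Z -> mu tau", 6.5e-6)]:
    print(f"{channel}: g_v < {g_vector_limit(br):.2e}, g_t < {g_tensor_limit(br):.2e} GeV^-1")
```

Up to rounding of the input parameters, this reproduces the numbers quoted in Table 1.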
### Signals and Backgrounds
In order to study the LFV scenario in an electron-proton collider, we consider the process
\[e^{-}P\to\mu^{-}j. \tag{3}\]
The LFV couplings introduced in the Lagrangian in Eq. (1) would induce such a process. In an electron-proton collider, the production of a single muon without any source of missing energy is unlikely in the SM because of the conservation of the lepton number of each generation. Therefore, this provides an interesting channel to search for lepton flavor violation. If no signal for such violation is observed at the collider, one may put a constraint on the LFV couplings introduced in Eq. (1). We can then use this bound on the LFV couplings to provide an upper limit on the branching ratio of the LFVZD. In this work, we consider one coupling out of \(g_{v}^{e\mu}\), \(g_{av}^{e\mu}\), \(g_{t}^{e\mu}\), and \(g_{at}^{e\mu}\), that is to say, one of the vector, axial-vector, tensor, or axial-tensor operators, at a time in our analysis. We defer the study of scenarios containing more than one operator to a separate work.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & Experiment & BR & \(g_{v}^{ij}=g_{av}^{ij}\) & \(g_{t}^{ij}=g_{at}^{ij}\) \\ \hline \(Z\to e\mu\) & ATLAS [35] & \(2.6\times 10^{-7}\) & \(3.68\times 10^{-4}\) & \(5.71\times 10^{-6}\) GeV\({}^{-1}\) \\ \hline \(Z\to e\tau\) & ATLAS [36] & \(5.0\times 10^{-6}\) & \(1.61\times 10^{-3}\) & \(2.49\times 10^{-5}\) GeV\({}^{-1}\) \\ \hline \(Z\to\mu\tau\) & ATLAS [36] & \(6.5\times 10^{-6}\) & \(1.83\times 10^{-3}\) & \(2.84\times 10^{-5}\) GeV\({}^{-1}\) \\ \hline \end{tabular}
\end{table}
Table 1: Current upper limits at 95% C.L. on the LFV \(Z\) boson branching ratios and the sources of such limits. The upper limits on the various couplings, derived via Eq. (2) from the bounds on the LFV \(Z\) BRs, are also shown. A single coupling has been considered to be nonzero while deriving the bounds.
We have considered the Large Hadron Electron collider (LHeC) and the Future Circular lepton-hadron Collider (FCC-eh) for the searches for the above signal. Each machine has been proposed to run in two phases. We label these phases as 'LHeC1', 'LHeC2', 'FCC-eh1', and 'FCC-eh2' for further reference. We also list the most important specifications of these machines in Table 2.
In \(e\)-\(P\) collisions, the signal process arises from the flavor-changing \(Ze\mu\) vertex via a \(t\)-channel exchange of the \(Z\) boson. This particular channel is not present in the SM. However, some other SM processes can give rise to the same final state as the signal. The following SM processes are considered to be potential backgrounds for our signal.
\[\begin{array}{cccc}\textbf{mu:}&e^{-}P&\longrightarrow\nu_{e}jW& \longrightarrow\nu_{e}j\mu\nu_{\mu}\\ \textbf{e-mu:}&e^{-}P&\longrightarrow e^{-}jW&\longrightarrow e^{-}j\mu\nu_ {\mu}\\ \textbf{mu-mu:}&e^{-}P&\longrightarrow\nu_{e}j\gamma^{*}/Z&\longrightarrow\nu _{e}j\mu\mu\end{array}\]
* **mu:** The main background comes from the production of a \(W\) boson and a jet along with an invisible neutrino. The \(W\) boson can then decay into a muon and a neutrino. This background contains a single muon and has the same visible final state as the signal. However, the large missing energy from the undetected neutrinos makes the background reduction easier.
* **e-mu:** The second main background primarily comes from the production of an electron, a jet, and a \(W\) boson. The subsequent decay of the \(W\) boson in the muon channel gives a final state similar to the signal if the electron in the final state is missed. The presence of missing energy makes the background reduction easier.
* **mu-mu:** The third background is similar to the first one, except that a \(\gamma^{*}/Z\) boson is produced along with the jet and missing energy. The decay of the \(\gamma^{*}/Z\) to two muons, with one of them being missed, makes its final state identical to that of the signal.
\begin{table}
\begin{tabular}{|l|r|r|r|r|} \hline & LHeC1 [37] & LHeC2 [38] & FCC-eh1 [39] & FCC-eh2 [40] \\ \hline Electron energy [GeV] & 30 & 50 & 60 & 60 \\ \hline Proton energy [GeV] & 7000 & 7000 & 20000 & 50000 \\ \hline Luminosity [nb\({}^{-1}\)s\({}^{-1}\)] & 5 & 9 & 8 & 15 \\ \hline Integrated Luminosity (1 year run) [fb\({}^{-1}\)] & 157.7 & 283.8 & 252.2 & 473.0 \\ \hline \end{tabular}
\end{table}
Table 2: Some important specifications of the upcoming electron-proton colliders.
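The integrated luminosities quoted in the last row of Table 2 correspond to the instantaneous values accumulated over one calendar year of running; a minimal sketch of the conversion is given below (the seconds-per-year figure and the assumption of a 100% duty cycle are ours).

```python
SECONDS_PER_YEAR = 3.1536e7  # 365 days, assuming a 100% duty cycle

inst_lumi_nb = {"LHeC1": 5, "LHeC2": 9, "FCC-eh1": 8, "FCC-eh2": 15}  # nb^-1 s^-1
for machine, lumi in inst_lumi_nb.items():
    integrated_fb = lumi * SECONDS_PER_YEAR / 1.0e6  # 1 fb^-1 = 10^6 nb^-1
    print(f"{machine}: {integrated_fb:.1f} fb^-1 per year")
```

This reproduces the one-year integrated luminosities in Table 2 up to rounding.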
We further note that the same event topology in the backgrounds also appears when the vector bosons decay to \(\tau\) and the \(\tau\) subsequently decays leptonically. We have taken care of this aspect in the background event generation.
For further signal-background analysis, we have implemented the new Lagrangian described by Eq. (1) in Feynrules[41] to obtain the Universal FeynRules Output (UFO)[42] files. The model UFO is then used to generate the signal events for the scattering process. The signal and background events at the parton level have been generated using MadGraph5@aMCNLO (v3.4.2)[43] at the lowest order. At the parton-level event generation, we have imposed the cuts \(p_{T}^{\mu,e}>5\) GeV, \(p_{T}^{j}>10\) GeV, \(\eta_{e,\mu}<3.5\), and \(|\eta_{j}|<8.0\). These parton-level events have then been showered in Pythia8[44] and stored in HepMC2[45] formatted files. We then used fastjet3[46] to form jets from the Pythia8-generated hadrons. We further note that the signal contains a \(\mu\), which has very good resolution at high \(p_{T}\) at the detector. We do not expect much change in the result due to detector effects. Therefore, detector effects have not been considered in this work.
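For concreteness, a minimal sketch of the jet-clustering step is given below, assuming the classic FastJet Python bindings are available; the anti-\(k_{T}\) algorithm with \(R=0.4\), the 20 GeV threshold, and the toy input momenta are illustrative choices of ours, while the actual analysis uses the fastjet3 C++ library on the Pythia8 hadrons.

```python
import fastjet  # classic FastJet Python bindings (assumed available)

# Toy final-state hadrons as (px, py, pz, E); in the full analysis these come from the Pythia8/HepMC2 record
four_momenta = [(30.0, 1.0, 50.0, 58.5), (28.0, -0.5, 47.0, 54.8), (0.3, 0.2, 5.0, 5.1)]

pseudojets = [fastjet.PseudoJet(px, py, pz, e) for (px, py, pz, e) in four_momenta]
jet_def = fastjet.JetDefinition(fastjet.antikt_algorithm, 0.4)  # anti-kT with R = 0.4 (illustrative choice)
cluster_seq = fastjet.ClusterSequence(pseudojets, jet_def)
jets = sorted(cluster_seq.inclusive_jets(20.0), key=lambda j: -j.pt())  # jets with pT > 20 GeV

for jet in jets:
    print(f"jet: pT = {jet.pt():.1f} GeV, eta = {jet.eta():.2f}")
```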
The key to any signal-background analysis is to apply suitable cuts on variables that can give us a signal-favored region with more signal and less background. To identify such variables, we illustrate, in Figs. 1 and 2, the normalized distributions of some important variables. In Fig. 1, we show the \(p_{T\mu}\) and \(p_{Tj_{1}}\) distributions for the three different backgrounds and for the signal with non-zero vector and non-zero tensor couplings. The distributions for the signal with non-zero axial-vector and axial-tensor couplings are very similar to their non-axial counterparts and are therefore not shown. Figure 1(a) shows the distribution of \(p_{T\mu}\) for LHeC1 and Fig. 1(b) for the FCC-eh1 machine. In the bottom row of Fig. 1, _i.e._ Figs. 1(c) and 1(d), we show the normalized distribution of the transverse momentum of the leading jet (\(p_{T_{j_{1}}}\)) for the signal and backgrounds at the same machine energies. Because of the momentum-dependent factors in the tensor coupling, the distributions of \(p_{T\mu}\) and \(p_{Tj_{1}}\) have longer tails at higher \(p_{T}\) compared to the vector coupling scenario.
Similarly, in the top row of Fig. 2, we show the distribution of the missing transverse energy \(\not{E}_{T}\) for both LHeC1 (Fig. 2(a)) and FCC-eh1 (Fig. 2(b)). In the second row, we illustrate the distribution of \(\Delta R\) between the muon and the leading jet, \(\Delta R_{\mu j_{1}}\), for LHeC1 (Fig. 2(c)) and for FCC-eh1 (Fig. 2(d)). From the distribution of \(\not{E}_{T}\), one can easily see that the missing energy is a good variable for reducing the SM backgrounds over the signal for both types of couplings and at both colliders. Further inspection of the other variables might reveal more interesting signal regions. Instead of manually inspecting all possible event variables, we perform the signal and background separation with the help of a multivariate analysis. In order to save computation time in the machine-learning step, we apply a cut of \(\not{E}_{T}<50\) GeV before sending the variables to the multivariate analysis tool. Although \(\Delta R_{\mu j_{1}}\) also appears to have significantly different distributions for signal and background, we did not put any cut on it, because the preselection cut on the missing transverse energy reduces the dissimilarity between the signal and background distributions.
Keeping these in mind, we have imposed the following preselection cuts for the signal and the
Figure 1: Normalized distribution of \(p_{T_{\mu}}\) for both signal and backgrounds for (a) LHeC-Run1 (top-left) and (b) FCCeh-Run1 (top-right) in the first panel. In the second panel there is normalized distribution of \(p_{T_{j}}\) for both signal and backgrounds for (c) LHeC-Run1 (bottom-left) and d) FCCeh-Run1 (bottom-right). For vector-like coupling we use \(g_{v}=3.68\times 10^{-4}\) and for tensor-like coupling \(g_{t}=3.00\times 10^{-6}\) GeV\({}^{-1}\).
backgrounds.
\[p_{T}^{\mu,e}>10\text{ GeV};\qquad p_{T}^{j}>20\text{ GeV};\qquad \not{E}_{T}<50\text{ GeV};\] \[|\eta_{e,\mu}|<5.0;\qquad|\eta_{j}|<5.0;\qquad N_{\mu}=1;\qquad N_{e }=0;\qquad N_{j}\geq 1. \tag{4}\]
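A minimal sketch of how these preselection requirements could be applied per event is shown below; the event-object attributes (`pt`, `eta`) and container names are hypothetical placeholders, while the numerical thresholds follow Eq. (4).

```python
def passes_preselection(muons, electrons, jets, met):
    """Return True if the event satisfies the preselection of Eq. (4).

    muons, electrons, jets: lists of objects carrying .pt (GeV) and .eta;
    met: missing transverse energy in GeV.
    """
    sel_muons = [m for m in muons if m.pt > 10.0 and abs(m.eta) < 5.0]
    sel_electrons = [e for e in electrons if e.pt > 10.0 and abs(e.eta) < 5.0]
    sel_jets = [j for j in jets if j.pt > 20.0 and abs(j.eta) < 5.0]
    return (len(sel_muons) == 1 and len(sel_electrons) == 0
            and len(sel_jets) >= 1 and met < 50.0)
```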
After applying the preselection cuts, the cross sections for our signal and backgrounds, for all types of couplings and for the different machine energies, are listed in Table 3.
Figure 2: Normalized distribution of \(\not{E}_{T}\) for both signal and backgrounds for (a) LHeC-Run1 (top-left) and (b) FCCeh-Run1 (top-right) in the first panel. The second panel shows the normalized distribution of \(\Delta R_{\mu j_{1}}\) for both signal and backgrounds for (c) LHeC-Run1 (bottom-left) and (d) FCCeh-Run1 (bottom-right). For vector-like coupling we use \(g_{v}=3.68\times 10^{-4}\) and for tensor-like coupling \(g_{t}=3.00\times 10^{-6}\) GeV\({}^{-1}\).
### Multivariate analysis
After applying the aforementioned cuts on the observables for signal and background events, we move on to investigate potential improvements in separating the signal from the background using established machine learning methods, such as Gradient Boosted Decision Trees [47]. Compared to a rectangular cut-based analysis, these techniques have been widely employed in the recent literature and have been found to offer a superior separation of the signal from the background. The key objective here is to build a one-dimensional observable, by appropriately combining the important observables, that can effectively distinguish our signals from the backgrounds. We have used the XGBoost [48] toolkit for gradient boosting in this analysis.
For further analysis, a total of 13 input variables, called feature variables, have been used for the training and validation of our data sample. The variables, along with their definitions, are provided in Table 4. With a maximum depth of 4 and a learning rate of 0.01, we have used about 1000 estimators for the gradient-boosted decision trees. We have used 80% of the whole dataset for training and 20% for validation in both XGBoost analyses. Overtraining of the data sample is one potential drawback of these strategies: in cases of overtraining, the performance on the test sample cannot match the very high accuracy obtained on the training sample. With our selection of parameters, we have specifically verified that the algorithm is not overtrained.
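Schematically, the training setup described above corresponds to the following XGBoost configuration; the feature matrix and labels are random placeholders standing in for the 13 feature variables of Table 4, and the exact options used in the analysis may differ.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Placeholder arrays standing in for the 13 feature variables of Table 4 and the
# signal (1) / background (0) labels; in practice these are filled from the event samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 13))
y = rng.integers(0, 2, size=5000)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

bdt = XGBClassifier(
    n_estimators=1000,   # ~1000 boosting stages
    max_depth=4,         # maximum tree depth
    learning_rate=0.01,  # shrinkage
    eval_metric="auc",
)
bdt.fit(X_train, y_train, eval_set=[(X_val, y_val)], verbose=False)

# Comparing training and validation AUC is a simple check against overtraining.
auc_train = roc_auc_score(y_train, bdt.predict_proba(X_train)[:, 1])
auc_val = roc_auc_score(y_val, bdt.predict_proba(X_val)[:, 1])
print(f"train AUC = {auc_train:.3f}, validation AUC = {auc_val:.3f}")
```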
## III Result and discussion
As mentioned previously, we have considered a single coupling at a time in order to find the upper limit on the branching ratio of the \(Z\to e\mu\) decay. In the last section, we noticed that the results for the vector-only and axial-vector-only couplings are not noticeably different from each other; the same is true for the tensor-only and axial-tensor-only couplings. Therefore, in the next subsection we discuss the results for the vector-only coupling scenario, omitting the discussion of the axial-vector-only scenario. In the subsection after that, we discuss the tensor-only coupling scenario.
### Vector and axial-vector coupling
For the discussion in this subsection, we have chosen only \(g_{v}=3.68\times 10^{-4}\) to be non-zero. We first show, in Fig. 3(a), the performance of the BDT network through the Receiver Operating Characteristic (ROC) curve. The good performance of the network is evident from the figure: the background acceptance is below 0.01 at 50% signal efficiency. This high performance arises because all the backgrounds contain neutrinos, a source of missing energy, in contrast to the signal, where there is no source of missing energy. The network returns the BDT classifier variable which then
\begin{table}
\begin{tabular}{||c|c||} \hline Variable & Definition \\ \hline \hline \(P_{T}^{\mu_{1}}\) & Transverse momentum of the leading muon \\ \(P_{T}^{j_{1}}\) & Transverse momentum of the leading jet \\ \(E_{T}^{\rm miss}\) & Missing transverse energy \\ \(N_{\mu}\) & No of muons in the event \\ \(N_{j}\) & No of jets in the event \\ \(m_{\mu j_{1}}\) & Invariant mass of the leading muon and leading jet \\ \(m_{\rm cluster}\) & The cluster transverse mass [49] \\ \(m_{T}\) & Transverse mass \\ \(H_{T}\) & Scalar sum of \(p_{T}\) of all the final state particles \\ \(\Delta\phi_{\mu j_{1}}\) & \(\Delta\phi\) between leading muon and leading jet \\ \(\Delta\phi_{\mu\not{E}_{T}}\) & \(\Delta\phi\) between leading muon and missing energy \\ \(\Delta\phi_{j_{1}\not{E}_{T}}\) & \(\Delta\phi\) between leading jet and missing energy \\ \(\Delta R_{\mu j_{1}}\) & \(\Delta R\) between leading muon and leading jet \\ \hline \end{tabular}
\end{table}
Table 4: Feature variables for training in the XGBoost toolkit.
can, in principle, be used to set a cut and calculate the signal significances. We have shown these signal significances as a function of signal efficiencies.
For given signal efficiency \(\epsilon_{S}\) and background efficiency \(\epsilon_{B}\), the signal significance is calculated as
\[\text{Signal Significance}(\mathfrak{S})=\frac{\sigma_{S}^{0}\times\mathcal{L} \times\epsilon_{S}}{\sqrt{\sigma_{B}^{0}\times\mathcal{L}\times\epsilon_{B}}}, \tag{5}\]
where \(\sigma_{S}^{0}\) (\(\sigma_{B}^{0}\)) is the cross section for the signal (backgrounds), and \(\mathcal{L}\) represents the integrated luminosity. The variation of the signal significance as a function of the signal efficiency is shown in Fig. 3(b) for all four machine energies described in Table 2. The curves show that the signal significance is best in the region of 30% to 60% signal efficiency. The background efficiencies in this region are substantially small, varying between \(10^{-3}\) and \(10^{-2}\).
We then choose our working point to be \(\epsilon_{S}=0.5\). The variation of the signal significance as a function of luminosity is shown in Fig. 4(a) for the four different colliders. One can observe that the signal significance is distinctly better (\(\sim 1.4-1.9\) times better) at FCC-eh2 compared to FCC-eh1 and both LHeC runs. Furthermore, for a given luminosity, the signal significance varies as the square of the coupling constant \(g_{v}\). We can therefore, in principle, put an upper limit on the coupling constant at \(2\sigma\) C.L., that is, the value of \(g_{v}\) at which a \(2\sigma\) signal significance is achieved.
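The chain just described (Eq. (5), the \(g_{v}^{2}\) scaling of the signal cross section, and the extraction of the \(2\sigma\) limit at the chosen working point) can be summarized in a few lines; the cross sections and efficiencies below are placeholders, not the values of this analysis.

```python
import numpy as np

def significance(sigma_s, sigma_b, lumi, eff_s, eff_b):
    """Signal significance of Eq. (5); cross sections in fb, luminosity in fb^-1."""
    return (sigma_s * lumi * eff_s) / np.sqrt(sigma_b * lumi * eff_b)

def g_upper_limit(g_ref, significance_ref, n_sigma=2.0):
    """Coupling at which the significance reaches n_sigma, using the fact that the
    signal cross section (and hence the significance) scales as g^2."""
    return g_ref * (n_sigma / significance_ref) ** 0.5

# Placeholder post-preselection cross sections and BDT working-point efficiencies,
# quoted at the reference coupling g_ref:
sigma_s, sigma_b = 0.5, 200.0      # fb
eff_s, eff_b = 0.5, 5.0e-3
g_ref = 3.68e-4

for lumi in (150.0, 250.0, 500.0):  # fb^-1
    s = significance(sigma_s, sigma_b, lumi, eff_s, eff_b)
    print(f"L = {lumi:5.0f} fb^-1: significance = {s:5.2f}, "
          f"2-sigma limit on g_v = {g_upper_limit(g_ref, s):.2e}")
```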
Figure 3: (a) Receiver operating characteristic curves on the BDT classifier variable for signal vs. background. (b) Variation of signal significance as a function of signal efficiency on the BDT classifier variable with \(g_{v}=3.68\times 10^{-4}\). The significances are calculated with luminosities 150 fb\({}^{-1}\) for LHeC1, 250 fb\({}^{-1}\) for LHeC2, 250 fb\({}^{-1}\) for FCC-eh1, and 500 fb\({}^{-1}\) for FCC-eh2.
These limits on \(g_{v}\) can then be translated into a limit on the LFV \(Z\)-boson branching ratio by using Eq. 2. We plot in Fig. 4(b) the upper limit on \(\text{BR}(Z\to e\mu)\) at 95% C.L. as a function of luminosity for the four different machines. In Fig. 4(b), the horizontal line represents the current bound, which can be reached within 200 fb\({}^{-1}\) at all the future electron-proton colliders.
### Tensor and axial-tensor coupling
Similar to the previous subsection, we performed an analysis with the tensor LFV coupling. In this case, only the tensor coupling is non-zero, with \(g_{t}=3.0\times 10^{-6}\text{ GeV}^{-1}\). We first show the ROC curves for the four different machines in Fig. 5(a). The ROC curves feature a very good separation between the signal and the backgrounds: at all machine energies, the background acceptance is between \(10^{-3}\) and \(10^{-2}\) for signal efficiencies between 30% and 70%. The signal significance as a function of the signal efficiency is plotted in Fig. 5(b). We can see that, in the case of tensor-only coupling, the FCC-eh runs provide better (approximately a factor of two) significance than the LHeC runs. Comparing with Fig. 3, one can see that tensor-only couplings give a better signal significance than vector-only couplings.
We then choose our working point to be \(\epsilon_{S}=50\%\) and study the variation of the signal significance as a function of luminosity, shown in Fig. 6(a). The signal significances are above \(2\sigma\) even with \(100\) fb\({}^{-1}\) in the case of the LHeC runs, and are well above \(2\sigma\) in the case of the FCC-eh runs. The projected upper limit on BR(\(Z\to e\mu\)) at \(\epsilon_{S}=50\%\) as a function of integrated luminosity is plotted in Fig. 6(b). In the case of tensor-only coupling, the current bound on BR(\(Z\to e\mu\)) at 95% C.L. can easily be reached with less than 100 fb\({}^{-1}\) of integrated luminosity at all four machine energies.

Figure 4: (a) Variation of signal significance as a function of integrated luminosity. (b) Variation of 95% C.L. upper limit on \(\text{BR}(Z\to e\mu)\) as a function of integrated luminosity. For both panels, only vector coupling has been considered and signal efficiency has been taken to be 50%.

Figure 5: (a) Receiver operating characteristic curves on the BDT classifier variable for signal vs. background. (b) Variation of signal significance as a function of signal efficiency on the BDT classifier variable with \(g_{v}=3.68\times 10^{-4}\). The significances are calculated with luminosities \(150\) fb\({}^{-1}\) for LHeC1, \(250\) fb\({}^{-1}\) for LHeC2, \(250\) fb\({}^{-1}\) for FCC-eh1, and \(500\) fb\({}^{-1}\) for FCC-eh2.

Figure 6: (a) Variation of signal significance as a function of integrated luminosity. (b) Variation of \(95\%\) C.L. upper limit on BR(\(Z\to e\mu\)) as a function of integrated luminosity. For both panels, only vector coupling has been considered, and signal efficiency has been taken to be \(50\%\).
### Bounds on LFV branching ratio
We now put together the results from the last two subsections by providing projected bounds on BR(\(Z\to e\mu\)) for the four different machine energies. As mentioned previously, we consider only one effective coupling at a time. These projected bounds on the branching ratio and the effective coupling are shown in Tables 5 and 6. We have taken the integrated luminosity of each machine to be that collected in one year of continuous running of that particular machine. These values of integrated luminosity are provided in Table 2.
As was hinted at by the discussions in the previous two subsections, we note that the future electron-proton colliders have the potential to provide a stronger upper bound. As listed in Table 2, in the LHeC case the luminosity after one year of data taking is about 150 fb\({}^{-1}\) for run 1 and \(\sim 270\) fb\({}^{-1}\) for run 2. With these luminosities, the limit on BR(\(Z\to e\mu\)) cannot be improved compared to the current bound for the case of vector-only and axial-vector-only coupling. However, if the new-physics LFV couplings are governed by tensor or axial-tensor coupling, the bound on BR(\(Z\to e\mu\)) can be made \(\sim\!4\) times stronger than the current upper limit. Overall, there is little scope for improvement over the current bounds at either run of the LHeC machine. These numbers are calculated assuming that the run lasts for one year only; with more than one year of running, there is further scope for improvement.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \cline{2-5} \multicolumn{1}{c|}{} & \multicolumn{2}{|c|}{Projected upper bound @95\% CL on BR(\(Z\to e\mu\)) and couplings} \\ \cline{2-5} \multicolumn{1}{c|}{} & \multicolumn{2}{|c|}{LHeC1} & \multicolumn{2}{|c|}{LHeC2} \\ \hline Coupling Type & BR(\(Z\to e\mu\)) & \(g_{v/t}\) & BR(\(Z\to e\mu\)) & \(g_{v/t}\) \\ \hline Vector & \(2.69\times 10^{-7}\) & \(3.73\times 10^{-4}\) & \(1.98\times 10^{-7}\) & \(3.20\times 10^{-4}\) \\ \hline Axial Vector & \(2.52\times 10^{-7}\) & \(3.61\times 10^{-4}\) & \(2.02\times 10^{-7}\) & \(3.23\times 10^{-4}\) \\ \hline Tensor & \(5.17\times 10^{-8}\) & \(2.54\times 10^{-6}\) GeV\({}^{-1}\) & \(2.91\times 10^{-8}\) & \(1.90\times 10^{-6}\) GeV\({}^{-1}\) \\ \hline Axial Tensor & \(4.66\times 10^{-8}\) & \(2.41\times 10^{-6}\)GeV\({}^{-1}\) & \(3.02\times 10^{-8}\) & \(1.94\times 10^{-6}\) GeV\({}^{-1}\) \\ \hline \multicolumn{5}{c}{Current upper bound @95\% CL on BR(\(Z\to e\mu\)) from ATLAS is \(2.62\times 10^{-7}\)} \\ \end{tabular}
\end{table}
Table 5: Projected upper bounds at 95% C.L. on BR(\(Z\to e\mu\)) and LFV couplings for LHeC collider. The working point has been chosen to be \(\epsilon_{S}=50\%\) for all cases.
Let us now look at the FCC-eh case. In that case, run 1 provides an integrated luminosity of 252 fb\({}^{-1}\) and run 2 can reach 473 fb\({}^{-1}\) after one year of data taking. Focusing on the luminosity after a one-year run of FCC-eh run 1, the bound on BR(\(Z\to e\mu\)) can be made \(\sim 1.2\) times stronger than the existing bound set by ATLAS for the case of vector-only and axial-vector-only coupling, whereas for tensor-only or axial-tensor-only coupling the improvement can be more than a factor of 15. A more remarkable improvement is obtained for FCC-eh run 2: in that case, after one year of data taking, the bound can be \(\sim 18\) times stronger than the existing bound. For more than one year of running, these projected bounds can be made stronger still.
## IV Summary and Conclusion
The search for LFV is an important area of research, since an experimental observation of LFV would hint at new physics beyond the SM. We performed an analysis of such an LFV scenario in the context of future electron-proton colliders via an indirect method. We focused on the LFV coupling of the \(Z\) boson to an electron-muon pair. We employed an indirect method to search for such violation using a channel with a single muon plus an associated jet. This \(\mu+j\) final state without any large missing \(E_{T}\) can only come from LFV scenarios, for which the SM background is very small. If no signal is found at the collider, an upper limit on the LFV coupling to the \(Z\) boson can be set. The upper limit on the coupling can then be translated into a limit on the branching ratio of the \(Z\to e\mu\) decay. For this work, we have considered a single type of coupling at a time out of four different types, namely vector, axial-vector, tensor, and axial-tensor couplings.
We have used a multivariate technique to maximize the discovery potential of such LFV signals
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \cline{2-5} \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{Projected upper bound @95\% CL on BR(\(Z\to e\mu\)) and couplings} \\ \cline{2-5} \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{FCC-eh1} & \multicolumn{2}{c|}{FCC-eh2} \\ \hline Coupling Type & BR(\(Z\to e\mu\)) & \(g_{v/t}\) & BR(\(Z\to e\mu\)) & \(g_{v/t}\) \\ \hline Vector & \(2.25\times 10^{-7}\) & \(3.41\times 10^{-4}\) & \(1.47\times 10^{-7}\) & \(2.76\times 10^{-4}\) \\ \hline Axial Vector & \(1.99\times 10^{-7}\) & \(3.21\times 10^{-4}\) & \(1.43\times 10^{-7}\) & \(2.72\times 10^{-4}\) \\ \hline Tensor & \(2.19\times 10^{-8}\) & \(1.65\times 10^{-6}\) GeV\({}^{-1}\) & \(1.52\times 10^{-8}\) & \(1.37\times 10^{-6}\) GeV\({}^{-1}\) \\ \hline Axial Tensor & \(2.14\times 10^{-8}\) & \(1.63\times 10^{-6}\) GeV\({}^{-1}\) & \(1.55\times 10^{-8}\) & \(1.39\times 10^{-6}\) GeV\({}^{-1}\) \\ \hline \multicolumn{1}{c}{} & \multicolumn{2}{c}{Current upper bound @95\% CL on BR(\(Z\to e\mu\)) from ATLAS is \(2.62\times 10^{-7}\)} \\ \end{tabular}
\end{table}
Table 6: Projected upper bounds at 95% C.L. on BR(\(Z\to e\mu\)) and LFV couplings for FCC-eh collider. The working point has been chosen to be \(\epsilon_{S}=50\%\) for all cases.
in the presence of SM backgrounds. The absence of a discovery at \(2\sigma\) is then translated into an upper limit on the LFV \(Z\) couplings and on BR(\(Z\to e\mu\)). We observed that the vector-only and axial-vector-only couplings provide approximately similar sensitivity. The same is true for the tensor-only and axial-tensor-only couplings.
We carried out our calculation for two future electron-hadron colliders, namely the LHeC and the FCC-eh. For LHeC run 1, with 150 fb\({}^{-1}\) of integrated luminosity after a one-year run, an upper limit of \(\sim\,2.7\times 10^{-7}\) at 95% C.L. on BR(\(Z\to e\mu\)) can be set if the LFV coupling is entirely either vector or axial-vector. This bound is very close to the existing bound of \(2.6\times 10^{-7}\) set by ATLAS on this branching ratio. If either the tensor or the axial-tensor coupling is responsible for the lepton flavor violation, the projected bounds can be made stronger: for tensor coupling, our projection is BR(\(Z\to e\mu\))\(<5.2\times 10^{-8}\) at 95% C.L. For LHeC run 2, with 270 fb\({}^{-1}\) of integrated luminosity after a one-year run, the projected 95% C.L. limit becomes BR(\(Z\to e\mu\)) \(<2.0\times 10^{-7}\) for the vector-only coupling scenario and BR(\(Z\to e\mu\)) \(<2.9\times 10^{-8}\) for the tensor-only coupling scenario.
A significant improvement can be made at the FCC-eh machine. In that case, for run 1 with 252 fb\({}^{-1}\) of luminosity after one year of data, the estimated projection becomes BR(\(Z\to e\mu\)) \(<2.25\times 10^{-7}\) with vector-only coupling and BR(\(Z\to e\mu\)) \(<2.2\times 10^{-8}\) with tensor-only coupling. The best-case scenario is FCC-eh run 2, which will collect data with an integrated luminosity of \(\approx\)473 fb\({}^{-1}\) after a one-year run. The projected bounds in that case are BR(\(Z\to e\mu\)) \(<1.5\times 10^{-7}\) and BR(\(Z\to e\mu\)) \(<1.5\times 10^{-8}\) with vector-only and tensor-only couplings, respectively. With more years of running for both machines, the projections on BR(\(Z\to e\mu\)) are expected to improve further.
## Acknowledgements
The authors thank the November Meeting at IISER-K 2022 for providing an environment for fruitful discussions. The authors acknowledge the support of the Kepler Computing facility maintained by the Department of Physical Sciences, IISER Kolkata, and the RECAPP cluster facility for various computational needs. A.K.B. acknowledges support from the Department of Atomic Energy, Government of India, for the Regional Centre for Accelerator-based Particle Physics (RECAPP). A.D. acknowledges financial support from Science Foundation Ireland Grant 21/PATH-S/9475 (MOREHIGGS) under the SFI-IRC Pathway Programme.
|
2305.09541 | Phase diagram and superconductivity of Calcium Alanates under pressure | In this paper we present a first-principles study of the high-pressure
superconducting phase diagram of calcium alanates (Ca-Al-H), based on ab-initio
crystal structure prediction and anisotropic Migdal-Eliashberg Theory. Calcium
alanates have been intensively studied at ambient pressure for their
hydrogen-storage properties, but their high-pressure behavior is largely
unknown. By performing a full scan of the ternary convex hull at several
pressures between 0 and 300 GPa, we identify several new structural motifs,
characterized by a high Al-H coordination, where Al--$d$ orbitals participate
in the bonding. Among all new phases thus identified, we focus in particular on
a phase with CaAlH$_7$ composition, which lies on the convex hull at 300 GPa,
and remains dynamically stable down to 50 GPa, with a predicted superconducting
T$_c$ of 82 K, which likely represents a new promising template to achieve
increased chemical precompression in ternary hydrides. Our findings reveal
important insights into the structure-property relationships of calcium
alanates under high pressure, and highlight a possible strategy to achieve
conventional superconductivity at low pressures. | Simone Di Cataldo, Lilia Boeri | 2023-05-16T15:31:06Z | http://arxiv.org/abs/2305.09541v1 | # Phase diagram and superconductivity of Calcium Alanates under pressure
###### Abstract
In this paper we present a first-principles study of the high-pressure superconducting phase diagram of calcium alanates (Ca-Al-H), based on ab-initio crystal structure prediction and anisotropic Migdal-Eliashberg Theory. Calcium alanates have been intensively studied at ambient pressure for their hydrogen-storage properties, but their high-pressure behavior is largely unknown. By performing a full scan of the ternary convex hull at several pressures between 0 and 300 GPa, we identify several new structural motifs, characterized by a high Al-H coordination, where Al \(d\) orbitals participate in the bonding. Among all new phases thus identified, we focus in particular on a phase with CaAlH\({}_{7}\) composition, which lies on the convex hull at 300 GPa, and remains dynamically stable down to 50 GPa, with a predicted superconducting \(T_{\rm c}\) of 82 K, which likely represents a new promising template to achieve increased chemical precompression in ternary hydrides. Our findings reveal important insights into the structure-property relationships of calcium alanates under high pressure, and highlight a possible strategy to achieve conventional superconductivity at low pressures.
+
Footnote †: : _J. Phys.: Condens. Matter_
_Keywords: Superconductivity, Condensed matter physics, Electronic structure, Electron-phonon coupling_
## 1 Introduction
The discovery of high-temperature superconductivity at 203 K in H\({}_{3}\)S in 2014 [1, 2] at Megabar pressures brought hydrides into the spotlight of superconducting materials research. Their extremely high \(T_{\rm c}\)s deriving from an electron-phonon mechanism finally demonstrated that conventional superconductors can, in fact, achieve high \(T_{\rm c}\), contrary to previous misconceptions.
In this context, computational predictions based on ab-initio calculations have become an invaluable tool for materials discovery, often anticipating and guiding experiments towards the most promising materials [3, 4, 5]. In the span of just eight years, all possible combinations of a single element plus hydrogen (binary hydrides) have been computationally explored [4, 3, 6] seeking novel high-temperature superconductors, several of which were also experimentally confirmed [7, 8, 9, 10, 11, 12, 13]. Computational studies of high-pressure hydrides not only permitted to identify unknown materials, but also to gain a much deeper understanding of the relationship between chemical bonding and conventional superconductivity [14, 15, 16, 17, 18], instrumental to the design of new materials with improved superconducting properties.
In the last three years, the focus of hydride research has shifted towards identifying materials which can form and remain stable at more accessible pressures than record superhydrides, even at the cost of a reduced \(T_{\rm c}\), as this would open up the possibility of more practical applications. One possible route is to explore _ternary_ hydrides, i.e. compounds that contain hydrogen and two other elements. Indeed, the presence of two different elements makes it possible to realize a much wider variety of chemical environments for hydrogen. For example, we have shown that in lanthanum hydrides the addition of a third element stabilizes a high-T\({}_{c}\) structure with LaBH\({}_{8}\) composition down to 35 GPa [19, 20, 21]. We later demonstrated that the stabilization pressure could be further reduced, down to 3 GPa in BaSiH\({}_{8}\), with a careful choice of the elements in the La/B site within the same structural template [22, 23]. It is very likely that other mechanisms of increased chemical precompression may be identified in other ternary hydrides, which offer an unexplored playground of more than 7000 potential combinations.
In this study, we focus on calcium alanates (CAH), a class of widely-available materials which has been extensively investigated at ambient pressure because of their hydrogen storage properties [24, 25, 26, 27]. CAH are also closely related to calcium borohydrides (CBH), which we studied in a previous publication [28], without finding viable candidates for increased chemical precompression.
At ambient pressure, both CBH and CAH form hydrogen-rich molecular crystals containing Ca\({}^{++}\) and \((YH_{x})_{2}^{-}\) anions (\(Y=\) B, Al; \(x=\) 4 for Al, \(x=\) 2, 3, 4 for B) [29, 27], which can absorb/desorb large amounts of hydrogen. Despite sharing the same valence, Al differs from B because of the presence of empty \(3d\) orbitals, which lie close in energy to the \(3p\) states and can participate in the bonding. Already at ambient pressure, partial occupation of Al-\(d\) orbitals stabilizes a CaAlH\({}_{5}\) phase with corner-sharing octahedra [24, 25], which is absent in the phase diagram of CBH. Since it is well known that high pressure stabilizes _forbidden chemistry_ phases, i.e. phases with unusual compositions and configurations, particularly for elements with low-lying unoccupied orbitals, we expect that also in CAH Al-\(d\) orbitals will play an
increasing role in the bonding, leading to a phase diagram substantially different from that of CBH. This is indeed confirmed by our calculations, which show that CAH tends to form very complex high-pressure phases, with high Al-H coordination.
In particular, we identify a high-\(T_{\rm c}\) (82 K) CaAlH\({}_{7}\) phase, which should form at high pressures and remain dynamically stable down to 50 GPa, with an unusual structural template, in which planes of H-cages are connected by H-H bonds. It is very likely that also this new \(XY\)H\({}_{7}\) could be further optimized by a careful substitution of other elements in place of calcium, leading to higher \(T_{\rm c}\) and/or lower stabilization pressures.
The paper is structured as follows: first we describe the ternary phase diagram at 0, 50, 100, and 300 GPa, and describe the thermodynamically stable structures. Second, we discuss the electronic properties of those structures. Third, we compare the low- and high-pressure structures of CAH and CBH. Finally, we discuss in detail the superconducting properties of the high-pressure CaAlH\({}_{7}\) structure.
## 2 Results and discussion
### Phase Diagram
The phase diagram of CAH as a function of pressure was obtained computing the ternary convex hull, using _ab-initio_ evolutionary crystal structure prediction as implemented in the _USPEX_ code [30, 31]. For the underlying total energy calculations and relaxations we employed the DFT Vienna ab-initio simulation package (VASP) [32], with Projector-Augmented Wave pseudopotentials [33] and PBE exchange-correlation functional. Further details can be found in the SM.
Figure 1: Convex hull diagrams for the Ca-Al-H system at 0, 50, 100 and 300 GPa. Thermodynamically stable compositions are indicated as orange circles. Compositions within 25 meV/atom of the hull are shown as red squares. Crystal structures are shown for all thermodynamically stable ternary alanates at 0, 50, 100, and 300 GPa. Ca, Al, and H atoms are shown as grey, red, and blue spheres, respectively.

The left panel of Fig. 1 shows the convex hulls obtained at 0, 50, 100, and 300 GPa. For each of these pressures, we sampled over 5000 structures and over a hundred unique compositions. These values of pressure were chosen to ensure a reasonable sampling of low- and intermediate/high-pressure phases, based on our previous experience with binary and ternary hydrides. The right panel of the figure shows the crystal structures of the phases corresponding to stable compositions.
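As an illustration of how the distance from the convex hull is evaluated (the hulls themselves were built from the USPEX/VASP enthalpies), a minimal sketch using pymatgen is shown below; the compositions and energies are placeholders, not the computed values.

```python
from pymatgen.core import Composition
from pymatgen.analysis.phase_diagram import PDEntry, PhaseDiagram

# Placeholder (composition, enthalpy in eV) pairs at a fixed pressure; in practice
# these come from the relaxed structures produced by the evolutionary search.
raw_entries = [
    ("Ca", 0.0), ("Al", 0.0), ("H2", 0.0),          # elemental references
    ("CaAlH5", -1.2), ("CaAl2H8", -1.8), ("Ca2AlH11", -2.5), ("CaAlH7", -1.5),
]
entries = [PDEntry(Composition(formula), energy) for formula, energy in raw_entries]

pd = PhaseDiagram(entries)
for entry in entries:
    # Energy above the hull vanishes for thermodynamically stable phases.
    print(entry.composition.reduced_formula, pd.get_e_above_hull(entry))
```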
At ambient pressure (0 GPa) we find that the most stable ternary composition is CaAlH\({}_{5}\), although Ca(AlH\({}_{4}\))\({}_{2}\) is only 22 meV/atom above the hull, in agreement with calculations from Ref. [25], as well as experiment, since both phases can be synthesized [34, 35]. The two structures are characterized by a qualitatively different geometry of the AlH bonds: corner-sharing AlH\({}_{6}\) octahedra in CaAlH\({}_{5}\), and AlH\({}_{4}\) tetrahedra in Ca(AlH\({}_{4}\))\({}_{2}\), in both cases with interstitial calcium atoms.
At 50 GPa, CaAlH\({}_{5}\) remains the only stable composition, and the stable structure contains face- and corner-sharing AlH\({}_{8}\) polyhedra with square antiprismatic geometries, alternating with Ca in a body-centered orthorhombic sublattice.
At 100 GPa, the stable compositions are CaAlH\({}_{5}\), CaAl\({}_{2}\)H\({}_{8}\), and Ca\({}_{2}\)AlH\({}_{11}\). The structure of CaAlH\({}_{5}\) contains the same AlH\({}_{8}\) square antiprisms seen at 50 GPa, but now arranged in a corner- and edge-sharing pattern which interpenetrates the calcium sublattice. In Ca(AlH\({}_{4}\))\({}_{2}\), which re-enters as a stable composition, the structure is radically different from the ambient-pressure one, as it shows a sublattice of corner- and edge-sharing AlH\({}_{8}\) distorted snub disphenoids encaging Ca atoms. The ground-state structure of Ca\({}_{2}\)AlH\({}_{11}\) is characterized by a lattice of AlH\({}_{10}\) elongated square bipyramids which share the top vertex, alternated with interstitial Ca atoms and trapped H\({}_{2}\) molecules.

Figure 2: Electronic band structure for four selected calcium alanates. Black and colored lines indicate the DFT and Wannierized bands, respectively. The Wannier orbital onto which the band projection is carried out is indicated on the right side of each figure.
Finally, at 300 GPa the only stable ternary composition is CaAlH\({}_{7}\). The crystal structure contains a combination of side-sharing AlH\({}_{12}\) cuboctahedra which share faces with CaH\({}_{16}\) truncated cubes capped with square pyramids. The two combined polyhedra fully tessellate the space. The AlH\({}_{12}\) cuboctahedra lie in separate planes, connected by a H-H bond, highlighted by green hydrogen atoms in Fig. 1 (right panel). The H-H bond distance increases with increasing pressure, going from 0.86 to 0.95 \(\AA\) from 50 to 300 GPa. This value indicates delocalized atomic H-H bonds rather than molecular ones.
Overall, the structural changes in ternary calcium alanates (CAH) with increasing pressure suggest a profound change in the orbital hybridization. At ambient pressure, the presence of octahedral AlH\({}_{6}\) motifs indicates that H partially hybridizes with Al-\(d\) states, whereas the AlH\({}_{4}^{-}\) motifs indicate H-Al \(sp^{3}\) hybridization. The tetrahedral geometry is also observed in Ca(BH\({}_{4}\))\({}_{2}\), while stable Ca(BH\({}_{3}\))\({}_{2}\) and Ca(BH\({}_{2}\))\({}_{2}\) compositions correspond to \(sp^{2}\) and \(sp\) hybridization [36, 28].
As pressure increases, however, CAH behaves in a substantially different way from CBH, as Al-\(d\) states are effectively pulled down in energy compared to \(s,p\) states. In the CBH phase diagram only structures with \(sp\), \(sp^{2}\), and \(sp^{3}\) hybridization are stable or weakly metastable up to 150 GPa, and some survive up to 300 GPa; even at 300 GPa, the B-H coordination is never larger than six. In CAH, already at 50 GPa, stable structures exhibit complex polyhedral structures; the average number of vertices and faces increases with pressure, up to a 12-coordinated Al-H polyhedron in CaAlH\({}_{7}\).
### Electronic properties
The trend in crystal structures suggests that in many CAH structures Al-\(d\) orbitals participate in the chemical bond with H. To make the argument less qualitative, in Fig. 2 we show the partial density of states (pDOS) [37], calculated for all the thermodynamically stable structures: Ca(AlH\({}_{4}\))\({}_{2}\) and CaAlH\({}_{5}\) at 0 GPa, CaAlH\({}_{5}\) at 50 GPa, CaAlH\({}_{5}\) and Ca\({}_{2}\)AlH\({}_{11}\) at 100 GPa, and CaAlH\({}_{7}\) at 300 GPa. Note that in the Figure the Fermi energy (for metals) and the valence band maximum (for insulators) is taken as zero.
In CaAlH\({}_{5}\) at 0 GPa, the DOS in the valence region is characterized by two peaks; the electronic states can be understood in terms of molecular AlH\({}_{6}\) octahedra, resulting from Al-\(sp^{3}d^{2}\) hybridization. The system is an insulator with a wide bonding/antibonding gap of 2.6 eV. The DOS in the valence region of Ca(AlH\({}_{4}\))\({}_{2}\) is characterized by two well-separated peaks, extending from about -6 to -3 eV and from -3 to 0 eV. These states exhibit a very small dispersion and, like in Ca(BH\({}_{4}\))\({}_{2}\)[28], correspond to \(sp^{3}\) molecular orbitals of the AlH\({}_{4}^{-}\) anion. We note, however, that a small projection of Al-\(d\) states is found near the top of the valence band. This system is also insulating, with a wide bonding/antibonding gap of 4.5 eV, in line with other calculated values in the literature [38, 39].
CaAlH\({}_{5}\) at 50 GPa is still insulating, but the gap is reduced to 1.2 eV. Here the occupied states merge into a single peak, and the
weight of Al-\(d\) states is significantly enhanced compared to lower pressures.
At 100 GPa, the electronic structure of CaAlH\({}_{5}\) is very similar to the one at 50 GPa, with a gap only slightly reduced to about 1 eV. Ca\({}_{2}\)AlH\({}_{11}\), on the other hand, is a compensated semimetal.
The behavior of the only phase stable at 300 GPa is qualitatively very different from what is observed at lower pressures. CaAlH\({}_{7}\) is metallic, with strongly dispersed electronic bands. Here, Al-\(d\) states give a non-zero contribution to the DOS down to the bottom of the valence band at -15 eV, indicating a major rearrangement of the bonds.
In order to further elucidate the increasing role of Al-\(d\) orbitals in the bonding, we performed an additional analysis, in which we systematically substituted Al into CBH structures and vice versa, and evaluated the thermodynamic and dynamical stability of the resulting structures. In Fig. 3 we show the ranking of the resulting structures according to their formation enthalpy (see Footnote 1).
When Al is substituted in the ambient-pressure structures of B (Ca(BH\({}_{3}\))\({}_{2}\) and Ca(BH\({}_{4}\))\({}_{2}\)), the relaxed structures retain the same qualitative features as the original structures, and turn out to be competitive in energy with the lowest-energy structure for Ca(AlH\({}_{4}\))\({}_{2}\). This can be easily understood, as Al employs in these structures its \(s,p\) valence orbitals. The \(B\to Al\) substitution is, on the other hand, more problematic: while Ca(AlH\({}_{4}\))\({}_{2}\) also retains its qualitative features upon B substitution, CaAlH\({}_{5}\) does not, and the structure "breaks down" upon relaxation, giving rise to a distorted combination of BH\({}_{4}\) anions and atomic hydrogen. In fact, the B \(3d\) orbitals lie too far away in energy from the \(2s,2p\) valence orbitals to participate in the bonding.
Footnote 1: The formation enthalpy is considered with respect to the pure elements, e.g. \(\Delta H\left(CaAlH_{5}\right)=H\left(CaAlH_{5}\right)-H\left(Ca\right)-H \left(Al\right)-\frac{5}{2}H\left(H_{2}\right)\)
Footnote 2: The data on the crystal structures after the relaxation is available as a compressed file in the Supplementary Material [40]
At 300 GPa, we substituted Al into the structures with CaBH\({}_{5}\) and CaBH\({}_{6}\) composition that we found to be stable for the Ca-B-H system at the same pressure in our previous work [28]. Both structures exhibit unusual B-H bonding: the former is characterized by BH\({}_{5}\) triangular bipyramids, the latter by BH\({}_{6}\) 6-vertex antiprisms. Al substitution in the CaBH\({}_{5}\) and CaBH\({}_{6}\) structures leads to a qualitative change in the crystal structure. In CaBH\({}_{5}\) the BH\({}_{5}\) triangular bipyramids drastically rearrange into side-sharing AlH\({}_{10}\) irregular polyhedra, while in CaBH\({}_{6}\) the 6-vertex antiprisms become corner-sharing regular cuboctahedra. The opposite process, i.e. B substitution in CaAlH\({}_{7}\), exhibits the same qualitative features, but the structure is dynamically unstable.

Figure 3: Formation enthalpy ranking of different structures with atomic substitution of B (Al) into the Al (B) site. The formation enthalpy is calculated with respect to pure elements. The formation enthalpy is indicated by a colored line, along with the composition. The substitution is indicated by the labels over the \(x\) axis (e.g. an Al substitution in the B site is indicated by _Al in B_). Structures that are dynamically unstable or relax into a different phase are indicated with a red cross.
### Superconductivity
Of all the CAH structures predicted in this work, CaAlH\({}_{7}\) is the only one with the qualitative prerequisites to host high-\(T_{\rm c}\) superconductivity [41, 4]: a dense hydrogen sublattice, metallic behavior, and a significant fraction of hydrogen DOS at the Fermi level. We computed its electron-phonon coupling properties at 50, 100, and 300 GPa using Density Functional Perturbation Theory [42, 43, 44, 45].
In Fig. 4 we show the phonon dispersions, Eliashberg function, and phonon DOS at 300 GPa (the phonon dispersions at the other pressures are shown in Supplementary Figure 3 [40]). Similarly to most hydrides, the total \(ep\) coupling is spread over the whole optical branch, and the Eliashberg function has a predominantly hydrogen character, suggesting that the H-H intraband interactions, rather than the Al-H interband ones, provide most of the coupling. We observe a soft mode in the branch around 100 meV near the \(X\) point, and another in the acoustic branch near the \(R\) point (see also Supplementary Figure 3 [40]). The latter drives the system dynamically unstable below 50 GPa. Some hydrogen modes between 100 and 200 meV are not strongly dispersive, indicating that these modes correspond to short-range, molecular-like H-H vibrations, while between 200 and 300 meV the larger dispersion indicates collective vibrations of the H cages, analogous to sodalite-like clathrate hydrides [18, 17]. We note that non-dispersive modes correspond to phonon eigenvectors parallel to the \(c\) axis, and vice versa.
Using the calculated \(ep\) spectrum, we computed the superconducting \(T_{\rm c}\) by solving self-consistently the anisotropic Migdal-Eliashberg equations at 50, 100, and 300 GPa on a fine interpolated grid (32 \(\times\) 32 \(\times\) 32 for both electrons and phonons), using the EPW code [46, 47], and a constant value of the Morel-Anderson pseudopotential \(\mu^{*}=0.10\); further details are available in the Supplementary Material.

Figure 4: Phonon dispersions, atom-projected phonon density of states (\(F(\omega)\)) and Eliashberg function (\(\alpha^{2}F(\omega)\)) for CaAlH\({}_{7}\) at 300 GPa. The total \(F(\omega)\) and \(\alpha^{2}F(\omega)\) are shown as solid black lines, while their projections onto Ca, Al, and H are shown as green, red, and orange filled curves, respectively.

Figure 5: Leading edge of the anisotropic superconducting gap at 50 (red) and 300 GPa (blue) as a function of temperature. The corresponding superconducting \(T_{\rm c}\) is shown in Tab. 1. The interpolating line is obtained from a fit with the function \(\Delta(T)=\Delta_{0}\sqrt{k\frac{T_{\rm c}-T}{T}}\) of the weighted average of the anisotropic gap at \(\omega=0\).
In Fig. 5 we show the calculated leading edge of the superconducting gap at \(\omega=0\) for different pressures. The superconducting gap is quite isotropic, and spreads over a small energy interval of 2-3 meV. (Further details on the anisotropic gap are provided in the Supplementary Material).
The \(T_{\rm c}\) is determined by fitting the average of the leading edge of the superconducting gap with an interpolating function (see the caption of Fig. 5) and extrapolating to \(\Delta=0\). The results are summarized in Tab. 1. The ratio \(2\Delta/k_{B}T_{\rm c}\) is exactly 3.52 (the BCS weak-coupling value) at 300 GPa, and it increases with decreasing pressure up to 3.97 at 50 GPa, as a soft mode boosts the coupling at lower pressure and pushes the system away from the weak-coupling limit.
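A minimal sketch of such a fit, using the interpolating function quoted in the caption of Fig. 5, is shown below; the gap-versus-temperature points are placeholders for the solutions of the anisotropic Migdal-Eliashberg equations, and \(\Delta_{0}\) and \(k\) are absorbed into a single amplitude since only their product is constrained by the fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def gap_model(T, amp, Tc):
    """Delta(T) = Delta_0 sqrt(k (Tc - T)/T); Delta_0 and k enter only via amp = Delta_0 sqrt(k)."""
    return amp * np.sqrt(np.clip((Tc - T) / T, 0.0, None))

# Placeholder gap values (meV) versus temperature (K), standing in for the averaged
# leading edge of the anisotropic gap at omega = 0.
T = np.array([40.0, 50.0, 60.0, 65.0, 70.0, 75.0, 78.0])
delta = np.array([9.2, 7.2, 5.5, 4.6, 3.7, 2.7, 2.0])

popt, _ = curve_fit(gap_model, T, delta, p0=(9.0, 85.0))
print(f"extrapolated Tc = {popt[1]:.1f} K")
```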
The \(T_{\rm c}\) increases from 61 K at 300 GPa to 82 K at 50 GPa, as a consequence of mode softening. In fact, the e-ph coupling constant \(\lambda\) increases from 0.66 to 1.06, while the logarithmic-average frequency \(\omega_{log}\) decreases by more than a factor of two (from 150 to 68 meV) in the same interval.
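These trends can be cross-checked with the semi-empirical McMillan-Allen-Dynes formula. This is not the method used here (the quoted \(T_{\rm c}\) values come from the anisotropic Migdal-Eliashberg equations), but it provides a quick estimate from the \(\lambda\), \(\omega_{log}\), and \(\mu^{*}\) values quoted above.

```python
import numpy as np

K_B_MEV = 0.08617  # Boltzmann constant in meV/K

def allen_dynes_tc(lam, omega_log_mev, mu_star=0.10):
    """McMillan-Allen-Dynes estimate of Tc (K) from lambda, omega_log (meV), and mu*."""
    if lam <= mu_star * (1.0 + 0.62 * lam):
        return 0.0
    exponent = -1.04 * (1.0 + lam) / (lam - mu_star * (1.0 + 0.62 * lam))
    return (omega_log_mev / (1.2 * K_B_MEV)) * np.exp(exponent)

# Coupling constants and logarithmic-average frequencies quoted in the text:
print("300 GPa:", allen_dynes_tc(0.66, 150.0))  # weak coupling, high omega_log
print(" 50 GPa:", allen_dynes_tc(1.06, 68.0))   # soft mode: larger lambda, lower omega_log
```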
A \(T_{\rm c}\) of 82 K at 50 GPa places CaAlH\({}_{7}\) in the same class as XYH\({}_{8}\) ternary clathrate hydrides, [19, 22, 21] i.e. that of ternary hydride superconductors, where efficient chemical precompression stabilizes a dense hydrogen sublattice at lower pressures than binary hydrides. It is very likely that \(T_{\rm c}\) and stabilization pressure may further be optimized by careful chemical substitution as was shown for LaBH\({}_{8}\) and BaSiH\({}_{8}\)[19, 22].
## 3 Conclusions
In conclusion, using _ab initio_ methods based on Density Functional Theory, we studied the phase diagram and the superconducting properties of calcium aluminum hydrides (CAH) under pressures of 0, 50, 100, and 300 GPa. We found several stable phases in which aluminum progressively increases its coordination with hydrogen as pressure increases.
In particular, we find a structure with CaAlH\({}_{7}\) composition which, to the best of our knowledge, is still unreported. The structural motif comprises layers of AlH\({}_{12}\) cage-like polyhedra (cubooctahedra), linked by atomic H-H bonds, and is thus structurally analogous to other ternary \(XY\)H\({}_{n}\) hydrides, where H cage-like units can be retained down to relatively low pressures, due to the chemical precompression exerted by the X/Y sublattice.
CaAlH\({}_{7}\) is thermodynamically stable at 300 GPa, but remains dynamically stable down to 50 GPa, where we predict a superconducting \(T_{\rm c}\) of 82 K by self-consistently solving the fully anisotropic Migdal-Eliashberg equations.
Hence, CaAlH\({}_{7}\) is one of the very few hydrides retaining high-\(T_{\rm c}\) superconducting properties well below megabar pressures. As in LaBH\({}_{8}\), where we have demonstrated that the stabilization pressure can be lowered significantly by Ba/Si substitution, the CaAlH\({}_{7}\) structure could also be further optimized by replacing Ca with other alkali or alkaline-earth metals, and Al with other elements such as Ga, In, or Sn.
## 4 Acknowledgments
L.B. and S.D.C. acknowledge funding from the Austrian Science Fund (FWF) P30269-N36 and support from Fondo Ateneo-Sapienza 2018-2021. S.D.C. acknowledges computational resources from CINECA, proj. IsC90-HTS-TECH and IsC99-ACME-C, and the Vienna Scientific Cluster, proj. 71754 "TEST". LB acknowledges support from
Project PE0000021, "Network 4 Energy Sustainable Transition - NEST", funded by the European Union - NextGenerationEU, under the National Recovery and Resilience Plan (NRRP), Mission 4 Component 2 Investment 1.3 - Call for tender No. 1561 of 11.10.2022 of Ministero dell'Universita e della Ricerca (MUR).
|
2303.12589 | Do Backdoors Assist Membership Inference Attacks? | When an adversary provides poison samples to a machine learning model,
privacy leakage, such as membership inference attacks that infer whether a
sample was included in the training of the model, becomes effective by moving
the sample to an outlier. However, the attacks can be detected because
inference accuracy deteriorates due to poison samples. In this paper, we
discuss a \textit{backdoor-assisted membership inference attack}, a novel
membership inference attack based on backdoors that return the adversary's
expected output for a triggered sample. We found three crucial insights through
experiments with an academic benchmark dataset. We first demonstrate that the
backdoor-assisted membership inference attack is unsuccessful. Second, when we
analyzed loss distributions to understand the reason for the unsuccessful
results, we found that backdoors cannot separate loss distributions of training
and non-training samples. In other words, backdoors cannot affect the
distribution of clean samples. Third, we also show that poison and triggered
samples activate neurons of different distributions. Specifically, backdoors
make any clean sample an inlier, contrary to poisoning samples. As a result, we
confirm that backdoors cannot assist membership inference. | Yumeki Goto, Nami Ashizawa, Toshiki Shibahara, Naoto Yanai | 2023-03-22T14:19:06Z | http://arxiv.org/abs/2303.12589v1 | # Do Backdoors Assist Membership Inference Attacks?
###### Abstract
When an adversary provides poison samples to a machine learning model, privacy leakage, such as membership inference attacks that infer whether a sample was included in the training of the model, becomes effective by moving the sample to an outlier. However, the attacks can be detected because inference accuracy deteriorates due to poison samples. In this paper, we discuss a _backdoor-assisted membership inference attack_, a novel membership inference attack based on backdoors that return the adversary's expected output for a triggered sample. We found three crucial insights through experiments with an academic benchmark dataset. We first demonstrate that the backdoor-assisted membership inference attack is unsuccessful. Second, when we analyzed loss distributions to understand the reason for the unsuccessful results, we found that backdoors cannot separate loss distributions of training and non-training samples. In other words, backdoors cannot affect the distribution of clean samples. Third, we also show that poison and triggered samples activate neurons of different distributions. Specifically, backdoors make any clean sample an inlier, contrary to poisoning samples. As a result, we confirm that backdoors cannot assist membership inference.
backdoor-assisted membership inference attack, backdoor attack, poisoning attack, membership inference attack, machine learning
## I Introduction
Membership inference attacks are currently used for evaluating privacy leakage in various machine learning models [1, 2, 3]. In a membership inference attack [4], an adversary infers whether a sample was utilized for training a machine learning model.
In recent years, Tramer et al. [5] proposed an advanced attack, called the poisoning-assisted membership inference attack, which amplifies privacy leakage by injecting poison samples into a dataset. The drawback of the attack is that it deteriorates the inference accuracy of the victim model injected with poison samples. Consequently, the owner of the victim model can detect the underlying poison samples in any kind of poisoning attack [6]. Namely, the poisoning-assisted membership inference attack can be prevented by detecting poison samples; thus, it may be less severe than expected.
The above limitation leads us to a membership inference attack utilizing a backdoor attack, i.e., _a backdoor-assisted membership inference attack_. Backdoor attacks [7] are stealthier than poisoning attacks because they manipulate the output of only triggered samples and maintain test accuracy [6]. There are also advanced attacks, called imperceptible backdoors [8, 9, 10, 11, 12, 13, 14, 15], that bypass existing backdoor detection tools [16, 17, 18, 19, 20, 21].
In this paper, we take the first step towards answering the following two key questions on backdoor-assisted membership inference attacks. (1) _Is a backdoor-assisted membership inference attack feasible?_ (2) _Do backdoors impact the loss distributions of a victim model?_
The above questions are non-trivial. For the first question, it is unclear whether a backdoor-assisted membership inference attack works because the backdoor attacks maintain inference accuracy. The key idea of the existing poisoning-assisted membership inference attack [5] is to make the target sample an outlier by deteriorating accuracy with poison samples. In contrast, backdoors may not make the target sample an outlier because they maintain accuracy. For this reason, the backdoor-assisted membership inference attack is significantly different from the existing attack.
Next, for the second question, the difference between loss distributions of training and non-training data is also essential for amplifying the privacy leakage. It is known that a membership inference attack is statistical testing with loss distribution [22], and then poison samples can boost membership inference attacks by separating loss distributions between training and non-training samples [5]. However, it is unclear whether the backdoors can boost membership inference attacks because the difference between the loss distributions of a backdoor-assisted attack and a poisoning-assisted attack has never been discussed.
We found three crucial insights through the experiments with a typical academic benchmark, i.e., the CIFAR-10 dataset. As the first insight, the backdoor-assisted membership inference attack is _unsuccessful_, as opposed to the poisoning-assisted membership inference attack. Specifically, a backdoor attack amplifies the attack success rate of membership inference only marginally.
Next, we analyze loss distributions and neuron activations of the victim models to understand the above phenomenon deeply. Then, as the second insight, we found that backdoors cannot separate loss distributions of training and non-training samples. We also demonstrate, as the third insight, that the backdoor-assisted membership inference attack makes a target sample an inlier in the distribution of activated neurons. In contrast, the poisoning-assisted membership inference attack makes it an outlier. We thus believe that backdoors _do not_ assist in membership inference attacks.
To sum up, we found the following crucial insights in this
paper:
* Backdoor-assisted membership inference attacks are unsuccessful.
* Backdoors do not separate loss distributions of training and non-training samples.
* Backdoor-assisted membership inference attacks make a target sample an inlier, while poisoning-assisted membership inference attacks make it an outlier.
## II Related Work
In this section, we describe related works of backdoor attacks and privacy violations assisted by poisoning attacks.
### _Backdoor Attacks_
Backdoor attacks [23, 7] are a kind of attack whereby an adversary trains a model such that he/she obtains the expected output for only triggers. Recent backdoor attacks [8, 9, 11, 24, 25, 26] can bypass detection tools [16, 17, 18, 19, 20, 21], and thus existence of backdoors is imperceptible. (We call them imperceptible backdoor attacks for the sake of convenience.)
There are three approaches for constructing imperceptible backdoor attacks. The first approach [8, 9, 10, 11] is based on trigger generation that is visually imperceptible for humans, referred to as the trigger method. The second approach [12, 13] is based on latent representations whose distributions are close between clean inputs and triggers, referred to as the latent-representation method. The third approach [14, 15] unifies the above two approaches, referred to as the unified method. We evaluate backdoor-assisted membership inference attacks based on the above three imperceptible backdoor attacks as well as the original backdoor attack [7].
In recent years, backdoor attacks have been discussed in real-world applications such as natural language processing [26, 27] and face authentication [24]. Combining our attack with these works makes privacy violations in real-world applications possible.
### _Privacy Violations Assisted by Poisoning_
There are existing works on privacy violations based on poisoning attacks [28, 29, 30, 31, 5]. The first work [29] was in simple models such as support vector machines. Whereas several papers [28, 30] discussed property inference attacks [32] that infer properties of a training dataset, Tramer et al. [5] discussed a membership inference attack, attribute inference [33, 34, 35], and data extraction [36, 37, 38]. Nevertheless, the above works did not discuss backdoor attacks. In other words, the above works succeeded in membership inference attacks by sacrificing accuracy [5].
The closest work to ours is by Chen et al. [31]. They evaluated a membership inference attack with clean-label poisoning [39, 40], whose labels remain unchanged and whose samples are visually indistinguishable from clean samples. However, in their attack, the distance between clean and poison samples in the latent representations still becomes large in order to maximize the influence on the target. As described in the previous subsection, we discuss not only the original backdoor attack [7], which is agnostic to the distance between clean and poison samples, but also imperceptible backdoor attacks for which clean and poison samples are close to each other in the latent representations. In particular, the latter backdoor attacks differ substantially from Chen et al.'s work.
Although several works [41, 42] combine backdoors with membership inference, they are close to watermarking [43] to check if a model is backdoored. Namely, these works are quite different from privacy violations, i.e., our leading problem, because they infer backdoors embedded by a model owner.
## III Backdoor-Assisted Membership Inference Attack
We describe a backdoor-assisted membership inference attack as the problem setting of this paper below. We first define an attack formally and its metrics. We then describe the key questions in detail.
### _Formalization_
The attacks in this paper are defined as a game between an adversary \(\mathcal{A}\) and a challenger \(\mathcal{C}\). We first denote by \(\mathcal{X}\) a set of data samples, by \(\mathcal{Y}\) a set of labels, by \(\mathcal{D}=\mathcal{X}\times\mathcal{Y}\) a set of datasets, and by \(\mathcal{M}\) a set of machine learning models. Then, a machine learning model \(M\in\mathcal{M}\) is defined as a mapping function \(M:\mathcal{X}\rightarrow\mathcal{Y}\) and a training algorithm, i.e., a loss function, is defined as a mapping function \(L_{M}:\mathcal{D}\rightarrow\mathcal{M}\).
The game is defined below. \(\mathcal{A}\) interacts with the final trained model. \(\mathcal{A}\) is in the black-box setting, i.e., he/she sends queries to the model \(M\) and obtains nothing more than the outputs. Here, \(\mathcal{A}\) can only provide poison samples statically.
1. \(\mathcal{C}\) chooses a clean dataset \(D\subseteq\mathcal{D}\).
2. \(\mathcal{C}\) chooses a bit \(b\leftarrow\{0,1\}\). If \(b=1\), \(\mathcal{C}\) chooses a sample \(z\in D\); otherwise, \(\mathcal{C}\) chooses \(z\in\mathcal{D}\backslash D\).
3. Given \(z\), \(\mathcal{A}\) chooses a poisoning dataset \(D_{p}\subset\mathcal{D}\) consisting of \(n\) samples, and send it to \(\mathcal{C}\).
4. \(\mathcal{C}\) trains \(M\) by \(M=L_{M}(D^{*})\) with the entire dataset \(D^{*}=D\cup D_{p}\). In doing so, assuming \(M^{*}=L_{M^{*}}(D)\) for any model \(M^{*}\in\mathcal{M}\), \(M^{*}\) and \(M\) achieve the following relations: (1) for any \((x,y)\in D\), \(M(x)=M^{*}(x)\) holds; and, (2) for any \((x^{*},y^{*})\in D_{p}\), \(M(x^{*})=y^{*}\).
5. \(\mathcal{A}\) sends samples \(x_{1},\cdots,x_{q}\in\mathcal{X}\) to \(M\) and obtains \(y_{1}=M(x_{1}),\cdots,y_{q}=M(x_{q})\).
6. \(\mathcal{A}\) returns a bit \(b^{\prime}\in\{0,1\}\). If \(b=b^{\prime}\) holds, \(\mathcal{A}\) wins the game.
In the above game-based definition, the conditions in step 4 differ from the existing attack by Tramer et al. [5]. While an adversary in the existing attack does not require anything of the model \(M\) other than learning the dataset \(D_{p}\), our adversary requires the model \(M\) to misinfer only a sample \(x^{*}\) in \(D_{p}\) as his/her expected output \(y^{*}\). This is the requirement of backdoor attacks [7].
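A schematic sketch of one round of this game is given below; the helper functions for training, trigger generation, and the adversary's guess are hypothetical placeholders, and the sketch only mirrors the steps listed above.

```python
import random

def membership_game(full_pool, clean_data, train, make_poison, adversary_guess):
    """One round of the backdoor-assisted membership-inference game defined above.

    full_pool: all (x, y) pairs; clean_data: the challenger's dataset D (a subset of full_pool);
    train(dataset) -> model; make_poison(z) -> list of triggered samples D_p;
    adversary_guess(model, z) -> bit b' in {0, 1}.
    """
    b = random.randint(0, 1)                              # challenger's secret bit
    non_members = [s for s in full_pool if s not in clean_data]
    z = random.choice(clean_data) if b == 1 else random.choice(non_members)

    d_p = make_poison(z)                                  # adversary's triggered samples
    model = train(clean_data + d_p)                       # D* = D u D_p; the backdoor requires
                                                          # model(x*) = y* for all (x*, y*) in D_p
    b_prime = adversary_guess(model, z)                   # black-box queries only
    return int(b_prime == b)                              # 1 iff the adversary wins
```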
### _Evaluation Metrics_
We adopt the following evaluation metrics in this paper.
**Membership-inference-attack success rate (MIA-SR) [4]:** For the number \(n\) of executions of the game described in the previous section and the number \(a\) of times that \(\mathcal{A}\) wins the game, it is defined as \(a/n\).

**Membership-inference-attack AUC (MIA-AUC) [44]:** MIA-AUC is the area under the ROC curve. The ROC curve is a two-dimensional curve defined by the true positive rate (TPR) and the false positive rate (FPR), which measure how accurately members and non-members are identified, respectively.
In addition to the above metrics, we introduce metrics for backdoor attacks [7], i.e., test accuracy and backdoor identification rates1. For a model \(M=L_{M}(D^{*})\) with the entire dataset \(D^{*}=D\cup D_{p}\) and another model \(M^{*}=L_{M^{*}}(D)\) with only a clean dataset \(D\), they are defined as follows:
Footnote 1: It is originally defined as the attack success rate in [7], but we say backdoor identification rate for convenience.
**Test Accuracy (TA):** For any pair \((x,y)\in D\subseteq\mathcal{D}\backslash D_{p}\) of a clean sample and its label, it is defined as the ratio of samples for which \(y=M(x)\) holds, where \(M(x)=M^{*}(x)\) should hold to preserve, as much as possible, the original inference learned from \(D\). Intuitively, maintaining accuracy is necessary for stealthiness compared to conventional poisoning attacks.
**Backdoor identification rates (BIR):** For any pair \((x^{*},y^{*})\in D_{p}\) of a poison sample and its label, it is defined as a ratio such that \(M(x^{*})=y^{*}\) holds, where \(M(x^{*})\neq M^{*}(x^{*})\) may hold. It means that an adversary \(\mathcal{A}\) can certainly exploit backdoors embedded in \(M\).
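For concreteness, the four metrics can be computed from per-sample records as follows; the input arrays are placeholders for the outcomes of repeated games and for the model outputs on clean and triggered samples.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def mia_sr(wins):
    """MIA-SR: fraction of games the adversary wins (a / n)."""
    return float(np.mean(wins))

def mia_auc(is_member, membership_scores):
    """MIA-AUC: area under the ROC curve of the membership score."""
    return roc_auc_score(is_member, membership_scores)

def test_accuracy(clean_labels, clean_predictions):
    """TA: accuracy of the model on clean samples."""
    return float(np.mean(np.asarray(clean_labels) == np.asarray(clean_predictions)))

def backdoor_identification_rate(target_labels, triggered_predictions):
    """BIR: fraction of triggered samples classified as the adversary's target label."""
    return float(np.mean(np.asarray(target_labels) == np.asarray(triggered_predictions)))
```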
### _Key Questions_
We have two key questions about backdoor-assisted membership inference attacks. First, we discuss how the differences from the existing works [5, 31], i.e., the additional requirements in the game definition above, impact the attacks. We then evaluate MIA-SR and MIA-AUC with respect to TA and BIR. Second, we discuss the loss distributions of training and non-training samples for each attack.
Table I summarizes the primary differences from the existing works [5, 31].
## IV Experiment
We conduct extensive experiments with our backdoor-assisted membership inference attacks. As described in the previous section, our goal is to discuss the impact of backdoors on MIA-SR and MIA-AUC by comparing them with the existing attack by Tramer et al. [5].
We follow the targeted attack setting by Tramer et al. [5], where an adversary targets a specific example. The attacks were implemented with PyTorch. The source code for the attacks is published in GitHub2.
Footnote 2: URL will be updated in the later versions.
### _Setting_
#### IV-A1 Model and Baseline
We trained six ResNet18 models: (1) a model attacked with neither poisoning nor backdoor attacks, referred to as the Clean-Only model; (2) a baseline model, referred to as Truth Serum [5], attacked with \(250\times r\) poison samples in \(D_{p}\) for \(r\in\{1,2,4,8,16\}\); and (3)-(6) models backdoor-attacked with \(250\times r\) triggered samples in \(D_{p}\) based on BadNets [7], TaCT [13], LIRA [10], and IBD [15], respectively. We refer to each of the models (3)-(6) as BadNets, TaCT, LIRA, and IBD.
#### IV-A2 Dataset
We utilize the CIFAR-10 dataset for the experiment. The \(250\) samples in \(D_{p}\) used for poisoning or triggers are extracted from the 50,000 training samples of CIFAR-10. The 50,000 training samples are also divided into two groups: the training dataset \(D\) of the victim model and the test dataset \(\mathcal{D}\backslash D\). The victim model learns the entire dataset \(D^{*}=D\cup D_{p}\). For the clean model, 25,000 samples are randomly chosen from the 50,000 training samples as \(D\). The datasets for the shadow models are prepared in the same way.
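A sketch of this data preparation (our own, with torchvision; the exact index bookkeeping in the paper's implementation may differ) could look as follows.

```python
# Sketch: prepare CIFAR-10 splits along the lines described above. Our assumptions:
# the 250*r poison/trigger candidates are drawn from the 50,000 training images, and
# the remaining images are split into the victim's members D and a non-member pool.
import numpy as np
from torchvision import datasets, transforms
from torch.utils.data import Subset

full_train = datasets.CIFAR10(root="./data", train=True, download=True,
                              transform=transforms.ToTensor())

rng = np.random.default_rng(0)
perm = rng.permutation(len(full_train))              # 50,000 indices
r = 1
poison_candidates = perm[:250 * r]                   # samples used to build D_p
rest = perm[250 * r:]
member_idx, nonmember_idx = rest[:25000], rest[25000:]

D = Subset(full_train, member_idx.tolist())          # victim's clean training set D
non_members = Subset(full_train, nonmember_idx.tolist())
```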
#### IV-A3 Membership Inference Attack Method
We implemented the membership inference attack by Carlini et al. [22]. Their attack needs shadow models to mimic the data distribution of a victim model \(M\); hence, we prepare twenty models. We then choose the victim model \(M\) from the twenty models, and the remaining models are used as shadow models in a full leave-one-out cross-validation. We measure MIA-SR and MIA-AUC on the six models described above and then evaluate the results.
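The per-target membership score behind this style of attack can be sketched as a Gaussian likelihood ratio over shadow-model losses; the snippet below is our simplified rendering of the idea in [22], not the exact implementation.

```python
# Sketch: likelihood-ratio membership score from shadow models.
# shadow_in_losses: losses of shadow models that trained on the target sample;
# shadow_out_losses: losses of shadow models that did not.
import numpy as np
from scipy.stats import norm

def membership_score(target_loss, shadow_in_losses, shadow_out_losses, eps=1e-8):
    mu_in, sd_in = np.mean(shadow_in_losses), np.std(shadow_in_losses) + eps
    mu_out, sd_out = np.mean(shadow_out_losses), np.std(shadow_out_losses) + eps
    # higher score -> more evidence that the target was a training member
    return norm.logpdf(target_loss, mu_in, sd_in) - norm.logpdf(target_loss, mu_out, sd_out)

# toy usage: a low loss that matches the "in" distribution gives a positive score
print(membership_score(0.1, [0.05, 0.12, 0.08], [1.2, 0.9, 1.5]))
```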
### _Results_
Table II shows the result of the membership inference attack against the Clean-Only model, which is identical to the baseline. Fig. 1 and Table III show the results of each attack.
According to the table, BadNets and TaCT, which need only a few triggers for backdoors, keep a high test accuracy, unlike Truth Serum. However, when we compare BadNets with TaCT, both MIA-SR and MIA-AUC deteriorate for any number of samples; this indicates that MIA-SR and MIA-AUC deteriorate when triggers are generated by imperceptible approaches. On the other hand, the MIA-AUC of Truth Serum increases by at least 0.32 compared to the Clean-Only model.
We also explain below why TA and BIR, which matter for MIA-SR and MIA-AUC, are low for LIRA and IBD. LIRA and IBD need as many triggers as clean samples to embed backdoors. In this experiment, we used at most 4,000 triggers for LIRA and IBD, despite training with 25,000 clean samples. Consequently, TA and BIR for LIRA and IBD are low compared to BadNets and TaCT.
## V Discussion
In this section, we discuss why MIA-SR and MIA-AUC of the backdoor-assisted membership inference attack deteriorate from two standpoints. First, we analyze the impact of backdoors on loss distributions of a victim model because they are essential for MIA-SR in general [22]. Second, we analyze the impact of backdoors on neuron activation to understand the internal parameters of the victim model in detail.
### _Impact of Backdoors on Loss Distributions_
We discuss the impact of backdoors on loss distributions of a victim model for the backdoor-assisted membership inference attack. It is considered that a membership inference attack is a statistical testing with loss distribution [22]. Hence, we analyze the loss distribution for each model.
Fig. 2 shows loss distributions of training and non-training samples for each model. According to the figure, Truth Serum boosts membership inference attacks because the losses of training and non-training samples are separated. Remarkably, borderlines between the losses of training and non-training samples are stable regardless of the size of \(D_{p}\), and these phenomena are consistent with the original work [5] of Truth Serum.
In contrast, BadNets and TaCT are unable to separate the losses of training and non-training samples as clearly as Truth Serum. It is thus considered that MIA-SR and MIA-AUC deteriorate because the membership inference becomes almost random. More precisely, the loss distributions of training and non-training samples partially overlap, so MIA-SR and MIA-AUC should still be improved compared to the Clean-Only model. Nevertheless, MIA-SR and MIA-AUC for TaCT are lower than those of the Clean-Only model because its loss distributions of training and non-training samples overlap over most of the clean dataset \(D\).
Next, we discuss LIRA and IBD. According to Fig. 2, their loss distributions of training and non-training samples are clearly separated, similarly to Truth Serum. We discuss why MIA-SR and MIA-AUC of LIRA and IBD nevertheless deteriorate in the next section.
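Plots like those in Fig. 2 can be produced with a sketch along the following lines (illustrative only; it assumes per-sample losses have already been collected for members and non-members).

```python
# Sketch: plot per-sample loss distributions of members (in D) vs. non-members
# for one victim model, from already-collected loss values.
import numpy as np
import matplotlib.pyplot as plt

def plot_loss_distributions(member_losses, nonmember_losses, title="loss distributions"):
    upper = max(np.max(member_losses), np.max(nonmember_losses))
    bins = np.linspace(0.0, upper, 50)
    plt.hist(member_losses, bins=bins, alpha=0.5, density=True, label="members (in D)")
    plt.hist(nonmember_losses, bins=bins, alpha=0.5, density=True, label="non-members")
    plt.xlabel("per-sample loss")
    plt.ylabel("density")
    plt.title(title)
    plt.legend()
    plt.show()

# toy usage with synthetic losses: well-separated distributions are easier to attack
rng = np.random.default_rng(0)
plot_loss_distributions(rng.gamma(1.0, 0.1, 1000), rng.gamma(2.0, 0.5, 1000))
```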
### _Impact of Backdoors on Neuron Activations_
We discuss the impact of backdoors on neuron activations from a standpoint different from the previous section, to understand backdoor-assisted membership inference attacks in more detail. In particular, we hypothesize that backdoors do not turn the losses of target samples into outliers of the training-sample distribution, and that a membership inference attack is therefore unsuccessful even with backdoor attacks. We observe the neuron activations of each model to confirm the above hypothesis and indeed find strong evidence for why the membership inference attack is unsuccessful. The results are shown in Fig. 3.
According to the figure, Truth Serum makes a target sample an outlier over the distribution of training samples, which is consistent with the original work [5].
By contrast, all the backdoor attacks make the target samples inliers of the distributions of training samples. This means that backdoor triggers induce distributions that are independent of those of the training samples. In particular, LIRA and IBD make the target samples inliers even though they separate the loss distributions of training and non-training samples. It is thus considered that MIA-SR and MIA-AUC of LIRA and IBD deteriorate. This phenomenon might change if the test accuracies were improved.
## VI Conclusion
We discussed backdoor-assisted membership inference attacks, which do not deteriorate the test accuracy. We first evaluated whether backdoor-assisted membership inference attacks with the original backdoors [7] and the imperceptible backdoors [10, 13, 15] are successful in comparison with the existing poisoning-assisted membership inference attack [5]. We then showed that backdoor-assisted membership inference attacks are unsuccessful, in contrast to the existing poisoning-assisted membership inference attack by Tramer et al. [5].
We also analyzed the resulting models with respect to loss distributions and neuron activations to better understand the reason for the unsuccessful results. We confirmed that triggers do not affect the distribution of clean samples; namely, any clean target sample remains an inlier, while the existing attack makes it an outlier. Thus, we believe that backdoors cannot assist membership inference attacks.
|
2305.16461 | Singularities of the Chern-Ricci flow | We study the nature of finite-time singularities for the Chern-Ricci flow,
partially answering a question of Tosatti-Weinkove. We show that a solution of
degenerate parabolic complex Monge-Amp\`ere equations starting from arbitrarily
positive (1,1)-currents are smooth outside some analytic subset, generalizing
works by Di Nezza-Lu. We extend Guedj-Lu's recent approach to establish uniform
a priori estimates for degenerate complex Monge-Amp\`ere equations on compact
Hermitian manifolds. We apply it to studying the Chern-Ricci flows on complex
log terminal varieties starting from an arbitrary current. | Quang-Tuan Dang | 2023-05-25T20:28:33Z | http://arxiv.org/abs/2305.16461v1 | # Singularities of the Chern-Ricci flow
###### Abstract.
We study the nature of finite-time singularities for the Chern-Ricci flow, partially answering a question of Tosatti-Weinkove [53]. We show that a solution of degenerate parabolic complex Monge-Ampere equations starting from arbitrarily positive (1,1)-currents are smooth outside some analytic subset, generalizing works by Di Nezza-Lu [16]. We extend Guedj-Lu's recent approach to establish uniform a priori estimates for degenerate complex Monge-Ampere equations on compact Hermitian manifolds. We apply it to studying the Chern-Ricci flows on complex log terminal varieties starting from an arbitrary current.
Key words and phrases: Parabolic Monge-Ampere equations, Chern-Ricci flow, singularities. 2020 Mathematics Subject Classification: 53E30, 32U20, 32W20
###### Contents
* 1 Introduction
* 2 Preliminaries
* 3 A priori estimates
* 4 Degenerate Monge-Ampere flows
* 5 Finite time singularities
* 6 The Chern-Ricci flow on varieties with log terminal singularities
## 1. Introduction
Finding canonical metrics on complex varieties has been a central problem in complex geometry over the last few decades. Since Yau's solution to Calabi's conjecture, there have been a lot of developments in this direction. Cao [6] introduced a parabolic approach to provide an alternative proof of the existence of Kahler-Einstein metrics on manifolds with numerically trivial or ample canonical line bundle by the Kahler-Ricci flow. This flow is simply Hamilton's Ricci flow restricted to Kahler metrics. Motivated by the problem of the classification of complex varieties, Song-Tian [42, 43] have proposed an _Analytic Minimal Model Program_ to classify algebraic varieties with mild singularities, using the Kahler-Ricci flow. This requires a theory of weak solutions for degenerate parabolic complex Monge-Ampere equations starting from rough initial data. Since then, there have been various results in this direction. Song-Tian initiated the study of the Kahler-Ricci flow starting from an initial current with continuous potentials, while Guedj-Zeriahi [30] (also [54]) showed that the Kahler-Ricci flow can be run from an initial current with zero Lelong numbers. To the author's knowledge, the best results were, at least so far, obtained by DiNezza-Lu [16], where
they succeeded in running the Kahler-Ricci flow from an initial current with positive Lelong number. There have been several related works in such singular settings, from a pluripotential theoretical point of view, and we refer to the recent works [27, 10] and the references therein.
Beyond the Kahler setting, there has more recently been interest in the study of geometric flows in the context of non-Kahler manifolds. Unlike in the Kahler case, Hamilton's Ricci flow does not, in general, preserve special Hermitian conditions. It is natural to look for another geometric flow of Hermitian metrics which specializes to the Ricci flow in the Kahler context. Many parabolic flows on complex manifolds which do preserve the Hermitian property have been proposed by Streets-Tian [45, 44] and Liu-Yang [33]. The Anomaly flow of \((n-1,n-1)\)-forms has been extensively studied by Phong-Picard-Zhang [36, 37].
This paper is devoted to the Chern-Ricci flow which is an evolution equation of Hermitian metrics on a complex manifold by their Chern-Ricci form, first introduced by Gill [21] in the setting of manifolds with vanishing first Bott-Chern class. Let \((X,\omega_{0})\) be a compact \(n\)-dimensional Hermitian manifold. The _Chern-Ricci flow_\(\omega=\omega(t)\) starting at \(\omega_{0}\) is an evolution equation of Hermitian metrics
\[\frac{\partial\omega}{\partial t}=-\mathrm{Ric}(\omega),\quad\omega|_{t=0}= \omega_{0}, \tag{1.1}\]
where \(\mathrm{Ric}(\omega)\) is the _Chern-Ricci form_ of \(\omega\) associated to the Hermitian metric \(g=(g_{i\bar{\jmath}})\), which in local coordinates is given by
\[\mathrm{Ric}(\omega)=-dd^{c}\log\det(g).\]
Here \(d=\partial+\bar{\partial}\) and \(d^{c}=i(\bar{\partial}-\partial)/2\) are both real operators, so that \(dd^{c}=i\partial\bar{\partial}\). In the Kahler setting, \(\mathrm{Ric}(\omega)=iR_{k\bar{\jmath}}\,dz_{k}\wedge d\bar{z}_{\jmath}\), where \(R_{k\bar{\jmath}}\) is the usual Ricci curvature of \(\omega\). Thus if \(\omega_{0}\) is Kahler, i.e., \(d\omega_{0}=0\), (1.1) coincides with the Kahler-Ricci flow. For complex manifolds with \(c_{1}^{\mathrm{BC}}(X)=0\), Gill [21] proved the long time existence of the flow and smooth convergence of the flow to the unique Chern-Ricci-flat metric in the \(\partial\bar{\partial}\)-class of the initial metric. For general complex manifolds, Tosatti and Weinkove [52, Theorem 1.3] characterize the maximal existence time \(T_{\mathrm{max}}\) of the flow as
\[T_{\mathrm{max}}:=\sup\{t>0:\exists\ \psi\in\mathcal{C}^{\infty}(X)\ \text{with}\ \omega_{0}-t\mathrm{Ric}(\omega_{0})+dd^{c}\psi>0\}.\]
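For the reader's convenience, we recall the standard scalar reduction behind this characterization (a sketch; see the references above for details). Writing \(\alpha_{t}:=\omega_{0}-t\mathrm{Ric}(\omega_{0})\) and seeking the solution in the form \(\omega_{t}=\alpha_{t}+dd^{c}\varphi_{t}\), the flow (1.1) is equivalent, up to adjusting \(\varphi_{t}\) by a time-dependent constant, to the scalar parabolic complex Monge-Ampere equation
\[\frac{\partial\varphi_{t}}{\partial t}=\log\frac{(\alpha_{t}+dd^{c}\varphi_{t})^{n}}{\omega_{0}^{n}},\qquad\varphi_{0}=0,\]
so that the positivity required of \(\alpha_{t}+dd^{c}\psi\) in the definition of \(T_{\mathrm{max}}\) is precisely what is needed to keep the equation parabolic.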
**Finite time singularities.** Suppose that the flow (1.1) exists on the maximal interval \([0,T_{\mathrm{max}})\) with \(T_{\mathrm{max}}<\infty\), so that the flow develops a singularity in finite time. Tosatti-Weinkove [53, Question 6.1] ask the following question
**Question 1.1**.: _Do singularities of the Chern-Ricci flow develop precisely along closed analytic subvarieties of \(X\)?_
In the Kahler setting, this question was posed by Feldman-Ilmanen-Knopf [19] and affirmatively answered by Collins-Tosatti [8]. When \(X\) is a compact complex surface and \(\omega_{0}\) is Gauduchon, i.e., \(dd^{c}\omega_{0}=0\), the Chern-Ricci flow preserves Gauduchon (pluriclosed) condition, in particular, the limiting form \(\alpha_{T_{\mathrm{max}}}=\omega_{0}-T_{\mathrm{max}}\mathrm{Ric}(\omega_{0})\) is Gauduchon. The answer is thus affirmative in this case, due to Gill-Smith [22] (also [51, 53]) where they proved singularities of the Chern-Ricci flow form a finite union of disjoint (-1)-curves. We partially answer this question when the limiting form \(\alpha_{T_{\mathrm{max}}}\) is _uniformly non-collapsing_:
\[\int_{X}(\alpha_{T_{\mathrm{max}}}+dd^{c}\psi)^{2}\geq c_{0}>0,\ \forall\,\psi\in \mathcal{C}^{\infty}(X). \tag{1.2}\]
We mention that when \(\omega_{0}\) is a Gauduchon metric on a compact complex surface \(X\), the latter condition is equivalent to \(\int_{X}\alpha_{T_{\mathrm{max}}}^{2}>0\). We say in such a case that the Chern-Ricci flow is _volume non-collapsing_ at time \(T_{\max}\); otherwise we say that the flow is _volume collapsing_ (cf. [51]). As also mentioned in [53], the question is trivial when the flow is volume collapsing. We generalize the result to higher-dimensional manifolds \(X\) which admit a Hermitian metric \(\omega_{X}\) such that \(v_{+}(\omega_{X})<+\infty\) (cf. Definition 2.4). The latter automatically holds in the case of compact complex surfaces.
**Theorem A**.: _Let \((X,\omega_{0})\) be a compact complex \(n\)-dimensional manifold with \(v_{+}(\omega_{0})<+\infty\). Assume that the Chern-Ricci flow (1.1) starting at \(\omega_{0}\) exists on the maximal interval \([0,T_{\max})\) with \(T_{\max}<\infty\), and that the limiting form \(\alpha_{T_{\max}}\) is uniformly non-collapsing, i.e.,_
\[\int_{X}(\alpha_{T_{\max}}+dd^{c}\psi)^{n}\geq c_{0}>0,\ \forall\,\psi\in \mathcal{C}^{\infty}(X). \tag{1.3}\]
_Then as \(t\to T_{\max}^{-}\) the metrics \(\omega_{t}\) converge to \(\omega_{T_{\max}}\) in \(\mathcal{C}^{\infty}(\Omega)\) for some Zariski open set \(\Omega\subset X\)._
The strategy of the proof is as follows. From the uniformly non-collapsing condition on \(\alpha_{T_{\max}}\), we show that there exists a quasi-plurisubharmonic function \(\rho\) with analytic singularities such that \(\alpha_{T_{\max}}+dd^{c}\rho\) dominates a Hermitian metric. Such a form is called _big_ (cf. Definition 2.6). Then \(\Omega\) is the set on which \(\rho\) is smooth; in particular, it is Zariski open. We next establish several uniform local estimates of \(\omega\) near the maximal time \(T_{\max}\), adapting the arguments of [8, 21]. The convergence immediately follows.
### Degenerate parabolic complex Monge-Ampere equations
In the previous paragraph, we studied the behavior of the Chern-Ricci flow at a finite singularity time. It is natural to ask whether the flow can pass through this singularity. To do this, we must define weak solutions of the Chern-Ricci flow starting from degenerate initial currents on a compact complex variety with mild singularities. Several geometric contexts are encountered in the minimal model program, which require us to treat the case of complex varieties with Kawamata log terminal (klt) singularities. From an analytic point of view, the latter naturally leads one to deal with densities that are allowed to blow up while belonging to \(L^{p}\) for some exponent \(p>1\) whose size depends on the algebraic nature of the singularities.
On a compact complex \(n\)-manifold \((X,\omega_{X})\), we consider the following degenerate parabolic complex Monge- Ampere equation
\[\frac{\partial\varphi_{t}}{\partial t}=\log\left[\frac{(\theta_{t}+dd^{c} \varphi_{t})^{n}}{\mu}\right], \tag{1.4}\]
for \(t\in(0,T_{\max})\), where \(T_{\max}<\infty\) and
* \(\theta_{t}=\theta+t\chi\) is an affine family of smooth semi-positive forms and there is a quasi-plurisubharmonic function \(\rho\) with analytic singularities such that \[\theta+dd^{c}\rho\geq\delta\omega_{X}\text{ for some }\delta>0;\]
* \(\mu\) is a positive measure on \(X\) of the form \[\mu=e^{\psi^{+}-\psi^{-}}\] with \(\psi^{\pm}\) quasi-plurisubharmonic functions, being smooth on a given Zariski open subset \(U\subset\{\rho>-\infty\}\) and \(e^{-\psi^{-}}\in L^{p}\) for some \(p>1\);
* \(\varphi:[0,T_{\max}]\times X\to\mathbb{R}\) is the unknown function, with \(\varphi_{t}:=\varphi(t,\cdot)\).
We define the weak solution of the Chern-Ricci flow:
**Definition 1.2**.: A family of functions \(\varphi_{t}:X\to\mathbb{R}\) for \(t\in(0,T_{\max})\) is said to be a weak solution of the equation (1.4) starting with \(\varphi_{0}\) if the following hold.
1. for each \(t\), \(\varphi_{t}\) is \(\theta_{t}\)-plurisubharmonic on \(X\);
2. \(\varphi_{t}\to\varphi_{0}\) in \(L^{1}(X)\) as \(t\to 0^{+}\);
3. for each \(\varepsilon>0\) there exists a Zariski open set \(\Omega_{\varepsilon}\subset X\) such that the function \((t,x)\mapsto\varphi(t,x)\) belongs to \(\mathcal{C}^{\infty}([\varepsilon,T_{\max}-\varepsilon]\times\Omega_{\varepsilon})\). Furthermore, the equation (1.4) is satisfied in the classical sense on \([\varepsilon,T_{\max})\times\Omega_{\varepsilon}\).
Our first theorem establishes the existence for the complex Monge-Ampere flow starting with an initial function \(\varphi_{0}\) with small Lelong numbers.
**Theorem B**.: _Let \(\varphi_{0}\) be a \(\theta\)-plurisubharmonic function satisfying \(p^{*}/2c(\varphi_{0})<T_{\max}\), where \(p^{*}\) is the conjugate exponent of \(p\). Then there exists a weak solution \(\varphi\) of the flow (1.4) starting at \(\varphi_{0}\) for \(t\in(0,T_{\max})\)._
Here \(c(\varphi_{0})\) denotes the integrability index of \(\varphi_{0}\), which is the supremum of all positive constants \(c>0\) such that \(e^{-2c\varphi_{0}}\) is locally integrable. We note that \(c(\varphi_{0})=+\infty\) if and only if \(\varphi_{0}\) has zero Lelong numbers at all points, as follows from Skoda's integrability theorem.
Let us briefly describe the strategy of the proof of Theorem B. We first approximate \(\varphi_{0}\) by a decreasing sequence of smooth \((\theta+2^{-j}\omega_{X})\)-plurisubharmonic functions \(\varphi_{0,j}\) thanks to Demailly's regularization result. Similarly, \(\psi^{\pm}\) are approximated by smooth quasi-plurisubharmonic functions. We consider the corresponding solution \(\varphi_{t,j}\) to the equation (1.4) with \(\theta_{t,j}=\theta_{t}+2^{-j}\omega_{X}\). We aim to establish several a priori estimates allowing us to pass to the limit \(j\to+\infty\). Precisely, we are going to prove that for any \(\varepsilon>0\), there is a Zariski open set \(\Omega_{\varepsilon}\subset X\) such that for each \(0<T<T_{\max}\) fixed and \(K\subset\Omega_{\varepsilon}\),
* \(\|\varphi_{t,j}\|_{\mathcal{C}^{0}([\varepsilon,T]\times K)}\leq C_{ \varepsilon,T,K}\);
* \(\partial_{t}\varphi_{t,j}\) is uniformly bounded on \([\varepsilon,T]\times K\);
* \(\Delta_{\omega_{X}}\varphi_{t,j}\) is uniformly bounded on \([\varepsilon,T]\times K\).
We then apply the parabolic Evans-Krylov theory and Schauder estimates to obtain higher-order locally uniform estimates for all derivatives of \(\varphi_{t,j}\) (we refer to [21] for a recent account in the Chern-Ricci flow context). We can therefore pass to the limit to show that
\[\varphi_{t,j}\to\varphi_{t}\in\mathcal{C}^{\infty}([\varepsilon,T]\times \Omega_{\varepsilon})\]
as \(j\to+\infty\). We automatically have the weak convergence \(\varphi_{t}\to\varphi_{0}\) as \(t\to 0^{+}\). More stronger convergence are discussed in Section 4.4 when \(\varphi_{0}\) are less singular.
We also emphasize here that the mild assumption \(p^{*}/2c(\varphi_{0})<T_{\max}\) guarantees that the approximating flow is well-defined (not identically \(-\infty\)) and is crucial for the smoothing properties of the flow. As mentioned by DiNezza-Lu [16] in the Kahler context, without this assumption, the Kahler-Ricci flow can still run, but there is probably no regularization effect at all due to the presence of positive Lelong numbers. Also, as in this case, they mentioned that the main difficulty is establishing a priori \(\mathcal{C}^{0}\)-estimate. Their proof relies on Kolodziej's method by using their generalized Monge-Ampere capacity. The approach we use is recently developed by Guedj-Lu [24, 25], whose advantage is that it still can be applied in the case of degenerate (1,1) forms in non-Kahler context.
We finally apply the previous analysis to treat the case of mildly singular varieties. This allows us to define a good notion of the weak Chern-Ricci flow on complex compact varieties with log terminal singularities. We will discuss it in Section 6 and prove the following.
**Theorem C**.: _Let \(Y\) be a compact complex variety with log terminal singularities. Assume that \(\theta_{0}\) is a Hermitian metric such that_
\[T_{\max}:=\sup\{t>0:\,\exists\,\,\psi\in\mathcal{C}^{\infty}(Y)\text{ such that }\theta_{0}-tRic(\theta_{0})+dd^{c}\psi>0\}>0.\]
_Assume that \(S_{0}=\theta_{0}+dd^{c}\phi_{0}\) is a positive (1,1)-current with sufficiently small slopes. Then there exists a family \((\omega_{t})_{t\in[0,T_{\max})}\) of positive (1,1) current on \(Y\) starting at \(S_{0}\) such that_
1. \(\omega_{t}=\theta_{0}-tRic(\theta_{0})+dd^{c}\varphi_{t}\) _are positive (1,1) currents;_
2. \(\omega_{t}\to S_{0}\) _weakly as_ \(t\to 0^{+}\)_;_
3. _for each_ \(\varepsilon>0\) _there exists a Zariski open set_ \(\Omega_{\varepsilon}\) _such that on_ \([\varepsilon,T_{\max})\times\Omega_{\varepsilon}\)_,_ \(\omega\) _is smooth and_ \[\frac{\partial\omega}{\partial t}=-Ric(\omega).\]
This generalizes previous results of Song-Tian [43], Guedj-Zeriahi [29], To [54], DiNezza-Lu [16], Guedj-Lu-Zeriahi [27] and the author [10] to the non-Kahler case, and of [55, 35] and the author [9] to more degenerate initial data.
### Organization of the paper
We establish a priori estimates in Section 3, which are used to prove Theorem B in Section 4. Theorem A is proved in Section 5, where we study the behavior of the Chern-Ricci flow at non-collapsing finite time singularities. In Section 6 we apply these tools to prove the existence of the weak Chern-Ricci flow with degenerate initial data on compact complex varieties with log terminal singularities, proving Theorem C.
### Acknowledgement
The author would like to thank Chung-Ming Pan for carefully reading the first draft and Tat-Dat To for useful discussions.
## 2. Preliminaries
### Recap on pluripotential theory
Let \(X\) be a compact complex manifold of dimension \(n\), equipped with a Hermitian metric \(\omega_{X}\). We fix \(\theta\) a smooth semi-positive (1,1)-form on \(X\).
#### 2.1.1. Quasi-plurisubharmonic functions and Lelong numbers
A function is quasi-plurisubharmonic (quasi-psh for short) if it is locally given as the sum of a smooth and a plurisubharmonic (psh for short) function.
**Definition 2.1**.: A quasi-psh function \(\varphi:X\to[-\infty,+\infty)\) is called \(\theta\)-_plurisubharmonic_ (\(\theta\)_-psh_ for short) if it satisfies \(\theta_{\varphi}:=\theta+dd^{c}\varphi\geq 0\) in the weak sense of currents. We let \(\operatorname{PSH}(X,\theta)\) denote the set of all \(\theta\)-psh functions which are not identically \(-\infty\).
The set \(\operatorname{PSH}(X,\theta)\) is endowed with the \(L^{1}(X)\)-topology. By Hartogs' lemma \(\varphi\mapsto\sup_{X}\varphi\) is continuous in this weak topology. Since the set of closed positive currents in a fixed \(dd^{c}\)-class is compact (in the weak topology), it follows that the set of \(\varphi\in\operatorname{PSH}(X,\theta)\) with \(\sup_{X}\varphi=0\) is compact. We refer the reader to [13, 29] for basic properties of \(\theta\)-psh functions.
Quasi-psh functions are in general singular, and a convenient way to measure their singularities is the Lelong numbers.
**Definition 2.2**.: Let \(x_{0}\in X\). Fixing a holomorphic chart \(x_{0}\in V_{x_{0}}\subset X\), the _Lelong number_\(\nu(\varphi,x_{0})\) of a quasi-psh function \(\varphi\) at \(x_{0}\in X\) is defined as follows:
\[\nu(\varphi,x_{0}):=\sup\{\gamma\geq 0:\varphi(z)\leq\gamma\log\|z-x_{0}\|+O(1 ),\text{ on }V_{x_{0}}\}.\]
We remark here that this definition does not depend on the choice of local charts. In particular, if \(\varphi=\log|f|\) in a neighborhood \(V_{x_{0}}\) of \(x_{0}\), for some holomorphic function \(f\), then \(\nu(\varphi,x_{0})\) is equal to the vanishing order \(\operatorname{ord}_{x_{0}}(f):=\sup\{k\in\mathbbm{N}:D^{\gamma}f(x_{0})=0, \forall\,|\gamma|<k\}\).
In some contexts, it is more convenient to deal with the integrability index instead of the Lelong numbers. The _integrability index_ of a quasi-psh function \(\varphi\) at a point \(x\in X\) is defined by
\[c(\varphi,x):=\sup\{c>0:e^{-2c\varphi}\in L^{1}(V_{x})\}\]
where \(V_{x}\) is some neighborhood around \(x\). As above this definition does not depend on the choice of open neighborhood \(V_{x}\). We denote by \(c(\varphi)\) the infimum of \(c(\varphi,x)\) for all \(x\in X\). Since \(X\) is compact it follows that \(c(\varphi)>0\).
Skoda's integrability theorem states that one can get the following "optimal" relation between the Lelong number of a quasi-psh function \(\varphi\) at a point \(x_{0}\in X\) and the local integrability index of \(\varphi\) at \(x_{0}\):
\[\frac{1}{\nu(\varphi,x_{0})}\leq c(\varphi,x_{0})\leq\frac{n}{\nu(\varphi,x_{0 })}. \tag{2.1}\]
In particular \(c(\varphi)=+\infty\) if and only if \(\nu(\varphi,x)=0\) for all \(x\in X\) (cf. [41] for Skoda's theorem or [56] for a uniform version).
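As a basic worked example (ours, for illustration): if \(\varphi(z)=\gamma\log\|z-x_{0}\|\) near \(x_{0}\) with \(\gamma>0\), then \(\nu(\varphi,x_{0})=\gamma\), while
\[\int_{\|z-x_{0}\|<1}e^{-2c\varphi}\,dV=\int_{\|z-x_{0}\|<1}\frac{dV}{\|z-x_{0}\|^{2c\gamma}}<+\infty\iff c\gamma<n,\]
so that \(c(\varphi,x_{0})=n/\gamma\) and the right-hand inequality in (2.1) is an equality in this case.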
#### 2.1.2. Monge-Ampere measures
The complex Monge-Ampere measure \((\theta+dd^{c}u)^{n}\) is well-defined for any \(\theta\)-psh function \(u\) which is bounded, as follows from Bedford-Taylor theory: if \(\beta=dd^{c}\varphi\) is a Kahler form such that \(\beta>\theta\) in a local open chart \(U\subset X\), the function \(u\) is \(\beta\)-psh hence the positive currents \((\beta+dd^{c}u)^{j}\) are well-defined for \(1\leq j\leq n\), one thus obtains
\[(\theta+dd^{c}u)^{n}:=\sum_{j=0}^{n}\binom{n}{j}(\beta+dd^{c}u)^{j}\wedge( \theta-\beta)^{n-j}.\]
as a positive Radon measure on \(X\). Indeed, by Demailly's regularization theorem we can approximate \(u\) by a decreasing sequence of smooth \((\theta+\varepsilon_{j}\omega_{X})\)-psh functions \(u_{j}\). We obtain that \((\theta+dd^{c}u)^{n}\) is the limit of the positive measures \((\theta+\varepsilon_{j}\omega_{X}+dd^{c}u_{j})^{n}\), and is therefore positive.
This definition does not depend on the choice of \(\beta\) by the same arguments. We refer to [17] for an adaptation of [2, 3] to the Hermitian context. We recall the following maximum principle:
**Lemma 2.3**.: _Let \(\varphi,\psi\) be bounded \(\theta\)-psh functions such that \(\varphi\leq\psi\). Then_
\[\mathbf{1}_{\{\varphi=\psi\}}(\theta+dd^{c}\varphi)^{n}\leq\mathbf{1}_{\{ \varphi=\psi\}}(\theta+dd^{c}\psi)^{n}.\]
Proof.: This is a direct consequence of Bedford-Taylor's maximum principle; see [29, Theorem 3.23]. We refer the reader to [26, Lemma 1.2] for a brief proof.
#### 2.1.3. Positivity assumptions
For our purpose we need to assume slightly stronger positivity property of the form \(\theta\) in the sense of [25].
**Definition 2.4**.: We consider
\[v_{-}(\theta):=\inf\left\{\int_{X}(\theta+dd^{c}\varphi)^{n}:\varphi\in \operatorname{PSH}(X,\theta)\cap L^{\infty}(X)\right\}\]
and
\[v_{+}(\theta):=\sup\left\{\int_{X}(\theta+dd^{c}\varphi)^{n}:\varphi\in \operatorname{PSH}(X,\theta)\cap L^{\infty}(X)\right\}\]
We emphasize that when \(\theta\) is Hermitian, the supremum and infimum in the definition of these quantities can be taken over \(\mathrm{PSH}(X,\theta)\cap\mathcal{C}^{\infty}(X)\) due to Demailly's regularization theorem and Bedford-Taylor's convergence results.
**Definition 2.5**.: We say that \(\theta\) is _uniformly non-collapsing_ if \(v_{-}(\theta)\geq c_{0}>0\).
This condition is not obvious even when \(\theta\) is Hermitian. We refer the reader to [1, Sect. 3] for several examples of uniformly non-collapsing Hermitian form.
Recall that a function \(\rho\) is said to have _analytic singularities_ if there exists a constant \(c>0\) such that locally on \(X\), \(\rho=c\log\sum_{j=1}^{N}|f_{j}|^{2}+O(1)\) where the \(f_{j}\)'s are holomorphic functions.
**Definition 2.6**.: We say \(\theta\) is _big_ if there exists a \(\theta\)-psh function with analytic singularities such that \(\theta+dd^{c}\rho\geq\delta\omega_{X}\) for some \(\delta>0\). We let \(\Omega\) denote the Zariski open set where \(\rho\) is smooth.
Such a form appears in several contexts of complex differential geometry. For instance, if \(Y\) is a compact complex space endowed with a hermitian form \(\omega_{Y}\) and \(\pi:X\to Y\) is a log resolution of singularities, then the form \(\theta:=\pi^{*}\omega_{Y}\) is big; see [20, Proposition 3.2]. Moreover, we can find a \(\theta\)-psh function \(\rho\) with analytic singularities such that \(\theta+dd^{c}\rho\geq\delta\omega_{X}\), and
\[\Omega=\{\rho>-\infty\}=X\setminus\mathrm{Exc}(\pi)=\pi^{-1}(Y_{\mathrm{reg}} )\simeq Y_{\mathrm{reg}}.\]
#### 2.1.4. Envelopes
**Definition 2.7**.: Given a measurable function \(h:X\to\mathbb{R}\), we define the \(\theta\)_-psh envelope_ of \(h\) by
\[P_{\theta}(h):=(\sup\{u\in\mathrm{PSH}(X,\theta):u\leq h\text{ on }X\})^{*}\]
where the star means that we take the upper semi-continuous regularization.
We have the following result which has been established in [26, Theorem 2.3].
**Theorem 2.8**.: _If \(h\) is bounded from below, quasi l.s.c, and \(P_{\theta}(h)<+\infty\) then_
1. \(P_{\theta}(h)\) _is a bounded_ \(\theta\)_-psh function;_
2. \(P_{\theta}(h)\leq h\) _in_ \(X\setminus P\)_, for some pluripolar set_ \(P\)_;_
3. \((\theta+dd^{c}P_{\theta}(h))^{n}\) _is concentrated on the contact set_ \(\{P_{\theta}(h)=h\}\)_._
The following \(\mathcal{C}^{0}\)-estimate is crucial in the sequel.
**Lemma 2.9**.: _Let \(\theta\) be a smooth real semi-positive and big (1,1)-form. Assume \(\varphi\in\mathrm{PSH}(X,\theta)\cap L^{\infty}(X)\) satisfies_
\[(\theta+dd^{c}\varphi)^{n}\leq e^{A\varphi-g}fdV_{X},\]
_where \(A>0\) and \(f\), \(g\) are measurable functions such that \(e^{A\psi-g}f\in L^{q}(X)\) with \(q>1\), for some \(\psi\in\mathrm{PSH}(X,\delta\theta)\), with \(\delta\in(0,1)\). Then we have the following estimate_
\[\varphi\geq\psi-C\]
_where \(C\) is a positive constant only depending on \(n\), \(A\), \(\delta\), \(\theta\), \(q\) and an upper bound for \(\int_{X}e^{q(A\psi-g)}f^{q}dV_{X}\)._
Proof.: We apply the approach which has recently been developed by Guedj-Lu [24, 25]. Set \(u:=P_{(1-\delta)\theta}(\varphi-\psi)\). Since \(\varphi\) is bounded, one has \(u=P_{(1-\delta)\theta}(\varphi-\max(\psi,-t))\) for \(t>0\) big enough, so we can assume that \(\psi\) is also bounded. Since \(\varphi-\psi\) is bounded and quasi-continuous, it follows from Theorem 2.8 that \(((1-\delta)\theta+dd^{c}u)^{n}\) is supported on the contact set \(D:=\{u+\psi=\varphi\}\). We observe that \(u+\psi\) and \(\varphi\) are both \(\theta\)-psh functions satisfying \(u+\psi\leq\varphi\); it thus follows from Lemma 2.3 that
\[\mathbf{1}_{D}(\theta+dd^{c}(u+\psi))^{n}\leq\mathbf{1}_{D}(\theta+dd^{c} \varphi)^{n}.\]
From these, we have
\[((1-\delta)\theta+dd^{c}u)^{n} =\mathbf{1}_{D}((1-\delta)\theta+dd^{c}u)^{n}\] \[\leq\mathbf{1}_{D}(\theta+dd^{c}(u+\psi))^{n}\] \[\leq\mathbf{1}_{D}(\theta+dd^{c}\varphi)^{n}\] \[\leq\mathbf{1}_{D}e^{A\varphi-g}fdV_{X}\] \[\leq\mathbf{1}_{D}e^{Au}e^{A\psi-g}fdV_{X},\]
with \(e^{A\psi-g}f\in L^{q}(X)\). We argue the same as in the proof of [25, Theorem 3.4 (1) ] to ensure that there exists a constant \(C>0\) only depending on \(n\), \(q\), \(A\), \(\theta\), \(\delta\), and \(\|e^{A\psi-g}f\|_{L^{q}}\), such that \(u\geq-C\). This completes the proof.
### Equisingular approximation
Fix \(\varphi\) a \(\theta\)-psh function on \(X\). We aim at approximating \(\varphi\) by a decreasing sequence of quasi-psh functions which are less singular than \(\varphi\) and such that their singularities are somehow comparable to those of \(\varphi\). This leads us to make use of Demailly's equisingular approximation theorem. For each \(c>0\), we define the _Lelong super-level sets_
\[E_{c}(\varphi):=\{x\in X:\nu(\varphi,x)\geq c\}.\]
We also use the notation \(E_{c}(T)\) for a closed positive \((1,1)\)-current \(T\). A well-known result of Siu [40] asserts that the Lelong super-level sets \(E_{c}(\varphi)\) are analytic subsets of \(X\). We refer the reader to [11, Remark 3.2] for an alternative proof.
The following result of Demailly on the equisingular approximation of a quasi-psh function by quasi-psh functions with analytic singularities is crucial.
**Theorem 2.10** (Demailly's equisingular approximation).: _Let \(\varphi\) be a \(\theta\)-psh function on \(X\). There exists a decreasing sequence of quasi-psh functions \((\varphi_{m})\) such that_
1. \((\varphi_{m})\) _converges pointwise and in_ \(L^{1}(X)\) _to_ \(\varphi\) _as_ \(m\to+\infty\)_,_
2. \(\varphi_{m}\) _has the same singularities as_ \(1/2m\) _times a logarithm of a sum of squares of holomorphic functions,_
3. \(dd^{c}\varphi_{m}\geq-\theta-\varepsilon_{m}\omega_{X}\)_, where_ \(\varepsilon_{m}>0\) _decreases to 0 as_ \(m\to+\infty\)_,_
4. \(\int_{X}e^{2m(\varphi_{m}-\varphi)}dV<+\infty\)_,_
5. \(\varphi_{m}\) _is smooth outside the analytic subset_ \(E_{1/m}(\varphi)\)_._
Proof.: We briefly sketch the idea, which we believe is known to experts, for the convenience of the reader. We follow the proof of [11], applied with the current \(T=dd^{c}\varphi\) and the smooth real (1,1)-form \(\gamma=-\theta\). We also borrow notation from there.
For \(\delta>0\) small, let us cover \(X\) by \(N=N(\delta)\) geodesic balls \(B_{2r}(a_{j})\) with respect to \(\omega_{X}\) such that \(X=\cup_{j}B_{r}(a_{j})\) and in terms of coordinates \(z^{j}=(z^{j}_{1},\dots,z^{j}_{n})\),
\[\sum_{l=1}^{n}\lambda^{j}_{l}idz^{j}_{l}\wedge d\bar{z}^{j}_{l}\leq\gamma|_{B _{2r}(a_{j})}\leq\sum_{l=1}^{n}(\lambda^{j}_{l}+\delta)idz^{j}_{l}\wedge d \bar{z}^{j}_{l}\]
where we have diagonalized \(\gamma(a_{j})\) at the center \(a_{j}\). Here \(N\) and \(r\) are taken to be uniform. Set \(\varphi^{j}:=\varphi|_{B_{2r}(a_{j})}-\sum_{l=1}^{n}\lambda^{j}_{l}|z^{l}_{j}| ^{2}\). On each \(B_{2r}(a_{j})\), we define
\[\varphi_{j,\delta,m}:=\frac{1}{2m}\log\sum_{k\in\mathbf{N}}|f_{j,m,k}|^{2},\]
where \((f_{j,m,k})_{k\in\mathbf{N}}\) is an orthonormal basis of the Hilbert space \(\mathcal{H}_{B_{2r}(a_{j})}\left(m\varphi^{j}\right)\) of holomorphic functions on \(B_{2r}(a_{j})\) with finite \(L^{2}\) norm \(\|u\|^{2}=\int_{B_{2r}(a_{j})}|u|^{2}e^{-2m\varphi^{j}}dV(z^{j})\). Note that since \(dd^{c}\varphi\geq\gamma\) it follows that \(\varphi-\sum_{l=1}^{n}\lambda^{j}_{l}|z^{j}_{l}|^{2}\) is psh on \(B_{2r}(a_{j})\). The
Bergman kernel process applied on each ball \(B_{2r}(a_{j})\) has provided approximations \(\varphi_{j,\delta,m}\) of \(\varphi^{j}=\varphi|_{B_{2r}(a_{j})}-\sum_{l=1}^{n}\lambda_{l}^{j}|z_{l}^{j}|^{2}\); it thus remains to glue these functions into a function \(\varphi_{\delta,m}\) globally defined on \(X\). For this, we set
\[\varphi_{\delta,m}(x)=\frac{1}{2m}\log\left(\sum_{j}\theta_{j}(x)^{2}\exp \left(2m\left(\varphi_{j,\delta,m}+\sum_{l}(\lambda_{l}^{j}-\delta)|z_{l}^{j}|^ {2}\right)\right)\right)\]
where \((\theta_{j})_{1\leq j\leq N}\) is a partition of unity subordinate to the \(B_{r}(a_{j})\)'s. Now we take \(\delta=\delta_{m}\searrow 0\) slowly and set \(\varphi_{m}=\varphi_{\delta_{m},m}\); the same computations as in [11, p. 16] then ensure that
\[dd^{c}\varphi_{m}\geq\gamma-\varepsilon(\delta_{m})\omega_{X}\]
for \(m\geq m_{0}\) sufficiently large and \(\varepsilon_{m}=\varepsilon(\delta_{m})\searrow 0\) as \(m\to+\infty\). By the construction the properties (1), (2), (3) and (5) are satisfied.
Property (4) is crucial for later use, so we provide its proof. The argument originates from [15, Theorem 2.3, Step 2], using local uniform convergence and the strong Noetherian property. By the properties of the functions \(\varphi_{m}\), it suffices to show that on each ball \(B_{j}=B_{r}(a_{j})\),
\[\int_{B_{j}}e^{2m\varphi_{m}-2m\varphi}dV=\int_{B_{j}}\left(\sum_{k\in\mathbb{ N}}|f_{j,m,k}|^{2}\right)e^{-2m\varphi}dV(z^{j})<+\infty.\]
We let \(\mathcal{F}_{1}\subset\mathcal{F}_{2}\subset\ldots\subset\mathcal{F}_{k}\subset\ldots\subset\mathcal{O}(B_{2r}(a_{j})\times B_{2r}(a_{j}))\) denote the sequence of coherent ideal sheaves generated by the holomorphic functions \(\left(f_{j,m,l}(z)\overline{f_{j,m,l}(\varpi)}\right)_{l\leq k}\) on \(B_{2r}(a_{j})\times B_{2r}(a_{j})\). By the strong Noetherian property (see e.g. [13, C. II, 3.22]) the sequence \((\mathcal{F}_{k})\) is stationary on a compact subset \(B_{j}\times B_{j}\subset\subset B_{2r}(a_{j})\times B_{2r}(a_{j})\) at some index \(k_{0}\) large enough. Using the Cauchy-Schwarz inequality we have that the sum of the series \(U(z,w)=\sum_{k\in\mathbb{N}}f_{j,m,k}(z)\overline{f_{j,m,k}(\varpi)}\) is bounded from above by
\[\left(\sum_{k\in\mathbb{N}}|f_{j,m,k}(z)|^{2}\sum_{k\in\mathbb{N}}|f_{j,m,k}( \varpi)|^{2}\right)^{\frac{1}{2}}\]
hence uniformly convergent on every compact subset of \(B_{2r}(a_{j})\times B_{2r}(a_{j})\). Since the space of sections of a coherent ideal sheaf is closed under the topology of uniform convergence on compact subsets, the Noetherian property guarantees that \(U(z,w)\in\mathcal{F}_{k_{0}}(B_{j}\times B_{j})\). Hence, by restricting to the conjugate diagonal \(w=z\), we obtain
\[\sum_{k\in\mathbb{N}}|f_{j,m,k}(z)|^{2}\leq C_{0}\left(\sum_{k\leq k_{0}}|f_{j,m,k}(z)|^{2}\right)\]
on \(B_{j}\). Since all terms \(f_{j,m,k}\) have \(L^{2}\)-norm equal to \(1\) with respect to the weight \(e^{-2m\varphi}\) this completes the proof.
Using this, one obtains the following lemma, which is slightly more general than the one in [16].
**Lemma 2.11**.: _Let \(\theta\) be a big form. Assume \(\varphi\in\mathrm{PSH}(X,\theta)\). Then for each \(\varepsilon>0\) there exist \(c(\varepsilon)>0\) and \(\psi_{\varepsilon}\in\mathrm{PSH}(X,\theta)\cap\mathcal{C}^{\infty}\left(X \setminus(\{\rho=-\infty\}\cup E_{c(\varepsilon)}(\varphi))\right)\) such that_
\[\int_{X}e^{\frac{2}{\varepsilon}(\psi_{\varepsilon}-\varphi)}dV_{X}<+\infty. \tag{2.2}\]
Proof.: The proof is quite close to that of [16, Lemma 2.7]. Recall that the bigness of \(\theta\) implies that there exists a \(\theta\)-psh function \(\rho\) with analytic singularities and \(\sup_{X}\rho=0\) such that
\[\theta+dd^{c}\rho\geq 3\delta_{0}\omega_{X}\quad\text{for a fixed constant }\delta_{0}>0.\]
Let \(c(\varphi)\) be the integrability index of \(\varphi\). We can assume that \(c(\varphi)<+\infty\), otherwise we are done. By Theorem 2.10, we can find a Demailly equisingular approximant \((\varphi_{m})\) of \(\varphi\). We have that \(\varphi_{m}\) is smooth in the complement of the analytic subset \(E_{1/m}(\varphi)\) and
\[\theta+dd^{c}\varphi_{m}\geq-\varepsilon_{m}\delta_{0}\omega_{X}\]
for \(\varepsilon_{m}>0\) decreasing to zero as \(m\) goes to \(+\infty\). We notice here that the errors \(\varepsilon_{m}>0\) appear in the gluing process; see [11] for more details. We choose \(m=m(\varepsilon)\) to be the smallest positive integer such that
\[m>\frac{2}{\varepsilon(1+\varepsilon_{m})},\quad\frac{2\varepsilon_{m}}{ \varepsilon(1+\varepsilon_{m})}<c(\varphi).\]
We now set
\[\psi_{\varepsilon}:=\frac{\varphi_{m}}{1+\varepsilon_{m}}+\frac{\varepsilon_ {m}}{1+\varepsilon_{m}}\rho. \tag{2.3}\]
Thus, we have
\[\theta+dd^{c}\psi_{\varepsilon}\geq\frac{\varepsilon_{m}}{1+\varepsilon_{m}}2 \delta_{0}\omega_{X}:=2\kappa\omega_{X}.\]
Holder's inequality ensures that (2.2) holds, noticing that \(\rho\leq 0\). We easily see that \(\psi_{\varepsilon}\) is smooth in the complement of \(\{\rho=-\infty\}\cup E_{c(\varepsilon)}(\varphi)\) with \(c(\varepsilon)=m(\varepsilon)^{-1}\).
## 3. A priori estimates
### Notation
We use some notation as in [16, Sect. 3.1]. Until further notice, \(X\) denotes a compact complex manifold of dimension \(n\) endowed with a reference Hermitian form \(\omega_{X}\). Following the strategy in the introductory part, we assume in this section that \(\theta_{t}=\theta+t\chi\) with \(t\in[0,T_{\max})\) are Hermitian forms and \(\varphi_{0}\) is a smooth strictly \(\theta\)-psh function. We denote by \(\mu=fdV_{X}\) a positive measure with density \(\|f\|_{L^{p}}\leq C\) uniformly, for some \(p>1\). For more higher estimates, we assume moreover that
\[f=e^{\psi^{+}-\psi^{-}}\]
where \(\psi^{\pm}\) are smooth quasi-psh functions. Recall that \(\rho\) is a \(\theta\)-psh function with analytic singularities such that \(\theta+dd^{c}\rho\) dominates a Hermitian form. We may assume that \(\sup_{X}\rho=0\).
We consider \(\varphi_{t}\) a smooth solution of the following parabolic complex Monge-Ampere equation
\[\frac{\partial\varphi_{t}}{\partial t}=\log\left[\frac{(\theta_{t}+dd^{c} \varphi_{t})^{n}}{\mu}\right],\ \varphi|_{t=0}=\varphi_{0} \tag{3.1}\]
on \([0,T_{\max})\); see [52]. We should keep in mind that \(\varphi_{t}\) plays the role of its approximants \(\varphi_{t,j}\) when establishing the a priori estimates. For brevity, we suppress the index \(j\).
We fix \(T\) and \(S\) such that
\[\frac{p^{*}}{2c(\varphi_{0})}<T<S<T_{\max}.\]
where \(p^{*}\) is the conjugate exponent of \(p\), i.e. \(\frac{1}{p}+\frac{1}{p^{*}}=1\). Since we are interested in the behavior of the flow (3.1) near zero, we can assume that
\[\theta_{S}\geq(1-a)\theta,\quad\text{for }a\in[0,1/2).\]
This is quite natural in several geometric contexts, for instance when the \(\theta_{t}\) are pull-backs of Hermitian forms. Thus for each \(t\in[0,S]\) we have
\[\theta_{t}=\frac{t\theta_{S}}{S}+\frac{S-t}{S}\theta\geq\left(1-\frac{at}{S} \right)\theta.\]
During the proof, we use the notation \(\omega_{t}:=\theta_{t}+dd^{c}\varphi_{t}\) for the smooth path of Hermitian forms and denote \(\Delta_{t}=\operatorname{tr}_{\omega_{t}}dd^{c}\) the corresponding time-dependent Laplacian operator on functions.
We fix \(\varepsilon_{0}>0\) small enough, and denote by \(\psi_{0}:=\psi_{\varepsilon_{0}}\) the function constructed in Lemma 2.11. By construction, \(\psi_{0}\) is smooth outside the analytic subset \(\{\rho=-\infty\}\cup E_{c(\varepsilon_{0})}(\varphi_{0})\) and satisfies
\[\theta+dd^{c}\psi_{0}\geq 2\kappa\omega_{X}. \tag{3.2}\]
We let \(E_{1}\), \(E_{2}\) denote the following quantities
\[E_{1}:=\int_{X}e^{\frac{2(\psi_{0}-\varphi_{0})}{\varepsilon_{0}}}dV_{X}<+\infty,\]
\[E_{2}:=\int_{X}e^{-\frac{p^{*}\varphi_{0}}{T}}dV_{X}<+\infty.\]
Observe that \(E_{1}\) is finite thanks to Lemma 2.11, while \(E_{2}\) is finite since \(p^{*}/(2c(\varphi_{0}))<T\) and \(\psi_{0}\) is less singular than \(\varphi_{0}\). One should emphasize that \(\varphi_{0}\) in this a priori estimate section plays the role of its approximating sequence \(\varphi_{0,j}\) (smooth strictly \(\theta\)-psh functions decreasing to \(\varphi_{0}\)). The corresponding quantities \(E_{1}^{j}\) are uniformly bounded from above in \(j\), hence we can pass to the limit.
In what follows we use \(C\) for a positive constant whose value may change from line to line but be uniformly controlled.
### Uniform estimate
We first look for an a priori upper bound for \(\varphi_{t}\). We recall that
\[\frac{1}{2}\theta\leq\theta_{t}\leq A\omega_{X},\ \forall\ t\in[0,T],\]
for \(A>0\) sufficiently large. It follows from [25, Theorem 3.4] (see also [31]) that there exists a constant \(c\) and a bounded \(A\omega_{X}\)-psh function \(\phi\) normalized by \(\inf_{X}\phi=0\) such that
\[\left(A\omega_{X}+dd^{c}\phi\right)^{n}=e^{c}fdV_{X}.\]
**Proposition 3.1**.: _For any \((t,x)\in[0,T]\times X\), there exists a uniform constant \(C>0\) such that_
\[\varphi_{t}(x)\leq C.\]
Proof.: For any \((t,x)\in[0,T]\times X\), we set \(v(t,x)=\phi(x)+ct+\sup_{X}\varphi_{0}\). Then we can check that
\[\frac{\partial v}{\partial t}=\log\left[\frac{(A\omega_{X}+dd^{c}v_{t})^{n}}{ \mu}\right],\ \text{while}\ \frac{\partial\varphi}{\partial t}\leq\log\left[\frac{(A\omega_{X}+dd^{c} \varphi_{t})^{n}}{\mu}\right],\]
and \(v_{0}\geq\varphi_{0}\). Hence, it follows from the classical maximum principle that \(v(t,x)\geq\varphi(t,x)\) for \((t,x)\in[0,T]\times X\). Therefore, one gets an upper bound for \(\varphi(t,x)\) by
\[\sup_{X}|\phi|+\max(c,0)T+\sup_{X}\varphi_{0}.\]
We fix two positive constants \(\alpha,\beta\) such that
\[\frac{p^{*}}{2c(\varphi_{0})}<\frac{1}{\alpha}<\frac{1}{\alpha-\beta}<T_{\max},\]
hence
\[(\alpha-\beta)\theta+\chi\geq 0.\]
We observe that the density \(e^{-\alpha\varphi_{0}}f\) belongs to \(L^{q}\) for \(q>1\). Indeed, for any \(\delta>0\) we choose \(q>1\) so that \(\frac{1}{q}=\frac{1}{p}+\frac{1}{p^{*}+\delta}\). Holder's inequality and Skoda's theorem yield
\[\int_{X}e^{-\alpha q\varphi_{0}}f^{q}dV\leq\|f\|_{L^{p}}^{q}\left(\int_{X}e^{-\alpha(p^{*}+\delta)\varphi_{0}}dV\right)^{q/(p^{*}+\delta)}<+\infty.\]
It thus follows from [25] that there exists a bounded \(\theta\)-psh function \(u\) such that
\[\beta^{n}(\theta+dd^{c}u)^{n}=e^{\beta u-\alpha\varphi_{0}}fdV.\]
**Proposition 3.2**.: _For \(t\in(0,\alpha^{-1})\),_
\[(1-\alpha t)\varphi_{0}+\beta tu+n(t\log t-t)\leq\varphi_{t}.\]
_In particular, there exists a uniform constant \(C>0\) such that_
\[\varphi_{0}-C(t-t\log t)\leq\varphi_{t},\quad\forall\,t\in(0,\alpha^{-1}).\]
Proof.: The proof is identical to that of [30, Lemma 2.9]. Set \(u_{t}:=(1-\alpha t)\varphi_{0}+\beta tu+n(t\log t-t).\) We observe that
\[\theta_{t}+dd^{c}u_{t}=(1-\alpha t)\omega_{0}+\beta t\theta_{u}+t[(\alpha- \beta)\theta+\chi]\geq 0\]
by the choice of \(\alpha\), \(\beta\). Moreover, we can check that
\[(\theta_{t}+dd^{c}u_{t})^{n}\geq\beta^{n}t^{n}\theta_{u}^{n}=e^{\dot{u}_{t}}\mu\]
so \(u_{t}\) is a subsolution of (3.1). Together with \(u_{0}=\varphi_{0}\) the conclusion thus follows from the maximum principle.
Before finding a lower bound for the solution \(\varphi_{t}\), we prove the following upper bound for \(\dot{\varphi}_{t}:=\frac{\partial\varphi}{\partial t}\).
**Proposition 3.3**.: _For all \((t,x)\in(0,T]\times X\),_
\[\dot{\varphi}_{t}(x)\leq\frac{\varphi_{t}(x)-\varphi_{0}(x)}{t}+n. \tag{3.3}\]
Proof.: We argue the same as in [30] (also in [27]). We consider the function
\[H(t,x):=t\dot{\varphi}_{t}(x)-(\varphi_{t}-\varphi_{0})(x)-nt.\]
Since \(\dot{\varphi}_{t}=\log(\omega_{t}^{n}/\mu)\), differentiating in \(t\) gives \(\partial_{t}\dot{\varphi}_{t}=\Delta_{t}\dot{\varphi}_{t}+\operatorname{tr}_{\omega_{t}}(\chi)\), hence
\[\frac{\partial H}{\partial t}=t\Delta_{t}\dot{\varphi}_{t}+t\operatorname{tr}_{\omega_{t}}\chi-n.\]
On the other hand, we compute
\[\Delta_{t}H=t\Delta_{t}\dot{\varphi}_{t}-\Delta_{t}(\varphi_{t}-\varphi_{0})=t\Delta_{t}\dot{\varphi}_{t}-[n-t\operatorname{tr}_{\omega_{t}}(\chi)-\operatorname{tr}_{\omega_{t}}(\theta+dd^{c}\varphi_{0})].\]
Therefore
\[\left(\frac{\partial}{\partial t}-\Delta_{t}\right)H=-\operatorname{tr}_{ \omega_{t}}(\theta+dd^{c}\varphi_{0})\leq 0.\]
By the maximum principle, \(H\) achieves its maximum along \((t=0)\). Since \(H(0,x)\equiv 0\) hence the desired inequality follows.
We use the same arguments as in [16] to establish the following uniform estimate for the complex parabolic Monge-Ampere equation.
**Theorem 3.4**.: _Fix \(\varepsilon>p^{*}\varepsilon_{0}\). For \(t\in[\varepsilon,T]\) we have the following estimate_
\[\varphi_{t}\geq\left(1-\frac{bt}{T}\right)\psi_{0}-C,\]
_for some uniform constant \(C>0\)._
Proof.: Fixing \(t\in[\varepsilon,T]\), it follows from Proposition 3.3 that
\[(\theta_{t}+dd^{c}\varphi_{t})^{n}=e^{\dot{\varphi}_{t}}\mu\leq e^{n+\frac{\varphi_{t}-\varphi_{0}}{t}}fdV.\]
We set
\[\psi_{t}:=\left(1-\frac{bt}{T}\right)\psi_{0},\]
for \(b\in(a,1/2)\) close to \(a\). We recall that
\[\theta_{t}\geq\left(1-\frac{at}{S}\right)\theta,\]
it then follows that \(\psi_{t}\) is \(\delta\theta_{t}\)-psh with \(\delta\in(0,1)\) only depending on \(\varepsilon_{0}\), \(a\), \(b\), \(T\), \(S\) (more precisely, \(\delta=\frac{TS-b\varepsilon_{0}}{TS-aT\varepsilon_{0}}\)). Using the same arguments as in the proof of [16, Theorem 3.2], we can bound the following quantity
\[\int_{X}e^{\frac{q(\theta_{t}-\varphi_{0})}{t}}f^{q}dV<+\infty, \tag{3.4}\]
for some \(q>1\), in terms of \(\|f\|_{L^{p}}\), \(E_{1}\) and \(E_{2}\). Indeed, fixing \(\gamma>0\) small enough, we choose \(q>1\) so that
\[\frac{1}{q}=\frac{1}{p}+\frac{1}{2p^{*}+\gamma}+\frac{1}{2p^{*}+\gamma}.\]
Holder's inequality thus ensures that
\[\int_{X}e^{\frac{q(\psi_{t}-\varphi_{0})}{t}}f^{q}dV\leq\|f\|_{L^{p}}^{q}\left(\int_{X}e^{\frac{(2p^{*}+\gamma)(\psi_{0}-\varphi_{0})}{t}}dV\right)^{\frac{q}{2p^{*}+\gamma}}\left(\int_{X}e^{-\frac{(2p^{*}+\gamma)\psi_{0}}{t}}dV\right)^{\frac{q}{2p^{*}+\gamma}}\]
The second term on the right-hand side is finite due to the construction of \(\psi_{0}\) in Lemma 2.11. Also, since \(\psi_{0}\) is less singular than \(\varphi_{0}\), the third term is finite.
From (3.4), we can apply Lemma 2.9 with \(A=1/t\) and \(g=\varphi_{0}/t-n\) to get the desired estimate. Note that our \(\mathcal{C}^{0}\)-estimate only depends on \(n\), \(\theta\), \(q\), our fixed parameters \(\varepsilon_{0}\), \(\varepsilon\), \(T\), \(S\), and an upper bound for \(E_{1}\) and \(E_{2}\).
**Remark 3.5**.: When \(\varphi_{0}\) is bounded or, more generally, has zero Lelong numbers, it was shown in [55] (generalizing the result of [30] in the Kahler context) that the estimate (3.3) ensures a lower bound for \(\varphi_{t}\), using Kolodziej-Nguyen's theorem [31]. Unfortunately, this method cannot be applied in the more general case when \(\varphi_{0}\) is more singular, for instance when it has positive Lelong numbers. In order to analyze the singularities of the initial potential \(\varphi_{0}\), Guedj-Lu's approach [25] is thus helpful.
### Laplacian estimate
We recall the following classical inequality.
**Lemma 3.6**.: _Let \(\alpha\), \(\beta\) be two positive (1,1)-forms. Then_
\[n\left(\frac{\alpha^{n}}{\beta^{n}}\right)^{\frac{1}{n}}\leq tr_{\beta}( \alpha)\leq n\left(\frac{\alpha^{n}}{\beta^{n}}\right)(tr_{\alpha}(\beta))^{ n-1}.\]
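For the reader's convenience, here is a sketch of the standard argument (ours, not taken from the text): at a fixed point choose coordinates in which \(\beta\) is the identity and \(\alpha\) is diagonal with eigenvalues \(\lambda_{1},\dots,\lambda_{n}>0\), so that \(\operatorname{tr}_{\beta}(\alpha)=\sum_{i}\lambda_{i}\), \(\operatorname{tr}_{\alpha}(\beta)=\sum_{i}\lambda_{i}^{-1}\) and \(\alpha^{n}/\beta^{n}=\prod_{i}\lambda_{i}\). The left-hand inequality is then the arithmetic-geometric mean inequality, while for the right-hand one
\[\lambda_{i}=\Big(\prod_{j}\lambda_{j}\Big)\prod_{k\neq i}\lambda_{k}^{-1}\leq\Big(\prod_{j}\lambda_{j}\Big)\Big(\sum_{k}\lambda_{k}^{-1}\Big)^{n-1},\]
and summing over \(i\) gives the bound.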
We define
\[\Psi_{t}:=\left(1-\frac{bt}{S}\right)\psi_{0},\]
where \(\psi_{0}\) is defined in Lemma 2.11 with \(\varepsilon_{0}>0\) fixed.
In order to establish the \(C^{2}\) estimate, we need a lower bound for \(\dot{\varphi}_{t}=\frac{\partial\varphi}{\partial t}\).
**Proposition 3.7**.: _For \((t,x)\in(\varepsilon,T]\times X\),_
\[\dot{\varphi}_{t}(x)\geq n\log(t-\varepsilon)+A(\Psi_{t}-\varphi_{t})-C\]
_where \(A,C>0\) are positive constants only depending on \(\varepsilon\), \(T\), \(\|f\|_{L^{p}}\) and a upper bound of \(E_{1}\) and \(E_{2}\)._
Proof.: The proof is almost identical to that of [16, Proposition 3.5]. The only difference is that we use Theorem 3.4 instead of the corresponding one in [16]. We include the proof for the convenience of readers.
By [25, Theorem 3.4], there exist a bounded \(\theta\)-psh function \(\phi_{1}\) and a constant \(c_{1}\) such that
\[(\theta+dd^{c}\phi_{1})^{n}=e^{c_{1}}d\mu,\quad\sup_{X}\phi_{1}=0.\]
We set
\[G(t,x):=\dot{\varphi}_{t}(x)+A(\varphi_{t}-\Psi_{t})-\phi_{1}-n\log(t-\varepsilon)\]
for a constant \(A>0\) to be determined hereafter. We see that \(G\) attains its minimum at a point \((t_{0},x_{0})\in(\varepsilon,T]\times(X\backslash\{\psi_{0}=-\infty\})\). In the sequel, all our computations will take place at this point. We compute
\[\left(\frac{\partial}{\partial t}-\Delta_{t}\right)G=A\dot{\varphi}_{t}-\frac{n}{t-\varepsilon}+A\frac{b\psi_{0}}{S}-nA+A\operatorname{tr}_{\omega_{t}}(\theta_{t}+dd^{c}\Psi_{t})+\operatorname{tr}_{\omega_{t}}(\chi+dd^{c}\phi_{1}).\]
We observe that
\[\theta_{t}+dd^{c}\Psi_{t} \geq\frac{t(b-a)}{S}\theta+\left(1-\frac{bt}{S}\right)(\theta+dd^{c}\psi_{0})\] \[\geq\frac{\varepsilon(b-a)}{S}\theta+\frac{1}{2}2\kappa\omega_{X}.\]
We now choose \(A>0\) so big that
\[A(\theta_{t}+dd^{c}\Psi_{t})+\chi\geq\theta.\]
Therefore
\[\left(\frac{\partial}{\partial t}-\Delta_{t}\right)G\geq A\dot{\varphi}_{t}- \frac{n}{t-\varepsilon}+A\frac{b\psi_{0}}{S}-nA+\operatorname{tr}_{\omega_{ t}}(\theta+dd^{c}\phi_{1}). \tag{3.5}\]
On the other hand, Lemma 3.6 ensures that
\[\operatorname{tr}_{\omega_{t}}(\theta+dd^{c}\phi_{1})\geq n\left(\frac{(\theta+dd^{c}\phi_{1})^{n}}{\omega_{t}^{n}}\right)^{1/n}=ne^{\frac{c_{1}-\dot{\varphi}_{t}}{n}}.\]
Using the elementary inequality \(\gamma x-\log x\geq-C_{\gamma}\) for each small constant \(\gamma>0\), \(x>0\), we have that
\[A\dot{\varphi}_{t}+ne^{\frac{c_{1}-\dot{\varphi}_{t}}{n}}\geq e^{\frac{-\dot{\varphi}_{t}}{n}-C_{1}}-C_{2}.\]
Plugging this into (3.5) it follows from the minimum principle that at \((t_{0},x_{0})\),
\[\dot{\varphi}_{t}\geq-n\log\left(C_{2}+\frac{n}{t-\varepsilon}-\frac{Ab\psi_ {0}}{S}+nA\right)-nC_{1},\]
hence
\[G(t_{0},x_{0})\geq-C_{3}-n\log\left(C_{2}(t_{0}-\varepsilon)+n-\frac{Ab(t_{0} -\varepsilon)\psi_{0}}{S}\right)-\frac{Abt_{0}(S-T)}{ST}\psi_{0}\]
where we have used Theorem 3.4. We thus obtain a uniform lower bound for \(G(t_{0},x_{0})\) again by \(\gamma x-\log x\geq-C_{\gamma}\) for \(x>0\). The desired lower bound follows.
We are now in a position to establish the \(\mathcal{C}^{2}\)-estimate. We follow the computations of [52, Lemma 4.1] (see also [55, Lemma 6.4]), which use a technical trick due to Phong and Sturm [38]. Recall that the measure \(\mu\) is of the form
\[\mu=e^{\psi^{+}-\psi^{-}}dV\]
where \(\psi^{\pm}\) are smooth \(K\omega_{X}\)-psh functions on \(X\) for a uniform constant \(K>0\). For simplicity, we assume \(K=1\) and normalize \(\sup_{X}\psi^{\pm}=0\).
**Theorem 3.8**.: _Fix \(\varepsilon>p^{*}\varepsilon_{0}\). For \((t,x)\in[\varepsilon,T]\times X\) we have the following bound_
\[(t-\varepsilon)\log tr_{\omega_{X}}(\omega_{t})\leq-B\psi_{0}-C\psi^{-}+C\]
_where \(B,C\) are positive constants only depending on \(\varepsilon\), \(T\), \(\|e^{-\psi^{-}}\|_{L^{p}}\) and an upper bound of \(E_{1}\), \(E_{2}\)._
Proof.: We follow the computations of [21, 55] (which are due to the trick of Phong and Sturm [38]) with a twist in order to deal with unbounded functions. The constant \(C\) denotes various uniform constants which may be different.
Consider
\[H:=(t-\varepsilon)\log tr_{\omega_{X}}(\omega_{t})-\gamma(u),\ (t,x)\in[ \varepsilon,T]\times X,\]
where \(\gamma:\mathbb{R}\to\mathbb{R}\) is a smooth concave increasing function such that \(\lim_{t\to+\infty}\gamma(t)=+\infty\), and
\[u(t,x):=\varphi_{t}(x)-\Psi_{t}(x)-\kappa\psi^{-}+1\geq 1,\]
as follows from Theorem 3.4 and \(\psi_{0},\psi^{-}\leq 0\). We are going to show that \(H\) is uniformly bounded from above for an appropriate choice of \(\gamma\).
We let \(g\) denote the Riemann metric associated to \(\omega_{X}\) and \(\tilde{g}\) the one associated to \(\omega_{t}:=\theta_{t}+dd^{c}\varphi_{t}\). Since \(H\) goes to \(-\infty\) on the boundary of \(X_{0}:=\{x\in X:\psi_{0}(x)>-\infty\}\), \(H\) attains its maximum at some point \((t_{0},x_{0})\in[\varepsilon,T]\times X_{0}\). If \(t_{0}=\varepsilon\) we are done. Assume that \(t_{0}>\varepsilon\). At this maximum point we use the following local coordinate systems due to Guan and Li [23, Lemma 2.1, (2.19)]:
\[g_{i\tilde{j}}=\delta_{ij},\ \frac{\partial g_{i\tilde{i}}}{\partial z_{j}}=0 \text{ and }\tilde{g}_{i\tilde{j}}\text{ is diagonal.}\]
Following the computations in [55, Eq. (3.20)], we have
\[\Delta_{t}\operatorname{tr}_{\omega_{X}}(\omega_{t})\geq\sum_{i,j}\tilde{g}^{ i\tilde{i}}\tilde{g}^{j\tilde{l}}\tilde{g}_{i\tilde{j}}\tilde{g}_{i\tilde{j}}- \operatorname{tr}_{\omega_{X}}\operatorname{Ric}(\omega_{t})-C_{1}\operatorname {tr}_{\omega_{X}}(\omega_{t})\operatorname{tr}_{\omega_{t}}(\omega_{X}). \tag{3.6}\]
From standard arguments as in [25, Eq. (4.5), p. 29], we obtain
\[\frac{|\partial\operatorname{tr}_{\omega_{X}}(\omega_{t})|_{ \tilde{g}_{t}}^{2}}{(\operatorname{tr}_{\omega_{X}}(\omega_{t}))^{2}} \leq\frac{1}{\operatorname{tr}_{\omega_{X}}(\omega_{t})}\left(\sum_{i,j} \tilde{g}^{i\tilde{l}}\tilde{g}^{j\tilde{l}}\tilde{g}_{i\tilde{l}}\tilde{g}_{ i\tilde{j}}\tilde{g}_{i\tilde{l}\tilde{j}}\right)+C\frac{\operatorname{tr}_{ \omega_{t}}(\omega_{X})}{(\operatorname{tr}_{\omega_{X}}(\omega_{t}))^{2}}\] \[\quad+\frac{2}{(\operatorname{tr}_{\omega_{X}}(\omega_{t}))^{2}} \operatorname{Re}\sum_{i,j,k}\tilde{g}^{i\tilde{l}}T_{i\tilde{j}\tilde{l}} \tilde{g}_{k\tilde{k}\tilde{k}}, \tag{3.7}\]
where \(T_{i\tilde{j}\tilde{l}}:=\tilde{g}_{j\tilde{l}\tilde{l}}-\tilde{g}_{i\tilde{l} \tilde{j}}\) is the torsion term corresponding to \(\omega_{t}\) which is under control: \(|T_{i\tilde{j}\tilde{l}}|\leq C\). Now at the point \((t_{0},x_{0})\), we have \(\partial_{\tilde{l}}H=0\), hence
\[(t-\varepsilon)\sum_{k}\tilde{g}_{k\tilde{k}\tilde{l}}=\operatorname{tr}_{ \omega_{X}}(\omega_{t})\gamma^{\prime}(u)u_{\tilde{l}}.\]
Cauchy-Schwarz's inequality yields
\[\left|\frac{2}{(\operatorname{tr}_{\omega_{X}}(\omega_{t}))^{2}}\text{Re}\sum_{i,j,k}\tilde{g}^{i\vec{i}}T_{i\vec{j}\vec{j}}\delta_{k\vec{k}\vec{k}}\right|\leq C \frac{\gamma^{\prime}(u)(t_{0}-\varepsilon)}{-\gamma^{\prime\prime}(u)}\frac{ \operatorname{tr}_{\omega_{t}}(\omega_{X})}{(\operatorname{tr}_{\omega_{X}}( \omega_{t}))^{2}}+\frac{-\gamma^{\prime\prime}(u)}{t_{0}-\varepsilon}|\partial u |^{2}_{\omega_{t}},\]
hence
\[\left|\frac{2}{(\operatorname{tr}_{\omega_{X}}(\omega_{t}))^{2}}\text{Re}\sum_{ i,j,k}\tilde{g}^{i\vec{i}}T_{i\vec{j}\vec{j}}\delta_{k\vec{k}\vec{k}}\right|\leq C \left(\frac{\gamma^{\prime}(u)T}{-\gamma^{\prime\prime}(u)}+1\right)\frac{ \operatorname{tr}_{\omega_{t}}(\omega_{X})}{(\operatorname{tr}_{\omega_{X}}( \omega_{t}))^{2}}+\frac{-\gamma^{\prime\prime}(u)}{t_{0}-\varepsilon}| \partial u|^{2}_{\omega_{t}},\]
using that \(|g_{k\vec{k}\vec{k}}-g_{k\vec{k}\vec{k}}|\leq C\). From this, the inequality (3.7) becomes
\[\begin{split}\frac{|\partial\operatorname{tr}_{\omega_{X}}( \omega_{t})|^{2}_{\omega_{t}}}{(\operatorname{tr}_{\omega_{X}}(\omega_{t}))^{2 }}&\leq\frac{1}{\operatorname{tr}_{\omega_{X}}(\omega_{t})}\left( \sum_{i,j}\tilde{g}^{i\vec{i}}\tilde{g}^{j\vec{j}}\tilde{g}_{j\vec{j}\vec{j}} \tilde{g}_{j\vec{j}}\right)\\ &\quad+C\left(\frac{\gamma^{\prime}(u)T}{-\gamma^{\prime\prime}( u)}+2\right)\frac{\operatorname{tr}_{\omega_{t}}(\omega_{X})}{(\operatorname{tr}_{ \omega_{X}}(\omega_{t}))^{2}}+\frac{-\gamma^{\prime\prime}(u)}{t_{0}- \varepsilon}|\partial u|^{2}_{\omega_{t}}.\end{split} \tag{3.8}\]
Set \(\alpha:=\operatorname{tr}_{\omega_{X}}(\omega_{t})\). We compute
\[\dot{\alpha} =\operatorname{tr}_{\omega_{X}}(\chi+dd^{c}\phi)=\operatorname{ tr}_{\omega_{X}}(\chi)-\operatorname{tr}_{\omega_{X}}\text{Ric}(\omega_{t})- \operatorname{tr}_{\omega_{X}}dd^{c}(\psi^{+}-\psi^{-})+\operatorname{tr}_{ \omega_{X}}(dV)\] \[\leq\operatorname{tr}_{\omega_{X}}(C_{1}\omega_{X}+dd^{c}\psi^{ -})-\operatorname{tr}_{\omega_{X}}\text{Ric}(\omega_{t})\]
where we have used the fact that \(\operatorname{tr}_{\omega}(\chi)\) is bounded from above together with the trivial inequality \(n\leq\operatorname{tr}_{\omega_{X}}(\omega_{t})\operatorname{tr}_{\omega_{t}}(\omega_{X})\). Combining this together with (3.6) and (3.8), we infer that
\[\begin{split}\frac{\dot{\alpha}}{\alpha}-\Delta_{t}\log\alpha& =\frac{\dot{\alpha}}{\alpha}-\frac{\Delta_{\omega_{t}}\alpha}{ \alpha}+\frac{|\partial\alpha|^{2}_{\omega_{t}}}{\alpha^{2}}\\ &\leq\frac{\operatorname{tr}_{\omega_{t}}(C_{1}\omega_{X}+dd^{c }\psi^{-})}{\alpha}+C\left(\frac{\gamma^{\prime}(u)T}{-\gamma^{\prime\prime}(u )}+2\right)\frac{\operatorname{tr}_{\omega_{t}}(\omega_{X})}{\alpha^{2}}+ \frac{-\gamma^{\prime\prime}(u)}{t_{0}-\varepsilon}|\partial u|^{2}_{\omega_{ t}}\end{split} \tag{3.9}\]
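For completeness, the trivial inequality \(n\leq\operatorname{tr}_{\omega_{X}}(\omega_{t})\operatorname{tr}_{\omega_{t}}(\omega_{X})\) used in the derivation of (3.9) is a consequence of the Cauchy-Schwarz inequality: if \(\lambda_{1},\dots,\lambda_{n}>0\) denote the eigenvalues of \(\omega_{t}\) with respect to \(\omega_{X}\), then
\[\operatorname{tr}_{\omega_{X}}(\omega_{t})\operatorname{tr}_{\omega_{t}}(\omega_{X})=\Big{(}\sum_{i}\lambda_{i}\Big{)}\Big{(}\sum_{i}\lambda_{i}^{-1}\Big{)}\geq n^{2}\geq n.\]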
From this, at the maximum point \((t_{0},x_{0})\),
\[\begin{split} 0\leq\left(\frac{\partial}{\partial t}-\Delta_{t} \right)H&=\log\alpha+(t-\varepsilon)\left(\frac{\dot{\alpha}}{ \alpha}-\Delta_{t}\log\alpha\right)\\ &\quad-\gamma^{\prime}(u)\dot{u}+\gamma^{\prime}(u)\Delta_{t}u+ \gamma^{\prime\prime}(u)|\partial u|^{2}_{\omega_{t}}\\ &\leq\log\alpha+\frac{C_{3}\operatorname{tr}_{\omega_{t}}(\omega_{ X}+dd^{c}\psi^{-})}{\alpha}+C_{4}\left(\frac{\gamma^{\prime}(u)T}{-\gamma^{ \prime\prime}(u)}+2\right)\frac{\operatorname{tr}_{\omega_{t}}(\omega_{X})}{ \alpha^{2}}\\ &\quad-\gamma^{\prime}(u)\dot{\varphi}_{t}+\gamma^{\prime}(u)\dot{ \Psi}_{t}+\gamma^{\prime}(u)\Delta_{\omega_{t}}(\varphi_{t}-\Psi_{t}-\kappa \psi^{-}),\end{split} \tag{3.10}\]
with \(C_{3}\), \(C_{4}>0\) under control. Moreover, since \(\theta_{t}\geq\left(1-\frac{t}{3S}\right)\theta\), we have
\[\theta_{t}+dd^{c}\Psi_{t}\geq\left(1-\frac{bt}{S}\right)2\kappa\omega_{X}.\]
Thus we obtain
\[\Delta_{t}(\varphi_{t}-\Psi_{t})\leq n-\kappa\operatorname{tr}_{\omega_{t}}( \omega_{X}). \tag{3.11}\]
Plugging (3.11) into (3.10) we thus arrive at
\[\begin{split} 0&\leq\log\alpha+\frac{C_{3} \operatorname{tr}_{\omega_{t}}(\omega_{X}+dd^{c}\psi^{-})}{\alpha}-\gamma^{ \prime}(u)(n-\kappa\operatorname{tr}_{\omega_{t}}(\omega_{X}+dd^{c}\psi^{-})) \\ &\quad-\gamma^{\prime}(u)\dot{\varphi}_{t}-\gamma^{\prime}(u)\frac{ \psi_{0}}{2S}+C_{4}\left(\frac{\gamma^{\prime}(u)T}{-\gamma^{\prime\prime}(u)}+ 2\right)\frac{\operatorname{tr}_{\omega_{t}}(\omega_{X})}{(\operatorname{tr}_{ \omega_{X}}(\omega_{t}))^{2}}+C_{5}.\end{split}\]
We now choose the function \(\gamma\) to obtain a simplified formulation. We set
\[\gamma(u):=\frac{C_{3}+3}{\min(\kappa,1)}u+\ln(u).\]
Since \(u\geq 1\) we have
\[\frac{C_{3}+3}{\min(\kappa,1)}\leq\gamma^{\prime}(u)\leq 1+\frac{C_{3}+3}{\min( \kappa,1)},\qquad\frac{\gamma^{\prime}(u)T}{-\gamma^{\prime\prime}(u)}+2\leq C _{5}u^{2}.\]
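Indeed, with this choice of \(\gamma\) one has \(\gamma^{\prime}(u)=\frac{C_{3}+3}{\min(\kappa,1)}+\frac{1}{u}\) and \(\gamma^{\prime\prime}(u)=-\frac{1}{u^{2}}\), so for \(u\geq 1\),
\[\frac{\gamma^{\prime}(u)T}{-\gamma^{\prime\prime}(u)}+2=\left(\frac{C_{3}+3}{\min(\kappa,1)}+\frac{1}{u}\right)Tu^{2}+2\leq\left(\left(\frac{C_{3}+3}{\min(\kappa,1)}+1\right)T+2\right)u^{2},\]
which gives the stated bounds.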
Using \(\operatorname{tr}_{\omega_{X}}(\omega_{X}+dd^{c}\psi^{-})\leq\operatorname{tr}_{\omega_{t}}(\omega_{X}+dd^{c}\psi^{-})\operatorname{tr}_{\omega_{X}}(\omega_{t})\) we obtain
\[0\leq\log\alpha-\gamma^{\prime}(u)\phi_{t}-\gamma^{\prime}(u)\frac{\psi_{0}}{2 S}-3\operatorname{tr}_{\omega_{t}}(\omega_{X})+C_{6}(u^{2}+1)\frac{\operatorname{ tr}_{\omega_{t}}(\omega_{X})}{\alpha^{2}}. \tag{3.12}\]
If at the point \((t_{0},x_{0})\), we have \(\alpha^{2}\leq C_{6}(u^{2}+1)\) then
\[H(t_{0},x_{0})\leq T\log\sqrt{C_{6}(u^{2}+1)}-\gamma(u)\leq C_{7},\]
we are done. Otherwise, we assume that at \((t_{0},x_{0})\), \(\alpha^{2}\geq C_{6}(u^{2}+1)\). Applying Lemma 3.6 we obtain
\[\log\alpha=\log\operatorname{tr}_{\omega_{X}}(\omega_{t})\leq(n-1)\log \operatorname{tr}_{\omega_{t}}(\omega_{X})+\log n+\phi_{t}-\psi^{-}\]
using that \(\sup_{X}\psi^{+}=0\). Plugging this into (3.12) we obtain
\[0\leq C_{5}+(n-1)\log\operatorname{tr}_{\omega_{t}}(\omega_{X})-2\operatorname {tr}_{\omega_{t}}(\omega_{X})-(\gamma^{\prime}(u)-1)\phi_{t}-\gamma^{\prime}( u)\frac{\psi_{0}}{2S}-\psi^{-},\]
or, equivalently
\[2\operatorname{tr}_{\omega_{t}}(\omega_{X})\leq C_{8}-(\gamma^{\prime}(u)-1) \phi_{t}-\gamma^{\prime}(u)\frac{\psi_{0}}{2S}-\psi^{-} \tag{3.13}\]
since \((n-1)\log y-2y\leq-y+O(1)\) for \(y>0\). In particular, we have
\[\phi_{t}\leq\frac{C_{5}}{\gamma^{\prime}(u)-1}-\frac{\gamma^{\prime}(u)}{ \gamma^{\prime}(u)-1}\frac{\psi_{0}}{2S}\leq\frac{C_{5}}{A-1}-\frac{A\psi_{0}} {(A-1)2S}-\frac{\psi^{-}}{A-1} \tag{3.14}\]
at \((t_{0},x_{0})\) since \(\operatorname{tr}_{\omega_{t}}(\omega_{X})\geq 0\) and \(A\leq\gamma^{\prime}(u)\leq A+1\) with \(A=:\frac{C_{3}+3}{\min(\kappa,1)}\). It follows from Lemma 3.6 that
\[\operatorname{tr}_{\omega_{t}}(\omega_{X})\geq ne^{\frac{-\phi_{t}+\psi^{-}}{ n}}.\]
Plugging this into (3.13) we obtain
\[\operatorname{tr}_{\omega_{t}}(\omega_{X})\leq C_{9}-\gamma^{\prime}(u)\frac {\psi_{0}}{2S}-\gamma^{\prime}(u)\psi^{-}\leq C_{9}-\frac{(A+1)\psi_{0}}{2S}- (A+1)\psi^{-}\]
with \(C_{9}>0\) under control, since \(be^{y}-By\geq-C(b,B)\) for \(y\in\mathbb{R}\). Again Lemma 3.6 yields
\[\log\alpha\leq(n-1)\log\left(C_{9}-\frac{(A+1)\psi_{0}}{2S}-(A+1)\psi^{-} \right)+\log n+\phi_{t}-\psi^{-}.\]
Combining this together with (3.14) we have at \((t_{0},x_{0})\)
\[H \leq C_{10}-A\left[\phi_{t}-\left(1-\frac{t}{2S}-\frac{t-\varepsilon }{2(A-1)S}\right)\psi_{0}\right]+\left(Ax-1-\frac{1}{A-1}\right)\psi^{-}\] \[\quad+(t-\varepsilon)(n-1)\log\left(C_{9}-\frac{(A+1)\psi_{0}}{2S} -(A+1)\psi^{-}\right).\]
Up to increasing \(A>0\) if necessary, so that
\[\delta:=\frac{\varepsilon}{2T}-\frac{\varepsilon}{2S}-\frac{T}{2(A-1)S}>0,\]
since \(\psi_{0}\leq 0\) we obtain, at \((t_{0},x_{0})\),
\[H \leq C_{10}-A\left[\varphi_{t}-\left(1-\frac{t}{2T}\right)\psi_{0} \right]+A\delta\psi_{0}+A\kappa/2\psi^{-}\] \[\quad+(t-\varepsilon)(n-1)\log\left(C_{9}-\frac{(A+1)\psi_{0}}{2 S}-(A+1)\psi^{-}\right).\]
The second term is uniformly bounded from above thanks to Theorem 3.4. Since \(-by+\log y\) is bounded from above for \(y>0\), we obtain that \(H\) attains a uniform bound at \((t_{0},x_{0})\). This finishes the proof.
### More estimates
Recall that there exists a \(\theta\)-psh function \(\rho\) with analytic singularities such that \(\sup_{X}\rho=0\) and
\[\theta+dd^{c}\rho\geq 3\delta_{0}\omega_{X}\]
for some \(\delta_{0}>0\). By [25, Theorem 3.4], there exist a bounded \(\theta\)-psh function \(\phi_{1}\) and a constant \(c_{1}\) such that
\[(\theta+dd^{c}\phi_{1})^{n}=e^{c_{1}}d\mu,\quad\sup_{X}\phi_{1}=0.\]
**Proposition 3.9**.: _Assume that \(\psi_{1}\), \(\psi_{2}\) are two smooth \(\omega_{X}\)-psh functions satisfying_
\[\phi_{0}\geq C_{1}\psi_{1},\quad\varphi_{0}\geq\frac{1}{2}(\rho+\delta_{0}\psi _{2})\]
_for some constants \(C_{1}>0\). Fix \(T_{1}\in(0,T_{\max})\) such that \(\theta_{t}>\frac{1}{2}\theta\) for all \(t\in[0,T_{1}]\). Then there exists a uniform constant \(C_{2}>0\) only depending on \(C_{1}\), \(\delta_{0}\), \(T_{1}\) and \(\sup_{X}|\phi_{1}|\) such that_
\[\phi_{t}\geq C_{2}(\rho+\delta_{0}\psi_{2}+1)+C_{1}\psi_{1},\ \forall\ t\in[0,T_{1}].\]
Proof.: The proof is identical to that of Proposition 3.7. We consider
\[H(t,x):=\phi_{t}-C_{1}\psi_{1}+A\left(\varphi_{t}-\frac{1}{2}(\rho+\delta_{0} \psi_{2})\right)-\phi_{1},\]
for \(A>0\) to be chosen hereafter. We observe that \(H\) attains its minimum at some point \((t_{0},x_{0})\in[0,T_{1}]\times X\). If \(t_{0}=0\) we are done by assumptions. Otherwise, by the minimum principle we have at \((t_{0},x_{0})\),
\[0\geq\left(\frac{\partial}{\partial t}-\Delta_{t}\right)H\geq-An+A\phi_{t}+(- C_{1}+A\delta_{0})\operatorname{tr}_{\omega_{t}}(\omega_{X})+\operatorname{tr} _{\omega_{t}}(dd^{c}\phi_{1})\]
where we have used that \(\theta_{t}+dd^{c}\frac{1}{2}(\rho+\delta_{0}\psi_{2})\geq\delta_{0}\omega_{X}\). Now, we choose \(A=\delta_{0}(C_{1}+1)\), thus
\[\operatorname{tr}_{\omega_{t}}(\theta+dd^{c}\phi_{1})\geq n\left(\frac{(\theta+dd^{c}\phi_{1})^{n}}{\omega_{t}^{n}}\right)^{1/n}=ne^{\frac{-\dot{\varphi}_{t}}{n}}.\]
using Lemma 3.6. Together with the inequality \(e^{y}\geq By-C_{B}\) we obtain a uniform lower bound for \(\phi_{t}\) at \((t_{0},x_{0})\). On the other hand, by Proposition 3.2 we see that \(\varphi_{t}\geq\varphi_{0}-c(t)\) so \(\varphi_{t}\geq\frac{1}{2}(\rho+\delta_{0}\psi_{2})-c(t)\) where \(c(t)\to 0\) as \(t\to 0\). The lower bound for \(H(t_{0},x_{0})\) thus follows, this completes the proof.
**Proposition 3.10**.: _Assume that \(\psi_{1}\), \(\psi_{2}\) are two smooth \(\omega_{X}\)-psh functions satisfying_
\[\Delta_{\omega_{X}}\varphi_{0}\leq e^{-C_{1}\psi_{1}},\quad\varphi_{0}\geq \frac{1}{2}(\rho+\delta_{0}\psi_{2})\]
_for some constants \(C_{1}>0\). Fix \(T_{1}\in(0,T_{\max})\) such that \(\theta_{t}>\frac{1}{2}\theta\) for all \(t\in[0,T_{1}]\). Then there exists uniform constants \(C_{2}>0\), \(C_{3}>0\) only depending on \(C_{1}\), \(\delta_{0}\) and \(T_{1}\) such that_
\[\operatorname{tr}_{\omega_{X}}(\omega_{t})\leq C_{3}e^{-C_{1}\psi_{1}-C_{2}( \rho+\delta_{0}\psi_{2}+\delta_{0}\psi^{-})},\ \forall\ t\in[0,T_{1}].\]
Proof.: Consider
\[H(t,\cdot)=\log\operatorname{tr}_{\omega_{X}}(\omega_{t})+C_{1}\psi_{1}-\gamma(u)\]
where \(\gamma:\mathbb{R}\to\mathbb{R}\) is a smooth concave increasing function such that \(\lim_{t\to+\infty}\gamma(t)=+\infty\), and
\[u(t,x):=\varphi_{t}(x)-\frac{1}{2}(\rho(x)+\delta_{0}\psi_{2}(x))+\delta_{0} \psi^{-}(x)+1.\]
We observe that \(H\) attains its maximum at a point \((t_{0},x_{0})\in[0,T_{1}]\times\{\rho>-\infty\}\). If \(t_{0}=0\) then \(H(0,\cdot)\leq\log n-\gamma(1)\). Otherwise, assume \(t_{0}>0\). We compute from now on at this point. By the maximum principle and the arguments in Theorem 3.8 we have
\[0\leq\left(\frac{\partial}{\partial t}-\Delta_{t}\right)H \leq\frac{C\operatorname{tr}_{\omega_{t}}(\omega_{X}+dd^{c}\psi^ {-})}{\operatorname{tr}_{\omega_{X}}(\omega_{t})}-\gamma^{\prime}(u)(n-\delta _{0}\operatorname{tr}_{\omega_{t}}(\omega_{X}+dd^{c}\psi^{-}))+C\] \[\quad-C_{1}\operatorname{tr}_{\omega_{t}}(dd^{c}\psi_{1})-\gamma ^{\prime}(u)\phi_{t}+C\left(\frac{\gamma^{\prime}(u)}{-\gamma^{\prime\prime}( u)}+2\right)\frac{\operatorname{tr}_{\omega_{t}}(\omega_{X})}{(\operatorname{tr}_{ \omega_{X}}(\omega_{t}))^{2}}. \tag{3.15}\]
Here we use that \(\theta_{t}+dd^{c}\frac{1}{2}(\rho+\delta_{0}\psi_{2})\geq\delta_{0}\omega_{X}\). We set
\[\gamma(u):=\frac{C+C_{1}+3}{\min(\kappa,1)}u+\ln(u).\]
We proceed as in the proof of Theorem 3.8 to obtain the uniform upper bound for \(H(t_{0},x_{0})\). This finishes the proof.
## 4. Degenerate Monge-Ampere flows
### Proof of Theorem B
By Demailly's regularization theorem (Theorem 2.10) we can find two sequences \(\psi_{j}^{\pm}\in\mathcal{C}^{\infty}(X)\) such that
* \(\psi_{j}^{\pm}\) decreases pointwise to \(\psi^{\pm}\) on \(X\) and the convergence is in \(\mathcal{C}^{\infty}_{\operatorname{loc}}(U)\);
* \(dd^{c}\psi_{j}^{\pm}\geq-\omega_{X}\).
We note that \(|\sup_{X}\psi_{j}^{\pm}|\) is uniformly bounded and for all \(j\),
\[\|e^{-\psi_{j}^{-}}\|_{L^{p}}\leq\|e^{-\psi^{-}}\|_{L^{p}}.\]
Thanks to Demailly's regularization theorem again, we can find a smooth sequence \((\varphi_{0,j})\) of strictly \(\theta+2^{-j}\omega_{X}\)-psh functions decreasing towards \(\varphi_{0}\). We set \(\theta_{t,j}=\theta_{t}+2^{-j}\omega_{X}\) and \(\mu_{j}=e^{\psi_{j}^{+}-\psi_{j}^{-}}dV\). It follows from [52, Theorem 1.2] (see also [55]) that there exists a unique function \(\varphi_{j}\in\mathcal{C}^{\infty}([0,T[\times X)\) such that
\[\begin{cases}\frac{\partial\varphi_{t,j}}{\partial t}=\log\left[\frac{( \theta_{t,j}+dd^{c}\varphi_{t,j})^{n}}{\mu_{j}}\right]\\ \varphi_{j}|_{t=0}=\varphi_{0,j}.\end{cases} \tag{4.1}\]
It follows from the maximum principle that the sequence \(\varphi_{t,j}\) is decreasing with respect to \(j\). Moreover, Proposition 3.1 yields that \(\sup_{X}\varphi_{t,j}\) is uniformly bounded from above. It follows from Proposition 3.2 that, as \(j\to+\infty\), the family \(\varphi_{t,j}\) decreases to \(\varphi_{t}\), which is a well-defined \(\theta_{t}\)-psh function on \(X\). Following the same arguments as in [55, Sect. 4.1], we get that \(\varphi_{t}\to\varphi_{0}\) in \(L^{1}(X)\) as \(t\to 0^{+}\).
We next study the partial regularity of \(\varphi_{t}\) for small \(t\). We fix \(\varepsilon_{0}>0\) and \(\varepsilon>p^{*}\varepsilon_{0}\). Let \(T\) and \(S\) be as in Section 3.1. Let \(\rho\) be a \(\theta\)-psh function with analytic singularities along \(D\) such that \(\theta+dd^{c}\rho\) dominates a Hermitian form. Thanks to Lemma 2.11 we
can find a function \(\psi_{0}\in\mathrm{PSH}(X,\theta)\cap\mathcal{C}^{\infty}(X\setminus(D\cup E_{c}( \varphi_{0})))\) (the constant \(c=c(\varepsilon_{0})\) is defined as in Lemma 2.11) such that
\[\int_{X}e^{\frac{2(\psi_{0}-\varphi_{0})}{\varepsilon_{0}}}dV_{X}<+\infty.\]
We assume w.l.o.g that \(\psi_{0}\leq 0\). Since \(\frac{p^{*}}{2c(\varphi_{0})}<T\) and \(\psi_{0}\) is less singular than \(\varphi_{0}\) we also have
\[\int_{X}e^{\frac{-p^{*}\varphi_{0}}{T}}dV_{X}<+\infty.\]
We mention that since \(\varphi_{0}\) is a decreasing limit of a smooth sequence \(\varphi_{0,j}\), the corresponding constants for \(\varphi_{0,j}\) are uniformly bounded (in \(j\)) and we can pass to the limit as \(j\to+\infty\).
Recall that \(\psi^{\pm}\) are smooth (merely locally bounded) in a Zariski open set \(U\subset X\setminus D\). We are going to show that \(\varphi_{t}\) is smooth on \(U\setminus(D\cup E_{c}(\varphi_{0}))\) for each \(t>\varepsilon\). Let \(K\) be an arbitrary compact subset of \(U\setminus(D\cup E_{c}(\varphi_{0}))\). It follows from Proposition 3.1, Theorem 3.4 and the remark above that
\[\sup_{[\varepsilon,T]\times K}|\varphi_{j}|\leq C(\varepsilon,T,K).\]
Next, Proposition 3.7 yields
\[\sup_{[\varepsilon,T]\times K}|\dot{\varphi}_{j}|\leq C(\varepsilon,T,K).\]
Moreover, thanks to Theorem 3.8 we also have a uniform bound for \(\Delta\varphi_{t}^{j}\):
\[\sup_{[\varepsilon,T]\times K}|\Delta\varphi_{j}|\leq C(\varepsilon,T,K).\]
Using the complex parabolic Evans-Krylov theory together with parabolic Schauder estimates (see e.g. [5, Theorem 4.1.4]), we then obtain higher order estimates for \(\varphi_{j}\) on \([\varepsilon,T]\times K\):
\[\|\varphi_{j}\|_{\mathcal{C}^{k}([\varepsilon,T]\times K)}\leq C(\varepsilon, T,K,k).\]
This shows the smoothness of \(\varphi_{t}\) on \(U\setminus(D\cup E_{c}(\varphi_{0}))\) for each \(t>\varepsilon\) since \(K\) was taken arbitrarily. Passing to the limit in (4.1), we deduce that \(\varphi\) satisfies (1.4) in the classical sense on \([\varepsilon,T]\times\Omega_{\varepsilon}\) with \(\Omega_{\varepsilon}=X\setminus(D\cup E_{c(\varepsilon)}(\varphi_{0}))\).
### Uniqueness
We now follow the argument in [30] to prove that the solution \(\varphi\) to the equation (1.4) constructed in the previous part is the maximal solution, in the following sense:
**Proposition 4.1**.: _Let \(\psi_{t}\) be a weak solution to the equation (1.4) with initial data \(\varphi_{0}\). Then \(\psi_{t}\leq\varphi_{t}\) for all \(t\in(0,T_{\max})\)._
Proof.: By the construction in the previous paragraph, \(\varphi_{t,j}\) are smooth \((\theta_{t}+2^{-j}\omega_{X})\)-psh functions decreasing pointwise to \(\varphi_{t}\). It thus suffices to show that \(\psi_{t}\leq\varphi_{t,j}\) for all fixed \(j\).
Fix \(0<T<T_{\max}\) and \(2^{-j}>\varepsilon>\delta>0\). We let \(U_{\varepsilon}\subset X\) denote the Zariski open set in which \(\psi_{t+\varepsilon}\) is smooth. We can find a \(\omega_{X}\)-psh function \(\phi\) with analytic singularities along \(X\setminus U_{\varepsilon}\); see e.g. [14]. We apply the maximum principle to the function \(H:=\psi_{t+\varepsilon}-\varphi_{t+\varepsilon,j}+\delta\phi\). Suppose that \(H\) attains its maximum on \([0,T-\varepsilon]\times X\) at \((t_{\varepsilon},x_{\varepsilon})\) with \(t_{\varepsilon}>0\). Note that \(x_{\varepsilon}\in U_{\varepsilon}\). We thus have
\[0\leq\frac{\partial}{\partial t}H\leq\log\left[\frac{(\theta_{t+\varepsilon}+ dd^{c}\varphi_{t+\varepsilon,j}-\delta dd^{c}\phi)^{n}}{(\theta_{t+\varepsilon}+2^{-j} \omega_{X}+dd^{c}\varphi_{t+\varepsilon,j})^{n}}\right]<0\]
using that \(-dd^{c}\phi\leq\omega_{X}\), which is a contradiction. Thus we have by letting \(\delta\searrow 0\),
\[\psi_{t+\epsilon}(x)-\varphi_{t+\epsilon,j}(x)\leq\sup_{X}(\psi_{\epsilon}- \varphi_{\epsilon,j}).\]
Moreover, since \((\epsilon,x)\mapsto\varphi_{\epsilon,j}(x)\) is continuous it follows from Hartogs' lemma that
\[\sup_{X}(\psi_{\epsilon}-\varphi_{\epsilon,j})\xrightarrow{\epsilon\to 0} \sup_{X}(\varphi_{0}-\varphi_{0,j})\]
The proof is complete.
The uniqueness property we have just shown is called "maximally stretched" by P. Topping in the Riemann surface case; cf. [47, Remark 1.9].
### Short time behavior
In this subsection we study the behavior of the solution to the degenerate Monge-Ampere flow in short time. We show that the solution \(\varphi_{t}\) to the equation starting from a current with positive Lelong numbers also has positive Lelong numbers for very short time. This follows almost verbatim from the Kahler case [16, Sect. 4.2].
**Theorem 4.2**.: _If \(\varphi_{0}\) has positive Lelong number, then_
\[E_{c}(\varphi_{0})\subset E_{c(t)}(\varphi_{t}),\qquad c(t)=c-2nt.\]
_In particular, the maximal solution \(\varphi_{t}\) has positive Lelong numbers for any \(t<1/2nc(\varphi_{0})\)._
Proof.: This is identical to that of [16, Theorem 4.5]. We give a sketch of the proof here. Fixing \(x_{0}\in E_{c}(\varphi_{0})\) we can find a cut-off function \(\chi\in\mathcal{C}^{\infty}(X)\) with support near \(x_{0}\) and identically equal to \(1\) in a neighborhood of \(x_{0}\). Thus \(\phi:=\chi(x)c\log\|x-x_{0}\|\) is \(B\omega_{X}\)-psh for some constant \(B>0\) and \(e^{2\phi/c}\in\mathcal{C}^{\infty}(X)\). Since \(x_{0}\in E_{c}(\varphi_{0})\) we can choose \(\phi\) so that \(\phi\geq\varphi_{0}\) by adding a positive constant. Lemma 4.3 yields
\[\varphi_{t}\leq(1-2nt/c)\phi+Ct,\]
hence \(\nu(\varphi_{t},x_{0})\geq c-2nt.\) If \(t<1/2nc(\varphi_{0})\) then by Skoda's integrability theorem, \(e^{-2\varphi_{0}/c}\) is not integrable for \(2nt<c<1/c(\varphi_{0})\). Thus \(E_{c}(\varphi_{0})\) is not empty, neither is \(E_{c(t)}(\varphi_{t})\) for \(t>0\) sufficiently small.
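For the reader's convenience, the role of the Lelong number in Skoda's integrability theorem invoked above can be illustrated by the standard model computation (this is only an illustration and is not needed for the argument): for \(\varphi=c\log\|z\|\) near \(0\in\mathbb{C}^{n}\) one has \(\nu(\varphi,0)=c\) and, integrating in polar coordinates,
\[\int_{\|z\|<1}e^{-2\varphi}\,dV=\int_{\|z\|<1}\|z\|^{-2c}\,dV=c_{n}\int_{0}^{1}r^{2n-1-2c}\,dr,\]
for a dimensional constant \(c_{n}>0\), which is finite if and only if \(c<n\).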
**Lemma 4.3**.: _Assume that \(\phi\in\operatorname{PSH}(X,\omega_{X})\) satisfies \(e^{\gamma\phi}\in\mathcal{C}^{\infty}(X)\) for some constant \(\gamma>0\) and \(0\geq\psi^{\pm}\geq\phi\geq\varphi_{0}\). Then there exists a positive constant \(C\) depending on an upper bound for \(dd^{c}e^{\gamma\phi}\) such that_
\[\varphi(t)\leq(1-n\gamma t)\phi+Ct,\quad\forall\,t\in[0,1/n\gamma].\]
Proof.: Assume that \(\theta_{t}\leq\omega_{X}\) for \(t\in[0,1/(n\gamma+1)]\). As argued in [16, Lemma 4.4] we can assume that \(\phi\) is smooth and work with approximants \(\varphi_{t,j}\) instead. We choose \(C>0\) depending only on an upper bound of \(dd^{c}e^{\gamma\phi}\) such that \(dd^{c}\phi\leq Ce^{-\gamma\phi}\omega_{X}.\) Consider
\[\phi_{t}:=(1-(n\gamma+1)t)\phi+t\log(2^{n}C^{n}).\]
We observe that
\[0\leq\omega_{X}+dd^{c}\phi\leq 2Ce^{-\gamma\phi}\omega_{X},\]
hence
\[(\omega_{X}+dd^{c}\phi_{t})^{n}\leq(2C)^{n}e^{-n\gamma\phi}\omega_{X}^{n}\leq e^{\dot{\phi}_{t}+\psi^{+}-\psi^{-}}\omega_{X}^{n}.\]
Therefore \(\phi_{t}\) is a supersolution to the parabolic equation
\[(\omega_{X}+dd^{c}u_{t})^{n}=e^{\dot{u}_{t}+\psi^{+}-\psi^{-}}\omega_{X}^{n}\]
while \(\varphi_{t,j}\) is a subsolution. The classical maximum principle thus yields that \(\phi_{t}\leq\varphi_{t,j}\) for any fixed \(j\). This finishes the proof.
### Convergence at time zero
We study in this part the convergence at zero of the degenerate complex Monge-Ampere flow.
We recall the quasi-monotone convergence in the sense of Guedj-Trusiani [28]: \(\varphi_{j}\to\varphi\) quasi-monotonically if \(P_{\theta}(\inf_{l\geq j}\varphi_{l})\) is a sequence of \(\theta\)-psh functions that increases to \(\varphi\).
**Theorem 4.4**.: _The flow \(\varphi_{t}\) converges quasi-monotonically to \(\varphi_{0}\) as \(t\to 0^{+}\)._
Proof.: By Proposition 3.2, we have
\[\varphi_{t}\geq\varphi_{0}-C(t-t\log t)\]
for \(t\) small. It follows that
\[P_{\theta}\left(\inf_{0<s\leq t}\varphi_{s}\right)\geq\varphi_{0}-C(t-t\log t),\]
finishing the proof.
**Theorem 4.5**.: _Assume that \(\varphi_{0}\) is continuous in an open set \(U\subset X\). Then \(\varphi_{t}\) converges to \(\varphi_{0}\) in \(L^{\infty}_{\mathrm{loc}}(U)\)._
Proof.: The proof is almost verbatim from the Kahler case [16]. We assume without loss of generality that \(\varphi_{t}\leq 0\). By Proposition 3.2, there is a uniform constant \(C>0\) and small time \(t_{0}\) such that
\[\varphi_{s}-C(t-s)\log(t-s)-C(t-s)\leq\varphi_{t}\]
for \(0\leq s<t\leq t_{0}\) small. Set \(u_{t}:=\varphi_{t}+(C+\log 4)t-Ct\log t\). Substituting \(s=t/2\) we infer that \(u_{t}\geq u_{t/2}\), hence the sequence \(u_{t_{0}2^{-j}}\) decreases to \(u_{0}=\varphi_{0}\). The conclusion therefore follows from Dini's theorem.
We also have the same result as in the Kahler case [16, Theorem 6.3]. We assume that \(\theta\) is a big form and that \(f=e^{\psi^{+}-\psi^{-}}\in L^{p}\), \(p>1\) and \(\psi^{\pm}\) are quasi-psh functions. Assume moreover that \(\psi^{-}\in L^{\infty}_{\mathrm{loc}}(X\setminus D)\) for some closed set \(D\subset X\). It follows from [25, Theorem 4.1] that there exists a bounded \(\theta\)-psh function \(\varphi_{0}\) such that \(\sup_{X}\varphi_{0}=0\) and
\[(\theta+dd^{c}\varphi_{0})^{n}=cfdV.\]
We recall that there is \(\rho\in\mathrm{PSH}(X,\theta)\) with analytic singularities along a closed subset \(E\) such that \(\theta+dd^{c}\rho\geq 2\delta\omega_{X}\) for some \(\delta>0\). Set \(U:=X\setminus(D\cup E)\).
**Theorem 4.6**.: _Assume \(\varphi_{0}\) is as above. Let \(\varphi_{t}\) be the weak solution of the equation (1.4) with initial data \(\varphi_{0}\). Then \(\varphi_{t}\) converges to \(\varphi_{0}\) in \(\mathcal{C}^{\infty}_{\mathrm{loc}}(U)\)._
Proof.: The proof is identical to that of [16, Theorem 6.3]. We sketch the proof here for the reader's convenience. We first approximate \(\psi^{\pm}\) by their smooth approximants \(\psi^{\pm}_{j}\), thanks to [11]. We next apply Tosatti-Weinkove's theorem [49] to obtain smooth \((\theta+2^{-j}\omega_{X})\)-psh functions \(\varphi_{0,j}\) such that \(\sup_{X}\varphi_{0,j}=0\) and
\[(\theta+2^{-j}\omega_{X}+dd^{c}\varphi_{0,j})^{n}=c_{j}e^{\psi^{+}_{j}-\psi^{ -}_{j}}dV.\]
Note here that \(f_{j}=e^{\psi^{+}_{j}-\psi^{-}_{j}}\) have uniform \(L^{p}\)-norms. The same arguments in [25, Theorem 4.1] shows that
* \(c_{j}\to c>0\);
* for any \(\varepsilon>0\), \(\varphi_{j}\geq\varepsilon(\rho+\delta\psi^{-})-C(\varepsilon)\);
* \(\Delta_{\omega_{X}}\varphi_{0,j}\leq e^{-C(\varepsilon)(\rho+\delta\psi^{-})}\).
Let \(\varphi_{t,j}\) be a smooth solution to the equation (1.4) with initial data \(\varphi_{0,j}\). The sequence \(\varphi_{t,j}\) converges to the unique weak solution \(\varphi_{t}\). We use Proposition 3.9 and Proposition 3.10 together with bootstrapping arguments to obtain locally uniform estimates of all derivatives of \(\varphi_{t,j}\). This implies the convergence in \(\mathcal{C}^{\infty}_{\mathrm{loc}}(U)\).
## 5. Finite time singularities
In this section we study finite time singularities of the Chern-Ricci flow, and provide the proof of Theorem A.
We consider a family of Hermitian metrics \(\omega(t)\) evolving under the Chern-Ricci flow (1.1) with initial Hermitian metric \(\omega_{0}\). Suppose that the maximal existence time \(T_{\max}\) of the flow is finite. The form \(\alpha_{T_{\max}}:=\omega_{0}-T_{\max}\mathrm{Ric}(\omega_{0})\) is nef in the sense of [26], i.e. for each \(\varepsilon>0\) there exists \(\psi_{\varepsilon}\in\mathcal{C}^{\infty}(X)\) such that \(\alpha_{T_{\max}}+dd^{c}\psi_{\varepsilon}\geq-\varepsilon\omega_{0}\). Indeed, for \(\varepsilon>0\),
\[\alpha_{T_{\max}}+\varepsilon\omega_{0}=(1+\varepsilon)\left(\omega_{0}- \frac{T_{\max}}{1+\varepsilon}\mathrm{Ric}(\omega_{0})\right)\]
and since \(\frac{T_{\max}}{1+\varepsilon}<T_{\max}\) we have \(\omega_{0}-\frac{T_{\max}}{1+\varepsilon}\mathrm{Ric}(\omega_{0})+dd^{c}\psi>0\) for some smooth function \(\psi\). We assume that \(\alpha_{T_{\max}}\) is _uniformly non-collapsing_, i.e.,
\[\int_{X}(\alpha_{T_{\max}}+dd^{c}\psi)^{n}\geq c_{0}>0,\quad\forall\ \psi\in\mathcal{C}^{\infty}(X). \tag{5.1}\]
This condition implies that the volume of \((X,\omega(t))\) does not collapse to zero as \(t\to T_{\max}^{-}\).
**Theorem 5.1**.: _Let \(\alpha\) be a nef (1,1) form satisfying the uniformly non-collapsing condition (5.1). If \(X\) admits a Hermitian metric \(\omega_{X}\) such that \(v_{+}(\omega_{X})<+\infty\) then \(\alpha\) is big._
_Conversely, if \(\alpha\) is big and \(v_{-}(\omega_{X})>0\) then \(\alpha\) is uniformly non-collapsing._
When \(\alpha\) is semi-positive or closed, the result was proved by Guedj-Lu [26, Theorem 4.6, Theorem 4.9], answering the transcendental Grauert-Riemenschneider conjecture [14, Conjecture 0.8]. For our purpose, we would like to extend it to the case where \(\alpha\) is no longer closed.
Proof.: The proof is almost identical to that of [26, Theorem 4.6], which follows the idea of Chiose [7]. We give the details here for the reader's convenience. By the Hahn-Banach theorem as in [32, Lemme 3.3], the bigness of \(\alpha\), i.e., \(\exists\,\rho\in\mathrm{PSH}(X,\alpha)\) with analytic singularities such that \(\alpha+dd^{c}\rho\geq\delta\omega_{X}\) with some \(\delta>0\), is equivalent to
\[\int_{X}\alpha\wedge\eta^{n-1}\geq\delta\int_{X}\omega_{X}\wedge\eta^{n-1}\]
for all Gauduchon metrics \(\eta\). Suppose by contradiction that for each \(\varepsilon>0\) there exists Gauduchon metrics \(\eta_{\varepsilon}\) such that
\[\int_{X}\alpha\wedge\eta_{\varepsilon}^{n-1}\leq\varepsilon\int_{X}\omega_{X} \wedge\eta_{\varepsilon}^{n-1}.\]
We can normalize \(\eta_{\varepsilon}\) so that \(\int_{X}\omega_{X}\wedge\eta_{\varepsilon}^{n-1}=1.\) We fix a function \(\psi_{\varepsilon}\in\mathcal{C}^{\infty}(X)\) such that \(\alpha_{\varepsilon}:=\alpha+\varepsilon\omega_{X}+dd^{c}\psi_{\varepsilon}\) is a Hermitian form. By the main result of [49] there exist \(c_{\varepsilon}>0\) and \(u_{\varepsilon}\in\mathrm{PSH}(X,\alpha_{\varepsilon})\cap\mathcal{C}^{\infty}(X)\) such that \(\sup_{X}u_{\varepsilon}=0\) and
\[(\alpha_{\varepsilon}+dd^{c}u_{\varepsilon})^{n}=c_{\varepsilon}\omega_{X} \wedge\eta_{\varepsilon}^{n-1}.\]
By normalization we have
\[c_{\varepsilon}=\int_{X}(\alpha_{\varepsilon}+dd^{c}u_{\varepsilon})^{n}\geq \int_{X}(\alpha+dd^{c}(\psi_{\varepsilon}+u_{\varepsilon}))^{n}\geq c_{0}>0.\]
We apply [26, Lemma 4.13] which reformulates the one in [39, Lemma 3.1] to obtain
\[\int_{X}(\alpha_{\varepsilon}+dd^{c}u_{\varepsilon})\wedge\eta_{\varepsilon}^ {n-1}\times\int_{X}(\alpha_{\varepsilon}+dd^{c}u_{\varepsilon})^{n-1}\wedge \omega_{X}\geq\frac{c_{\varepsilon}}{n}. \tag{5.2}\]
The first term on the left-hand side can be written as \(\int_{X}(\alpha+\varepsilon\omega_{X})\wedge\eta_{\varepsilon}^{n-1}\) since \(\eta_{\varepsilon}\) is Gauduchon and by assumption,
\[\int_{X}(\alpha+\varepsilon\omega_{X})\wedge\eta_{\varepsilon}^{n-1}\leq 2\varepsilon.\]
For the second term, it follows by assumption that
\[\int_{X}(\alpha_{\varepsilon}+dd^{c}u_{\varepsilon})^{n-1}\wedge\omega_{X} \leq v_{+}(\omega_{X}) \tag{5.3}\]
is bounded from above. Therefore we obtain
\[2\varepsilon v_{+}(\omega_{X})\geq\frac{c_{0}}{n}\]
which is a contradiction as \(\varepsilon\to 0\).
The proof of the last statement follows the same lines as in [26, Theorem 4.6] which we omit here.
**Remark 5.2**.: When \(\omega_{0}\) is closed or, more generally, is a Guan-Li metric, i.e., \(dd^{c}\omega_{0}=dd^{c}\omega_{0}^{2}=0\), the condition (5.1) is simply written as \(\int_{X}\alpha_{T_{\max}}^{n}>0\). The assumption \(v_{+}(\omega_{X})<\infty\) or \(v_{-}(\omega_{X})>0\) is independent of the choice of the Hermitian metric \(\omega_{X}\) due to [26, Proposition 3.2]. We refer the reader to [1] for some examples of such \(X\). In particular, \(X\) can be any compact complex surface.
This result is a slight generalization of the one in [34, Theorem 4.3], where \(\alpha\) is closed semi-positive and \(X\) admits a pluriclosed metric, i.e., \(dd^{c}\omega_{X}=0\). Indeed, in this case, the LHS of (5.3) is equal to
\[\int_{X}\alpha_{\varepsilon}^{n-1}\wedge\omega_{X}<\infty.\]
As a consequence of Theorem 5.1, we give a slight improvement of the main result of [50] (see also [34, Theorem 4.1]) which extends the one of Demailly [12] to the non-Kahler setting.
**Theorem 5.3**.: _Let \(X\) be a compact complex \(n\)-manifold equipped with a Hermitian metric \(\omega_{X}\) satisfying \(v_{+}(\omega_{X})<\infty\). Let \(\alpha\) be a nef (1,1) form. Let \(x_{1},\dots,x_{N}\in X\) be any fixed points and \(\tau_{1},\dots,\tau_{N}\) positive constants such that_
\[0<\sum_{j=1}^{N}\tau_{j}^{n}<\int_{X}(\alpha+dd^{c}\psi)^{n},\ \forall\,\psi\in \operatorname{PSH}(X,\alpha)\cap\mathcal{C}^{\infty}(X).\]
_Then there exists an \(\alpha\)-psh function \(\varphi\) with logarithmic poles_
\[\varphi(z)\leq\tau_{j}\log\|z-x_{j}\|+O(1)\]
_in local coordinates near \(x_{j}\), for all \(j=1,\dots,N\)._
Proof.: By Theorem 5.1 we know that \(\alpha\) is big. The rest of the proof follows exactly that of [48, Theorem 1.3].
We go back to the Chern-Ricci flow. Observe that one can reduce the Chern-Ricci flow (1.1) to a parabolic complex Monge-Ampere equation
\[\frac{\partial\varphi_{t}}{\partial t}=\log\left[\frac{(\alpha_{t}+dd^{c} \varphi_{t})^{n}}{\omega_{0}^{n}}\right],\quad\alpha_{t}+dd^{c}\varphi>0,\ \varphi(0)=0\]
where \(\alpha_{t}:=\omega_{0}-t\mathrm{Ric}(\omega_{0})\). We assume that the form \(\alpha_{T_{\max}}\) is uniformly non-collapsing. By Theorem 5.1, there exists a function \(\rho\) with analytic singularities such that
\[\alpha_{T_{\max}}+dd^{c}\rho\geq 2\delta_{0}\omega_{0}\]
for some \(\delta_{0}>0\). We observe that
\[\alpha_{t}+dd^{c}\rho =\frac{1}{T_{\max}}\left((T_{\max}-t)(\omega_{0}+dd^{c}\rho)+t( \alpha_{T_{\max}}+dd^{c}\rho)\right)\] \[\geq\delta_{0}\omega_{0} \tag{5.4}\]
for \(t\in[T_{\max}-\varepsilon,T_{\max}]\) with sufficiently small \(\varepsilon>0\). Set
\[\Omega:=X\setminus\{\rho=-\infty\}\]
We establish uniform \(C_{\mathrm{loc}}^{\infty}\) estimates on \(\Omega\).
**Lemma 5.4**.: _There is a uniform constant \(C_{0}>0\) such that on \([0,T_{\max})\times X\) we have_
1. \(\varphi\leq C_{0}\)_;_
2. \(\dot{\varphi}\leq C_{0}\)_;_
3. \(\varphi\geq\rho-C_{0}\)_;_
4. \(\dot{\varphi}\geq C_{0}\rho-C_{0}\)_._
Proof.: The proofs of \((i)\) and \((ii)\) directly follow from the classical maximum principle; see e.g. [52, Lemma 4.1] (which follow almost verbatim from the Kahler case [46]).
For \((iii)\), we set \(\psi:=\varphi-\rho\). Note that the bound \(\psi+At\geq-C\) holds on \([0,T_{\max}-\varepsilon]\) with \(\varepsilon\) as above. Fix \(T_{\max}-\varepsilon<T^{\prime}<T_{\max}\) and assume that \(\psi+At\) attains its minimum at \((t_{0},x_{0})\in[0,T^{\prime}]\times X\). Note that \(x_{0}\in\Omega\). We compute at this minimum point, assuming \(t_{0}>0\),
\[0\geq\frac{\partial(\psi+At)}{\partial t} =\log\frac{(\alpha_{t}+dd^{c}\rho+dd^{c}\psi)^{n}}{\omega_{0}^{n}}+A\] \[\geq\log\frac{(\delta_{0}\omega_{0})^{n}}{\omega_{0}^{n}}+A\geq-C+A\]
where we have used the estimate (5.4). If we choose \(A>C\) then \(t_{0}\) must be zero. This implies the lower bound for \(\psi\), hence we are done.
For \((iv)\), we apply the minimum principle to
\[Q=\dot{\varphi}+A\psi+Bt\]
where \(A\) and \(B\) are large constants to be chosen later. Our goal is to show that \(Q\geq-C\) on \(X\times[0,T_{\max})\). As above, we observe that \(Q\geq-C\) on \([0,T_{\max}-\varepsilon]\times X\). It thus suffices to show that given any \(T_{\max}-\varepsilon<T^{\prime}<T_{\max}\) the minimum of \(Q\) on \([0,T^{\prime}]\times X\) is attained on \([0,T_{\max}-\varepsilon]\). Let \((x_{0},t_{0})\) be the point in \((T_{\max}-\varepsilon,T^{\prime}]\times X\) where \(Q\) attains its minimum. Note that \(x_{0}\in\Omega\). At this point we have
\[0\geq\left(\frac{\partial}{\partial t}-\Delta_{\omega}\right)Q =-\operatorname{tr}_{\omega}\operatorname{Ric}(\omega_{0})+A\dot {\varphi}-An+A\operatorname{tr}_{\omega}(\alpha_{t}+dd^{c}\rho)+B\] \[\geq\delta_{0}\operatorname{tr}_{\omega}\omega_{0}+A\log\frac{ \omega^{n}}{\omega_{0}^{n}}+\operatorname{tr}_{\omega}\omega_{0}-An+B\]
where \(A\) is chosen so large that
\[(A-1)(\alpha_{t}+dd^{c}\rho)+\chi\geq\omega_{0}\]
for \(t\in[T_{\max}-\varepsilon,T_{\max}]\). But since \(A\log y-\delta_{0}y^{1/n}\) is bounded from above for \(y>0\) the arithmetic-geometric inequality yields
\[\delta_{0}\operatorname{tr}_{\omega}\omega_{0}+A\log\frac{\omega^{n}}{\omega _{0}^{n}}\geq\delta\left(\frac{\omega_{0}^{n}}{\omega^{n}}\right)^{1/n}+A\log \frac{\omega^{n}}{\omega_{0}^{n}}\geq-C_{1}\]
for uniform constant \(C_{1}>0\). If we choose \(B=C_{1}+An\) we obtain
\[0\geq\left(\frac{\partial}{\partial t}-\Delta_{\omega}\right)Q\geq \operatorname{tr}_{\omega}\omega_{0}>0\]
a contradiction. The desired estimate follows.
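In the argument above we used that \(A\log y-\delta_{0}y^{1/n}\) is bounded from above on \((0,+\infty)\). For completeness, this follows by minimizing: the function \(y\mapsto\delta_{0}y^{1/n}-A\log y\) attains its minimum where \(\frac{\delta_{0}}{n}y^{1/n-1}=\frac{A}{y}\), i.e. at \(y^{1/n}=nA/\delta_{0}\), so that
\[\delta_{0}y^{1/n}-A\log y\geq nA-nA\log\frac{nA}{\delta_{0}}\qquad\text{for all }y>0,\]
which provides the uniform constant \(-C_{1}\) appearing after the arithmetic-geometric inequality.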
**Lemma 5.5**.: _There is a uniform constant \(C>0\) such that on \([0,T_{\max})\times X\) we have_
\[tr_{\omega_{0}}\,\omega(t)\leq Ce^{-C\rho}.\]
Proof.: Set \(\psi=\varphi-\rho+C_{0}\geq 0\). We apply the maximum principle to
\[Q=\log\operatorname{tr}_{\omega_{0}}\omega-A\psi+e^{-\psi}\]
for \(A>0\) to be determined hereafter. The idea of making use of the last term in \(Q\) is due to Phong and Sturm [38] and was used in the context of Chern-Ricci flow [51, 52, 55]. Note that \(e^{-\psi}\in(0,1]\).
It suffices to show that \(Q\) is uniformly bounded from above. Again, it follows from the definition of \(Q\) that \(Q\leq C\) on \([0,T_{\max}-\varepsilon]\times X\) for a uniform \(C>0\). Fixing \(T_{\max}-\varepsilon<T^{\prime}<T_{\max}\), suppose that \(Q\) attains its maximum at some point \((t_{0},x_{0})\in[0,T^{\prime}]\times X\) with \(t_{0}\in[T_{\max}-\varepsilon,T^{\prime}]\). In what follows, we compute at this point. From [52, Prop. 3.1] (also [52, (4.2)]) we have
\[\left(\frac{\partial}{\partial t}-\Delta_{\omega}\right)\log\operatorname{tr}_{\omega_{0}}\omega\leq\frac{2}{(\operatorname{tr}_{\omega_{0}}\omega)^{2}}\mathrm{Re}(g^{\bar{q}k}(T_{0})_{kp}^{p}\partial_{\bar{q}}\operatorname{tr}_{\omega_{0}}\omega)+C\operatorname{tr}_{\omega}\omega_{0},\]
where \((T_{0})_{kp}^{p}\) denote the torsion terms of \(\omega_{0}\). At the maximum point \((x_{0},t_{0})\) of \(Q\) we have \(\partial_{i}Q=0\) hence
\[\frac{1}{\operatorname{tr}_{\omega_{0}}\omega}\partial_{i}\operatorname{tr}_ {\omega_{0}}\omega-A\partial_{i}\psi-e^{-\psi}\partial_{i}\psi=0.\]
Therefore, the Cauchy-Schwarz inequality yields
\[\left|\frac{2}{(\operatorname{tr}_{\omega_{0}}\omega)^{2}}\mathrm{Re}(g^{\bar{q}k}(T_{0})_{kp}^{p}\partial_{\bar{q}}\operatorname{tr}_{\omega_{0}}\omega)\right| \leq\left|\frac{2}{\operatorname{tr}_{\omega_{0}}\omega}\mathrm{Re}\big{(}(A+e^{-\psi})g^{\bar{q}k}(T_{0})_{kp}^{p}\partial_{\bar{q}}\psi\big{)}\right|\] \[\leq e^{-\psi}|\partial\psi|_{\omega}^{2}+C(A+1)^{2}e^{\psi}\frac{\operatorname{tr}_{\omega}\omega_{0}}{(\operatorname{tr}_{\omega_{0}}\omega)^{2}}.\]
for uniform \(C>0\) only depending on the torsion term. It thus follows that, at the point \((x_{0},t_{0})\),
\[0\leq\left(\frac{\partial}{\partial t}-\Delta_{\omega}\right)Q \leq C(A+1)^{2}e^{\psi}\frac{tr_{\omega}\omega_{0}}{(\operatorname{tr}_{ \omega_{0}}\omega)^{2}}+C\operatorname{tr}_{\omega}\omega_{0}\] \[\quad-(A+e^{-\psi})\phi+(A+e^{-\psi})\operatorname{tr}_{\omega}( \omega-(\alpha_{t}+dd^{c}\rho))\] \[\leq C(A+1)^{2}\frac{tr_{\omega}\omega_{0}}{(\operatorname{tr}_{ \omega_{0}}\omega)^{2}}+(C-(A+1)\delta_{0})\operatorname{tr}_{\omega}\omega_ {0}+(A+1)\log\frac{\omega_{0}^{n}}{\omega^{n}} \tag{5.5}\]
where we have used \(\alpha_{t}+dd^{c}\rho\geq\delta_{0}\omega_{0}\). If at \((x_{0},t_{0})\), \((\operatorname{tr}_{\omega_{0}}\omega)^{2}\leq C(A+1)^{2}\) we are done. Otherwise, we choose \(A=\delta_{0}^{-1}(C+2)\). Hence, from (5.5) one gets
\[\operatorname{tr}_{\omega}\omega_{0}\leq C\log\frac{\omega_{0}^{n}}{\omega^{ n}}+C.\]
By Lemma 3.6 we obtain
\[\operatorname{tr}_{\omega_{0}}\omega\leq(\operatorname{tr}_{\omega}\omega_{0})^{n-1}\frac{\omega^{n}}{\omega_{0}^{n}} \leq C\frac{\omega^{n}}{\omega_{0}^{n}}\left(\log\frac{\omega_{0}^{n}}{\omega^ {n}}\right)^{n-1}+C\leq C^{\prime}\]
since \(\omega^{n}/\omega_{0}^{n}\leq C_{0}\) by Lemma 5.4 and \(y\mapsto y|\log y|^{n-1}\) is bounded from above as \(y\to 0\). Thanks to Lemma 5.4 (iii), \(Q\) is bounded from above at its maximum; this finishes the proof.
Proof of Theorem A.: The existence of a Hermitian metric \(\omega_{X}\) satisfying \(v_{+}(\omega_{X})<\infty\) holds when \(\dim X=2\), and the argument generalizes to any \(n\)-manifold \(X\) admitting such a metric. Let \(K\subset\Omega\) be any compact set. It follows from Lemma 5.4 and Lemma 5.5 that on \(K\times[0,T_{\max})\),
\[C_{K}^{-1}\omega_{0}\leq\omega(t)\leq C_{K}\omega_{0}.\]
Applying the local higher order estimates of Gill [21, Sect. 4], we obtain uniform \(\mathcal{C}^{\infty}\) estimates for \(\omega(t)\) on compact subsets of \(\Omega\). We proceed exactly as in [52, Theorem 1.6] to obtain the convergence. This finishes the proof.
## 6. The Chern-Ricci flow on varieties with log terminal singularities
In this section we extend our previous analysis to the case of compact complex varieties with _mild singularities_. We refer the reader to [18, Sect. 5] for a brief introduction to complex analysis on mildly singular varieties.
We assume here that \(Y\) is a \(\mathbb{Q}\)-Gorenstein variety, i.e., \(Y\) is a normal complex space such that its canonical divisor \(K_{Y}\) is \(\mathbb{Q}\)-Cartier. We denote the singular set of \(Y\) by \(Y_{\text{sing}}\) and let \(Y_{\text{reg}}:=Y\setminus Y_{\text{sing}}\). Given a log resolution of singularities \(\pi:X\to Y\) (which may and will always be chosen to be an isomorphism over \(Y_{\text{reg}}\) ), there exists a unique (exceptional) \(\mathbb{Q}\)-divisor \(\sum a_{i}E_{i}\) with simple normal crossings (snc for short) such that
\[K_{X}=\pi^{*}K_{Y}+\sum_{i}a_{i}E_{i},\]
The coefficients \(a_{i}\in\mathbb{Q}\) are called _discrepancies_ of \(Y\) along \(E_{i}\).
**Definition 6.1**.: We say that \(Y\) has _log terminal_ (_lt_ for short) singularities if and only if \(a_{i}>-1\) for all \(i\).
The following definition of _adapted measure_ was introduced in [18, Sect. 6]:
**Definition 6.2**.: Let \(h\) be a smooth hermitian metric on the \(\mathbb{Q}\)-line bundle \(\mathcal{O}_{Y}(K_{Y})\). The corresponding adapted measure \(\mu_{Y,h}\) on \(Y_{\text{reg}}\) is locally defined by choosing a nowhere vanishing section \(\sigma\) of \(mK_{Y}\) over a small open set \(U\) and setting
\[\mu_{Y,h}:=\frac{(i^{mn^{2}}\sigma\wedge\bar{\sigma})^{1/m}}{|\sigma|_{h^{m}} ^{2/m}}.\]
The point of the definition is that the measure \(\mu_{Y,h}\) does not depend on the choice of \(\sigma\), so is globally defined. The arguments above show that \(Y\) has lt singularities if and only if \(\mu_{Y,h}\) has finite total mass on \(Y\), in which case we can consider it as a Radon measure on the whole of \(Y\). Then \(\chi=dd^{c}\log\mu_{Y,h}\) is a well-defined smooth closed \((1,1)\)-form on \(Y\).
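For completeness, the independence of the choice of \(\sigma\) asserted above is a direct computation: if \(\sigma^{\prime}=f\sigma\) is another nowhere vanishing section of \(mK_{Y}\) over \(U\), with \(f\in\mathcal{O}^{*}(U)\), then
\[\frac{(i^{mn^{2}}\sigma^{\prime}\wedge\bar{\sigma}^{\prime})^{1/m}}{|\sigma^{\prime}|_{h^{m}}^{2/m}}=\frac{|f|^{2/m}(i^{mn^{2}}\sigma\wedge\bar{\sigma})^{1/m}}{|f|^{2/m}|\sigma|_{h^{m}}^{2/m}}=\frac{(i^{mn^{2}}\sigma\wedge\bar{\sigma})^{1/m}}{|\sigma|_{h^{m}}^{2/m}},\]
so the local definitions glue to a measure on \(Y_{\text{reg}}\).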
Given a Hermitian form \(\omega_{Y}\) on \(Y\), there exists a unique hermitian metric \(h=h(\omega_{Y})\) of \(K_{Y}\) such that
\[\omega_{Y}^{n}=\mu_{Y,h}.\]
We have the following definition.
**Definition 6.3**.: The _Ricci curvature form_ of \(\omega_{Y}\) is \(\operatorname{Ric}(\omega_{Y}):=-dd^{c}\log h\).
We recall the _slope_ of a quasi-psh function \(\phi\) at \(y\) in the sense of [4]. Choosing local generators \((f_{j})\) of the maximal ideal \(\mathfrak{m}_{y}\) of \(\mathcal{O}_{Y,y}\), we define
\[s(\phi,y)=\sup\{s\geq 0:\phi\leq s\log\sum|f_{j}|+O(1)\}.\]
Note that this definition is independent of the choice of \((f_{j})\). By [4, Theorem A.2] there is \(C>0\) such that for any log resolution \(\pi:X\to Y\),
\[\nu(\phi\circ\pi,E)\leq Cs(\phi,y)\]
with \(E\) a prime divisor lying above \(y\). In particular, the Lelong numbers of \(\phi\circ\pi\) are small provided \(s(\phi,y)\) is sufficiently small at all points \(y\in Y\).
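As a basic example (an elementary check, not taken from [4]): if \(y\in Y_{\text{reg}}\) and \(\phi=c\log\|z-y\|+O(1)\) in local coordinates centered at \(y\), then choosing the \(f_{j}\) to be the coordinate functions, \(\sum_{j}|f_{j}|\) is comparable to \(\|z-y\|\) near \(y\) and one finds
\[s(\phi,y)=c.\]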
Applying the analysis of the previous sections, we obtain the existence of the Chern-Ricci flow on varieties with log terminal singularities. This extends a result of the author [9, Theorem E].
**Theorem 6.4**.: _Let \(Y\) be a compact complex variety with log terminal singularities. Assume that \(\theta_{0}\) is a Hermitian metric such that_
\[T_{\max}:=\sup\{t>0:\ \exists\ \psi\in\mathcal{C}^{\infty}(Y)\ \text{such that}\ \theta_{0}-t\text{Ric}(\theta_{0})+dd^{c}\psi>0\}>0.\]
_Assume that \(S_{0}=\theta_{0}+dd^{c}\phi_{0}\) is a positive (1,1)-current with small slopes. Then there exists a family \((\omega_{t})_{t\in[0,T_{\max})}\) of positive (1,1)-currents on \(Y\) starting at \(S_{0}\) such that_
1. \(\omega_{t}=\theta_{0}-t\text{Ric}(\theta_{0})+dd^{c}\varphi_{t}\) _are positive (1,1) currents;_
2. \(\omega_{t}\to S_{0}\) _weakly as_ \(t\to 0^{+}\)_;_
3. _for each_ \(\varepsilon>0\) _there exists a Zariski open set_ \(\Omega_{\varepsilon}\) _such that on_ \([\varepsilon,T_{\max})\times\Omega_{\varepsilon}\)_,_ \(\omega\) _is smooth and_ \[\frac{\partial\omega}{\partial t}=-\text{Ric}(\omega).\]
Proof.: It is classical that solving the (weak) Chern-Ricci flow is equivalent to solving a complex Monge-Ampere flow. Let \(\chi\) be a closed smooth (1,1) form that represents \(c_{1}^{\text{BC}}(K_{Y})\). Given \(T\in(0,T_{\max})\), there is \(\psi_{T}\in\mathcal{C}^{\infty}(Y)\) such that \(\theta_{0}-T\text{Ric}(\theta_{0})+dd^{c}\psi_{T}>0\). We set, for \(t\in[0,T]\),
\[\hat{\theta}_{t}:=\theta_{0}+t\chi,\ \text{with}\ \chi=-\text{Ric}(\theta_{0}) +dd^{c}\frac{\psi_{T}}{T}\]
which defines an affine path of Hermitian forms. Since \(\chi\) is a smooth representative of \(c_{1}^{\text{BC}}(K_{Y})\), one can find a smooth metric \(h\) on the Q-line bundle \(\mathcal{O}_{Y}(K_{Y})\) with curvature form \(\chi\). We obtain \(\mu_{Y,h}\) the adapted measure corresponding to \(h\). The Chern-Ricci flow is equivalent to the following complex Monge-Ampere flow
\[(\hat{\theta}_{t}+dd^{c}\phi_{t})^{n}=e^{\dot{\phi}_{t}}\mu_{Y,h}. \tag{6.1}\]
Now let \(\pi:X\to Y\) be a log resolution of singularities. We have seen that the measure
\[\mu:=\pi^{*}\mu_{Y,h}=fdV\quad\text{where}\ \ f=\prod_{i}|s_{i}|^{2a_{i}}\]
has poles (corresponding to \(a_{i}<0\)) or zeroes (corresponding to \(a_{i}>0\)) along the exceptional divisors \(E_{i}=(s_{i}=0)\), where \(dV\) is a smooth volume form. Passing to the resolution, the flow (6.1) becomes
\[\frac{\partial\varphi}{\partial t}=\log\left[\frac{(\theta_{t}+dd^{c}\varphi _{t})^{n}}{\mu}\right] \tag{6.2}\]
where \(\theta_{t}:=\pi^{*}\hat{\theta}_{t}\) and \(\varphi:=\pi^{*}\phi\). Since \((\hat{\theta}_{t})_{t\in[0,T]}\) is a smooth family of Hermitian forms, it follows that the family of semi-positive forms \([0,T]\ni t\mapsto\theta_{t}\) satisfies all our requirements. We also set \(\theta:=\pi^{*}\theta_{0}\); the latter is smooth, semi-positive and big, but no longer Hermitian. We fix a \(\theta\)-psh function \(\rho\) with analytic singularities along a divisor \(E=\pi^{-1}(Y_{\text{sing}})\) such that \(\theta+dd^{c}\rho\geq 2\delta\omega_{X}\) with \(\delta>0\). If we set \(\psi^{+}=\sum_{a_{i}>0}2a_{i}\log|s_{i}|\), \(\psi^{-}=\sum_{a_{i}<0}-2a_{i}\log|s_{i}|\), we observe that \(\psi^{\pm}\) are quasi-psh functions with logarithmic poles along the exceptional divisors, smooth on \(X\setminus\text{Exc}(\pi)=\pi^{-1}(Y_{\text{reg}})\), and \(e^{-\psi^{-}}\in L^{p}(dV)\) for some \(p>1\). Since the Lelong numbers \(\nu(\varphi_{0},x)\) are sufficiently small, the assumption \(p^{*}/2c(\varphi_{0})<T_{\max}\) holds by Skoda's integrability theorem. The result therefore follows from Theorem B. |
2301.04592 | A Model for Gradual Phase Heating Driven by MHD Turbulence in Solar
Flares | Coronal flare emission is commonly observed to decay on timescales longer
than those predicted by impulsively-driven, one-dimensional flare loop models.
This discrepancy is most apparent during the gradual phase, where emission from
these models decays over minutes, in contrast to the hour or more often
observed. Magnetic reconnection is invoked as the energy source of a flare, but
should deposit energy into a given loop within a matter of seconds. Models
which supplement this impulsive energization with a long, persistent ad hoc
heating have successfully reproduced long-duration emission, but without
providing a clear physical justification. Here we propose a model for extended
flare heating by the slow dissipation of turbulent Alfv\'en waves initiated
during the retraction of newly-reconnected flux tubes through a current sheet.
Using one-dimensional simulations, we track the production and evolution of MHD
wave turbulence trapped by reflection from high-density gradients in the
transition region. Turbulent energy dissipates through non-linear interaction
between counter-propagating waves, modeled here using a phenomenological
one-point closure model. AIA EUV light curves synthesized from the simulation
were able to reproduce emission decay on the order of tens of minutes. We find
this simple model offers a possible mechanism for generating the extended
heating demanded by observed coronal flare emissions self-consistently from
reconnection-powered flare energy release. | William Ashfield IV, Dana Longcope | 2023-01-11T17:38:46Z | http://arxiv.org/abs/2301.04592v1 | # A Model for Gradual Phase Heating Driven by MHD Turbulence in Solar Flares
###### Abstract
Coronal flare emission is commonly observed to decay on timescales longer than those predicted by impulsively-driven, one-dimensional flare loop models. This discrepancy is most apparent during the gradual phase, where emission from these models decays over minutes, in contrast to the hour or more often observed. Magnetic reconnection is invoked as the energy source of a flare, but should deposit energy into a given loop within a matter of seconds. Models which supplement this impulsive energization with a long, persistent _ad hoc_ heating have successfully reproduced long-duration emission, but without providing a clear physical justification. Here we propose a model for extended flare heating by the slow dissipation of turbulent Alfven waves initiated during the retraction of newly-reconnected flux tubes through a current sheet. Using one-dimensional simulations, we track the production and evolution of MHD wave turbulence trapped by reflection from high-density gradients in the transition region. Turbulent energy dissipates through non-linear interaction between counter-propagating waves, modeled here using a phenomenological one-point closure model. AIA EUV light curves synthesized from the simulation were able to reproduce emission decay on the order of tens of minutes. We find this simple model offers a possible mechanism for generating the extended heating demanded by observed coronal flare emissions self-consistently from reconnection-powered flare energy release.
+
Footnote †: journal: ApJ
## 1 Introduction
Solar flares are typically characterized by the increase in radiation observed during their onset. Accepted as the result of energy released through magnetic reconnection, the sudden rise in emission across different wavelengths is considered to be an indication of impulsive plasma heating. Following the conclusion that flare loops convert magnetic energy on the order of the Alfvenic transit time across a given loop (Kopp & Pneuman, 1976; Priest & Forbes, 2002), numerical flare loop models aiming to study the impulsive phase have commonly used energy source terms that reflect these short timescales. Many of these models have also been constrained by observations to infer energy deposition rates -- either through hard X-rays (Kowalski et al., 2017; Graham et al., 2020) or UV
emission (Longcope & Bradshaw, 2010; Qiu et al., 2012; Ashfield et al., 2022) -- where the heating duration was found to last up to tens of seconds.
Although impulsively-driven models have been a successful tool for explaining a number of flare phenomena, their failure to reproduce gradual phase emission is widely recognized. Observations in soft X-rays and EUV have shown the gradual decay of hot coronal plasma to last between tens of minutes to several hours following the rise phase. The cooling of coronal plasma, marked by the successive peaks of emission light curves from increasingly cooler ion species, happens through a combination of heat conduction (Culhane et al., 1970, 1994) and radiative losses (Aschwanden & Alexander, 2001; Vrsnak et al., 2006). Compared to the characteristic cooling timescales modeled by individual flare loops (e.g. minutes, Kerr et al., 2020), the sustained duration of observed coronal emissions is considerably longer than it would be if regulated by cooling alone.
The discrepancy between modeled and observed decay rates during the gradual phase has led many to believe that additional heating is required beyond the impulsive phase to offset the effects of radiative losses (Withbroe, 1978; Svestka, 1989; Ryan et al., 2013; Sun et al., 2013). A popular interpretation of this heating has been the successive energization of multiple individual flux tubes within a flare arcade -- so-called multi-loop models (Dere & Cook, 1979; Hori et al., 1997; Warren, 2006; Reep et al., 2022) -- with each loop emulating the prescribed impulsive energy release. While these models are consistent with observation, several investigations were unable to rectify the lack of sustained coronal emissions (Reeves & Warren, 2002; Qiu et al., 2012; Liu et al., 2013; Kerr et al., 2020).
In response to this persistent discrepancy, Qiu & Longcope (2016) recently developed a model where individual loops were driven with a two-part heating profile consisting of an impulsive energy release followed by a prolonged, low-rate heating. This slow-tail heating, lasting on the order of 20 minutes, was able to successfully forward-model key characteristics of EUV lightcurves measured using the Atmospheric Imaging Assembly (AIA; Lemen et al., 2012), including long cooling timescales. While motivated by flare UV footpoint emissions that display similar, two-phase behavior (see also Cheng et al., 2012; Qiu et al., 2013; Zhu et al., 2018), the heating profiles introduced in this work were only ad hoc. Physical justifications behind extended loop heating were not provided outside of a few, potentially viable scenarios.
Several mechanisms for producing such extended heating rates alluded to in Qiu & Longcope (2016) and other investigations are contingent upon post-reconnection flare loops retracting under magnetic tension and releasing energy (Forbes & Acton, 1996; Linton & Longcope, 2006). One such speculation is the continual heating from slow-shocks generated by the resistance loops receive from earlier formed loops that exist lower in the flare arcade (Cargill & Priest, 1982, 1983). Another related mechanism is the excitation of MHD waves by reconnection outflows (Aschwanden, 2006) that then decay and heat plasma within a post-reconnection loop over timescales comparable to slow-tail heating (Wang, 2011).
Central to both of the scenarios described above is the interaction between retracting flare loops and their surroundings. Previous investigations of outflows driven by magnetic tension have modeled flux tubes moving through an ideal current sheet unobstructed, and have therefore retracted at the local Alfven speed (Forbes & Priest, 1983; Longcope & Klimchuk, 2015). Observed outflows -- inferred from the motion of supra-arcade downflows (SADs; McKenzie & Hudson, 1999; Savage & McKenzie, 2011; Reeves et al., 2017) interpreted to be the wakes behind retracting flux tubes (Savage et al., 2012)
-- however, move at fractions of the Alfven speed. In the case of a non-ideal current sheet, such as that with a high-\(\beta\) plasma (Scott et al., 2016), a retracting loop would likely undergo a loss of momentum and speed as it imparted work on the surrounding medium. This interaction, modeled as a simple aerodynamic drag force, was recently found to slow outflow speeds to within their observed ranges (Unverferth & Longcope, 2021).
The interpretation of SADs as the wakes of retracting flux tubes experiencing a drag force would also suggest the generation of turbulence, given the aerodynamic analogy. In fact, evidence for turbulence has been reported in current sheets containing SADs, inferred through measurements of non-thermal line broadening (Ciaravella & Raymond, 2008; Warren et al., 2018; Cheng et al., 2018) and local correlation tracking (McKenzie, 2013). Separate observations using AIA and the Extreme-ultraviolet Imaging Spectrometer (EIS; Culhane et al., 2007) have also shown turbulence to coexist in regions with hot, \(\sim 30\,\)MK downward moving loops, but did not make reference to SADs (Imada et al., 2013; Doschek et al., 2014). Using observations of several strong flares, Larosa & Moore (1993) found that the interactions between individual flux tubes and interactions with a diffusive current sheet are likely to develop outflows with a turbulent structure. Moreover, they also found MHD turbulence to be capable of rapidly thermalizing bulk kinetic energy via a turbulent cascade, thus becoming a mechanism for impulsive phase energy release. It was further suggested by Jiang et al. (2006) that plasma wave turbulence could produce heating into the decay phase, indicated by the long cooling times of loop-top HXR sources. Although the interconnected behavior of retracting flux loops, turbulence, and extended flare heating has been well established by observation, it has yet to be modeled in detail.
The present work aims to develop a self-consistent mechanism for gradual-phase energy release. Motivated by the observations of SADs, we create a relatively simple model for the excitation of MHD turbulence along a thin flux tube (TFT) as it retracts through a current sheet. The retracting tube experiences a drag force, which converts energy from the tube into an internal population of turbulent Alfven waves that then propagate along the tube and reflect off high-density gradients in the transition region. Turbulent energy is dissipated through the non-linear interaction between counter-propagating waves, thus creating a local heat source within the tube. A model for drag with turbulent transport is outlined in Section 2. The behavior of our model is then demonstrated using an initial reference simulation in Section 3. Using the simulation results, synthetic AIA EUV emissions are created by which our model can be qualitatively compared to observation. The parameters critical to the duration of turbulent energy release -- the fraction of drag power converted to turbulent Alfven waves and the energy correlation length of the propagating waves -- are explored in Section 4. A final simulation run with a set of optimized parameters was found to extend the duration of EUV emission past 40 minutes. The duration of the heating produced from turbulent dissipation was also found to persist long after the retraction had ended. The results presented in this paper point, for the first time, to a physical mechanism that can produce the late-phase heating required to sustain coronal flare emission.
## 2 Flare Model With Alfven Wave Turbulence
In order to model the energy released by reconnection and its conversion to MHD turbulence, we employ the TFT equations described by Longcope et al. (2009). The model assumes localized reconnection has already occurred within a small diffusion region, within the current sheet, to create a closed magnetic flux tube. The moment following reconnection is illustrated in Figure 1. Two
layers of equilibrium magnetic field are separated by a current sheet and have directions differing by a shear angle \(\Delta\theta\). The newly reconnected loop, shown in grey, is embedded within the current sheet that lies in the \(x\)-\(z\) plane. It is at this point the TFT model begins. Mechanisms of the reconnection process itself are not considered here.
Once the bent tube is initialized, it retracts through the current sheet under a magnetic tension force. The internal magnetic pressure of the tube is balanced by the magnetic pressure of the external magnetic fields separated by the current sheet, setting the magnetic field strength inside the tube. In the present case, we assume the external magnetic fields to be uniform, such that the magnetic field \(B\) is also uniform along the tube.
The TFT equations govern the evolution of the axis of a flux tube, \(\mathbf{r}(\ell,t)\), where \(\ell\) is the parameterized length coordinate of the curved tube. The fluid velocity of the plasma confined within the tube, \(\mathbf{v}=d\mathbf{r}/dt\), is advanced according to the momentum equation (Longcope & Klimchuk, 2015)
\[\rho\frac{d\mathbf{v}}{dt}=\Bigg{(}\frac{B^{2}}{4\pi}-p\Bigg{)}\frac{\partial \mathbf{\hat{l}}}{\partial\ell}-\mathbf{\hat{l}}\frac{\partial p}{\partial \ell}+\rho\mathbf{g}+\tfrac{4}{3}\mu\frac{\partial}{\partial\ell}\Bigg{(} \mathbf{\hat{l}}\cdot\frac{\partial\mathbf{v}}{\partial\ell}\Bigg{)}, \tag{1}\]
where \(p=\rho k_{B}T/\bar{m}\) is the plasma pressure and \(\bar{m}=0.593\)\(m_{p}\) is the mean particle mass of the plasma, assumed to be fully ionized with a coronal abundance. For a fluid element along the tube with differential length \(d\ell\) and mass per unit flux \(dm\), the mass density is given by \(\rho=B(dm/\,d\ell)\).
Outside of the gravitational force \(\rho\mathbf{g}\), which acts in the downward direction, forces acting on a fluid element in Equation (1) are dictated by the tangent vector, \(\mathbf{\hat{l}}=\partial\mathbf{r}/\partial\ell\). The first term on the rhs describes the magnetic tension force along the tube's curvature vector, \(\partial\mathbf{\hat{l}}/\partial\ell\), perpendicular to the tube's axis. The second is the gas pressure gradient force directed along the tube. Viscous interactions between fluid elements are described by the final term, with \(\mu=0.012\)\(\kappa_{\rm sp}(T)/c_{v}\) being the temperature-dependent dynamical viscosity coefficient and the classical Spitzer-Harm thermal conductivity being \(\kappa_{\rm sp}=10^{-6}\)\(T^{5/2}\), in conventional cgs units.

Figure 1: Schematic of a reconnected flux tube embedded in a current sheet. (a) Face-on view of the current sheet. External magnetic fields on either side of the current sheet — the blue and red-dashed lines — are skewed according to the shear angle \(\Delta\theta\). The grey tube shows the geometry of the newly formed flare loop immediately following reconnection. (b) Perspective orientation illustrating the deflection of the external magnetic fields around the flux tube, with the tube sliced through its apex. The grey area corresponds to the current sheet shown in (a). Only the \(\mathbf{\hat{z}}\)-component of the external magnetic fields is shown in the latter.
The energy of the tube is advanced according to the temperature of a fluid element
\[c_{v}\rho\frac{dT}{dt}=-p\Bigg{(}\hat{\mathbf{l}}\cdot\frac{\partial\mathbf{v} }{\partial\ell}\Bigg{)}+\tfrac{4}{3}\mu\Bigg{(}\hat{\mathbf{l}}\cdot\frac{ \partial\mathbf{v}}{\partial\ell}\Bigg{)}^{2}-n_{e}^{2}\Lambda(T)+\frac{ \partial}{\partial\ell}\left(\kappa\frac{\partial T}{\partial\ell}\right), \tag{2}\]
where \(c_{v}=3k_{B}/2\bar{m}\) is the specific heat and \(n_{e}=0.874\)\((\rho/m_{p})\) is the electron number density. The first term on the rhs constitutes the adiabatic compression done on the gas. This term is followed by viscous heating, which arises from the loss of kinetic energy in Equation (1). Optically thin radiative losses are given by the third term, where the radiative loss function \(\Lambda(T)\) is taken from the output of CHIANTI 7.1 (Landi et al., 2012). The final term describes the field-aligned thermal conduction. A flux limiter is included in the model, such that the thermal conductivity is restricted to the theoretical electron free-streaming limit at sufficiently large temperature gradients (Longcope & Klimchuk, 2015).
### Drag Force and Alfven Wave Turbulence
The TFT model describes the evolution of flare loops retracting under a tension force brought about by a change in magnetic topology. Previous studies assumed the retraction occurs without any interaction with the background current sheet, and therefore saw typical outflow velocities reaching the local Alfven speed on the order of \(3\,\mathrm{Mm}\,\mathrm{s}^{-1}\) (Longcope et al., 2018; Unverferth & Longcope, 2020). Alfvenic outflow speeds are common to all idealized models of magnetic reconnection, but are not supported by much observational evidence. Supra-arcade downflows (SADs; McKenzie & Hudson, 1999) are often taken to be signatures of post-reconnection retraction, but always seem to move well below the local Alfven speed (Savage & McKenzie, 2011; Savage et al., 2012).
Based on observations of sub-Alfvenic outflow speeds it has been proposed that the retracting flux must somehow interact with the surrounding plasma, possibly by deforming the flux outside the sheet or entraining plasma (see Figure 1b and discussions in Linton & Longcope, 2006; Scott et al., 2013). Most forms of interaction would leave the surrounding plasma with greater magnetic or kinetic energy, at the expense of the retracting tube. This would appear as some kind of drag force on the retracting flux, removing some of its energy and reducing the retraction speed. Unverferth & Longcope (2021) investigated this possibility, introducing a drag force modeled using a high Reynolds number aerodynamic drag (Choudhuri & Gilman, 1987)
\[\mathbf{f}_{d}=-D\,\rho\,\,|\mathbf{v}_{\perp}|\,\,\mathbf{v}_{\perp}. \tag{3}\]
Here \(\mathbf{v}_{\perp}=\mathbf{v}-\hat{\mathbf{l}}(\hat{\mathbf{l}}\cdot\mathbf{v})\) is the component of fluid velocity perpendicular to the flux tube and \(D\) is a constant proportional to the traditional drag coefficient. Because the interaction between the flux tube and the current sheet is likely more intricate than the conventional drag force exerted by a neutral fluid on a rigid body, \(D\) is taken to be a free parameter that captures these unknown complexities (Unverferth & Longcope, 2020). This force, per unit volume, is added to the momentum equation (1).
The force exerted by the surrounding plasma ultimately opposes the acceleration of the loop by transferring momentum and energy away. Energy is therefore removed from the retracting loop at a rate of
\[P_{d}=\rho^{-1}\,\mathbf{v}\cdot\mathbf{f}_{d}\leq 0, \tag{4}\]
which is always negative. The energy is not really lost, but must appear in some form in the surrounding plasma. We assume here that it takes the form of MHD turbulence, of which some fraction remains on the tube, but at length scales heretofore unresolved. We therefore assume a fraction of \(P_{d}\) takes the form of unresolved Alfven wave turbulence occurring on the tube itself, even as it retracts.
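For concreteness, the drag force and the associated energy-loss rate can be transcribed directly from Equations (3) and (4). The short Python sketch below is illustrative only (it is not part of the PREFT code, and the function name, unit system, and toy values are assumptions); it decomposes the velocity of a fluid element, forms the drag force density, and returns the resulting \(P_{d}\), which is never positive.

```python
import numpy as np

# Minimal transcription of Eqs. (3)-(4); any consistent unit system may be used
# (e.g. Mm and s, matching the wave-energy units quoted later in the text).
def drag_force_and_power(v, l_hat, rho, D):
    """Return the drag force density f_d (Eq. 3) and energy-loss rate P_d (Eq. 4)."""
    v_perp = v - l_hat * np.dot(l_hat, v)              # velocity perpendicular to the tube axis
    f_d = -D * rho * np.linalg.norm(v_perp) * v_perp   # opposes the perpendicular motion
    P_d = np.dot(v, f_d) / rho                         # energy lost per unit mass; P_d <= 0
    return f_d, P_d

# Toy usage: an element moving at 0.5 Mm/s perpendicular to its axis,
# with D = 50 Mm^-1 as adopted in Section 3.2.
f_d, P_d = drag_force_and_power(np.array([0.0, 0.0, -0.5]),
                                np.array([1.0, 0.0, 0.0]), rho=1.0, D=50.0)
```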
### Evolution of Alfven Wave Turbulence
Gaining inspiration from turbulent transport models of the solar wind (e.g. Marsch & Tu, 1989; Matthaeus et al., 1999; Dmitruk et al., 2001; Verdini & Velli, 2007; Lionello et al., 2014), the equations used for the evolution of turbulent wave energy in this work are developed by first decomposing the velocity and magnetic fields into large-scale contributions, denoted by capitals, and small-scale perturbations, denoted by lower case:
\[{\bf v}={\bf U}+{\bf u} \tag{5}\] \[{\bf B}={\bf B}_{0}+{\bf b}=B\hat{\bf l}+{\bf b}. \tag{6}\]
The perturbations can then be collected in terms of the Elsasser variables, \({\bf z}_{\pm}\equiv{\bf u}\pm{\bf b}/\sqrt{4\pi\rho}\).
The evolution of the fluctuations is expressed by the scale-separated, linearized MHD equations in their Elsasser representation (Zhou & Matthaeus, 1990a, b; Zank et al., 2011)
\[\frac{D{\bf z}_{\pm}}{Dt}=\pm({\bf V}_{A}\cdot\nabla){\bf z}_{\pm}+{{1\over 2 }}({\bf z}_{\mp}-{\bf z}_{\pm})\nabla\cdot\bigg{(}\frac{{\bf U}}{2}\pm{\bf V}_ {A}\bigg{)}. \tag{7}\]
Here, \({\bf V}_{A}={\bf B}_{0}/\sqrt{4\pi\rho}\) is the local Alfven speed along the loop. We assume the large-scale fields of \({\bf U}\) and \({\bf B}\) vary slowly across the flux tube and have therefore ignored the gradient terms in Equation (7). The source and sink terms for the Elsasser variables are also temporarily neglected here and are instead discussed in detail below.
The aggregate energy densities of the turbulent Alfven waves are then found by spatially averaging over the small-scale fluctuations. The energy per unit mass of the two turbulent waves is given by:
\[w_{\pm}={{1\over 4}}\langle{\bf z}_{\pm}\cdot{\bf z}_{\pm}\rangle \tag{8}\]
with the total energy density of the perturbations being \(w_{\rm tot}=w_{+}+w_{-}=\langle u^{2}/2\rangle+\langle b^{2}/8\pi\rho\rangle\). By averaging over the perturbations, we remove their explicit dependence from the system and instead model their collective evolution implicitly.
Taking the inner product of \({\bf z}_{\pm}\) with Equation (7) for \({\bf z}_{\pm}\), respectively, gives the expression for each Elsasser energy \(w_{\pm}\):
\[\frac{Dw_{\pm}}{Dt}=\pm({\bf V}_{A}\cdot\nabla)w_{\pm}-w_{\pm}\nabla\cdot \bigg{(}\frac{{\bf U}}{2}\pm{\bf V}_{A}\bigg{)}+R_{\pm}. \tag{9}\]
Here, \(R_{\pm}\) is a term proportional to \(\langle{\bf z}_{+}\cdot{\bf z}_{-}\rangle\) that accounts for wave reflection. Analogous to the 'mixing' effects described in Zhou & Matthaeus (1990b), the interaction between opposite Elsasser variables allows for energy to be redistributed between the two populations, such that \(w_{\pm}\) will generate counter-propagating \(w_{\mp}\). This process is linked to large-scale gradients in the system, and is likely to be most effective where \(\partial{\rm ln}\rho/\partial\ell\) is large (Hollweg, 1981; Velli, 1993). Computing \(R_{\pm}\) self-consistently
would require additional equations beginning with one for the evolution of \(\langle\mathbf{z}_{+}\cdot\mathbf{z}_{-}\rangle\). In the interest of simplicity, we forego this approach and instead set \(R_{\pm}=0\) and account for wave reflection with a boundary condition described below.
Following the TFT model, we parameterize the wave energy equations according to length coordinate \(\ell\). As the large-scale fields are taken to vary only in the direction parallel to \(\hat{\mathbf{l}}\), divergences in \(\mathbf{U}\) and \(\mathbf{V}_{A}\) reduce to spatial derivatives in \(\ell\). The full turbulent transport equations in their conservative forms are then
\[\frac{dw_{\pm}}{dt}=\pm v_{A}\frac{\partial w_{\pm}}{\partial\ell}-\tfrac{1}{2 }w_{\pm}\frac{\partial v_{\parallel}}{\partial\ell}\mp w_{\pm}\frac{\partial v _{A}}{\partial\ell}+S_{\pm}+NL_{\pm}, \tag{10}\]
for \(v_{\parallel}=\hat{\mathbf{l}}\cdot\mathbf{v}\). Because the terms in the above expression are dependent only on system variables aligned with the mean magnetic field, Equation (10) describes the energies of small-scale Alfven waves propagating along the field in the \(\pm\hat{\mathbf{l}}\) direction. Propagation of the wave energies is described by the first three terms on the rhs. The first is the advective term, showing how the wave energies propagate at the Alfven speed. The second and third terms describe work done on the waves by compression of the plasma and magnetic pressures against that of the wave, respectively.
### Sources and Sinks of Alfven Wave Turbulence
The last two terms of Equation (10), absent from Equation (9), are added to describe the source, \(S_{\pm}\), and sink, \(NL_{\pm}\), of the turbulent energies, respectively. The source, as described above, is a fraction, \(f_{\mathrm{turb}}\), of the power the tube loses through drag as it retracts through the current sheet described by Equation (4):
\[S_{\pm}=-\tfrac{1}{2}\,f_{\mathrm{turb}}\,P_{d}. \tag{11}\]
Here the \(\tfrac{1}{2}\) prefactor assumes the input energy is divided equally between the counter-propagating wave species.
The loss of turbulent energy in the system is modeled according to the simple phenomenological decay rate used in numerous MHD turbulence investigations (e.g. Hossain et al., 1995; Matthaeus et al., 1999; Dmitruk et al., 2001; Zank et al., 2011). Following one-point closure models for hydrodynamic turbulence first described by de Karman & Howarth (1938), the energy loss arises from the non-linear interaction between the counter-propagating waves. This interaction is assumed to produce a cascading spectrum of turbulent Alfven waves -- defined by a set of wave numbers that correspond to the cascade of energy-containing eddies into increasingly smaller scales -- that ultimately results in the dissipation of turbulent energy.
The dissipation rate for the Elsasser energies is expressed as
\[NL_{\pm}=-\frac{w_{\pm}\sqrt{w_{\mp}}}{\lambda_{\perp}}, \tag{12}\]
where \(\lambda_{\perp}\) is the single similarity length scale that characterizes the transverse dimensions of the energy-containing eddies for both the rightward and leftward propagating waves. As such, \(\lambda_{\perp}\) is the characteristic length scale that couples the non-linear spectral transfer between the counter-propagating modes, and can thus be thought to correspond to the correlation length between the two energy populations (Batchelor, 1953). Although turbulence is typically described by a range of length scales (i.e. wave numbers), this simple one-point model assumes the decay can be represented by a single non-linear term.
We assume that all energy lost from the turbulence is ultimately thermalized and appears as a heat source for the plasma. This is achieved simply by adding the term
\[H_{\rm turb}=-\rho\,\Big{(}\,NL_{+}+NL_{-}\Big{)}\geq 0, \tag{13}\]
to the rhs of Equation (2). With this additional term, our model establishes a connection, albeit indirect, between the energy lost to drag and a heating rate that can be used to explain long-duration EUV emission seen during the gradual phase of flares.
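To make the interplay of Equations (10)-(13) concrete, the following sketch performs one explicit update of the two wave-energy populations on a uniform grid. It is a schematic illustration only, assuming simple centered differences, a forward-Euler step, and the array names shown; the actual PREFT discretization is Lagrangian and uses a CFL-limited time step (Section 3.3).

```python
import numpy as np

# Schematic forward-Euler update of the Elsasser energies (Eq. 10) with the
# drag-powered source (Eq. 11), non-linear sink (Eq. 12), and heating (Eq. 13).
def step_wave_energy(w_p, w_m, v_A, v_par, P_d, rho, f_turb, lam_perp, dl, dt):
    ddl = lambda f: np.gradient(f, dl)            # derivative along the length coordinate

    S = -0.5 * f_turb * P_d                       # Eq. (11): drag power split equally
    NL_p = -w_p * np.sqrt(w_m) / lam_perp         # Eq. (12): non-linear dissipation
    NL_m = -w_m * np.sqrt(w_p) / lam_perp

    dw_p = +v_A * ddl(w_p) - 0.5 * w_p * ddl(v_par) - w_p * ddl(v_A) + S + NL_p
    dw_m = -v_A * ddl(w_m) - 0.5 * w_m * ddl(v_par) + w_m * ddl(v_A) + S + NL_m

    H_turb = -rho * (NL_p + NL_m)                 # Eq. (13): dissipated wave energy heats plasma
    return (np.maximum(w_p + dt * dw_p, 0.0),     # clip to keep energies non-negative
            np.maximum(w_m + dt * dw_m, 0.0),
            H_turb)
```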
The TFT equations are further modified by considering the effects of turbulence on the tube's thermal conductivity. As magnetic field perturbations will lengthen the field lines along which thermal electrons carry heat, the thermal conductivity will consequently decrease. We account for this suppression by modifying the classical Spitzer-Harm conductivity
\[\kappa_{\rm sp}\to\kappa_{\rm sp}^{\rm(turb)}=\frac{\kappa_{\rm sp}}{1+\, \rho(w_{+}+w_{-})/B^{2}}, \tag{14}\]
such that the turbulent energy will directly impede the heat flux in the tube. Although this is a relatively simple addition to the model, it attempts to address the notion of heat conduction suppression via turbulence in a self-consistent manner.
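Taken at face value, Equation (14) amounts to a one-line correction; a direct transcription (argument names are illustrative) is given below. As shown in Section 3.4, this correction turns out to be negligible for the wave amplitudes realized in the simulations.

```python
def suppressed_conductivity(kappa_sp, rho, w_p, w_m, B):
    """Eq. (14): turbulent field-line wandering reduces the parallel conductivity."""
    return kappa_sp / (1.0 + rho * (w_p + w_m) / B**2)
```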
Finally, we achieve turbulent wave reflection from the chromosphere using the boundary conditions
\[w_{-}(\ell_{0})=\eta\,w_{+}(\ell_{0})\quad,\quad w_{+}(\ell_{1})=\eta\,w_{-}( \ell_{1}), \tag{15}\]
where \(\ell_{0}\) and \(\ell_{1}\) are the left and right boundaries, respectively. Because reflections are likely to occur in the transition region where temperature and density gradients are large, these boundaries are set by a characteristic temperature \(T_{\rm TR}\) such that \(\ell_{0(1)}=\min(\max)\,\{T>T_{\rm TR}\}\). In this case, waves incident on the transition region at either end of the loop will be transformed into their respective counter-propagating waves, effectively trapping the wave energy in the corona. Furthermore, the boundary condition assumes a reflection coefficient of \(\eta\leq 1\) to account for energy lost from wave transmission. The value of this coefficient is taken as a free parameter and is discussed, along with the other free parameters of our model, in the following section.
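A minimal sketch of how the boundary condition in Equation (15) might be applied on a discretized loop is shown below. The default values \(\eta=0.95\) and \(T_{\rm TR}=0.5\) MK match those adopted for the reference run in Section 3.2; the array names are assumptions.

```python
import numpy as np

# Sketch of the reflecting boundary (Eq. 15): wave energy incident on the
# transition region reinjects a fraction eta into the counter-propagating
# population. Per Eq. (10), w_+ advects toward decreasing l and w_- toward
# increasing l.
def apply_reflection(w_p, w_m, T, eta=0.95, T_TR=0.5e6):
    coronal = np.where(T > T_TR)[0]      # cells hotter than the characteristic T_TR
    i0, i1 = coronal[0], coronal[-1]     # left (l_0) and right (l_1) boundaries
    w_m[i0] = eta * w_p[i0]              # left foot: incoming w_+ feeds w_-
    w_p[i1] = eta * w_m[i1]              # right foot: incoming w_- feeds w_+
    return w_p, w_m
```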
## 3 Simulation with Alfven Wave Turbulence
To illustrate the behavior of our model, we construct and run a single reference simulation. Rather than attempt to model a particular observation, we aim at properties typical of a long-duration event, including peak temperatures above 20 MK and flux retraction speeds of order 500 km/s. We attempt to extend the cooling time to values comparable to those typically found. For concreteness, we use values reported in Qiu & Longcope (2016), since they also infer input energy. To make a meaningful comparison, we choose parameters to give the simulation the same energy flux as they report. The run and its parameters are discussed alongside their motivations below.
### Initial Conditions
Prior to the simulation run, a flux tube is initialized in a configuration analogous to a bent field line created from reconnection. The tube is confined to the \(x\)-\(z\) plane, corresponding to the current sheet that separates uniform layers of magnetic flux differing by shear angle \(\Delta\theta=120^{\circ}\), as illustrated by Figure 1a. The initial tube is therefore bent at its apex by \(180^{\circ}-\Delta\theta=60^{\circ}\), and is set to have
a uniform magnetic field of magnitude \(B=100\,\)G. Because this process forms two straight segments joined at a single point, the apex of the loop is rounded to be a semi-circle composed of eight grid points to prevent issues arising from under-resolution.
Once bent, the flux tube is initialized into three components: a coronal loop and two footpoints attached to a rudimentary chromosphere. The pre-flare chromosphere is stratified under gravity with pressure increasing exponentially according to scale height \(H=500\,\)km. Taken to be isothermal with temperature \(T_{\rm min}\)=0.01 MK, the chromosphere primarily serves as a mass reservoir of cool, dense plasma for evaporated material. The coronal region of our flux tube is structured according to the relations given in Rosner et al. (1978). Defined by an apex temperature set to \(T_{\rm co,0}\)=1.3 MK, the initial corona is configured in isobaric equilibrium maintained by an _ad hoc_ volumetric heating input. The heating required to maintain equilibrium, however, is only used during the initialization process. Subsequent evolution of the loop does not contain this heating term in order to more directly observe the consequences of turbulent heating induced along the tube.
The initial length of the tube is calculated from the amount of flare energy we want to be released by the system. In the TFT model, the source of this energy comes from contracting magnetic field lines, releasing magnetic free energy into other forms as the flux tube retracts. The magnetic energy per unit of magnetic flux of the tube is (Longcope & Klimchuk, 2015)
\[W_{M}=\frac{1}{4\pi}\int B[{\bf r}(\ell)]d\ell. \tag{16}\]
Because the strength of the magnetic field is fixed, a change in magnetic energy is therefore powered by a reduction in the tube's length. A net length decrease \(\Delta L\), releases flare energy, per magnetic flux,
\[E_{\rm fl}=\frac{B\Delta L}{4\pi}\sin^{2}\bigg{(}\frac{\Delta\theta}{4}\bigg{)}. \tag{17}\]
Here, the sine-squared factor represents the fraction of magnetic energy converted to kinetic energy parallel to the axis of the tube through rotational discontinuities that form during retraction (Longcope & Klimchuk, 2015). This mechanism constitutes the primary mode of magnetic energy conversion in previous studies using the TFT model to investigate reconnection dynamics, with and without drag (Guidoni & Longcope, 2010; Unverforth & Longcope, 2021).
To make contact with the work done in Qiu & Longcope (2016), we incorporate the same flare energy deposited by the impulsive part of their heating profile, \(H(t)\), into the energy released in our model\({}^{1}\). Integrating over their heating rate gives a deposited energy per unit area of \(\int H(t)dt=2.9\times 10^{11}\,\)erg cm\({}^{-2}\). We set the energy released in our model to \(E_{\rm fl}B=\zeta\int H(t)dt\), where the prefactor \(\zeta\) is introduced to account for imperfect conversion between magnetic energy and other forms. Assuming the primary energy driving flare phenomena arises from MHD turbulence, as we argue below, the conversion is ultimately related to the fraction of power lost through drag, such that \(\zeta=1/f_{\rm turb}\). We initially take this fraction to be reasonably conservative at \(f_{\rm turb}\)=0.2, giving a total energy release of \(1.45\times 10^{12}\,\)erg cm\({}^{-2}\).
Footnote 1: The heating rate, given by Equation (6) in Section 4 of Qiu & Longcope (2016), is a piecewise expression composed of three Gaussians. The first two describe the impulsive flare energy release as inferred from AIA 1600 Å ribbon emission observations, while the third is appended to model gradual-phase heating. Here we integrate the first two, along with the same parameter values described in the same section, to calculate a total energy per unit area.
Equation (17) is then solved for \(\Delta L\), giving \(\Delta L=73\,\)Mm. The loop is initialized with length \(L_{0}=L_{\rm f}+\Delta L\), with \(L_{\rm f}\) being the final length of the loop once retraction is complete. We choose \(L_{\rm f}\) to be 93 Mm, which gives an initial loop length of \(L_{0}\)=166 Mm.
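The quoted retraction length follows directly from Equation (17); a quick arithmetic check, using only the numbers stated above, is sketched below.

```python
import numpy as np

# Back-of-the-envelope check of Eq. (17) with the values quoted in the text.
B = 100.0                               # field strength [G]
dtheta = np.deg2rad(120.0)              # shear angle
E_per_area = 1.45e12                    # total energy release per unit area [erg cm^-2]

# E_fl * B = B^2 dL sin^2(dtheta/4) / (4 pi)  ->  solve for dL
dL = 4.0 * np.pi * E_per_area / (B**2 * np.sin(dtheta / 4.0)**2)
print(f"Delta L = {dL / 1.0e8:.0f} Mm")  # ~73 Mm, consistent with the text
```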
### Parameters of the Drag Model
Although the total energy released by the loop through retraction is set by Equation (17), the inclusion of a drag force in our model changes the nature of energy release in comparison to previous TFT models. In these earlier works, parallel kinetic energy is responsible for shocks that compress and thermalize the plasma via viscous heating, while the energy found in perpendicular flows is found to have little effect on the overall magnetic energy conversion. Unverferth & Longcope (2021) showed that kinetic energy will be effectively removed from the tube by including drag. While this reduction was shown to affect the perpendicular kinetic energy primarily, rotational discontinuities were also shown to be less effective at driving parallel flows, thus limiting the available energy to be converted to heat. In the case of drag-induced Alfven wave turbulence, perpendicular kinetic energy is no longer lost. Instead, a fraction is reintroduced to the tube, as dictated by Equations (4) and (11). We expect the subsequent transfer of turbulent energy to heat from dissipation will capture a portion of the magnetic energy that would otherwise have gone into perpendicular flows and be a source of heat comparable to, if not greater than, heat driven by parallel plasma flows.
The drag coefficient, \(D\), in Equation (3) controls both the amount of turbulent energy siphoned from perpendicular kinetic energy and the rate of retraction in the \(-\hat{\mathbf{z}}\) direction. As described in the introduction, drag was first motivated by observations of reconnection outflows (e.g. SADs; Savage et al., 2012), and serves as the mechanism by which sub-Alfvenic speeds can become consistent with Petschek-like reconnection models. Here we set \(D=50\,\mathrm{M}\mathrm{m}^{-1}\) in order for our model to agree with measured outflow speeds. An initial investigation using this coefficient shows the loop retracting at speeds \(\sim\)\(500\,\mathrm{k}\mathrm{m}\,\mathrm{s}^{-1}\), which is comparable to the average velocity of SADs measured in Longcope et al. (2018).
The turbulence model includes several new parameters, used here for the first time. These will be varied in order to understand their effects, but we first describe their significance and plausible ranges. We will then return in the discussion to consider what their variation has revealed about the physics of turbulence.
The turbulent energy dissipation rate is set by the energy correlation length, \(\lambda_{\perp}\). Investigations into the effects of turbulence on coronal heating have assumed \(\lambda_{\perp}\) to be on the order of the inter-network spacing of \(30\,\mathrm{M}\mathrm{m}\) (Matthaeus et al., 1999; Downs et al., 2016). Abramenko et al. (2013) measured the characteristic length of the energy-containing structures using local correlation tracking in the photosphere, finding \(\lambda_{\perp}\) to lie in the range of \(0.6\)-\(2\,\mathrm{k}\mathrm{m}\). Although these studies deal with the issue of coronal heating driven by convective motions in the photosphere, it is not unreasonable to take the correlation length in the corona to be of similar values. If we instead take \(\lambda_{\perp}\) to be on the order of the diameter of a given flux tube, then \(\lambda_{\perp}\sim 0.1-2\,\mathrm{M}\mathrm{m}\), given the distribution of loop widths measured using images taken of coronal loops with AIA and the Hi-C imager (Aschwanden & Peter, 2017).
There is an alternative approach through the observed decay time of Alfven wave energy. If that decay is due only to a non-linear turbulent cascade, then the decay time will be given by the eddy turnover time
\[\tau_{\mathrm{nl}}=\frac{\lambda_{\perp}}{\langle v_{\mathrm{nth}}\rangle}= \frac{\lambda_{\perp}}{\sqrt{w}}. \tag{18}\]
Investigations into the non-thermal broadening of EIS Fe xxiv 255 Å lines during flares have seen spatially averaged non-thermal velocities \(\langle v_{\mathrm{nth}}\rangle\) on the order of \(60\)-\(100\,\mathrm{k}\mathrm{m}\,\mathrm{s}^{-1}\) decay to pre-flare values
in roughly 20 minutes (Kontar et al., 2017; Stores et al., 2021). Setting Equation (18) to this time, and using this turbulent velocity, suggests a correlation length of \(\lambda_{\perp}=30-150\,\mathrm{Mm}\).
There is at least one other aspect to include in our interpretation of \(\lambda_{\perp}\). Traditional derivations of the non-linear dissipation captured by Equation (12) assume the counter-propagating waves are uncorrelated with one another. Our waves, however, are trapped between reflecting ends and are likely to be correlated. This fact could be accounted for by one more multiplicative factor, \(0<\xi<1\), in Equation (12). Rather than including one more free parameter, we combine the two into a single free parameter \(\lambda_{\perp}^{\prime}=\lambda_{\perp}/\xi\), which will thus be larger than the turbulent length scale. We hereafter interpret \(\lambda_{\perp}\) as \(\lambda_{\perp}^{\prime}\).
Altogether, the above analysis produces a very rough range of \(\lambda_{\perp}=0.1\)-\(150\,\mathrm{Mm}\). Nonetheless, given the simplicity of this model, these values are only meant to demonstrate that a variety of length scales can be used to characterize turbulent energy dissipation, while also being grounded in observation. For the purpose of a single reference model, \(\lambda_{\perp}\) is initially set to \(2\,\mathrm{Mm}\) -- a value consistent with the diameter of observed coronal loops. This choice thereby assumes the energy correlation length to be on the order of the small-scale transverse fluctuations along the flux tube. The effects of different length scales on the duration of turbulent heating are explored further in the following section.
Lastly, the initial value of the reflection coefficient for the propagating energy waves is set to be \(\eta=0.95\). This value, albeit close to its upper threshold of 1, is chosen so that energy losses arising from turbulent dissipation can be distinguished from losses due to energy escape through the transition region. The characteristic temperature \(T_{\mathrm{TR}}\) defining the reflection boundaries is chosen to correspond to regions with a high-density gradient. For our tube initialized with \(T_{\mathrm{co,0}}\)=\(1.3\,\mathrm{MK}\), we therefore set \(T_{\mathrm{TR}}\)= \(0.5\,\mathrm{MK}\).
### The Reference Simulation
With the flux tube initialized as described above, Equations (1), (2), and (10) are solved using the PREFT numerical code as documented in Longcope & Klimchuk (2015). Fluid elements along the tube are represented by cells on a 1D Lagrangian grid, with the density and differential length of each cell calculated to ensure a constant mass per unit flux. Aside from the evolution of temperature, which is advanced semi-implicitly, all expressions are advanced explicitly. In order to ensure stability in the system, each time step is chosen such that the Courant-Friedrichs-Lewy conditions are satisfied. Initial parameters and the resulting properties of the reference simulation are summarized in Table 1.
The evolution of the tube during its retraction is illustrated in Figure 2. Starting with an apex at height \(z_{0}\)=\(71\,\mathrm{Mm}\), the tube retracts until its total length has decreased to \(L_{f}=93\,\mathrm{Mm}\) at time \(t=452\,\mathrm{s}\) (\(7.5\,\mathrm{min}\)), yielding a final apex height of \(z_{f}=18\,\mathrm{Mm}\). The change in apex height, \(\Delta z=z_{0}-z_{f}=53\,\mathrm{Mm}\), is analogous to a post-flare loop moving through a current sheet of the same length (Longcope et al., 2018, for example). Once the retraction is complete, the tube is artificially straightened to lie on the \(\hat{\mathbf{x}}\)-axis, mimicking the effect of the tube reaching the post-flare arcade. Velocity flows after this point are set to equal the parallel velocity \(v_{\parallel}\) immediately prior to the straightening, and the perpendicular flow is set to zero. The simulation is then run for an additional 32 minutes in the straightened configuration. With no perpendicular flow, the source of Alfven waves is \(S_{\pm}=0\), so the wave energies are left to decay freely.
Figure 2: Dynamics of the reference simulation over the course of retraction. A face-on view of the tube axis \(\mathbf{r}(\ell)\) at seven times is shown in the top panel. The following three panels show the corresponding electron number density \(n_{e}\), velocity parallel to the axis of the tube \(v_{\parallel}\), and temperature \(T\). Each panel is plotted against the x-coordinate, scaled such that x=0 Mm corresponds to the leftmost boundary of the tube.
During the retraction, the corona undergoes characteristic flare dynamics. Temperature quickly increases from \(T_{\mathrm{co,0}}=1.3\,\mathrm{MK}\) to values well over \(30\,\mathrm{MK}\) before decaying down to \(\sim 10\,\mathrm{MK}\). The parallel velocity along the tube shows that evaporation starts by \(t=20\,\mathrm{s}\), reaching speeds up to nearly \(1\,\mathrm{Mm}\,\mathrm{s}^{-1}\). By \(t=5\,\mathrm{minutes}\), most of \(v_{\parallel}\) has dropped out completely. Once evaporation has begun, the tube's density increases by several orders of magnitude in the corona, and remains elevated after the temperature and velocity in the tube have subsided.
The explosive initial evolution appears more reminiscent of drag-free solutions (e.g. Longcope & Klimchuk, 2015), than the gentle dynamics seen in Unverferth & Longcope (2021). The effect of drag, however, can be seen from the apparent deceleration of the tube in Figure 2. To illustrate the change in downward velocity, Figure 3 plots the apex height of the tube throughout its retraction. Here the retraction appears to occur in two nearly constant-velocity phases; a fast downflow, lasting a little less than one minute, followed by a dramatic reduction of speed that remains constant until the tube is straightened. A straightforward explanation for the deceleration is provided by evaporation. The upward driving of chromospheric material increases the density in the corona, subsequently lowering the local Alfven speed and slowing the retraction.
Linear fits to each phase show the tube initially moving downward at a speed of \(540\,\mathrm{km}\,\mathrm{s}^{-1}\) before slowing down to \(57\,\mathrm{km}\,\mathrm{s}^{-1}\). The initial phase agrees with downflows measured in Longcope et al. (2018), where two-thirds of the 35 downward moving features had velocities lower than \(600\,\mathrm{km}\,\mathrm{s}^{-1}\). While the features analyzed in that work appeared only as straight streaks in a height-time map of a current sheet viewed edge-on, observations taken using TRACE 195 Å filtergrams saw downflows decelerating in a distinct two-step fashion (for example, see Figure 2 in Sheeley et al., 2004). This deceleration of roughly an order of magnitude occurred prior to the feature reaching the post-flare arcade. Our model thus offers a novel insight into the behavior of these observations.
The initial phase of the flux tube's evolution is shown in Figure 4, where we see the dynamics of the turbulent Alfven wave energies \(w_{\pm}\). The total energy of the waves \(w_{\mathrm{tot}}\) peaks at \(t=0.3\,\)s (cyan) and subsequently decreases as the waves propagate towards the footpoints of the loop. At \(t=4.3\,\)s, the waves reach the boundary set by the characteristic temperature \(T_{\mathrm{TR}}\)=\(0.5\,\)MK, corresponding to position \(\pm x=2.6\,\)Mm from the edge of the tube. Upon reaching the boundary, the waves then begin to reflect, the effects of which can be seen at \(t=5.4\,\)s. The solid lines in the upper left panels correspond to the leftward propagating Elsasser energy and the dashed lines correspond to the rightward, counter-propagating energy created from the reflection. Over times \(t>4.3\,\)s, the point of reflection moves progressively downward as the position where \(T=0.5\,\)MK moves by thermal conduction.

\begin{table}
\begin{tabular}{l c c c}
\hline\hline
 & & Reference & Optimized \\
\hline
Drag Parameter & & & \\
\(f_{\mathrm{turb}}\) & ... & 0.2 & 0.6 \\
\(\lambda_{\perp}\) & Mm & 2 & 100 \\
\(D\) & Mm\({}^{-1}\) & 50 & 50 \\
\(\eta\) & ... & 0.95 & 0.95 \\
\hline
Model Result & & & \\
\(T_{\mathrm{peak}}\) & MK & 41 & 32 \\
Peak \(\dot{Q}\) & \(10^{10}\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\) & 2.15 & 1.02 \\
Retraction time & min. & 7.5 & 8.1 \\
\(\dot{Q}\) duration & min. & 13 & 41 \\
EUV duration & min. & 32 & 40 \\
\hline
\end{tabular}
\end{table}
Table 1: Simulation properties of the reference and optimized runs
Counter-propagating waves appear first at the loop top, where drag creates both waves together. Interaction between waves results in significant loop-top heating according to Equation (13). Turbulent dissipation drives the temperature of the loop to a peak temperature of \(T_{\text{peak}}=41\,\)MK that rapidly drives thermal conduction fronts outward towards the footpoints. The steep temperature gradients shown in the lower left panels of Figure 4 are evidence of the flux limiter taking effect.
Once the reflection begins (\(t>4.3\,\)s), interactions between the incident and reflected waves create a second locus of heating near the footpoint. A novel result of this simulation is the interaction between the thermal conduction fronts and the increase in temperature driven by the turbulent heating at the reflection boundary. As the conduction fronts move leftward in Figure 4, they are impeded by a localized temperature increase in the transition region that grows over time. This growth, initially due to \(H_{\rm turb}\), is likely compounded by the conduction front, as the initial temperature bump seen at \(t=5.4\,\)s gradually smooths out with increasing temperature. Nevertheless, the localized heating from reflection creates a separate conduction front, as well as a pressure increase above the transition region. Remarkably, it is this conduction front that appears to drive evaporation upon reaching the chromosphere, the beginnings of which are seen at \(t=6.3\,\)s in both parallel velocity and pressure. It should be noted that the leftward movement of the reflection boundary, as shown by the leftward propagation of \(w_{\pm}\), is also due to the localized conduction front, which heats the plasma and moves \(T_{\rm TR}\) in that direction.

Figure 3: Apex position of the flux tube over the retraction period. Linear fits to the fast (red dashed) and slow (blue dashed) retraction phases are over-plotted.
Figure 4: Propagation of six flux tube variables over the first seven seconds of the reference simulation. Middle panels show each variable over the entire left half of the loop, while the corresponding leftmost and rightmost panels show enlargements of the chromosphere and upper transition region in detail. Working top to bottom, the variables plotted are (a) the leftward \(w^{+}\) —solid — and rightward \(w^{-}\) — dashed — propagating Elsässer energies (b) total Elsässer energy \(w_{\rm tot}\) (c) Heat produced from turbulent energy dissipation \(H_{\rm turb}\) (d) parallel velocity \(v_{\parallel}\) (e) temperature \(T\) and (f) pressure \(p\).
Figure 5 plots the evolution of the total turbulent Alfven wave energies alongside their corresponding sources \(S=S_{+}+S_{-}\) for the first minute of the simulation. Here, the amplitude of the source peaks immediately after the retraction begins and is centered around the loop's apex. As the tube continues to retract, the source amplitude decreases by two orders of magnitude and spreads along the extent of the tube. This effect illustrates how the sharp initial bend of the tube decomposes into a more rounded shape via the drag force, subsequently allowing for additional segments of the tube to be deflected downward -- an effect that compounds throughout the retraction. Even so, most of the power in drag remains concentrated at the loop center, leading to a continual creation of turbulent Alfven waves there.
After the Elsasser energies reach their peak, they then decrease along with the source. Of particular note is the shape of the energies themselves, where the amplitude sharply decreases before undergoing reflection at the boundary. To investigate this behavior, we calculate an envelope representing the energy per unit mass in the waves that would otherwise be stationary without sources or reflection. To do this we solve the static version of equation (10), with \(dw_{\pm}/dt=v_{\parallel}\,\partial w_{\pm}/\partial\ell\) and the last two terms, \(S_{\pm}\) and \(NL_{\pm}\), omitted. The result
\[w_{\pm}^{(\mathrm{env})}(\ell)\ =\ A\,\exp\Bigg{[}\int\frac{1}{(v_{\mathrm{A}} \mp v_{\parallel})}\,\frac{\partial(v_{\mathrm{A}}\pm v_{\parallel}/2)}{ \partial\ell^{\prime}}\,d\ell^{\prime}\,\Bigg{]}\,, \tag{19}\]
is an envelope profile plotted as a dot-dashed line alongside the normal wave at \(t=7\,\mathrm{s}\) in Figure 5, with arbitrary scaling \(A\) adjusted for clarity. The envelope shows a drop in wave energy due mainly to a decrease in the Alfven speed in the transition region. Excess energy density around \(x=48\,\mathrm{Mm}\) is then due to reflection and the production of counter-propagating waves. Similarly, although the source is still relatively high, turbulent dissipation brings the energy density below the envelope for \(x\gtrsim\)58 Mm.

Figure 5: Total Elsässer energy \(w_{\mathrm{tot}}\) (left) and the corresponding source from drag power \(S\) (right) over the first \(60\,\mathrm{s}\) of the reference simulation. An instance of the wave energy envelope per unit mass without sources or reflection (Equation (19)) is shown as the dashed-dotted curve at \(t\)=\(7.0\,\mathrm{s}\).
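The envelope of Equation (19) can be evaluated with a simple cumulative integral along the loop; a minimal sketch (the profiles and the normalization \(A\) are assumed inputs) is:

```python
import numpy as np

# Sketch of the source- and dissipation-free wave-energy envelope, Eq. (19).
def wave_envelope(ell, v_A, v_par, sign=+1, A=1.0):
    """Envelope for w_+ (sign=+1) or w_- (sign=-1) along the length coordinate ell."""
    num = np.gradient(v_A + sign * 0.5 * v_par, ell)   # d(v_A +/- v_par/2)/dl
    integrand = num / (v_A - sign * v_par)             # 1/(v_A -/+ v_par)
    steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(ell)
    return A * np.exp(np.concatenate(([0.0], np.cumsum(steps))))  # trapezoidal integral
```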
The long-term evolution of the tube is shown by the energies plotted in Figure 6. The magnetic energy (dark orange) lost from its initial value of \(W_{M,0}=1.317\times 10^{11}\,\mathrm{erg\,Mx^{-1}}\) is shown in the bottom panel. Over its retraction, the tube releases a total of \(\Delta W_{M}=5.76\times 10^{10}\,\mathrm{erg\,Mx^{-1}}\) -- 44% of the initial amount. Drag (purple) is responsible for removing nearly all free magnetic energy released. The total work done by drag by the end of the retraction is \(5.61\times 10^{10}\,\mathrm{erg\,Mx^{-1}}\), constituting over 97% of the available magnetic energy. The slight deflection seen at \(t\sim 60\,\mathrm{s}\) results from the change in retraction speed, while constant values starting at \(t=452\,\mathrm{s}\) indicate the tube's straightening.
Figure 6: Evolution of energies within the tube in units of \(10^{8}\,\mathrm{erg\,Mx^{-1}}\), plotted against a logarithmic time axis. The bottom panel shows free magnetic energy (dark orange), net drag loss (purple), and radiative losses (magenta). The top panel shows turbulent wave energy (blue), kinetic (orange), parallel kinetic (dashed green), and thermal (red) on an enlarged scale. The total energy (teal) is the sum of the wave, kinetic, thermal, and magnetic energies. Note the kinetic and parallel kinetic energies are virtually indistinguishable.
The top panel of Figure 6 shows the remaining energies present in the tube. The total energy found in turbulent Alfven waves (blue) mirrors the drag power, reaching a maximum of \(12.8\times 10^{8}\) erg Mx\({}^{-1}\) before non-linear interactions between counter-propagating waves dominate. The inflection seen in the wave energy at \(t=\)7.8 s therefore corresponds to an uptick in thermal energy, shown in red, as the heating from turbulent dissipation increases. The source of wave energy vanishes once the retraction ends (\(t=452\) s), and the wave energy decays rapidly thereafter. It seems that the non-linear dissipation term works on relatively short time scales provided there is enough reflected power to facilitate non-linear interaction.
The kinetic energy (orange) increases notably at \(t=10\) s, corresponding to the onset of evaporation flows. The parallel kinetic energy (green dashed) is effectively identical to the kinetic energy and makes up 99% of the total. This is a direct effect of the drag limiting the perpendicular velocity; in drag-free solutions, the parallel flow contributes only a small fraction to the total kinetic energy (Longcope & Klimchuk, 2015). Furthermore, thermal energy decreases when the kinetic energy has subsided at \(t=206\) s. This moment, also corresponding to an increase in the radiative losses, can be used to indicate the cooling phase of the loop. By the end of the run, \(1.25\times 10^{10}\) erg Mx\({}^{-1}\) has been lost to radiation.
The importance of wave energy in driving the initial flare evolution is readily seen when compared to the parallel kinetic energy. As alluded to in Section 3.1, the energy contained in turbulent Alfven waves is found to be comparable to that contained in parallel plasma flows. Moreover, wave energy in the tube increases immediately upon the start of retraction, whereas kinetic energy increases 10 s later, after much of the initial dynamics have already taken place. The kinetic energy here is due to flows driven by evaporation -- shown above to be a result of heating via turbulent dissipation at the reflection boundary in the transition region -- and not from plasma deflected by rotational discontinuities traveling at supersonic speeds (Longcope et al., 2009; Guidoni & Longcope, 2010). That is to say, the evolution of energies seen here supports the notion of turbulence acting as the primary mode of heating during impulsive flare energy release.
The close relation between explosive flare behavior and turbulent wave energy is also illustrated in the temperature evolution shown in Figure 7. The apex temperature of the reference simulation, shown in blue, is plotted over the entire evolution of the tube. The evolution is characteristic of those observed during flares, reaching a peak of \(T=41\) MK before slowly decaying down to its pre-flare value. In this case, the decay time for the flare is on the order of 35 minutes. We note that the additional peak at \(t\sim 2\) minutes is an effect of evaporation. For comparison, a simulation was run using the same drag coefficient, \(D\)=50 Mm\({}^{-1}\), as the reference, but without including turbulent Alfven waves (i.e. \(f_{\rm turb}=0\)). The result, shown in green, only reaches a peak temperature of \(T=5.5\) MK. While the temperature decay is on the same order of time as the reference run, the initial flare evolution is considerably weaker, suggesting kinetic energy alone is insufficient to power strong flare dynamics.
It seems that the interaction of waves at the loop apex explains much of the peak temperature evolution. To demonstrate this notion, we lowered the reflection coefficient to \(\eta\)=0.10, while leaving the drag and source term the same. The results, plotted in red in Figure 7, show the temperature evolution closely following that of the \(\eta\)=0.95 run, albeit at slightly lower values, reaching a peak of \(T=32\) MK and decaying to the pre-flare value in virtually the same time. The lower temperatures seen in this case suggest that reflected waves play some role in heating, albeit relatively minor.
The small role played by reflection is consistent with the observation above that non-linear wave dissipation is strong enough to eliminate the wave energy rapidly. It is evidently also capable of dissipating the energy over length scales short enough that most does not reach the feet.
### Effect of Suppressed Thermal Conduction
We briefly note that the effects of thermal conduction suppression, given by the modification in Equation (14), were found to be negligible. At its largest deviation, the conductivity remained within 1% of its classical value. This is a natural consequence of the low energy density in Alfven waves (\(\sqrt{w}\simeq 0.5\) Mm s\({}^{-1}\)) relative to the loop's Alfven speed \(v_{\rm A}\simeq 10\) Mm s\({}^{-1}\). The magnetic perturbations will deflect the field lines by a random angle \(\sim\sqrt{w}/v_{\rm A}\simeq 0.05\). This deflection produces only a minor increase in field line path lengths, and hence only a minor decrease in conductivity.
### Synthetic EUV Emission
As mentioned in the introduction, a major motivation for including turbulent Alfven waves in the TFT model was to reproduce observations of sustained coronal flare emission. In order to create a benchmark by which our model can be qualitatively compared to such observations, light curves corresponding to the six coronal EUV channels measured by AIA were synthesized using the results of our reference simulation.
Figure 7: Evolution for apex temperature from the results of the reference simulation (blue). Plotted for comparison are apex temperatures for a simulation with a reflection coefficient \(\eta=0.10\) (red) and a simulation with no turbulent Alfvén waves (green).
For a given wavelength channel \(i\), the pixel value \(p_{i}\) for the total emission integrated along the flux tube is calculated by
\[p_{i}=\int_{0}^{\infty}n_{e}^{2}(\ell)K_{i}[T(\ell)]\ d\ell, \tag{20}\]
given the temperature-response function \(K(T)\) (Boerner et al., 2012). These functions were accessed using the aia_get_response function in the SolarSoft IDL library (Freeland & Handy, 1998). Because we are only interested in the time evolution, and not the integrated intensity, the light curves are normalized according to their maximum value, making it unnecessary to assign a cross-sectional area to the 1D tube.
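A schematic of how Equation (20) converts the simulated loop into a normalized light curve is sketched below; the interpolated response table stands in for the aia_get_response output, and the argument names are assumptions.

```python
import numpy as np

# Sketch of Eq. (20): a normalized synthetic light curve for one AIA channel.
# n_e, T, dl are sequences of arrays (one per snapshot) along the loop;
# (logT_grid, K_channel) is a tabulated temperature-response function.
def synthetic_lightcurve(n_e, T, dl, logT_grid, K_channel):
    curve = []
    for ne_t, T_t, dl_t in zip(n_e, T, dl):
        K = np.interp(np.log10(T_t), logT_grid, K_channel)  # K_i[T(l)]
        curve.append(np.sum(ne_t**2 * K * dl_t))            # integral of n_e^2 K_i dl
    curve = np.array(curve)
    return curve / curve.max()   # normalized, so no cross-sectional area is required
```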
Synthesized emissions for each AIA EUV channel -- 131, 94, 335, 211, 193, and 171 Å -- are shown in Figure 8. The lightcurves evolve in a typical fashion and peak successively according to their characteristic temperatures, decaying from 20 to 0.4 MK. The higher temperature passbands of 131, 94, and 335 Å also have noticeably wider emission profiles than the lower temperature passbands of 211, 193, and 171 Å. This difference in duration indicates that the temperature evolution through the cooler passbands, representing a temperature range of 0.4-2 MK, occurs more quickly. As expected, the overall evolution of the light curves agrees well with the temperature decay in Figure 7. If we quantify the decay of the EUV emission by the time it takes the lowest channel in 171 Å to peak, then the duration of the flare emission in this case is approximately 32 minutes long.
To illustrate the effect of turbulent Alfven waves on the duration of the EUV emission, the total heat from turbulent dissipation integrated along the tube, \(\dot{Q}=\int H_{\rm turb}d\ell\), is plotted alongside the light curves, converted here into a heating rate per unit area. The heating rate roughly tracks the evolution of the total wave energy in Figure 6, climbing to a peak of \(2.15\times 10^{10}\,\rm erg\,cm^{-2}\,s^{-1}\) in the initial seconds of the simulation before sharply decreasing. Because the loop starts to cool at \(t=206\,\rm s\), turbulent heating after this time can be considered the gradual-phase heating supplemental to the impulsive energy release -- analogous to the so-called slow-tail heating used in Qiu & Longcope (2016). Here, the gradual-phase heating component lasts for much less than the duration of the synthesized emission, falling to zero 13 minutes after the simulation start time. Seeking an explanation for extended gradual-phase heating, and a longer duration of EUV emission, we revisit the initial parameters of our turbulent transport model in the following section.
## 4 Exploring Parameters
The preceding analysis found the dissipation of turbulent Alfven waves to effectively drive characteristic flare behavior and produce long-duration EUV emissions on the order of 30 minutes. We explore what is required to extend long-term coronal emission by adjusting the parameters of our turbulent transport model. Having already visited the reflection coefficient and the modification to the Spitzer-Harm conductivity, we now focus on the two remaining parameters: the percentage of drag power converted to turbulent wave energy, \(f_{\rm turb}\), and the energy correlation length, \(\lambda_{\perp}\).
### Converted Drag Power
The amount of energy converted into MHD turbulence from drag losses is dictated by the fraction \(f_{\rm turb}\) in Equation (11). Because the generation of Alfven wave perturbations is not explicitly modeled from the interaction between the flux tube and the current sheet, it is unclear what value of \(f_{\rm turb}\) would serve as a good approximation for this interaction. We therefore explore the response in turbulent energy to a change in drag loss conversion by varying the degree of interaction through
\(f_{\rm turb}\). To this end, six simulations are run with \(f_{\rm turb}\) ranging from 0.05-0.8. The remaining loop parameters match the reference simulation outlined in Section 3.1.
Results from the six simulations are shown in the top two panels of Figure 9. The apex temperature evolution -- used here as a proxy for the effectiveness of a set of parameters to produce gradual-phase heating -- is plotted on the left. The overall evolution in each run is similar to that of the reference simulation, shown in green (\(f_{\rm turb}\)=0.2), with temperatures quickly reaching a peak before decaying back to pre-flare values. In particular, the peak temperature is seen to scale with \(f_{\rm turb}\). As more drag loss becomes available to be converted into turbulent energy, the tube reaches higher internal temperatures from increased dissipation and subsequent heating. This conclusion is supported by the integrated heating rates \(\dot{Q}\), plotted on the right, where higher degrees of drag conversion correspond to larger heating rates.
Greater degrees of drag conversion were also found to produce more prolonged heating rates. Compared to the \(f_{\rm turb}\)=0.05 case, in which \(\dot{Q}\) decayed to zero within 8 minutes, a conversion of \(f_{\rm turb}\)=0.8 resulted in an additional 7 minutes of heating. Regardless, the time taken for the apex temperature to decay to its pre-flare level remained nearly the same for each of the six simulations, despite increasing heating duration. The similarity between these times is ultimately due to the different evaporation strengths. Stronger evaporation, arising from increased heating following a higher \(f_{\rm turb}\), leads to a higher density in the coronal segment of the loop, making the losses from radiative cooling more effective, so the loop cools faster. As a result, the duration of the temperature evolution appears to be insensitive to the amount of energy put into turbulent Alfven waves.

Figure 8: Bottom panel: Synthetic light curves of AIA EUV channels 131, 94, 335, 211, 193, and 171 Å produced using results from the reference simulation. Heating due to the dissipation of turbulent Alfvén waves is shown in purple, read against the axis on the right. Top panel: Apex temperature evolution as shown in Figure 7. Symbols denote the times when \(T_{\rm apex}\) crosses the characteristic formation temperature(s) of the respective AIA channel.
### Energy Correlation Length
Figure 9: Apex temperature evolution (left) and integrated heating rate \(\dot{Q}\) (right) for 11 simulations run with different values of drag power conversion \(f_{\rm turb}\) (top) and correlation length \(\lambda_{\perp}\) (bottom). The cases of \(f_{\rm turb}\)=0.2 and \(\lambda_{\perp}\)=2 Mm in the top and bottom panels, respectively, correspond to the reference simulation analyzed in Section 3. Vertical dashed lines in the upper right panel indicate the time at which retraction ends.
The heat generated via turbulence occurs at a rate set by the non-linear interaction between the counter-propagating waves. Because we adopt a one-point closure model, this interaction is characterized by a single parameter, \(\lambda_{\perp}\), indicative of the degree of correlation between the two energy populations. In Section 3.1, a range of possible values for \(\lambda_{\perp}\) was motivated from observation. Here, \(\lambda_{\perp}\) is taken to be the parameter controlling the efficiency of energy loss from the waves, adjusted from a phenomenological perspective to produce decay timescales that best agree with observation.
The bottom two panels of Figure 9 show the apex temperature evolution (left) and integrated heating rate (right) for five simulations run with a range of \(\lambda_{\perp}\)=0.1-1000 Mm. Because the dissipation rate scales inversely with \(\lambda_{\perp}\), smaller correlation lengths produce larger heating rates and subsequently larger apex temperatures. Apex temperature evolution is virtually indistinguishable, except during the earliest phase, for all values \(\lambda_{\perp}\leq 10\) Mm. Heating rates for these runs are also similar following the initial phase.
This similarity appears to be a consequence of energy dissipation within a single transit. The net energy input into every case is roughly the same. For cases with \(\lambda_{\perp}\leq 10\) Mm, that energy is mostly dissipated before reaching the feet, and thus does not remain on the loop beyond the end of retraction at \(t=7.5\) min. This end was noted for the illustration case (\(\lambda_{\perp}=2\) Mm), and is repeated in the olive curve. Those curves with still smaller values of \(\lambda_{\perp}\) drop even more rapidly after the termination of retraction (see \(\lambda_{\perp}=0.1\) Mm in red).
The two simulations with \(\lambda_{\perp}>10\) Mm produced markedly different behavior. Only in these cases does significant wave energy remain beyond the end of retraction at \(t=7.5\) min (see inset in the lower right of Figure 9). It seems that the non-linear dissipation is small enough for waves to reflect several times. This persistent wave energy is partly responsible for apex temperatures which decayed to pre-flare levels in 42 and 50 minutes (\(\lambda_{\perp}=100\) and 1000 Mm, respectively) rather than the 30-minute decay observed in all other cases. Larger correlation lengths are accompanied by less dissipation in the early phases, resulting in more gentle flare behavior, as evidenced by peak apex temperatures under 20 MK. The conductive flux to the feet is therefore smaller, leading to weaker evaporation flows, lower loop density, and thereby diminished radiative cooling at late times. This is a second factor extending the cooling time of the loop. We therefore see that larger correlation lengths, and the weaker non-linear damping they produce, are critical for our model to extend the duration of coronal emissions.
### Optimized Simulation
To capitalize on our better understanding of the role wave energy can play in extending cooling times we perform one final simulation. This uses a large correlation length, \(\lambda_{\perp}\)=100 Mm, and high conversion efficiency of \(f_{\rm turb}\)=0.6. Although the parameter exploration of \(f_{\rm turb}\) determined that temperature evolution was not sensitive to the degree of drag conversion, a higher value of \(f_{\rm turb}\) is used to produce higher apex temperatures to agree with flare observations.
AIA EUV light curves synthesized from this tuned run are shown in Figure 10. Compared to the emissions from the reference simulation in Figure 8, the profiles here for the hotter channels are broader and peak at later times, illustrating how the temperature remains elevated during this period. The cooler channels, however, are still relatively narrow. At this point in the simulation, the heating rate has decreased from its peak of \(1.0\times 10^{10}\) to \(0.2\times 10^{10}\) erg cm\({}^{-2}\) s\({}^{-1}\), indicating that energy dissipated from turbulence is not enough to counteract energy lost from radiative cooling.
In this case, the duration of the heating rate lasted 41 minutes and produced coronal emissions lasting for approximately 40 minutes, given the peak in 171 A. Even though the correlation length and degree of drag loss conversion were 50 and 3 times larger than the reference simulation, respectively, the duration was only 8 minutes longer. Notable differences between the optimized and the reference simulations are summarized in Table 1. Additional simulations run with increased \(f_{\text{turb}}\) and \(\lambda_{\perp}\), while not shown here, also failed to extend the duration of the emission past 40 minutes, further signifying the balance between strong flare dynamics and gradual turbulent dissipation. As such, the emission duration shown in Figure 10 represents the upper limit of our turbulent transport model.
Alfven wave turbulence should be evident in an excess broadening of hot spectral lines. The turbulent energy per mass gives the turbulent velocity squared averaged over all fluctuation time scales:
\[w_{\text{tot}}=w_{+}+w_{-}=\frac{1}{2}\langle|\mathbf{v}|^{2}\rangle+\frac{1}{2 }\left\langle\frac{|\mathbf{b}|^{2}}{4\pi}\right\rangle=\langle|\mathbf{v}|^{ 2}\rangle, \tag{21}\]
after assuming turbulence is Alfvenic. The net effect on an unresolved observation of spectral line \(\lambda\) is characterized by a spatial average weighted by the local intensity of the line, \(I_{\lambda}(\ell)\),
\[\bar{w}_{\lambda}=\left(\int I_{\lambda}(\ell)\,d\ell\right)^{-1}\,\int w_{ \text{tot}}(\ell)\,I_{\lambda}(\ell)\,d\ell. \tag{22}\]
If the axis of the tube is viewed perpendicularly, then the averaged broadening of the spectral line \(\lambda\) along the line of sight, say \(\mathbf{\hat{z}}\), is
\[\sigma_{\lambda}=\sqrt{\langle v_{z}^{2}\rangle}=\sqrt{\frac{\bar{w}_{\lambda }}{2}}. \tag{23}\]
Figure 11 shows the broadening of three lines typically associated with flares, both versus time (right) and normalized line intensity (left). Cooler lines show lower broadening, and all decrease with time as the turbulence decays.
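To make the observable concrete, the short sketch below evaluates Equations (22)-(23) numerically for a loop profile. It is a minimal illustration assuming simple Gaussian profiles for the turbulent energy and line emissivity; these profiles are placeholders and not output of the PREFT code.

```python
import numpy as np

def turbulent_broadening(w_tot, intensity):
    """Intensity-weighted line-of-sight broadening following Eqs. (22)-(23).

    w_tot     : turbulent energy per unit mass (w_+ + w_-) along the loop [cm^2 s^-2]
    intensity : emissivity of the spectral line at the same grid points (arbitrary units)
    """
    w_bar = np.trapz(w_tot * intensity) / np.trapz(intensity)   # Eq. (22); the dl spacing cancels
    return np.sqrt(w_bar / 2.0)                                  # Eq. (23), tube viewed side-on

# hypothetical profiles along the loop (1000 grid points), chosen only for illustration
ell = np.linspace(0.0, 1.0, 1000)
w_tot = 1.0e14 * np.exp(-((ell - 0.5) / 0.2) ** 2)   # ~100 km/s level turbulence near the apex
emiss = np.exp(-((ell - 0.5) / 0.3) ** 2)            # line formed mostly near the apex
print("sigma =", turbulent_broadening(w_tot, emiss) / 1.0e5, "km/s")
```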
Figure 10: Synthetic EUV emission produced from the optimized simulation, plotted in the same format as Figure 8.
## 5 Discussion
We have extended a post-reconnection flare model to include the production and dissipation of turbulent Alfven waves. This extension involved a system of turbulent transport equations inspired by existing models of solar wind turbulence. One-dimensional simulations tracked the evolution of MHD turbulence through their aggregate energy densities, which remained trapped in the corona by reflection from high-density gradients in the transition region. Turbulent energy dissipated through the non-linear interaction between counter-propagating waves and produced heating longer than previous versions of the PREFT flare code. Synthetic light curves from AIA EUV bands showed increased similarity with typical flare observations.
Ours is the first model of its kind to generate Alfven wave turbulence from reconnection outflow and model its effect on long-term flare signatures. In the spirit of a first attempt, we chose to represent the physical processes involved using simple, fixed parameters. Energy is removed from the retracting flux by a simple aerodynamic drag force, characterized by the coefficient \(D\). We chose the value \(D=50\,\mathrm{Mm}^{-1}\) in order to slow the retraction to a level consistent with typical observations. A fraction, \(f_{\mathrm{turb}}\), of the energy lost to drag appears as unresolved Alfven wave turbulence. While this process is likely to be very complicated, with a fraction varying in time as well as space, we chose to use a fixed value of \(f_{\mathrm{turb}}\). We performed a series of runs with different values of \(f_{\mathrm{turb}}\) and found that its most important effect was in setting the peak flare temperature.
The physics of Alfven wave propagation, reflection, and dissipation is also captured through a small number of free parameters. Reflection is modeled through a reflection coefficient, \(\eta\), applied at the
Figure 11: Turbulent line broadening, defined in Equation (23) for spectral lines of Fe xxiv 192Å (red), Fe xxi 129 Å (violet), and Fe xviii 94 Å (blue). The left panel plots these against the normalized intensity of the line, and the right against time when the line is bright. Time flows respectively from top to bottom for each curve in the left panel. The time of loop straightening (\(t=8.13\) min) is indicated by a vertical dashed line and triangles on each curve. Diamonds along the right axis are the average values for each line.
point where \(T=T_{\rm TR}=0.5\) MK. Wave dissipation occurs through non-linear dissipation involving waves of both species. This process is parameterized through an effective correlation length, \(\lambda_{\perp}\), which was varied to explore its effects on the flare evolution.
The duration of turbulent heating was found to be most dependent on the correlation length \(\lambda_{\perp}\). Correlation lengths \(\lambda_{\perp}\leq 10\) Mm lead to very rapid dissipation and wave energy vanishes rapidly after retraction ends. Only in those with \(\lambda_{\perp}\gtrsim 100\) Mm does wave energy persist beyond the end of retraction, thereby prolonging the cooling of the flare. If Alfven waves are actually responsible for the long flare cooling times observed, non-linear wave dissipation would need to be characterized by such large correlation lengths.
The finding described above requires reconsideration of the physical significance behind the parameter \(\lambda_{\perp}\). Our turbulence model assumes the small-scale fluctuations responsible for the MHD turbulence to be on scales smaller than the radius of the flux tube, so \(\lambda_{\perp}\) cannot be simply taken as the correlation length of the turbulence. It was mentioned above that our parameter incorporates the correlation between the counter-propagating waves, such that the effective length is \(\lambda_{\perp}=\lambda^{\prime}_{\perp}/\xi\), where \(\lambda^{\prime}_{\perp}\) is the true correlation length and \(\xi\) is the fraction of the counter-propagating turbulence which is actually uncorrelated. For an effective correlation length of \(\lambda_{\perp}=100\) Mm and a tube with radius \(R=\lambda^{\prime}_{\perp}=2\) Mm, this gives \(\xi=0.02\), meaning that only 2% of the counter-propagating waves are uncorrelated and able to produce a turbulent cascade. While an imperfect correlation should be expected between the two energy populations, it is worth noting that the length scale for the small-scale fluctuations (e.g. \(R\)) and the length scale for the energy correlation do not necessarily need to be related (Zank et al., 2011). Should this perspective be taken into account, it would also substantiate the range of length scales derived in Section 3.1 using the decay rates of non-thermal broadenings observed in Fe xxiv A spectral lines, where \(\lambda_{\perp}\sim\)30-150 Mm.
The heating rates produced in this work also fit the description of a two-step heating profile as described in Qiu and Longcope (2016). The initial phase of turbulent dissipation was highly impulsive in all cases, reaching a peak within 20 s before quickly decaying. Following the impulsive energy release, defined here by the time it takes a loop to enter a regime of increased radiative losses, heating from dissipation then became more gradual and lasted anywhere from 5 to 50 minutes. In particular, the heating rate produced in Figure 10 was found to match the general heating profile described in Zhu et al. (2018), where energy deposited during the impulsive and gradual phases was required to be 60% and 40% of the total energy released in order to reproduce sustained EUV emission. Here, after an impulsive phase that lasted 240 s, the gradual phase released 46% of the total turbulent wave energy over 36 minutes. To the best of our knowledge, this mechanism was the first to reproduce these two-step heating rates in a self-consistent manner.
A byproduct of using turbulent Alfven waves for gradual-phase heating was their capacity to be equally effective at driving characteristic flare behavior during the initial phase of the simulation. By inducing turbulence from drag, we were able to reintroduce energy back into the tube that would have otherwise been lost to the background of our flux tube. Hence, our model successfully reproduced impulsive flare behavior while still keeping the speed of reconnection outflows within their observed ranges. Investigations into the earlier phases of turbulent dynamics presented here, such as the forward-modeling of Fe xxi A lines, may explain the large broadenings of such lines seen during flares (Polito et al., 2019) and could serve to constrain our model.
While our model produced long-duration heating in conjunction with impulsive flare dynamics, it was not able to reproduce sustained EUV emission on the order of hours, as seen in observation. One possible explanation for this discrepancy is the value of the drag coefficient \(D\)=50 Mm\({}^{-1}\) used in this paper. Increasing \(D\) would lead to lower retraction rates, likely resulting in more prolonged heating and more energy available to convert to turbulent Alfven waves. Because the first stage of our reconnection downflow was observed to be 540 km s\({}^{-1}\), the drag coefficient could be increased to the point of generating retraction rates on the order of 50 km s\({}^{-1}\) -- the lower speed limit of SADs observed in Savage et al. (2012).
In this new model, the chromospheric response to reconnection is driven in two different ways. The first is thermal conduction, which was part of even the earliest flare loop models (Nagai, 1980; Pallavicini et al., 1983). Augmenting this, Alfven waves are created by retraction at the loop apex and propagate to the chromosphere where they are dissipated by interacting with their own reflection. This second transport channel drives chromospheric evaporation beyond that attributable to conduction alone. It is worth noting that a fraction \(1-f_{\text{turb}}\) of the drag work will appear as turbulence on field lines neighboring that just reconnected. Some of the neighboring flux will still be unreconnected, and will now experience some chromospheric evaporation _before_ reconnecting. There are several lines of observational evidence suggesting that flare reconnection occurs on field lines whose density is higher than one would expect in the ambient, pre-flare corona (Veronig and Brown, 2004; Veronig et al., 2005; Guo et al., 2012). Our model offers a possible explanation for this previously puzzling fact.
Another mechanism proposed to extend the cooling time of a flare loop is through suppression of thermal conduction. Previous investigations into the evolution of flare loops using zero-dimensional models have shown that alterations to the mean-free path of thermal electrons extended cooling times considerably (Zhu et al., 2018; Bian et al., 2018). Our model accounts, in Equation (14), for a single MHD mechanism for decreasing \(\kappa\): decreasing the mean axial step size \(\ell_{z}\) by the lengthening of the actual path, \(\ell\), due to field line meandering on sub-resolution scales, i.e. MHD turbulence. When the perturbation velocity is significantly lower than the local Alfven speed, as implied by observations and explicitly found in our model, then field lines are lengthened only slightly. The result is the very small reduction in \(\kappa\) we have observed. Effective reduction in the mean-free path must occur not through MHD mechanisms but through the kinds of particle kinetics effects suggested by others (Bian et al., 2016).
The level of turbulence predicted here can be compared to observation through turbulent line broadening of high-temperature spectral lines. The level in our optimized run produces \(\sigma\) between 100-400 km/s, a factor of 2-3 higher than typically observed (Warren et al., 2018; Tian et al., 2014; Polito et al., 2019). There are, however, many reasons to expect our estimate to be too large. The estimate in Equation (23) assumes the field lines are viewed exactly perpendicularly and that the observation integrates over all turbulent frequencies. The turbulence consists of Alfven waves on the loop, whose frequencies will extend down to the fundamental with a period of two Alfven transit times -- perhaps up to 20 seconds. Integration times shorter than this will omit the lower frequency component of the spectrum, which contains the most energy. This will reduce the value of \(\sigma\) actually observed. It should be noted that the turbulent broadening in our model naturally decreases on an approximately 15-minute scale. This is consistent with observations (Kontar et al., 2017; Warren et al., 2018).
The model presented here is self-consistent to the extent that turbulence is ultimately generated by the retraction of a flux tube through a current sheet. The interaction between the current sheet and the tube responsible for both the drag force and the excitation of turbulent Alfven waves, however, remains an assumption of our model and is not directly investigated here. Moreover, the resolution of PREFT required MHD turbulence to be modeled according to its aggregate energy density. Small-scale perturbations in the velocity and magnetic fields that constitute the turbulence were not considered. While the simplicity of the turbulent transport model in this work warrants further study, any improvements would likely require techniques beyond the formalism of 1D flare loop simulations. Despite these limitations, the results presented here illustrate the efficacy of MHD turbulence as a mechanism for flare energy transport and heating, and provide an additional tool by which other flare phenomena can be studied.
|
2306.11886 | SPRINT: Scalable Policy Pre-Training via Language Instruction Relabeling | Pre-training robot policies with a rich set of skills can substantially
accelerate the learning of downstream tasks. Prior works have defined
pre-training tasks via natural language instructions, but doing so requires
tedious human annotation of hundreds of thousands of instructions. Thus, we
propose SPRINT, a scalable offline policy pre-training approach which
substantially reduces the human effort needed for pre-training a diverse set of
skills. Our method uses two core ideas to automatically expand a base set of
pre-training tasks: instruction relabeling via large language models and
cross-trajectory skill chaining through offline reinforcement learning. As a
result, SPRINT pre-training equips robots with a much richer repertoire of
skills. Experimental results in a household simulator and on a real robot
kitchen manipulation task show that SPRINT leads to substantially faster
learning of new long-horizon tasks than previous pre-training approaches.
Website at https://clvrai.com/sprint. | Jesse Zhang, Karl Pertsch, Jiahui Zhang, Joseph J. Lim | 2023-06-20T20:59:10Z | http://arxiv.org/abs/2306.11886v3 | # SPRINT: Scalable Policy Pre-Training via Language Instruction Relabeling
###### Abstract
Pre-training robot policies with a rich set of skills can substantially accelerate the learning of downstream tasks. Prior works have defined pre-training tasks via natural language instructions, but doing so requires tedious human annotation of hundreds of thousands of instructions. Thus, we propose SPRINT, a scalable offline policy pre-training approach which substantially reduces the human effort needed for pre-training a diverse set of skills. Our method uses two core ideas to automatically expand a base set of pre-training tasks: instruction relabeling via large language models and cross-trajectory _skill chaining_ through offline reinforcement learning. As a result, SPRINT pre-training equips robots with a much richer repertoire of skills. Experimental results in a household simulator and on a real robot kitchen manipulation task show that SPRINT leads to substantially faster learning of new long-horizon tasks than previous pre-training approaches. Website at [https://clvrai.com/sprint](https://clvrai.com/sprint).
## 1 Introduction
When humans learn a new task, e.g., how to cook a new dish, we rely on a large repertoire of previously learned _skills_, like "_chopping vegetables_" or "_boiling pasta_", that make learning more efficient. Similarly, much work in robot learning aims to equip robots with a set of useful skills [12, 13, 14, 15, 16] for improving learning efficiency. A common approach to acquiring a rich skill set is to pre-train policies on a wide range of tasks. Recent works have employed _language instructions_ as a way for humans to manually define such tasks for policy training, typically via hindsight annotation of large, pre-collected robot experience datasets [14, 15, 16]. While the resulting policies show impressive capabilities, generalization to new tasks requires a _large_ set of pre-trained skills and thus many pre-training tasks. As a result, prior works resorted to annotating robot trajectory datasets with _hundreds of thousands_ of human instruction labels [15], limiting their application outside industrial contexts. Can we instead devise a pre-training approach that similarly equips robots with a wide repertoire of skills but _minimizes_ the need for human task annotations?
We introduce SPRINT (**S**calable **P**re-training via **R**elabeling **I**Ns**T**ructions), a scalable pre-training approach that equips robots with a large set of skills while substantially reducing human labeling effort (see Figure 1). SPRINT uses extensive _automated_ relabeling to expand an initial set of pre-training tasks. Given a dataset of robot trajectories with initial language instruction annotations, we leverage two core ideas to grow the number of tasks. First, we leverage the rich knowledge captured in large language models (LLMs) to iteratively combine consecutive language instructions into more complex tasks, e.g., "_place mug in coffee machine_" and "_press brew button_" into "_make coffee_". Second, we propose a language-conditioned offline RL objective that "stitches" multiple trajectory segments from the training data to form new tasks, a process we call "skill chaining" since it allows the policy to learn longer-horizon skills. Through the combination of both techniques, SPRINT creates a much richer pre-training task set. We demonstrate that SPRINT-pre-trained robots can leverage their resulting larger skill repertoire to more efficiently learn downstream tasks.
In summary, our contributions are threefold: (1) we propose SPRINT, a scalable pre-training approach for robot policies that minimizes human task annotation effort via LLM-based aggregation and cross-trajectory skill chaining, (2) we introduce ALFRED-RL, an RL benchmark for the popular ALFRED household task simulator [Shridhar et al., 2020], to test our pre-trained agents on a rich set of long-horizon, semantically meaningful tasks, (3) we demonstrate that policies pre-trained with SPRINT learn downstream tasks more efficiently than prior pre-training approaches, both on challenging ALFRED tasks and in a real robot kitchen manipulation setup.
## 2 Related Work
**Language in RL.** There is a large body of work at the intersection of natural language processing and behavior learning for robotics, and the field has been further accelerated by the recent successes in training large, general-purpose language models. Language has been used to structure agents' representations [Andreas et al., 2017a, Nair et al., 2022], learn reward functions [Fan et al., 2022], guide task learning via recipes [Branavan et al., 2009, Andreas et al., 2017b] and perform long-horizon planning [Huang et al., 2022a, Ahn et al., 2022, Huang et al., 2022b, Singh et al., 2023]. Another line of work has used language to define a wide range of tasks for pre-training policies, resulting in impressive generalization capabilities [Lynch and Sermanet, 2021, Lynch et al., 2022, Brohan et al., 2022]. Yet, these works require collecting hundreds of thousands of costly human language instructions. Our approach SPRINT builds on this line of work but introduces two novel objectives for _automatic relabeling_ of training task instructions, thereby substantially reducing the amount of human labeling required for successful pre-training. Prior works have also investigated automated language instruction generation [Colas et al., 2020, Cideron et al., 2020, Li et al., 2022], but they focus on online learning and make assumptions that are hard to scale, e.g., hand-defined grammars [Colas et al., 2020] or privileged state information [Li et al., 2022, Cideron et al., 2020]. In contrast, we perform _offline_ training and use large language models for _scalable_ task generation.
**Pre-training Policies for RL.** Developing policy pre-training approaches for faster downstream learning has been investigated for many years [Ijspeert et al., 2002, Theodorou et al., 2010, Hester et al., 2018]. Recent advances in offline reinforcement learning [Levine et al., 2020] enabled approaches that can pre-train agents offline and effectively finetune them on online tasks [Peng et al., 2019, Singh et al., 2020, Nair et al., 2020, Kostrikov et al., 2022]. However, these approaches require target-task reward annotations on the pre-training data, and the resulting policies are only pre-trained to solve the target task. Meta-RL approaches, on the other hand, pre-train on a range of tasks and thus allow fast adaptation to _unseen_ downstream tasks [Duan et al., 2016, Finn et al., 2017, Rakelly et al., 2019, Nam et al., 2022], yet require the tedious manual definition of pre-training tasks by experts. To avoid manual task design, other works have explored unsupervised pre-training approaches based on behavior diversification [Achiam et al., 2018, Eysenbach et al., 2019, Sharma et al., 2019], extraction of behavior priors from offline agent experience [Pertsch et al., 2020, Ajay et al., 2020, Singh et al., 2021] or goal state reaching [Mendonca et al., 2021, Chebotar et al., 2021]. Yet, such unsupervised pre-training approaches learn skill repertoires without semantic meaning, which, as we demonstrate in Section 4, lead to worse downstream task transfer than SPRINT's skill repertoire learned via language.
Figure 1: We propose SPRINT, a scalable approach for pre-training robot policies with a rich repertoire of skills while minimizing human annotation effort. Given a dataset of robot trajectories with an initial set of task instructions for offline pre-training, SPRINT expands the pre-training task set without additional human effort via language-model-based **instruction relabeling** and **cross-trajectory skill chaining**. SPRINT-pre-trained policies enable efficient finetuning on unseen target tasks.
**Pre-trained Models for Data Augmentation.** Obtaining robot (pre-)training data at scale is costly. Thus, recent works have explored using world knowledge captured in large pre-trained models for enriching robot learning datasets, e.g., by increasing the visual diversity of trajectories (Yu et al., 2023; Chen et al., 2023; Mandi et al., 2023) or annotating unlabeled data (Xiao et al., 2023). Our approach similarly leverages pre-trained (language) models for automated data augmentation. By investigating an orthogonal augmentation direction, aggregation and chaining of natural language instructions, SPRINT is complementary to these methods.
## 3 SPRINT: Scalable Policy Pre-Training with Language Instructions
In this paper, we propose SPRINT (**S**calable **P**re-training via **R**elabeling **L**anguage **I**Ns**Tructions), an approach for pre-training robot policies that equips them with a rich repertoire of skills to enable efficient finetuning on unseen tasks. Following prior work on agent pre-training, SPRINT assumes access to a large offline dataset \(\mathcal{D}\) of agent experience (Gupta et al., 2019; Lynch et al., 2020; Pertsch et al., 2020; Chebotar et al., 2021; Ebert et al., 2022; Pertsch et al., 2021), collected e.g., from prior RL runs or via teleoperation. We further assume that the data is annotated with an initial set of natural language task instructions, e.g., "_put a mug in the coffee machine_" or "_push the brew button_", that can be collected _in hindsight_ via platforms like Amazon Mechanical Turk (Lynch and Sermanet, 2021; Shridhar et al., 2020). Given a sequence \(\tau\) of states and actions from the dataset \(\mathcal{D}\), annotators can label sub-trajectories \(\tau_{1}=[s_{0},a_{0},s_{1},\dots],\tau_{2}=\dots\) with free-form language descriptions \(z_{1},z_{2},\dots\) of the skills executed in the respective sub-trajectories (see Figure 2, left), resulting in a _language-annotated_ dataset \(\mathcal{D}^{L}\).
**Approach Overview.** SPRINT equips policies with a diverse repertoire of skills via language-instruction-conditioned offline RL: given a natural language task description \(z\), the policy \(\pi(a|s,z)\) is rewarded for successfully executing the instruction (Section 3.1). Intuitively, the richer the set of task instructions during pre-training, the more skills the policy will learn and the more downstream tasks it can finetune on efficiently. Thus, SPRINT introduces two approaches for increasing the scale and diversity of the pre-training task instructions without requiring additional costly human inputs. Firstly, SPRINT leverages pre-trained language models to aggregate consecutive instructions into new tasks (Figure 2, middle, Section 3.2). Secondly, SPRINT introduces an objective for cross-trajectory skill-chaining via offline RL that generates novel instruction chains (Figure 2, right, Section 3.3). SPRINT pre-trains policies on the combined set of tasks and thereby equips them with a richer skill repertoire. In our experiments (Section 4) we demonstrate that this leads to more effective learning of new tasks. See appendix, Alg. 1 for the pseudocode of our method.
Figure 2: SPRINT overview. We assume access to a dataset of agent experience with language instructions for the performed skills **(1)**. Collecting such instructions with human hindsight annotation is a flexible yet costly approach for defining pre-training tasks. Thus, SPRINT introduces two approaches for automatically growing the set of pre-training tasks without additional human effort: **(2)** by aggregating language instructions with an LLM and adding the relabeled trajectories back into the pre-training dataset (Section 3.2), **(3)** by performing cross-trajectory chaining of skills to enable pre-training of skills that are unseen in the offline agent experience (Section 3.3).
### Instruction-Conditioned Offline RL
To pre-train our policy \(\pi\) with the natural language instruction dataset \(\mathcal{D}^{L}\), we take inspiration from goal-conditioned RL (Kaelbling, 1993; Schaul et al., 2015; Chebotar et al., 2021): instead of rewarding the policy for reaching goal states, we condition our policy \(\pi(a|s,z)\) on _language instructions_\(z\) from \(\mathcal{D}^{L}\) and provide a sparse reward \(R(s,a,z)\) to the agent for reaching the end-state \(s_{T_{s}}\) of the sub-trajectory. Formally, we define the reward as:
\[R(s,a,z)=\begin{cases}1,&\text{for }s=s_{T_{s}}\\ 0,&\text{otherwise.}\end{cases} \tag{1}\]
We train our policy \(\pi(a|s,z)\) to maximize this reward with offline RL (Levine et al., 2020) using an instruction-conditioned critic \(Q(s,a,z)\). Specifically, we use Implicit Q-Learning (Kostrikov et al., 2022) as it is easy to tune.
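As a concrete illustration of this objective, the sketch below converts one language-annotated sub-trajectory into transitions carrying the sparse reward of Equation (1). The data structures and names are illustrative assumptions rather than the authors' released implementation.

```python
from dataclasses import dataclass
from typing import Any, List

@dataclass
class Transition:
    state: Any          # e.g. an image observation
    action: Any
    instruction: str    # language annotation z for the sub-trajectory
    reward: float
    done: bool

def label_subtrajectory(states: List[Any], actions: List[Any], z: str) -> List[Transition]:
    """Convert one annotated sub-trajectory into sparse-reward transitions (Eq. 1):
    reward 1 only on the final transition that reaches the end-state."""
    out = []
    for t, (s, a) in enumerate(zip(states, actions)):
        last = (t == len(actions) - 1)
        out.append(Transition(s, a, z, 1.0 if last else 0.0, last))
    return out
```

The resulting transitions can then be fed to any instruction-conditioned offline RL learner, such as the Implicit Q-Learning setup described above.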
### Language-Model-Based Instruction Aggregation
Large language models (LLMs), trained on massive corpora of internet text data, have been shown to be effective at performing a variety of tasks - from question answering to program synthesis - when prompted with relevant text (Devlin et al., 2018; Brown et al., 2020; Wang and Komatsuzaki, 2021; Rae et al., 2021; Hoffmann et al., 2022; Zhang et al., 2022; Chowdhery et al., 2022). Here we use LLMs to _aggregate_, i.e., paraphrase, the existing language instructions in \(\mathcal{D}^{L}\) (see Figure 2, middle). Given a trajectory that contains multiple sub-trajectories, we can aggregate adjacent sub-trajectories into a longer trajectory and relabel its natural language annotation with a summary of the individual instructions generated by the LLM, thereby generating a new _higher-level_ pre-training task that encompasses instructions from multiple sub-trajectories.2 We use a simple summarization prompt to instruct the language model (see Figure 3). Specifically, we aggregate with LLAMA-13B (Touvron et al., 2023), an open-source 13 billion parameter LLM (see Section D.1 for qualitative examples). Like in Section 3.1, the reward for this new aggregated sub-trajectory is 1 at the last transition and 0 otherwise. For example, we prompt the LLM to summarize the two skills (\(z_{1}:\)"_Put a mug in the coffee machine_," \(z_{2}:\)"_Push the brew button_"), resulting in a new annotation \(\hat{z}_{1:2}\) describing both skills (e.g., "_Make coffee_"). We then add the new trajectory back to our dataset \(\mathcal{D}^{L}\). Using this technique, we generate new language annotations for all \(\binom{N}{2}\) tuples of consecutive sub-trajectories in our dataset. In practice, this increases the number of task instructions by 2.5x in ALFRED and 2x in our robot manipulation dataset (see Section 4).
Footnote 2: Other relabeling operations, such as splitting an instruction into lower-level instructions, can also be performed by the LLM. However, such operations require grounding the LLM in the agent’s observations to determine sub-trajectory split points. We leave investigating this to future work.
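As a sketch of how this aggregation can be implemented, the code below relabels every span of consecutive sub-trajectories within one source trajectory using an LLM wrapper. The prompt wording and the summarize_fn interface are illustrative assumptions, not the paper's full prompt or released code.

```python
from itertools import combinations

def aggregate_instructions(sub_trajs, summarize_fn):
    """Create higher-level tasks by LLM-summarizing spans of consecutive sub-trajectories.

    sub_trajs    : list of (states, actions, instruction) for ONE source trajectory, in order.
    summarize_fn : callable(prompt: str) -> str wrapping an LLM (e.g. LLAMA-13B).
    """
    new_tasks = []
    for i, j in combinations(range(len(sub_trajs)), 2):          # every consecutive span i..j, i < j
        span = sub_trajs[i:j + 1]
        prompt = ("Summarize the following instructions into one short task description: "
                  + "; ".join(z for _, _, z in span))
        new_z = summarize_fn(prompt)                              # e.g. "Make coffee"
        states = [s for ss, _, _ in span for s in ss]             # concatenate the span
        actions = [a for _, aa, _ in span for a in aa]
        new_tasks.append((states, actions, new_z))                # rewarded as in Eq. (1)
    return new_tasks
```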
### Cross-Trajectory Chaining
Agents trained with offline RL can combine behaviors from multiple trajectories via value propagation, i.e., perform "stitching" (Levine et al., 2020). For example, if trajectory (A) shows cleaning the mug in the sink while trajectory (B) starts with placing the mug in the coffee machine, offline RL algorithms are able to learn to clean the mug in the sink and then place it in the coffee machine (see Figure 2, right) and thus learn long-horizon behaviors that are unseen in the training data. In our case of _instruction-conditioned_ offline RL, enabling such stitching requires special care. Due to the different language instruction conditionings for the critic \(Q(s,a,z_{A})\) and \(Q(s,a,z_{B})\), values do not naturally propagate from trajectory (B) back to trajectory (A). Instead, we must actively add "chaining examples" to our training dataset (Chebotar et al., 2021). To build such chaining examples, we first sample two sub-trajectories: instead of restricting sampling to consecutive trajectory segments as in Section 3.2, we can now sample \(\tau_{z_{A}}\) and \(\tau_{z_{B}}\) from _different_ trajectories (see Figure 4). Next, we create an aggregate instruction \(\hat{z}\) which indicates that the agent first finishes skill (A) and then finishes skill (B), e.g., "_clean the coffee mug (A) and place it in the coffee machine (B)_".3
Footnote 3: Note that we could generate \(\hat{z}\) using the same LLM summarization as in Section 3.2. Yet we found the resulting summaries to often be confusing since randomly paired instructions _from different trajectories_ can rarely be summarized meaningfully. We got the best empirical results by simply concatenating the sampled instructions with the word “and”.
Finally, we _cannot_ use the reward function from Section 3.2 that simply sets the reward to 1 at the end of every subtrajectory, since the last state \(s_{T_{A}}\) of the first trajectory does not solve the combined instruction \(\hat{z}\). Yet, only setting a reward of 1 on the final state of the second trajectory \(s_{T_{B}}\) would not accurately propagate value back into trajectory
Figure 3: A shortened example of the LLM prompt. See the full prompt in the appendix.
(A), since the two trajectories are _separate_, potentially requiring many additional transitions to be labeled with the same composite instruction \(\hat{z}\) to "bridge" values back from the initial state of \(\tau_{z_{B}}\) to the final state of \(\tau_{z_{A}}\) (see Figure 4). Instead of relying on this propagation to happen by chance, we can _anchor_ the final state of \(\tau_{z_{A}}\) with a reward that is proportional to the likelihood of finishing the overall task \(\hat{z}\). How can we compute such a reward?
Q functions trained with TD learning (Sutton and Barto, 2018) for the sparse reward from Equation 1 intuitively represent a value that is proportional to the probability of reaching the goal state \(g_{z}\) at time \(T\) (Eysenbach et al., 2022; Chebotar et al., 2021):
\[Q^{\pi}(s_{t},a_{t},z)=\mathbb{E}\left[\sum_{t^{\prime}=t}^{T}\gamma^{t^{\prime}-t}R(s_{t^{\prime}},a_{t^{\prime}},z)\right]=\mathbb{E}\left[\gamma^{T-t}\mathbbm{1}\left[s_{T}=g_{z}\right]\right]\propto P^{\pi}(s_{T}=g_{z}|s_{t},a_{t}). \tag{2}\]
where \(\gamma\in(0,1)\) denotes the discount factor. When anchoring the final state of the first trajectory, we want to use a reward that is proportional to the likelihood of finishing _the remainder_ of the combined task \(\hat{z}\), i.e., proportional to the likelihood of finishing the second subtask \(z_{B}\) from \(s_{T_{A}}\). Following Eq. 2, we can directly use \(Q(s_{T_{A}},a_{T_{A}},z_{B})\) to represent that probability:
\[R(s,a,\hat{z})=\begin{cases}1,&\text{for }s=s_{T_{B}}\\ Q(s,a,z_{B}),&\text{for }s=s_{T_{A}}\\ 0,&\text{otherwise.}\end{cases} \tag{3}\]
Note that unlike in Section 3.2, we treat both relabeled \(\tau_{z_{A}},\tau_{z_{B}}\) as separate trajectories as we do not know the states and actions required to transition from the last state of (A) to states in (B). For a discussion how chaining preserves the structure of the original MDP, see Appendix Section B.6.1. Finally, since \(Q\) changes during training, we compute the rewards in Eq. 3 online while training.
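To make the chaining construction explicit, the sketch below builds one cross-trajectory chaining example with the reward of Equation (3). The function and argument names are illustrative assumptions; as noted above, the Q-based reward on the final state of the first segment would be recomputed online with the current critic during training.

```python
def make_chaining_example(traj_a, traj_b, z_a, z_b, q_fn):
    """Build a cross-trajectory chaining example (Section 3.3).

    traj_a, traj_b : lists of (state, action) pairs from two DIFFERENT trajectories,
                     originally annotated with instructions z_a and z_b.
    q_fn           : current critic, q_fn(state, action, instruction) -> float.
    Returns two relabeled trajectories sharing the composite instruction and using Eq. (3).
    """
    z_hat = z_a + " and " + z_b                       # composite instruction
    relabeled_a, relabeled_b = [], []
    for t, (s, a) in enumerate(traj_a):
        last = (t == len(traj_a) - 1)
        # anchor the final state of trajectory A with the value of finishing z_b from there
        r = q_fn(s, a, z_b) if last else 0.0
        relabeled_a.append({"state": s, "action": a, "instruction": z_hat,
                            "reward": r, "done": last})
    for t, (s, a) in enumerate(traj_b):
        last = (t == len(traj_b) - 1)
        relabeled_b.append({"state": s, "action": a, "instruction": z_hat,
                            "reward": 1.0 if last else 0.0, "done": last})
    return relabeled_a, relabeled_b
```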
## 4 Experiments
In our experiments, we investigate how well an agent pre-trained with SPRINT performs on challenging unseen tasks. Specifically, we answer the following questions: (1) Does SPRINT enable more efficient finetuning on unseen target tasks than previous pre-training approaches? (2) Can SPRINT agents execute unseen language instructions zero-shot? (3) Does _language_ pre-training lead to better generalization to unseen environments than unsupervised pre-training?
### Experimental Setup
We evaluate our approach on two image-based control environments (see Figure 5): ALFRED-RL, a new RL benchmark we develop in the popular ALFRED household task simulator (Shridhar et al., 2020), and a real robot kitchen manipulation task with a Jaco 2 robot arm.
Figure 4: SPRINT chains skills from \(\tau_{1}\) and \(\tau_{2}\) into a new trajectory. This is added to the buffer as two separate trajectories with updated language instructions and appropriate reward values.
Figure 5: **Left:** ALFRED provides a rich set of long-horizon, meaningful tasks and a dataset of 6.6k language-annotated demos (figure drawn w/ permission from Shridhar et al. (2020)). We introduce the ALFRED-RL Benchmark which tests finetuning of RL agents on unseen tasks and scenes. **Right**: Our Jaco robot arm with RGB image-based control.
**ALFRED-RL Benchmark.** Our goal is to compare different pre-training approaches on a diverse set of semantically meaningful, long-horizon tasks. Yet, existing multi-task RL environments typically evaluate only on short-horizon or semantically meaningless tasks [Yu et al., 2019, Mees et al., 2022]. Thus, we introduce a new RL benchmark based on the ALFRED household task simulator [Shridhar et al., 2020]. While ALFRED abstracts away low-level agent control into discrete actions like "pick up" or "turn left," its 100+ rich indoor scenes with many interactable objects allow us to evaluate an agent's capabilities for solving long-horizon household tasks from a rich task distribution. Although the original benchmark focuses on imitation learning, we extend it to support RL training of agents through an OpenAI gym interface with \(300\times 300\) egocentric RGB observations and an action space consisting of 12 discrete action choices and 82 interactable object types [Pashevich et al., 2021]. We create three evaluation task sets that test progressively more challenging axes of generalization: \(\textit{EVAL}_{\textit{\tiny\emph{INSTRUCT}}}\) uses unseen human-generated instructions on familiar scenes, \(\textit{EVAL}_{\textit{\tiny\emph{LENGTH}}}\) uses tasks that are longer than any observed in pre-training, testing "stitching" capabilities, and \(\textit{EVAL}_{\textit{\tiny\emph{SCENE}}}\) uses tasks in unseen floorplans. For more details about ALFRED-RL and evaluation set construction, see Appendix, Section C.1.2.
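To illustrate the interface described above, the sketch below shows a gym-style environment stub with the stated observation and action spaces. The class name and internals are hypothetical and only indicate the shape of the benchmark API, not the released code.

```python
import gym
import numpy as np
from gym import spaces

class ALFREDRLEnv(gym.Env):
    """Illustrative gym-style stub for an ALFRED-RL task: 300x300 egocentric RGB
    observations and a factored discrete action space of 12 action types x 82
    interactable object types. (Hypothetical interface for illustration only.)"""

    def __init__(self, task_instruction: str):
        self.task_instruction = task_instruction
        self.observation_space = spaces.Box(0, 255, shape=(300, 300, 3), dtype=np.uint8)
        self.action_space = spaces.MultiDiscrete([12, 82])   # (action type, object type)

    def reset(self):
        # would load an ALFRED floor plan and return the first egocentric frame
        return np.zeros((300, 300, 3), dtype=np.uint8)

    def step(self, action):
        # would execute the discrete action in the simulator
        obs = np.zeros((300, 300, 3), dtype=np.uint8)
        reward, done, info = 0.0, False, {"subgoals_completed": 0}
        return obs, reward, done, info
```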
**Real-World Robot Kitchen Manipulation.** To evaluate pre-training approaches on end-to-end _low-level_ robot control, we design a set of stylized kitchen manipulation tasks with a Kinova Jaco 2 robot arm. The policy's inputs are RGB images from a wrist-mounted and a third-person camera and it produces continuous end-effector displacement actions and a discrete gripper open/stay/close action at a control frequency of 10Hz. We collect a dataset of 329 long-horizon trajectories via human teleoperation, each consisting of multiple language-annotated sub-trajectories like _"pick up the apple fruit"_, _"place the black bowl in the dish rack,"_ etc. For evaluation, we construct three long-horizon tasks, sequencing 2 to 8 "primitive skills" like the ones mentioned above, in environment configurations that are unseen in the pre-training data. We collect 25 demonstrations for each of the three tasks to evaluate offline fine-tuning performance of different pre-trained policies.
**Comparisons.** We compare SPRINT against common policy pre-training approaches: behavioral cloning and offline goal-conditioned RL: **Language-conditioned BC (L-BC)**[Jang et al., 2021, Lynch and Sermanet, 2021]: Behavior cloning (BC) conditioned on the individual language instructions, **Episodic Transformers (ET)**[Pashevich et al., 2021]: BC conditioned on sequences of language instructions - ET is the best-performing end-to-end learned policy on the ALFRED leaderboard that _does not_ use privileged domain knowledge like hand-engineered policies or voxel maps, **Actionable Models (AM)**[Chebotar et al., 2021]: Goal-conditioned offline RL with randomly sampled goal observations from the same training data as SPRINT.
**Implementation Details.** All methods use the same transformer-based architecture and hyperparameters where possible and have access to the same training data \(\mathcal{D}^{L}\). For more implementation details, see appendix Section B. All results are means and standard deviations over 3 seeds.
### SPRINT Solves Long-Horizon Tasks Zero-Shot
We first test the effectiveness of SPRINT's pre-training by analyzing zero-shot performance across 100 unseen tasks in the \(\textit{EVAL}_{\textit{\tiny\emph{INSTRUCT}}}\) evaluation set. We report results in Figure 6 (left). Our approach, SPRINT, achieves 2-8x
Figure 6: Evaluation results on the ALFRED-RL benchmark. **Left**: Zero shot performance on the \(\text{EVAL}_{\text{\tiny\emph{INSTRUCT}}}\) and \(\text{EVAL}_{\text{\tiny\emph{LENGTH}}}\) task sets. SPRINT (green) is able to complete substantially more subtasks than prior pre-training approaches. **Middle**: Breakdown of zero shot performance by task length. SPRINT performs well on challenging, long-horizon tasks. Numerical results in appendix Table 4. **Right**: Finetuning performance in unseen floor plans of the \(\text{EVAL}_{\text{\tiny\emph{SCENE}}}\) task set. SPRINT learns tasks in new floor plans more effectively by reaching higher peak performance.
higher zero-shot task performance than prior pre-training approaches AM and L-BC. Even though ET, like SPRINT, is trained to condition on long-horizon instruction sequences, ours still outperforms it overall by 2x. To better understand the differences between the methods, we report the breakdown of returns by length of the evaluation task in Figure 6 (middle). We find that L-BC, ET, and SPRINT achieve similar performance on length 1 tasks. However, on long-horizon tasks, SPRINT achieves much higher returns than all baselines since it can leverage the language model to automatically generate longer-horizon pre-training tasks. In contrast, standard L-BC approaches train only on the human-provided, shorter-horizon annotations and thus cannot zero-shot perform long-horizon tasks. Similar to our approach, AM trains to reach long-horizon goals during pre-training but the results in Figure 6 (left) show that its pre-training with goal-state conditioning is _less_ effective than our language-conditioned pre-training. These results also hold for the \(\textit{EVAL}_{\textit{LENGTH}}\) task set, which tests generalization to task horizons beyond the ones seen during training. On these most challenging tasks, SPRINT outperforms the best baseline by 2.5x (see appendix Figure 9 for an example trajectory, Section D.2 for qualitative comparisons).
### SPRINT Agents Finetune Effectively in Unseen Environments
**ALFRED-RL.** We test SPRINT's finetuning performance to unseen tasks on the most challenging _EVALSCENE_ task set in unseen household floor plans with 50k environment interactions. This corresponds to a realistic scenario in which an agent is placed in a new household environment and needs to leverage skills learned during pre-training to solve new tasks with minimal environment interaction. To implement finetuning for SPRINT and AM, we condition the policy on a language instruction or goal image from the target task respectively and then run IQL with online data collection. For L-BC and ET, we first pre-train a language-conditioned critic with IQL on the pre-training dataset and then finetune both the policy and critic with online IQL.
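To make the finetuning procedure explicit, a minimal sketch of the online IQL finetuning loop is given below. The policy, buffer, and iql_update objects are placeholders standing in for whatever agent and replay implementation is used; they are not the paper's actual code.

```python
def finetune_on_target_task(env, policy, buffer, target_z, iql_update, num_steps=50_000):
    """Online finetuning: condition the pre-trained policy on the target-task
    instruction and continue IQL training with newly collected experience.
    (All object names here are illustrative placeholders.)"""
    obs = env.reset()
    for _ in range(num_steps):
        action = policy.act(obs, target_z)                     # instruction-conditioned policy
        next_obs, reward, done, _ = env.step(action)
        buffer.add(obs, action, reward, next_obs, done, target_z)
        iql_update(policy, buffer)                             # one IQL gradient step
        obs = env.reset() if done else next_obs
    return policy
```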
We report finetuning results in Figure 6 (right), with qualitative examples in Appendix, Section D.2. SPRINT quickly achieves higher downstream task return than the best prior work. Specifically, L-BC converges to lower peak performance than SPRINT. ET performs the worst across all comparisons, perhaps because transferring from instruction sequences to high-level task descriptions is challenging. Meanwhile, AM performs similarly to L-BC, possibly because unseen goal states are more difficult to learn from. In contrast, SPRINT's pre-training with language conditioning allows for effective transfer even to unseen environments since the semantics of the tasks transfer well: the language description "_place cup in coffee machine_" transfers to many environments while the goal image for the same task might look very different. Thus, pre-training with language instructions can enable better transfer for learning tasks in new environments than pre-training to reach goal states.
**Real Robot.** We also measure finetuning performance on an unseen environment on our real robot setup. We evaluate on three tasks consisting of 2, 4, and 8 subgoals, respectively, e.g., _Serve milk in the bowl and butter and baked bread in the plate_ (see Section C.2 for task details). We collect 25 demonstrations per task for offline finetuning. We compare SPRINT against **L-BC**, a version of L-BC trained on full sequences of concatenated language instructions (**L-BC Composite**), and a method that is trained only on the downstream task demonstrations (**No pre-train**).
Results in Table 1 demonstrate that _No Pre-train_ performs poorly, indicating that pre-training is necessary in this difficult real-world setup. Among the other pre-training methods, SPRINT achieves the best success rates and completes the most subgoals on all tasks. Compared to L-BC Composite, SPRINT achieves higher returns and success rates on the more challenging Length 4 and 8 tasks. See Figure 7 for an example of a length 8 task evaluation and appendix Section D.2 for more visualizations.
Table 1: Success rates and number of subgoals completed after fine-tuning on the tabletop arrangement displayed on the left with unseen object combinations, averaged over 5 trials.

| Method | Length 2 Success | Length 2 # Tasks | Length 4 Success | Length 4 # Tasks | Length 8 Success | Length 8 # Tasks |
| --- | --- | --- | --- | --- | --- | --- |
| SPRINT (ours) | **100%** | **2.0** | **60%** | **3.4** | **40%** | **6.2** |
| L-BC Composite | **100%** | **2.0** | 40% | 2.8 | 20% | 5.2 |
| L-BC | **100%** | **2.0** | 40% | 0.4 | 0% | 2.0 |
| No pre-train | 0% | 1.0 | 0% | 0.0 | 0% | 0.0 |
Figure 7: Successful rollout of a SPRINT agent offline finetuned on 25 demos of the challenging task above with object combinations not in the pre-training data. SPRINT solves all 8 tasks in sequence.
### Ablation Studies
We verify the effectiveness of the components of our approach with the following ablations: **SPRINT w/o chain** removes cross-trajectory chaining (Section 3.3) and instead trains only on within-trajectory human-provided and LLM-aggregated tasks; **SPRINT w/o LLM-agg** additionally removes the LLM aggregation (Section 3.2), thus training the offline RL agent only on the human-provided task annotations. We report zero-shot evaluation results on ALFRED in Table 2. The results show that each component of our approach improves zero-shot evaluation performance. We observe a particularly large performance loss when removing the LLM aggregation of pre-training tasks.
## 5 Discussion and Limitations
We presented SPRINT, an approach for scalable agent pre-training that automatically generates training tasks for offline RL via LLM relabeling and cross-trajectory skill chaining. We demonstrated that SPRINT pre-training leads to higher zero-shot and finetuning performance on diverse household tasks in the ALFRED simulator and on real-robot kitchen manipulation tasks.
**Limitations.** Currently, SPRINT can _only_ leverage language-annotated data. Investigating approaches that can _combine_ language-annotated and unannotated data for effective pre-training is an interesting future direction. Furthermore, the cross-trajectory chaining objective of SPRINT randomly concatenates sentences together, regardless of whether their combination is semantically meaningful. Future work can investigate filtering mechanisms for chaining to only include meaningful tasks. Finally, the development of real robot benchmarks with rich sets of long-horizon tasks is an open challenge.
## Acknowledgements
We would like to thank Shirin Dass, Sidhant Kaushik, Laura Smith, Siddarth Verma, and Jullian Yapeter for assisting with task instruction labeling. We thank Taewook Nam for assisting with the actionable models implementation we based some of our code on, Xiang Ren for detailed feedback on earlier versions of the paper, and Yevgen Chebotar for helping us tune the actionable models baseline. Finally, we thank all of the CLVR lab members at KAIST and USC for their constructive feedback.
This work was supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grants (No.2019-0-00075, Artificial Intelligence Graduate School Program, KAIST; No.2022-0-00077, AI Technology Development for Commonsense Extraction, Reasoning, and Inference from Heterogeneous Data) and National Research Foundation of Korea (NRF) grant (NRF-2021H1D3A2A03103683), funded by the Korean government (MSIT).
Table 2: Ablations. SPRINT achieves the highest return with both objectives.

| Ablation | \(\textit{EVAL}_{\textit{INSTRUCT}}\) | \(\textit{EVAL}_{\textit{LENGTH}}\) |
| --- | --- | --- |
| SPRINT (ours) | **1.92 \(\pm\) 0.01** | **4.31 \(\pm\) 0.51** |
| SPRINT w/o Chain | 1.71 \(\pm\) 0.14 | 4.10 \(\pm\) 0.15 |
| SPRINT w/o LLM-agg | 0.37 \(\pm\) 0.01 | 0.19 \(\pm\) 0.11 |
2307.03173 | Data processing of Visible Emission Line Coronagraph Onboard ADITYA L1 | ADITYA-L1 is India's first dedicated mission to observe the sun and its
atmosphere from a halo orbit around L1 point. Visible emission line coronagraph
(VELC) is the prime payload on board at Aditya-L1 to observe the sun's corona.
VELC is designed as an internally occulted reflective coronagraph to meet the
observational requirements of wide wavelength band and close to the solar limb
(1.05 Ro). Images of the solar corona in continuum and spectra in three
emission lines 5303{\AA} [Fe xiv], 7892{\AA} [Fe xi] and 10747 [Fe xiii]
obtained with high cadence to be analyzed using software algorithms
automatically. A reasonable part of observations will be made in synoptic mode,
those, need to be analyzed and results made available for public use. The
procedure involves the calibration of instrument and detectors, converting the
images into fits format, correcting the images and spectra for the instrumental
effects, align the images etc. Then, develop image processing algorithms to
detect the occurrence of energetic events using continuum images. Also derive
physical parameters, such as temperature and velocity structure of solar corona
using emission line observations. Here, we describe the calibration of
detectors and the development of software algorithms to detect the occurrence
of CMEs and analyze the spectroscopic data. | Muthu Priyal, Jagdev Singh, B. Raghavendra Prasad, Chavali Sumana, Varun Kumar, Shalabh Mishra, S. N. Venkata, G. Sindhuja, K. Sasikumar Raja, Amit Kumar, Sanal krishnan, Bhavana S. Hegde, D. Utkarsha, Natarajan Venkatasubramanian, Pawankumar Somasundram, S. Nagabhushana, PU. Kamath, S. Kathiravan, T. Vishnu Mani, Suresh Basavaraju, Rajkumar Chavan, P. Vemareddy, B. Ravindra, S. P. Rajaguru, K. Nagaraju, Wageesh Mishra, Jayant Joshi, Tanmoy Samanta, Piyali Chatterjee, C. Kathiravan, R. Ramesh | 2023-07-06T17:55:13Z | http://arxiv.org/abs/2307.03173v1 | # Data processing of Visible Emission Line Coronagraph Onboard ADITYA-L1
###### Abstract
ADITYA-L1 is India's first dedicated mission to observe the sun and its atmosphere from a halo orbit around the L1 point. The Visible Emission Line Coronagraph (VELC) is the prime payload on board Aditya-L1 to observe the sun's corona. VELC is designed as an internally occulted reflective coronagraph to meet the observational requirements of a wide wavelength band and observations close to the solar limb (1.05 Ro). Images of the solar corona in the continuum and spectra in three emission lines, 5303A [Fe xiv], 7892A [Fe xi] and 10747A [Fe xiii], will be obtained with high cadence and analyzed automatically using software algorithms. A substantial part of the observations will be made in synoptic mode; these need to be analyzed and the results made available for public use. The procedure involves the calibration of the instrument and detectors, converting the images into FITS format, correcting the images and spectra for instrumental effects, aligning the images, etc. Image processing algorithms are then developed to detect the occurrence of energetic events using the continuum images, and physical parameters, such as the temperature and velocity structure of the solar corona, are derived using the emission line observations. Here, we describe the calibration of the detectors and the development of software algorithms to detect the occurrence of CMEs and analyze the spectroscopic data.
VELC; Aditya-L1 mission; Coronagraph; Imaging and spectroscopic observations; Data pipeline architecture; Data flow
## 1 Introduction
Reliable spectroscopic observations of the emission corona with good photometric accuracy and high spectral resolution up to 1.5 Ro (Ro - solar radius) or beyond will help in understanding the physical and dynamical nature of the solar corona (Singh et al (2006) [9], Singh et al (2019) [12]). Most of the spectroscopic or imaging data in visible emission lines have been recorded during total solar eclipses or using ground-based coronagraphs under excellent clear sky conditions. Ground-based coronal observations are limited by the small number of hours per day with very clear coronagraphic sky conditions (Singh et al (2004) [8], Singh et al (2011) [11], Ichimoto et al (1999) [2]). The increase in sky brightness caused by scattering of sunlight from water vapour and aerosols in the earth's atmosphere makes it challenging to observe the weak coronal signal. Such a problem does not arise in space, where we can observe 24 hours a day throughout the year by placing the satellite in a halo orbit around the Lagrangian L1 point.
The ADITYA-L1 mission is a space-based solar observatory with seven payloads on-board, planned to be placed at the first Lagrange point (L1) of the sun-earth system. The Visible Emission Line Coronagraph (VELC) (Singh et al (2011) [10] and Prasad et al (2017) [5]) is a space-based solar coronagraph and a major payload on-board Aditya-L1 to study the solar corona. VELC is designed to perform imaging of the solar corona at 500 nm, simultaneous spectroscopy of the corona in emission lines centered around 530.3 nm [Fe XIV], 789.2 nm [Fe XI] and 1074.7 nm [Fe XIII], and spectro-polarimetry at 1074.7 nm [Fe XIII]. The FOV of the imaging channel is 1.05 - 3.0 Ro, and 1.05 - 1.5 Ro for the spectroscopy channels. The payload's uniqueness stems from the fact that observations of the solar corona close to the limb (1.05 Ro) with a high cadence (Prasad et al (2017) [5] and Kumar et al (2018) [3]) are possible. Further, imaging of the solar corona in the continuum will yield the speed of CMEs in the plane of the sky, while spectroscopic emission line observations give the line-of-sight velocity at the same time. Hence, it will be possible to derive the true velocity of CMEs and study their acceleration or deceleration.
Several corrections are required for solar data obtained from space instruments compared to data taken with ground-based telescopes. There are many space-based missions to study the sun and its atmosphere (Brueckner et al (1995) [1] and Wulser et al (2018) [16]). It is convenient to transmit the data from space instruments to the ground station in binary and compressed format, permitting more observations within the same telemetry volume. Therefore, as a first step of the calibration, the raw data need to be decompressed and converted to the Flexible Image Transport System (FITS) format. Then many corrections to the data need to be made, such as checking the file size, dark-current subtraction, flat fielding of images / spectra, geometrical calibration, wavelength calibration, time-dependent corrections, replacement of spikes / bad pixels and aligning the images. The procedure to perform these basic corrections is discussed in an earlier paper (Singh et al (2022) [13]). After the basic corrections, one needs to convert the observed counts from the digital cameras to absolute numbers such as flux or absolute intensity and correct for the instrument characteristics. Other effects, due to scattered light, broadening of line profiles by the instrument, and geometrical distortion of the spectra by the optics, need to be corrected after the basic analysis of the data. To apply these corrections, calibration of the instrument in the laboratory before the launch and in space after the launch is needed. Here, we shall discuss the calibration of the detectors carried out in the laboratory and the development of software to analyse the data and derive physical parameters from the observations. To examine the performance of the developed software codes we have used coronal images from the C-2 coronagraph onboard SOHO and coronal spectroscopic data obtained with the 25-cm coronagraph at the Norikura Observatory, Japan. Provision was made to record observations in 4 emission lines simultaneously at the 25-cm coronagraph at Norikura. The 5303A [Fe xiv], 10747A [Fe xiii] and 10798A [Fe xiii] lines were common to the recorded spectra. In addition, one of the two spectral lines, 6374A [Fe x] or 7892A [Fe xi], could be chosen for observations. We have also used the laboratory calibration data to verify the codes and confirm the satisfactory working of the instrument as desired. Here, we discuss the calibration of the detectors, some specific corrections and the verification of the codes using observations with SOHO, the Norikura coronagraph and data obtained
during total solar eclipses.
## 2 Calibration of the Instrument
The specifications of the various optical components of the instrument, the performance of the mechanical units and the responses of the detectors need to be examined, first individually and later as an integrated payload; these tests are described in Venkata et al (2017) [14] and Venkata et al (2021) [15]. There are three CMOS detectors and one IR detector. The performance of all three CMOS detectors is similar.
### **Calibration of CMOS detectors in Laboratory before launch**
The important parameters to be determined for a detector are the bias, the dark current variation with exposure time, the linearity of the response of the detector to light level and exposure time, the noise level and the signal variation over pixels for repeated exposures of the same duration.
#### 2.1.1 **Bias and dark calibration**
The camera electronics does not permit measurement of the dark current with zero exposure time, known as the bias. Therefore, we have taken images with the minimum exposure time of 0.010 s to estimate the bias and its variation with time. We find that the mean bias is about 142 and 139 counts in the low gain (LG) and high gain (HG), respectively, and that it does not vary with time. We have measured the bias at temperatures from -4\({}^{\circ}\)C to -7\({}^{\circ}\)C in steps of 1\({}^{\circ}\)C, the expected operating temperature range of the CMOS detectors on-board the VELC instrument during the mission life, and found that the bias values do not change within this temperature range. The left and right panels in the upper row of Figure 1 show the dark current in counts as a function of exposure time (0.01 to 100 seconds) for the low and high gain, respectively. The dark current plotted in the figure is the mean over all the pixels in the image. The two panels in the upper row indicate that the dark current increases only marginally with exposure time up to 100 seconds. The mean variation in dark current with time indicates that the dark build-up rate is 0.004 and 0.032 counts / second in the low and high gain, respectively. These variations are insignificant for the projected exposure times. The two panels in the bottom row show the histogram of counts for four representative exposure times of 0.113, 1.0, 10.0 and 100 seconds in different colours. The histogram plots indicate that the dark current is stable in both the low and high gain for different exposure times. The distribution of dark current agrees very well for the LG and differs by an insignificant amount for the HG. The FWHM of the dark current distribution is \(\sim\) 30 counts for all the exposure times up to 100 seconds for the low and high gains. The small fluctuations in the histograms are due to variations in signal in different rows caused by the different amplifiers of the CMOS detector. Most of the dark count values range between 100 and 180 for all gains. These variations are mostly due to the different gains of the amplifiers and photon noise.
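The statistics described above can be reproduced with a short script. The sketch below, which assumes the decompressed frames are already available as FITS files (the file names are hypothetical), computes the mean dark count, a crude FWHM of the count distribution and the dark build-up rate with exposure time.

```python
# A minimal sketch of the dark/bias statistics; file names are hypothetical.
import numpy as np
from astropy.io import fits

def dark_statistics(filename):
    """Return the mean dark count and a crude FWHM of the count histogram."""
    data = fits.getdata(filename).astype(float)
    mean_dark = data.mean()
    counts, edges = np.histogram(data, bins=np.arange(0, 400))
    half_max = counts.max() / 2.0
    above = np.where(counts >= half_max)[0]
    fwhm = edges[above[-1] + 1] - edges[above[0]]   # width of the distribution at half maximum
    return mean_dark, fwhm

# Track the dark build-up with exposure time (slope in counts/s); exposure list is illustrative.
exposures = np.array([0.01, 0.113, 1.0, 10.0, 100.0])
means = np.array([dark_statistics(f"dark_LG_{t:.3f}s.fits")[0] for t in exposures])
rate = np.polyfit(exposures, means, 1)[0]
print(f"dark build-up rate: {rate:.4f} counts/s")
```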
#### 2.1.2 **Calibration of CMOS detector with uniform light source in laboratory**
After taking the dark data, we took images with a uniform light source at different intensity levels and exposure times up to near saturation of the detectors. The left and right panels in the upper row of Figure 2 show the mean signal over the image (counts) as a function of exposure time for the low and high gains, respectively. The upper curve (red) in both panels indicates the mean counts with light, and the lower curve shows the counts after subtracting an averaged dark image (the average of 16 individual dark images) from the light image. The low and high gain plots indicate that the response of the detector is almost linear with exposure time. For an exposure time \(>\) 100 ms, the increase in signal is linear with time up to 90 % of the
saturation value of the detectors. Both the LG (1X and 2X) and HG (10X and 30X) show similar behaviour. The experiment was repeated with different known intensity levels and it was found that the detectors show a linear response in the range of 5 - 90 % of their full well capacity for all the gains.
The left and right panels in the bottom row of Figure 2 show the histograms of the count values for the dark image in blue, the image with the uniform light source in black and the dark-subtracted light image in red, for the LG and HG, respectively. The histograms of the dark and light images show some departure from a Gaussian distribution due to fixed pattern noise in the data. But the Gaussian distribution of the histogram of the (light - dark) image indicates that the noise due to the different responses of the amplifiers (the fixed pattern noise in the data) has been corrected. For the LG image, the FWHM of about 32 counts of the corrected intensity distribution indicates that the variation in the signal is well within the photon noise, considering an average signal count of \(\sim\) 850. For the HG data, a FWHM of \(\sim\) 55 counts for a mean signal of 840 indicates that the variations in the signal are almost equal to the photon noise. This is likely due to additional noise in the detector at high gain. All three CMOS cameras have been calibrated and behave in a similar way.
### **Calibration of IR detector**
First, we study the variation of the mean dark current over an image with exposure time for temperatures from -14\({}^{\circ}\)C to -19\({}^{\circ}\)C, the expected temperature range of the detector on-board, at intervals of one degree. The left and right panels in the upper row of Figure 3a show the variation of dark count for the LG and HG of the camera, respectively, for three temperatures, -14\({}^{\circ}\)C, -17\({}^{\circ}\)C and -19\({}^{\circ}\)C. The figure indicates that the dark count is temperature dependent and increases with increasing detector temperature. The difference is small for short exposure times and increases with increasing exposure time. The left and right panels of Figure 3b show the histogram of the dark
Fig. 1: The left and right panels in the upper row show the variation of dark current with exposure time for the low (LG) and high (HG) gain of the CMOS camera, respectively. The left and right panels in the bottom row show the histogram of the dark current at different exposure times for the low and high gain.
current for the LG and HG, respectively, for four exposure times. The exposure times are 103 ms (minimum), 5.3 s, 20 s and 50 s (maximum) for the LG and 103 ms, 1.003 s, 5.3 s and 10 s for the HG, as the detector saturates for exposures \(>\) 10 s in HG for the dark image itself. The histogram plots for the dark current show double peaks. The reason for this is not clear. However, the double peaks disappear in the dark-corrected light (light - dark) images. Therefore, the double peak may be due to some fixed pattern noise in the detector. Further, the dark count increases significantly with exposure time, unlike the behaviour of the CMOS detector, for which the dark count increases negligibly with exposure times up to 100 seconds. The mean dark current is \(\sim\) 490 and \(\sim\) 190 counts for the LG and HG, respectively, for an exposure time of 103 ms. The histograms of the dark current for the LG for different exposure times show that the dark count value increases by \(\sim\) 50 % for an exposure time of 20 seconds as compared to that with 103 ms, decreasing the dynamic range significantly. In addition, the increase in the width of the distribution of dark counts with increasing exposure time indicates a significant increase in the dark noise. Thus, exposure times of more than 20 seconds will have an impact on the dynamic range and photometric accuracy of LG observations. The right panel of Figure 3b shows that for exposure times \(>\) 2 s in HG, the distributions of dark count become very broad and the mean value increases at a faster rate. The computed values of the dark build-up indicate that the dark count increases at a rate of 12 counts/sec in the LG and 220 counts/sec in the HG. Hence, it is advisable to keep the exposure time \(<\) 5 s for HG observations, considering the dark build-up and the noise in the data for larger exposure times. In Figure 4, we plot the mean signal (light - dark) in counts for the LG (left panel) and HG (right panel) for images with a uniform light source, as a function of exposure time.
## 3 Observations in continuum channel at 500 nm
After generating FITS files from the binary files and applying the dark current and flat-field corrections, the images (Singh et al (2022) [13]) will be aligned using the satellite data on the yaw, roll and pitch angles. To begin with, all these images will
Figure 2: The left and right panels in the upper row of the figure show the variation of mean signal in counts for light and (light – dark) with exposure time for the low and high gain of the CMOS camera, respectively. The left and right panels in the bottom row show the histogram of counts for dark (blue), light (black) and light – dark (red) for the low and high gain, respectively.
Figure 4: The left and right panels of the figure show the mean signal in counts (light – dark) for an image with a uniform light source as a function of exposure time for the IR detector at -14\({}^{\circ}\)C for the LG and HG, respectively.
Figure 3: (a) The left and right panels show the variation of mean dark count over an image with exposure time for the low (LG) and high (HG) gain, respectively, for the IR camera at temperatures of -14\({}^{\circ}\)C, -17\({}^{\circ}\)C and -19\({}^{\circ}\)C. (b) The top and bottom panels of the figure show the histogram of dark count for exposure times of 103 ms, 5 s, 20 s and 50 s for the LG, and for exposure times of 103 ms, 1 s, 5 s and 10 s for the HG, for the IR camera at -17\({}^{\circ}\)C.
be scanned visually in the form of a video to detect the occurrence of CMEs.
### Detection of CMEs using continuum images
We have developed a code to detect the occurrence of CMEs automatically. First, the aligned FITS images of a single day are taken to generate the background image (\(I_{bac}\)). The procedure is to create a minimum image (\(I_{min}\)) such that each pixel in \(I_{min}\) corresponds to the minimum intensity of all the images on that day. The intensity in the outer corona is very low, only a little more than the dark value, and sometimes the signal is lost in the photon noise. A large number of images needs to be added to increase the signal to noise ratio (SNR) in the outer corona. Some pixels, especially in the outer corona in the dark-subtracted images, show negative values due to photon noise. To avoid this type of noise in an image, the minimum background is taken as zero for those pixels. The generated \(I_{min}\) is used to produce the azimuthally averaged background image (\(I_{bac}\)). To make the background image, the minimum image \(I_{min}\) is rotated from 0\({}^{\circ}\) to 360\({}^{\circ}\) in increments of 1\({}^{\circ}\) to obtain 360 images. Then, we generate the azimuthally averaged image (\(I_{bac}\)) by averaging over the 360 images such that each pixel in \(I_{bac}\) corresponds to the average of the intensities of the 360 images at that pixel. To detect the occurrence of a CME (\(I_{cme}\)), we remove the contribution of the background from the coronal image to enhance the contrast of the image, using the azimuthally averaged background image (\(I_{bac}\)) and the following relation:
\[I_{cme}=(Image\text{-}I_{min})/I_{bac} \tag{1}\]
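A minimal sketch of this detection scheme is given below, assuming the aligned, dark-subtracted images of one day are available as a list of 2-D arrays (all names are hypothetical); it implements the pixel-wise minimum image, the azimuthally averaged background and the contrast enhancement of Eq. (1).

```python
# A minimal sketch of the CME detection scheme of Eq. (1); names are hypothetical.
import numpy as np
from scipy.ndimage import rotate

def cme_enhance(images):
    # Pixel-wise minimum over the day; negative values are set to zero.
    i_min = np.clip(np.minimum.reduce(images), 0, None)
    # Azimuthally averaged background: average of I_min rotated through 0..359 degrees.
    i_bac = np.mean([rotate(i_min, ang, reshape=False, order=1)
                     for ang in range(360)], axis=0)
    # Contrast-enhanced images, Eq. (1); a small epsilon avoids division by zero.
    eps = 1e-6
    return [(img - i_min) / (i_bac + eps) for img in images]
```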
As a case study, we have applied the developed algorithm to broad-band coronal images obtained by LASCO-C2. We downloaded a number of Level 0.5 images from the archive of LASCO-C2, in which the north pole is aligned upwards. We subtracted the dark offset from the Level 0.5 images and generated the background image, and then applied the CME detection code to detect the occurrence of CMEs. We found that the code works well, and Figure 5 shows one such example of a CME occurrence on June 2, 1998 at 10:29:34 UT. The left and middle panels of the figure show the raw image and the background image generated considering all the images obtained on that day. Figure 6 shows another example, for the image obtained on July 7, 2001 at 00:05:55 UT using the C2 coronagraph on-board SOHO, showing the coronal streamer structures clearly after the analysis. It may be noted that the downloaded images do not show the streamer structures; they do indicate the occurrence of a faint structure that may be a CME, which needs to be confirmed. The right panel of the figure shows the image (\(I_{cme}\)) with the detected bright CMEs and long-lived streamer structures of the solar corona.
We have tested our algorithm on various events with LASCO-C1 and LASCO-C2 data. Our algorithm works well in detecting coronal features such as CMEs and streamers. We plan to develop the code further to determine the speed of CMEs in the plane of the sky.
### Merging the LG and HG images
Two images of the solar corona will be taken simultaneously, one in LG and the other in HG, due to the limited dynamic range of the CMOS detector and the large difference in intensity between the inner and outer corona. The signal, therefore, is likely to be only marginally above the dark noise in the outer corona in LG and saturated in the inner corona in HG. Considering the conversion factor between LG and HG, the images will be merged to generate a single image using a software code developed for this purpose. To develop the code we used a LASCO-C2 image, which we treated as the LG image, as seen in the left panel of
Figure 5: The left and middle panels of the figure show the Level 0.5 LASCO-C2 image obtained on June 2, 1998 at 10:29:34 UT and the background image (\(I_{bac}\)) generated using all the images of that day. The right panel shows the CME and coronal streamer structures for this image (\(I_{cme}\)). The vertical bar shows the relative brightness of the various structures.
Figure 6: The left and middle panels of the figure show the Level 0.5 LASCO-C2 image obtained on July 7, 2001 at 00:05:55 UT and the background image (\(I_{bac}\)) generated using all the images of that day. The right panel shows the coronal streamer structures for this image (\(I_{cme}\)). The vertical bar shows the relative brightness of the coronal structures.
Figure 7. We then generated the HG image by applying a gain factor of 5 (2X and 10X gains in the case of VELC), as shown in the middle panel of the figure. It may be noted that part of the image is saturated in HG. Considering these images and the gain factor, we combined the LG and HG images to generate the full image, as seen in the right panel of the figure. It may appear that there is not much difference between the LG and the combined image, probably because the original data are in 16-bit format. We expect to see the difference in the 11-bit images taken with VELC in the continuum channel.
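A minimal sketch of the merging step is shown below; the gain factor and the HG saturation level are illustrative assumptions rather than the actual VELC values.

```python
# A minimal sketch of merging the LG and HG images; gain and saturation are illustrative.
import numpy as np

def merge_lg_hg(lg, hg, gain=5.0, hg_saturation=2000.0):
    """Use HG pixels (scaled to LG units) where unsaturated, otherwise fall back to LG."""
    return np.where(hg < hg_saturation, hg / gain, lg)
```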
### **Equal intensity contour maps of corona**
We plan to make equal intensity contour maps of the solar corona as a function of solar radius on a daily basis, in units of the solar disk intensity, using the solar disk observations obtained on-board. We combine the LG and HG images obtained and average the images over 60 minutes. Using this average image, we make the average intensity profiles as a function of solar radius at intervals of 5\({}^{\circ}\) in azimuth angle, and normalize these intensity profiles using the solar disk data. To develop the code, we used the coronal images in the red emission line taken during the total solar eclipse of July 22, 2009. Figure 8 shows the intensity profiles at 0\({}^{\circ}\), 90\({}^{\circ}\), 180\({}^{\circ}\) and 270\({}^{\circ}\) azimuth angle. The chosen set of equal intensity points at 5\({}^{\circ}\) intervals is then joined to generate an equal intensity contour map, as seen in Figure 9. It may be noted that Figures 8 and 9 are on a relative intensity scale, but we plan to make such maps in the scale of the solar disk intensity. The equal intensity contours of the solar corona can be used to study long-term solar cycle variations. In addition, variations in the quiet coronal structures due to the occurrence of energetic events can be studied.
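A minimal sketch of the profile extraction step is given below, assuming a merged, time-averaged image and a known disk centre (all names and the disk-intensity normalisation are placeholders); the equal-intensity contours can then be drawn from the returned profiles.

```python
# A minimal sketch of extracting azimuthal intensity profiles from an averaged coronal image.
import numpy as np

def radial_profiles(image, centre, r_max, d_az=5.0, disk_intensity=1.0):
    """centre = (x, y) in pixel coordinates; returns intensity vs radius for each azimuth."""
    radii = np.arange(1, r_max)
    profiles = {}
    for az in np.arange(0.0, 360.0, d_az):
        x = centre[0] + radii * np.cos(np.deg2rad(az))
        y = centre[1] + radii * np.sin(np.deg2rad(az))
        # image is indexed as [row, column] = [y, x]; caller must keep radii inside the frame
        profiles[az] = image[y.astype(int), x.astype(int)] / disk_intensity
    return radii, profiles
```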
Figure 8: The figure shows the intensity profiles at 0\({}^{\circ}\), 90\({}^{\circ}\), 180\({}^{\circ}\) and 270\({}^{\circ}\) azimuth angle.
Figure 9: The figure shows the equal intensity contour map of coronal images in red emission line during the total solar eclipse of July 22, 2009.
and at a chosen interval, generally referred to as the "sit and stare" mode. The second mode of observation is to move the coronal image across the slits in chosen steps, in multiples of 10 microns, and at a chosen time interval using the linear scan mechanism (LSM), recording the spectral image at each step; this is referred to as a "raster scan". The analysis procedure for the spectral images is the same in both cases. In the "sit and stare" mode one investigates temporal variations at the locations of the slits, whereas in raster scan observations one studies the spatial variation over the corona by making 2-dimensional (2-D) images of the solar corona using the spectra. One can also determine temporal variations by taking multiple raster scans. It takes a longer time to make a raster scan and thus one can study slow variations in the 2-D image. In the sit and stare mode one can study relatively faster variations, but over a limited coronal region along the slits. It is also planned to take spectra in the sit and stare mode at a longer interval (\(\sim\)1 minute) for longer periods to study CMEs. A Gaussian fit to the emission line profile at each spatial location in the corona will be made to compute the peak (intensity), line-width (FWHM) and central position of the peak. From these data, one will be able to generate the intensity, line-of-sight velocity and line-width maps of the solar corona, including CMEs. The images in the continuum and the velocity maps will help to determine the true velocity of CMEs. We plan to make a catalogue of various features of CMEs and put it on a website for the scientific use of the data.
### Dark, flat-field and geometrical corrections
Using the dark data, the detector's flat-field spectra and other information about the calibration of the detectors, such as hot and dead pixels, the spectra will be corrected (Singh et al (2022) [13]). Sometimes the recorded spectrum shows curvature due to the optics and a tilt because of the mounting of the detector in the instrument. The absorption lines in the solar disk spectrum show a very small velocity (\(<\) 1 km/sec) as compared to the velocity of the plasma in the dynamic solar corona. Further, the disk spectrum due to scattered sunlight at different locations along the slit shows a much smaller velocity because of the contribution from the whole solar disk. Generally, the coronal spectrum shows two parts, one due to disk
Figure 10: The figure shows the location of the 4 slits when the LSM is at its home position. The slits are 50 microns wide and separated by 3.75 mm.
scattered light in the instrument and the other due to emission from the hot coronal plasma. Using an absorption line in the spectrum, the spectra at various locations are shifted and aligned to a reference spectrum (the spectrum chosen at the centre of the slit) to correct for the curvature and tilt in all the spectra. The left panel of Figure 11 shows the spectrum of the solar disk taken with VELC at 5303 Å. The M2 mirror of the VELC was illuminated with sunlight using a fibre bundle to take the disk spectra. There are 4 spectra due to the 4 slits of the spectrograph. The missing part of the spectra due to the 2 middle slits is because of the hole in the M2 mirror. To apply the geometrical corrections, the spectrum due to the extreme left slit was separated, as seen in the middle panel of the top row of the figure. The enlarged view of the spectrum shows the curvature in the absorption line. The profile at the spatial location of row 1200 was chosen and the spectra at all other locations were shifted such that the minimum of the absorption line coincides with that at row 1200. The right panel in the top row shows the spectrum after the geometrical corrections, indicating that the curvature in the spectrum has been corrected. Further, the left panel in the bottom row shows that the minima of the absorption line at different locations differ by 5 - 6 pixels, but after the geometrical corrections the minima of the absorption line at the various spatial locations coincide, as seen in the right panel in the bottom row of the figure. Similarly, we make the geometrical corrections for the spectra due to the other slits, choosing the respective reference locations.
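A minimal sketch of this curvature and tilt correction is given below, assuming the spectrum is stored as a 2-D array with the spatial direction along the rows; the reference row and the columns bracketing the absorption line are inputs.

```python
# A minimal sketch of the curvature/tilt correction: each row is shifted so that the
# minimum of a chosen absorption line coincides with that of a reference row.
import numpy as np
from scipy.ndimage import shift

def straighten_spectrum(spec, line_window, ref_row=1200):
    """spec[row, column]; line_window = (c1, c2) columns bracketing the absorption line."""
    c1, c2 = line_window
    ref_min = np.argmin(spec[ref_row, c1:c2])
    corrected = np.empty_like(spec)
    for r in range(spec.shape[0]):
        d = ref_min - np.argmin(spec[r, c1:c2])       # pixel offset of the line minimum
        corrected[r] = shift(spec[r], d, order=1, mode='nearest')
    return corrected
```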
### Conversion of pixel scale to wavelength scale
We have developed a code to create a file containing the values of pixel versus wavelength by comparing the absorption lines in the disk or coronal spectrum with the atlas spectrum ([https://nispdata.nso.edu/ftp/pub/atlas/fluxatl/](https://nispdata.nso.edu/ftp/pub/atlas/fluxatl/)) of the Sun. In this process, two absorption lines are identified in the disk or coronal spectra and the corresponding lines are selected in the solar atlas spectrum. After comparing the average centres (away from active regions) of these absorption lines, the code computes the wavelength of each pixel and the dispersion of the spectrum.
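A minimal sketch of this conversion, assuming a simple linear dispersion between the two identified lines, is given below (the pixel positions and wavelengths used in the example are illustrative).

```python
# A minimal sketch of the pixel-to-wavelength conversion from two identified lines.
import numpy as np

def wavelength_scale(n_pixels, pix1, lam1, pix2, lam2):
    """Linear dispersion from two (pixel, wavelength) pairs; wavelengths e.g. in Angstrom."""
    dispersion = (lam2 - lam1) / (pix2 - pix1)         # Angstrom per pixel
    wave = lam1 + (np.arange(n_pixels) - pix1) * dispersion
    return wave, dispersion

# Example with illustrative numbers around the 5303 Angstrom channel:
wave, disp = wavelength_scale(2048, pix1=512.3, lam1=5300.05, pix2=1536.8, lam2=5305.55)
```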
Figure 11: The left panel of figure shows the spectrum of the solar disk taken with VELC at 5303Å. Middle panel in the top row shows the spectrum due to one slit (left) of the 4 slits. Right panel shows the spectrum after the geometrical corrections. The left and right panels in the bottom row show the profiles of the absorption line before and after the geometrical corrections at various spatial locations.
### Correction for Narrow-band filter transmission for Multi-slit observations
The use of narrow-band filters to avoid the overlap of the spectra due to one slit with those of the other slits complicates the data analysis. After the dark, flat-field and geometrical corrections, as shown in Figure 12, the spectra need to be compensated for the transmission curve of the filter. It is easier to handle the analysis by separating the spectra due to each slit and later combining the results. Here, we have considered the multi-slit spectra obtained in the 530.3 nm [Fe XIV] emission line during the total solar eclipse of 2010 at Easter Island (Samanta et al (2016) [6]). The exponential decrease in coronal intensity with increasing solar radius adds to the complications. After determining the contribution of the filter transmission at each location in the solar corona and at each wavelength for the continuum part of the spectra, the spectra were corrected for the transmission profile of the filter to produce a uniform background in the continuum. The left panel of Figure 12 shows the spectra due to 3 slits obtained during the total solar eclipse of July 11, 2010. The middle and right panels of the figure show the spectrum of the extreme left slit before and after compensating for the transmission of the narrow-band filter.
### Scattered light correction
The coronal spectra include disk light spectra due to the scattering of solar disk light by the Earth's atmosphere in the case of ground-based observations, or by the instrument when observing from space. In some cases, an absorption line of the disk spectrum is blended with the emission line. For example, the 789.19 nm absorption line lies at the centre of the [Fe XI] emission line at 789.2 nm, and another absorption line at 530.27 nm lies in the blue wing of the [Fe XIV] emission line. The contribution of the continuum and these absorption lines to the emission line needs to be removed to determine the profile of the emission line.
The left and right sides of the top panel in Figure 13 show the coronal and disk spectra around the 7892 Å [Fe XI] emission line, respectively, obtained with the 25-cm coronagraph at the Norikura observatory. Using the disk spectra and the absorption lines, we remove the contribution of scattered sunlight, and the resulting emission line spectrum is seen in the bottom panel of Figure 13. This panel clearly shows the emission line at 7892 Å. There is still a signature of absorption lines against the
Figure 12: The left panel of the figure shows the spectra due to 3 slits obtained during the total solar eclipse of July 11, 2010. The spectrum due to the 4th slit at the extreme right is very faint and does not show the emission line. The middle and right panels of the figure show the spectrum of the extreme left slit before and after compensating for the transmission of the narrow-band filter.
uniform background. The intensity of the remnant absorption lines is very low and does not have any effect on fitting the Gaussian profile to the emission line or on the determination of the emission line parameters, such as peak intensity, central wavelength and line-width. The developed code will be applied to the spectroscopic observations obtained with VELC.
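A minimal sketch of this subtraction is given below, assuming the coronal and disk spectra are co-aligned 2-D arrays and that the scaling of the disk spectrum is obtained from a least-squares match over emission-free continuum columns (all names are hypothetical).

```python
# A minimal sketch of the scattered-light removal: a scaled disk spectrum is subtracted
# from the coronal spectrum so that the absorption lines largely cancel.
import numpy as np

def remove_scattered_light(coronal, disk, continuum_cols):
    """coronal, disk: 2-D spectra [row, column]; continuum_cols: columns free of emission."""
    cleaned = np.empty_like(coronal)
    for r in range(coronal.shape[0]):
        scale = (np.dot(coronal[r, continuum_cols], disk[r, continuum_cols]) /
                 np.dot(disk[r, continuum_cols], disk[r, continuum_cols]))
        cleaned[r] = coronal[r] - scale * disk[r]
    return cleaned
```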
### Determination of emission line parameters
The faint absorption lines (Figure 13) seen in the emission line spectra are due to small residuals and do not have any impact on the determination of the emission line parameters using a Gaussian fit to the observed profiles. Defining the approximate centre of the emission line, the interval of the Gaussian fit and the pixel-to-wavelength scale conversion, we determine the peak intensity, full width at half maximum (FWHM) and Doppler velocity at all spatial locations along the slit. The first column of Figure 14 shows the observed profiles at three representative spatial locations obtained during the total solar eclipse of July 11, 2010 using the multi-slit spectrograph (Samanta et al (2016) [6]). The middle and right columns show the contribution of the transmission profile of the filter and the remnant emission line profile after compensating for the transmission profile of the narrow-band filter. A Gaussian fit to the emission line profile is also shown, which is used to compute the line-width, intensity and line-of-sight velocity at that spatial location. The values of the peak intensity, position of the intensity peak, FWHM in pixels and FWHM in Angstrom of the emission line after the correction for the instrumental profile are given in Table 1. The spectrum appears noisy because of the very short exposure time used to study temporal oscillations. This analysis will be done at each spatial location along the slits. It may be noted that the code has a provision to exclude a certain number of pixels while making the Gaussian fit to the emission line, because of residual signal at those pixels due to absorption lines. In the case of raster scan observations these parameters will be
Figure 13: The top-left panel of the figure shows the coronal spectra before the scattering correction, the top-right panel shows the disk spectra and the bottom panel shows the spectra after removing the scattered light component due to the sky and the instrument. The 7892 Å [Fe XI] emission line becomes dominant, with a very faint absorption line spectrum in the background. These observations were obtained with the 25-cm coronagraph at the Norikura observatory.
combined to make intensity, Doppler and line-width maps of the scanned region. The data of all the 4 slits will be combined to generate an image of the observed solar corona. Figure 15 shows an example of a coronal region observed with the 25-cm coronagraph using a similar procedure.
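A minimal sketch of the Gaussian fit at a single spatial location is given below; the wavelength array, rest wavelength and initial guesses are assumptions of the example rather than the actual VELC pipeline values.

```python
# A minimal sketch of the Gaussian fit used to extract peak intensity, centre and FWHM
# (and hence the Doppler velocity) at one spatial location along the slit.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, cen, sigma, offset):
    return amp * np.exp(-0.5 * ((x - cen) / sigma) ** 2) + offset

def fit_line(wave, profile, lam0):
    """wave in Angstrom, profile in counts, lam0 = rest wavelength of the emission line."""
    p0 = [profile.max() - profile.min(), wave[np.argmax(profile)], 0.5, profile.min()]
    popt, _ = curve_fit(gaussian, wave, profile, p0=p0)
    amp, cen, sigma, offset = popt
    fwhm = 2.3548 * abs(sigma)                 # FWHM = 2 * sqrt(2 ln 2) * sigma
    v_los = 2.998e5 * (cen - lam0) / lam0      # line-of-sight velocity in km/s
    return amp, cen, fwhm, v_los
```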
| Sp. Location | Peak Intensity | Peak Position | FWHM from Gaussian fit | FWHM (Å), instrument corrected |
|---|---|---|---|---|
| Row-135 | 65.21 | 133.92 | 16.29 | 0.98 |
| Row-145 | 62.69 | 134.92 | 15.5 | 0.93 |
| Row-155 | 48.04 | 135.65 | 14.57 | 0.87 |

Table 1: The values of the spatial location in the corona, peak intensity, position of the intensity peak, FWHM of the emission line from the Gaussian fit and FWHM in Å after correcting for the instrumental profile.
Figure 14: The figure shows the observed profiles at three representative spatial locations obtained during the total solar eclipse of July 11, 2010 using the multi-slit spectrograph. The middle and right columns show the contribution of the transmission profile of the filter and the remnant emission line profile after compensating for the transmission profile of the narrow-band filter. A Gaussian fit to the emission line profile is shown in red.
### **Alignment and re-scaling of images**
The CMOS detectors used to record the Fe [XI] and Fe [XIV] lines and the IR detector used for the Fe [XIII] emission line have different pixel spatial scales: 1.25 arcsec / pixel for the CMOS detectors and 4.8 arcsec / pixel for the IR detector. The images made from the observed spectra need to be aligned with each other and brought to a common format, that is, an equal spatial scale, by adjusting their spatial scales so that the parameters of the different emission lines can be compared. This will be achieved by taking spectra with a cross-wire placed on the slits of the spectrograph. A code has been developed and tested to do this.
### **Temperature and Doppler maps of solar corona**
The intensity of these coronal lines is temperature sensitive, as the abundance of the respective ions depends on the temperature of the plasma. By taking the ratio of the intensities of these lines, e.g. Fe [XIII] / Fe [XI] and Fe [XIV] / Fe [XI], we shall generate temperature maps of the solar corona. Using this temperature map and the line-width information, we shall be able to derive the non-thermal component of the plasma at each location of the solar corona and thus generate a Doppler map of the solar corona. Multi-temperature intensity, velocity, line-width and Doppler maps are expected to provide the detailed physical and dynamical nature of the solar corona and coronal loops.
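A minimal sketch of forming the line-ratio maps is given below; the conversion from ratio to temperature would use a pre-computed look-up table, which is not shown, and the masking threshold is an illustrative assumption.

```python
# A minimal sketch of forming line-ratio maps from co-aligned intensity images.
import numpy as np

def ratio_maps(i_fe13, i_fe14, i_fe11, threshold=1.0):
    """Return Fe XIII/Fe XI and Fe XIV/Fe XI ratio maps, masking faint pixels."""
    mask = i_fe11 > threshold
    r1 = np.where(mask, i_fe13 / np.maximum(i_fe11, threshold), np.nan)
    r2 = np.where(mask, i_fe14 / np.maximum(i_fe11, threshold), np.nan)
    return r1, r2
```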
### **Alignment of maps**
The location of a point in the solar corona is generally defined in terms of solar radii from the centre of the Sun and the angle measured from the north pole of the Sun towards the east. Hence, the images will be rotated considering the "yaw" angle of the satellite and the "P" angle of the Sun at the time of the observations, to make the north pole of the Sun vertical and the east on the left side of the image, as is the general norm. The image will also be shifted depending on the "roll" and "pitch" angles of the satellite at the time of the observations.
Figure 15: The figure shows the generated intensity, Doppler velocity and FWHM images of the scanned region. The top panels show the intensity distribution of the 6374 Å (left) and 5303 Å (right) coronal emission lines observed simultaneously in a coronal region of 240 \(\times\) 500 size on 25 October 2003 (22:09 UT). The middle and bottom panels show the Doppler velocity and FWHM of the emission lines, respectively.
## 5 Analysis of spectro-polarimetric data
While making the spectro-polarimetric observations of the solar corona in the Fe [XIII] emission line, the other two channels may also be recording the spectra in the Fe [XI] and Fe [XIV] emission lines. There will be two spectra in the Fe [XIII] line for each slit because of the polarizing beam splitter. The analysis of the spectra, up to the generation of the emission line profiles, will be done in a similar way, as partly explained in Nagaraju et al (2021) [4] and Sasikumar et al (2022) [7]. To derive the Stokes I, Q, U and V parameters, a complete methodology will be adopted, which will be described separately.
## 6 Availability of software codes
It is planned that information about the codes will be shared with data users before and after the launch of the payload by arranging meetings at the Indian Institute of Astrophysics, Bengaluru. Some training will also be given to the participants. These codes need to be verified with the actual data and the working of the instrument during the PV (payload verification) phase. After making the required changes in the codes and confirming their proper working, they will be put in the public domain. Again, information and training will be provided to the users in workshops during the PV and GT phases. It may be noted that the proposal submission form used to make the observational plan is being developed and tested. It has gone through a number of revisions because of changes in the hardware and control electronics. The details of the proposal submission form will be shared in all the proposed workshops and meetings.
## 7 Acknowledgements
We thank all the scientists and engineers at the various centres of ISRO, such as URSC, LEOS, SAC and VSSC, and at the Indian Institute of Astrophysics, who have made great contributions to bring the mission to its present state. We gratefully acknowledge the financial support from ISRO for this project. The coronal spectroscopic data used here were obtained by Prof. Jagdev Singh at the Norikura observatory, Japan. The SOHO/LASCO data used here are produced by a consortium of the Naval Research Laboratory (USA), Max-Planck-Institut für Sonnensystemforschung (MPS, Germany), Laboratoire d'Astronomie (LAS, France), and the University of Birmingham (UK). SOHO is a project of international cooperation between ESA and NASA.
|
2303.11080 | Gravitational radiation with kinetic recoil | In this manuscript, we examine the gravitational radiation emitted by binary
systems using an Unruh-DeWitt detector coupled to gravitons. Recoil is
incorporated into the system via a kinetic energy term in the energy gap of the
detector. We find a splitting of the gravitational wave frequency due to the
recoil. Implications for the recoil velocity and force are discussed. | Morgan H. Lynch | 2023-03-17T00:55:10Z | http://arxiv.org/abs/2303.11080v1 | # Gravitational radiation with kinetic recoil
###### Abstract
In this manuscript, we examine the gravitational radiation emitted by binary systems using an Unruh-DeWitt detector coupled to gravitons. Recoil is incorporated into the system via a kinetic energy term in the energy gap of the detector. We find a splitting of the gravitational wave frequency due to the recoil. Implications for the recoil velocity and force are discussed.
## I Introduction
The emission, and detection, of gravitational waves due to binary inspiral has ushered in the era of gravitational wave astronomy [1]. When black holes, or other compact objects, are in a binary orbit, the emission of gravitational radiation causes the decay of the orbit and the eventual merger of the two objects. When this emission is asymmetric, linear momentum is radiated away by the system, which then imparts a kick or recoil velocity onto the binary system or final state compact object [2; 3; 4]. The nature of this recoil or radiation reaction poses an interesting problem in the dynamics of gravitational wave emission due to the fact that the recoil velocity may also be large enough to provide the necessary escape velocity to eject the remnant from its host galaxy, \(v\sim 500\) km/s [5; 6]; especially in the case of precessing binaries. Given that the properties of a gravitational wave signal encode the properties of the binary system [7], e.g. the gravitational wave frequency yields the orbital frequency, chirps determine the luminosity distance, etc., one can ask if there is also a signature of recoil in the measured signal of gravitational waves. The catalog of gravitational wave sources demonstrates that the mass radiated during the inspiral is typically \(1-10\%\) of the total mass of the system [8]. Such a large fraction of the total mass of the system being radiated a priori implies the presence of radiation reaction or recoil. Thus it stands to reason that, from a purely observational point of view, the parameter space of gravitational wave signals would imply that recoil plays a role in the dynamics which is on par with other characteristics of binary systems such as spin and eccentricity.
The problem of radiation reaction in gravitational wave emission is, modulo differences due to the polarization of vectorial and tensorial modes, equivalent to that of photon emission [2]. Broadly speaking, when the energy of emitted radiation becomes comparable to the rest mass of the emitting particle, one expects a measurable presence of recoil to occur. As such, the recent experimental observation of radiation reaction at CERN-NA63 [9; 10] offers us a striking window into how to incorporate radiation reaction into the problem of binary inspiral. Of particular use in gaining an insight into recoil is the Unruh-DeWitt detector [11; 10]. There, the incorporation of radiation reaction is accomplished by simply including the recoil kinetic energy of photon emission into the energy gap of the detector. Moreover, the utility of the Unruh-DeWitt detector as a model for a classical radiating source has also been firmly established [12]. This also applies to composite objects [13; 14; 15; 16; 17] such as hadrons, atoms, and, as we will demonstrate, gravitationally coupled binary systems. Then, by adapting the Unruh-DeWitt detector formalism to incorporate the emission of gravitons, we can also examine the recoil produced in these gravitational settings. This not only gives us a foothold into analyzing binary inspiral with recoil, but also extends the already robust arena of quantum field theory in curved spacetime [18; 19; 20; 21] to include graviton emission from Unruh-DeWitt detectors.
The power radiated by binary inspiral serves as a benchmark in the observation and analysis of gravitational wave detection. In particular, the power emitted from point masses in a binary orbit describes the measured signals, in the classical regime, quite well. This result, known as the Peters-Mathews equation [22], will be reproduced here using an Unruh-DeWitt detector and, with the added degree of freedom of the energy gap, recoil will be incorporated into the analysis. This will not only enable us to look for signatures of recoil in the measured gravitational wave signals but also give us insight into the recoil velocities and forces produced by the radiation reaction. We will start by computing the graviton response function and then apply the formalism to the standard binary system of gravitational wave emission. Here and throughout, unless otherwise stated, we use natural units \(\hbar=c=G=1\).
## II The graviton response function
To begin our analysis we must first define the graviton response function, i.e. the transition rate of an Unruh-DeWitt detector coupled to gravitons. The Unruh-DeWitt detector [23; 24] will be coupled to the energy momentum tensor of our emitter and will be used to incorporate any local change in energy, e.g. a recoil kinetic energy, during the emission process. In this regard we turn to our interaction action for a graviton [25; 26], \(\hat{h}^{\mu\nu}(x)\), coupled to an energy momentum tensor, \(\hat{T}_{\mu\nu}(x)\). Thus we have,
\[\hat{S}_{I}=\frac{1}{2}\kappa\int d^{4}x\hat{T}_{\mu\nu}(x)\hat{h}^{\mu\nu}(x). \tag{1}\]
Here, our gravitational coupling is defined by \(\kappa=\sqrt{32\pi G}\) and \(d^{4}x=d^{3}xdt\). We will now use this action in order to examine an energy transition in our Unruh-DeWitt detector, from \(E_{i}\) to \(E_{f}\), accompanied by the simultaneous emission of a graviton with momentum \(\mathbf{k}\). We will then formulate the following amplitude;
\[\mathcal{A}=i\left\langle\mathbf{k}\right|\otimes\left\langle E_{f}\right| \hat{S}_{I}\left|E_{i}\right\rangle\otimes\left|0\right\rangle. \tag{2}\]
The differential emission probability per unit final state graviton momenta is given by, \(\frac{d\mathcal{P}}{d^{3}k}=|\mathcal{A}|^{2}=\mathcal{A}(x)\mathcal{A}^{*}(x ^{\prime})\). Evaluation yields
\[\frac{d\mathcal{P}}{d^{3}k} = |\left\langle\mathbf{k}\right|\otimes\left\langle E_{f}\right| \frac{1}{2}\kappa\int d^{4}x\hat{T}_{\mu\nu}(x)\hat{h}^{\mu\nu}(x)\left|E_{i} \right\rangle\otimes\left|0\right\rangle|^{2} \tag{3}\] \[= \frac{\kappa^{2}}{4}\int d^{4}x\int d^{4}x^{\prime}|\left\langle E _{f}\right|\hat{T}_{\mu\nu}(x)\left|E_{i}\right\rangle|^{2}|\left\langle \mathbf{k}\right|\hat{h}^{\mu\nu}(x)\left|0\right\rangle|^{2}.\]
Note, these matrix elements are functions of both \(x\) and \(x^{\prime}\), e.g. \(|\left\langle E_{f}\right|\hat{T}_{\mu\nu}(x)\left|E_{i}\right\rangle|^{2}= \left\langle E_{f}\right|\hat{T}_{\mu\nu}(x)\left|E_{i}\right\rangle\left\langle E _{i}\right|\hat{T}_{\mu\nu}^{*}(x^{\prime})\left|E_{f}\right\rangle\). As such, the above probability factorizes into an energy momentum tensor matrix element contracted with the graviton matrix element. We will evaluate our energy momentum component first. The energy momentum tensor itself is, in principle, comprised of all potential sources of gravitation in the binary system. We will restrict our analysis to only the mass of the system, as a first approximation, and ignore additional sources such as electromagnetic fields. To this end, we shall take our energy momentum tensor to be that of a point particle, see e.g. [25] page 44, coupled to an Unruh-DeWitt detector. Hence,
\[\hat{T}_{\mu\nu}(x)=\mu v_{\mu}v_{\nu}\hat{m}(t)\delta^{3}(\vec{x}-\vec{x}_{ tr}(t)). \tag{4}\]
Here, we have defined our rest mass, or reduced mass in the case of binary inspiral, of our system, \(\mu\). Moreover, we evolve the detector and trajectory via the coordinate time since we will be assuming a non-relativistic velocity of our binary system, i.e. \(\gamma=1\). The energy momentum tensor is defined by the coordinate velocity of our emitter, \(v_{\mu}\). The monopole moment operator \(\hat{m}(t)\) is Heisenberg evolved via \(\hat{m}(t)=e^{i\hat{H}t}\hat{m}(0)e^{-i\hat{H}t}\) with \(\hat{m}(0)\) defined as \(\hat{m}(0)\left|E_{i}\right\rangle=\left|E_{f}\right\rangle\) with \(E_{i}\) and \(E_{f}\) the initial energy and final energy of our energy transition which accompanies the emission along the trajectory, \(\vec{x}_{tr}(t)\). The matrix element for our energy momentum tensor then yields,
\[|\left\langle E_{f}\right|\hat{T}_{\mu\nu}(x)\left|E_{i}\right\rangle|^{2} = |\left\langle E_{f}\right|\mu v_{\mu}v_{\nu}(x)e^{i\hat{H}t}\hat{m}(0)e^{-i\hat{H}t}\delta^{3}(\vec{x}-\vec{x}_{tr}(t))\left|E_{i}\right\rangle|^{2} \tag{5}\] \[= \mu^{2}V_{\mu\nu\sigma\rho}[x^{\prime},x]\delta^{3}(\vec{x}-\vec{x}_{tr}(t))\delta^{3}(\vec{x}^{\prime}-\vec{x}_{tr}^{\prime}(t^{\prime}))e^{-i\Delta E(t^{\prime}-t)}\]
Here we have defined the energy gap as \(\Delta E=E_{f}-E_{i}\) and normalized our detector states via \(|\left\langle E_{f}\right|\hat{m}(0)\left|E_{i}\right\rangle|^{2}=1\). We have also defined a "velocity tensor" via \(V_{\mu\nu\sigma\rho}[x^{\prime},x]=v_{\nu}(x)v_{\mu}(x)v_{\sigma}(x^{\prime})v_ {\rho}(x^{\prime})\). Next, we shall evaluate the graviton matrix element. For this we will also use the integral over the final state momenta in order to so we may obtain the total emission probability. Hence,
\[\int d^{3}k|\left\langle\mathbf{k}\right|\hat{h}^{\mu\nu}(x)\left|0 \right\rangle|^{2} = \int d^{3}k\left\langle 0|\hat{h}^{\dagger\sigma\rho}(x^{\prime}) \left|\mathbf{k}\right\rangle\left\langle\mathbf{k}\right|\hat{h}^{\mu\nu}(x )\left|0\right\rangle \tag{6}\] \[= \left\langle 0|\hat{h}^{\dagger\sigma\rho}(x^{\prime})\hat{h}^{\mu \nu}(x)\left|0\right\rangle\] \[= G^{\mu\nu\sigma\rho}[x^{\prime},x].\]
Note we have utilized the completeness relation, \(\int dk\left|k\right\rangle\left\langle k\right|=1\), so we may formulate the graviton Wightman function, \(G^{\mu\nu\sigma\rho}[x^{\prime},x]\). The tensor indices encode the polarization of the graviton. Using our graviton two point function and the energy momentum matrix element we can formulate the transition probability. Hence,
\[\mathcal{P} =\frac{\kappa^{2}}{4}\int d^{3}k\int d^{4}x\int d^{4}x^{\prime}|\left\langle E_{f}\right|\hat{T}_{\mu\nu}(x)\left|E_{i}\right\rangle|^{2}|\left\langle\mathbf{k}\right|\hat{h}^{\mu\nu}(x)\left|0\right\rangle|^{2}\] \[=\frac{\kappa^{2}\mu^{2}}{4}\int dtdt^{\prime}e^{-i\Delta E(t^{\prime}-t)}V_{\mu\nu\sigma\rho}[t^{\prime},t]G^{\mu\nu\sigma\rho}[t^{\prime},t]\] \[=\frac{\kappa^{2}\mu^{2}}{4}\int d\xi d\eta e^{-i\Delta E\xi}V_{\mu\nu\sigma\rho}[\xi,\eta]G^{\mu\nu\sigma\rho}[\xi,\eta] \tag{7}\]
Here we have transformed our integration to the difference and average time variables; \(\xi=t^{\prime}-t\) and \(\eta=(t^{\prime}+t)/2\) respectively. Finally, by formulating the transition probability per unit time, we obtain our graviton response function, \(\Gamma=\frac{d\mathcal{P}}{d\eta}\). Hence,
\[\Gamma=\frac{\kappa^{2}\mu^{2}}{4}\int d\xi e^{-i\Delta E\xi}V_{\mu\nu\sigma \rho}[\xi,\eta]G^{\mu\nu\sigma\rho}[\xi,\eta]. \tag{8}\]
Due to the fact that we must contract our 4-velocities with the polarization tensors of our graviton field, let us now explicitly examine our graviton two-point function. To this end, we will use the plane wave mode decomposition for our graviton field in the transverse traceless gauge [25; 27],
\[\hat{h}^{\mu\nu}(x)=\int\frac{d^{3}k}{(2\pi)^{3/2}}\frac{\sum_{i}\epsilon_{i} ^{\mu\nu}}{\sqrt{2\omega}}\left[\hat{a}_{k}e^{i(\mathbf{k}\cdot\mathbf{x}- \omega t)}+\hat{a}_{k}^{\dagger}e^{-i(\mathbf{k}\cdot\mathbf{x}-\omega t)} \right]. \tag{9}\]
The vacuum to vacuum Wightman function will then reduce to an integral over the momentum. Hence,
\[\left\langle 0\right|\hat{h}^{\dagger\sigma\rho}(x^{\prime})\hat{h}^ {\mu\nu}(x)\left|0\right\rangle = \left\langle 0\right|\int\frac{d^{3}k^{\prime}}{(2\pi)^{3/2}} \frac{\sum_{\lambda^{\prime}}\epsilon_{\lambda^{\prime}}^{\prime\dagger \sigma\rho}}{\sqrt{2\omega^{\prime}}}\left[\hat{a}_{k^{\prime}}e^{i(\mathbf{k}^ {\prime}\cdot\mathbf{x}^{\prime}-\omega^{\prime}t^{\prime})}+\hat{a}_{k^{ \prime}}^{\dagger}e^{-i(\mathbf{k}^{\prime}\cdot\mathbf{x}^{\prime}-\omega^{ \prime}t^{\prime})}\right] \tag{10}\] \[\times\int\frac{d^{3}k}{(2\pi)^{3/2}}\frac{\sum_{\lambda}\epsilon_ {\lambda}^{\mu\nu}}{\sqrt{2\omega}}\left[\hat{a}_{k}e^{i(\mathbf{k}\cdot \mathbf{x}-\omega t)}+\hat{a}_{k}^{\dagger}e^{-i(\mathbf{k}\cdot\mathbf{x}- \omega t)}\right]\left|0\right\rangle\] \[= \frac{1}{(2\pi)^{3}}\frac{1}{2}\int\frac{d^{3}k}{\omega}\sum_{ \lambda\lambda^{\prime}}\epsilon_{\lambda^{\prime}}^{\mu\nu}\epsilon_{ \lambda^{\prime}}^{\prime\dagger\sigma\rho}e^{i(\mathbf{k}\cdot\Delta\mathbf{x }-\omega(t^{\prime}-t))}.\]
Here we see that the graviton two point function is formally the same as a scalar field but with polarization tensors lending their indices. It is this two point function that we will evaluate along our trajectory, i.e. \(\Delta\mathbf{x}\rightarrow\Delta\mathbf{x}_{tr}\). Combining all the pieces, our response function then takes the following form,
\[\Gamma=\frac{\kappa^{2}\mu^{2}}{8}\frac{1}{(2\pi)^{3}}\int d\xi\int\frac{d^{3} k}{\omega}Ve^{-i(\Delta E\xi-\mathbf{k}\cdot\Delta\mathbf{x}_{tr}+\omega \Delta t)}. \tag{11}\]
Here we defined the velocity-polarization product \(V=\sum_{\lambda\lambda^{\prime}}\epsilon_{\lambda}^{\mu\nu}\epsilon_{\lambda^{ \prime}}^{\dagger\sigma\rho}V_{\mu\nu\sigma\rho}[\xi,\eta]\) for brevity. Let us first evaluate the sum of our polarization tensors. Recalling the polarizations are real valued and will only have spatial components, we then have \(\sum_{\lambda\lambda^{\prime}}\epsilon_{\lambda}^{\mu\nu}\epsilon_{\lambda^{ \prime}}^{\dagger\sigma\rho}=\sum_{\lambda\lambda^{\prime}}\epsilon^{ij} \epsilon^{kl}\). Then we can consider the following graviton polarization identity [27],
\[\sum_{\lambda\lambda^{\prime}}\epsilon_{ij}\epsilon_{k\ell}=\delta_{ik}\delta_{j\ell}+\delta_{i\ell}\delta_{jk}-\delta_{ij}\delta_{k\ell}+\hat{k}_{i}\hat{k}_{j}\hat{k}_{k}\hat{k}_{\ell}+\hat{k}_{i}\hat{k}_{j}\delta_{k\ell}+\hat{k}_{k}\hat{k}_{\ell}\delta_{ij}-\hat{k}_{i}\hat{k}_{k}\delta_{j\ell}-\hat{k}_{i}\hat{k}_{\ell}\delta_{jk}-\hat{k}_{j}\hat{k}_{k}\delta_{i\ell}-\hat{k}_{j}\hat{k}_{\ell}\delta_{ik}. \tag{12}\]
Here we have defined the unit graviton momentum vector, \(\hat{k}=(\cos\left(\phi\right)\sin\left(\theta\right),\sin\left(\phi\right)\sin \left(\theta\right),\cos\left(\theta\right))\), using the standard spherical coordinate chart. Then, by contracting the above polarization identity with our 4-velocity tensor we have,
\[v_{\nu}(x)v_{\mu}(x)v_{\sigma}(x^{\prime})v_{\rho}(x^{\prime})\sum_{ \lambda\lambda^{\prime}}\epsilon_{ij}\epsilon_{k\ell} = 2(v\cdot v^{\prime})^{2}-v^{2}{v^{{}^{\prime}}}^{2}+(v\cdot\hat{k} )^{2}(v^{\prime}\cdot\hat{k})^{2} \tag{13}\] \[+ (v\cdot\hat{k})^{2}{v^{{}^{\prime}}}^{2}+(v^{\prime}\cdot\hat{k} )^{2}v^{2}-4(v\cdot\hat{k})(v^{\prime}\cdot\hat{k})(v\cdot v^{\prime}).\]
Here all dot products, \(v\cdot v^{\prime}\) and \(v^{2}\), are strictly restricted to the spatial components of the 4-velocities. The above expression encodes the dynamics of our emitter and thus depends explicitly on the trajectory. For each scenario, the above velocity-polarization contraction needs to be evaluated and then utilized in the response function. Let us now apply the above formalism to the case of binary inspiral.
## III Binary inspiral
To begin our analysis of graviton emission by two orbiting compact objects, let us first make the assumption that our dynamics will be governed by [26] the reduced mass of the system, \(\mu\), orbiting at radius, \(a\), with frequency \(\Omega\). The reduced mass of our two orbiting compact objects, \(m_{1}\) and \(m_{2}\), is given by \(\mu=\frac{m_{1}m_{2}}{m_{1}+m_{2}}\) and their orbital parameters \(a\) and \(\Omega\) are related by Kepler's law, \(a^{3}\Omega^{2}=m_{1}+m_{2}\). As such, our four-velocities, for circular rotation in the \(x-y\) plane, will be given by \(v^{\mu}=(1,-a\Omega\sin{(\Omega t)},a\Omega\cos{(\Omega t)},0)\). We will then have the following velocity-polarization contraction components,
\[v\cdot v^{\prime} = (a\Omega)^{2}(\sin{(\Omega t)}\sin{(\Omega t^{\prime})}+\cos{( \Omega t)}\cos{(\Omega t^{\prime})})\] \[v^{2} = (a\Omega)^{2}\] \[v^{{}^{\prime}}{}^{2} = (a\Omega)^{2}\] \[v\cdot\hat{k} = -a\Omega\sin{(\Omega t)}\cos{(\phi)}\sin{(\theta)}+a\Omega\cos{( \Omega t)}\sin{(\phi)}\sin{(\theta)}\] \[v^{\prime}\cdot\hat{k} = -a\Omega\sin{(\Omega t^{\prime})}\cos{(\phi)}\sin{(\theta)}+a \Omega\cos{(\Omega t^{\prime})}\sin{(\phi)}\sin{(\theta)} \tag{14}\]
For the following algebra, we will use the shorthand notation, \(S=\sin{(\Omega t)}\), \(S^{\prime}=\sin{(\Omega t^{\prime})}\), \(C=\cos{(\Omega t)}\), and \(C^{\prime}=\cos{(\Omega t^{\prime})}\). We will then have the following 4 velocity-polarization tensor contraction, \(V=\sum_{\lambda\lambda^{\prime}}\epsilon_{\lambda}^{\mu\nu}\epsilon_{\lambda^ {\prime}}^{\dagger\sigma\rho}V_{\mu\nu\sigma\rho}[x^{\prime},x]\),
\[V = 2(a\Omega)^{4}(SS^{\prime}+CC^{\prime})^{2}-(a\Omega)^{4} \tag{15}\] \[+ (a\Omega)^{4}\left[C\sin{(\phi)}\sin{(\theta)}-S\cos{(\phi)}\sin{ (\theta)}\right]^{2}\left[C^{\prime}\sin{(\phi)}\sin{(\theta)}-S^{\prime}\cos{ (\phi)}\sin{(\theta)}\right]^{2}\] \[+ (a\Omega)^{4}\left[C\sin{(\phi)}\sin{(\theta)}-S\cos{(\phi)}\sin{( \theta)}\right]^{2}\] \[+ (a\Omega)^{4}\left[C^{\prime}\sin{(\phi)}\sin{(\theta)}-S^{\prime} \cos{(\phi)}\sin{(\theta)}\right]^{2}\] \[- 4(a\Omega)^{4}(SS^{\prime}+CC^{\prime})\left[C\sin{(\phi)}\sin{( \theta)}-S\cos{(\phi)}\sin{(\theta)}\right]\left[C^{\prime}\sin{(\phi)}\sin{( \theta)}-S^{\prime}\cos{(\phi)}\sin{(\theta)}\right]\]
Here \(\theta\) is the angle of graviton emission relative to the \(z\)-axis. We also recall that \(\Delta t=\xi\), and we will take the dipole approximation, \(\Delta x_{tr}\cdot k\ll 1\). Then, writing the momentum integrals in our response function in spherical coordinates with the polar axis along the z-axis, we will have the following emission rate,
\[\Gamma=\frac{\kappa^{2}\mu^{2}}{8}\frac{1}{(2\pi)^{3}}\int d\xi\int d\theta d \phi d\omega\omega\sin{(\theta)}Ve^{-i(\Delta E+\omega)\xi}. \tag{16}\]
The angular integrations over each of the polarization-velocity contraction components yield the following:
\[\int d\theta d\phi\sin{(\theta)}\left[C\sin{(\phi)}\sin{(\theta)} -S\cos{(\phi)}\sin{(\theta)}\right]^{2}\left[C^{\prime}\sin{(\phi)}\sin{( \theta)}-S^{\prime}\cos{(\phi)}\sin{(\theta)}\right]^{2}=\frac{4}{15}\pi(2+ \cos{(2\Omega\xi)})\] \[\int d\theta d\phi\sin{(\theta)}\left[C\sin{(\phi)}\sin{(\theta)} -S\cos{(\phi)}\sin{(\theta)}\right]^{2}=\frac{4}{3}\pi\] \[\int d\theta d\phi\sin{(\theta)}\left[C^{\prime}\sin{(\phi)}\sin{( \theta)}-S^{\prime}\cos{(\phi)}\sin{(\theta)}\right]^{2}=\frac{4}{3}\pi\] \[\int d\theta d\phi\sin{(\theta)}\left[C\sin{(\phi)}\sin{(\theta)} -S\cos{(\phi)}\sin{(\theta)}\right]\left[C^{\prime}\sin{(\phi)}\sin{(\theta)} -S^{\prime}\cos{(\phi)}\sin{(\theta)}\right]=\frac{4}{3}\pi\cos{(\Omega\xi)}. \tag{17}\]
Here we recall that our time coordinates need to be expressed in terms of the difference and average time variables, and thus we have made use of the identity \(\sin\left(\Omega t\right)\sin\left(\Omega t^{\prime}\right)+\cos\left(\Omega t\right)\cos\left(\Omega t^{\prime}\right)=\cos\left(\Omega\xi\right)\). As such, the angular integration over the polarization-velocity contraction yields the following form,
\[\int d\theta d\phi\sin\left(\theta\right)\sum_{\lambda\lambda^{\prime}}\epsilon _{\lambda}^{\mu\nu}\epsilon_{\lambda^{\prime}}^{\dagger\sigma\sigma\nu}V_{\mu \nu\sigma\rho}[\xi,\eta]=\frac{8\pi}{15}(a\Omega)^{4}\left[1+3\cos\left(2\Omega \xi\right)\right]. \tag{18}\]
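Eq. (18) can be checked numerically. The sketch below evaluates the solid-angle integral of the velocity-polarization contraction for a few values of \(\Omega\xi\) (with \(a\Omega\) set to 1 and the choice \(t=0\), \(t^{\prime}=\xi\)) and compares it with the right-hand side.

```python
# A minimal numerical check of Eq. (18), with a*Omega = 1.
import numpy as np
from scipy.integrate import dblquad

def lhs(omega_xi):
    s, c = np.sin(omega_xi), np.cos(omega_xi)       # t = 0, t' = xi: S = 0, C = 1, S' = s, C' = c
    def integrand(phi, th):
        vk  = np.sin(phi) * np.sin(th)                          # v.khat / (a Omega)
        vkp = (c * np.sin(phi) - s * np.cos(phi)) * np.sin(th)  # v'.khat / (a Omega)
        vvp = c                                                  # v.v' / (a Omega)^2
        V = 2*vvp**2 - 1 + vk**2*vkp**2 + vk**2 + vkp**2 - 4*vk*vkp*vvp
        return np.sin(th) * V
    val, _ = dblquad(integrand, 0, np.pi, 0, 2*np.pi)   # theta outer, phi inner
    return val

for oxi in (0.3, 1.0, 2.0):
    rhs = (8*np.pi/15) * (1 + 3*np.cos(2*oxi))
    print(oxi, lhs(oxi), rhs)    # the two columns agree to integration accuracy
```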
Note, our response function, \(\Gamma=\frac{dP}{d\eta}\), is completely decoupled from the average time coordinate, \(\eta\). Now, combining all pieces together yields the following graviton emission rate,
\[\Gamma = \frac{\kappa^{2}\mu^{2}}{120\pi^{2}}(a\Omega)^{4}\int d\xi d \omega\omega\left[1+3\cos\left(2\Omega\xi\right)\right]e^{-i(\Delta E+\omega)\xi} \tag{19}\] \[= \frac{\kappa^{2}\mu^{2}}{120\pi^{2}}(a\Omega)^{4}\int d\xi d \omega\omega\left[1+\frac{3}{2}\left(e^{i2\Omega\xi}+e^{-i2\Omega\xi}\right) \right]e^{-i(\Delta E+\omega)\xi}.\]
Integration over the time, \(\xi\), will yield the following three delta functions which encode the conservation of energy for the emission; \(\delta_{0}(\Delta E+\omega)\), \(\delta_{-2}(-2\Omega+\Delta E+\omega)\), and \(\delta_{2}(2\Omega+\Delta E+\omega)\). In order to formulate the total classical emission rate [10; 12], we must also sum over transitions, both up and down, of the detector energy gap. Thus, we will have the following six delta functions; \(\delta_{0}^{\pm}(\pm\Delta E+\omega)\), \(\delta_{-2}^{\pm}(-2\Omega\pm\Delta E+\omega)\), and \(\delta_{2}^{\pm}(2\Omega\pm\Delta E+\omega)\). We wish to compute the total power radiated and therefore we must also weight the integration with an additional factor of frequency, \(\mathcal{P}=\int\Gamma\omega d\omega\). Finally, we recall \(\kappa=\sqrt{32\pi}\). Thus we have,
\[\mathcal{P}=\frac{8}{15}\mu^{2}(a\Omega)^{4}\int d\omega\omega^{2}\left[ \delta_{0}^{+}+\delta_{0}^{-}+\frac{3}{2}\left(\delta_{-2}^{+}+\delta_{2}^{+} +\delta_{-2}^{-}+\delta_{2}^{-}\right)\right] \tag{20}\]
Since we must restrict our emitted frequency to be positive, \(\omega>0\), in the limit of zero energy gap, \(\Delta E\to 0\), we will then be left with the following integrals over the delta functions; \(\frac{3}{2}\left(\delta_{-2}^{+}+\delta_{-2}^{-}\right)\). These two delta functions yield the standard gravitational wave frequency of \(\omega=2\Omega\pm\Delta E\). The first term we neglected should, in principle, correspond to a gravitational wave emitted by some transient decay-like process with energy \(\omega=\Delta E\), e.g. something like an echo [28]. As such, our total power radiated by our binary inspiral is given by,
\[\mathcal{P} = \frac{4}{5}\mu^{2}(a\Omega)^{4}\int d\omega\omega^{2}\left[ \delta(\omega+\Delta E-2\Omega)+\delta(\omega-\Delta E-2\Omega)\right] \tag{21}\] \[= \frac{4}{5}\mu^{2}(a\Omega)^{4}\left[8\Omega^{2}+2\Delta E^{2}\right]\] \[= \frac{32}{5}\mu^{2}a^{4}\Omega^{6}\left[1+\frac{\Delta E^{2}}{(2 \Omega)^{2}}\right].\]
What we find is precisely the Peters-Mathews [22] result with the additional contribution of some internal process, such as radiation reaction, which is gauged by the \(\Delta E\) term. The limit \(\Delta E\to 0\), which models classical radiating sources, reproduces the Peters-Mathews formula identically. Having successfully reproduced this standard result as a sanity check, let us now turn to the problem of including recoil.
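As a simple numerical illustration of this classical limit, the sketch below evaluates the Peters-Mathews power with \(G\) and \(c\) restored, \(\mathcal{P}=\frac{32}{5}G\mu^{2}a^{4}\Omega^{6}/c^{5}\) with \(\Omega^{2}=G(m_{1}+m_{2})/a^{3}\); the masses and separation used are purely illustrative.

```python
# A minimal sketch of the Peters-Mathews power in SI units; inputs are illustrative.
import numpy as np

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

def peters_mathews_power(m1, m2, a):
    mu, M = m1 * m2 / (m1 + m2), m1 + m2
    omega = np.sqrt(G * M / a**3)            # Kepler's law
    return 32.0 / 5.0 * G * mu**2 * a**4 * omega**6 / c**5   # watts

print(peters_mathews_power(30 * M_sun, 30 * M_sun, a=1.0e9))  # illustrative 10^6 km separation
```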
## IV Gravitational radiation reaction
In order to analyze the effect of recoil on the gravitational wave emission of our binary inspiral, we shall turn to the example of recoil in photon emission. This affords us an opportunity to carry over the lessons of recoil from Cherenkov, Larmor, and channeling radiation to graviton emission in a manner that is backed by experiment [9; 10; 11]. The incorporation of recoil indeed finds a natural setting via the use of Unruh-DeWitt detectors. This is due to the fact that for radiating sources, the energy gap of the detector is defined as the difference between the initial and final state energy of the system during the radiation process. In other words, given an initial system described by, say, its mass, we have \(E_{i}=m\). Then, upon the emission of a quantum of radiation with energy, \(\omega\), we will have a final state energy with a recoil momentum, \(E_{f}=\sqrt{\omega^{2}+m^{2}}\). The difference in energy, \(\Delta E=E_{f}-E_{i}\approx\frac{\omega^{2}}{2m}\), is the recoil kinetic energy imparted on the system by the emission. Since, for binary inspiral, we are looking at very large
energy gravitational waves, we will need to consider the wave as being comprised of the coherent sum of \(n\) gravitons, per period of binary revolution, all of the same frequency \(\omega\). Each of these gravitons will contribute a kick, or recoil kinetic energy. As such, we will take our energy gap to be,
\[\Delta E=\frac{n\omega^{2}}{2m_{r}}. \tag{22}\]
Here we have defined the recoil mass, \(m_{r}\), which we will take to be the final mass of the system, i.e. the remnant mass. Let us now return to the power radiated by our binary system. We will set the energy gap equal to the recoil kinetic energy and sum over both transitions up and down, i.e. \(\Delta E=\pm\frac{n\omega^{2}}{2m_{r}}\). These kinetic energies will then be used in the same delta functions which reproduce the Peters-Mathews result, i.e. \(\delta_{-2}^{+}\) and \(\delta_{-2}^{-}\). As such, our total power of emission will then be comprised of the following two frequencies,
\[\delta_{-2}^{\pm}(\omega\pm\frac{n\omega^{2}}{2m_{r}}-2\Omega)\Rightarrow\ \omega_{\pm}=\frac{m_{r}}{n}\left[\mp 1 \pm\left[1\pm\frac{4n\Omega}{m_{r}}\right]^{1/2}\right]. \tag{23}\]
Note, we have found that the presence of recoil has split the measured gravitational wave frequency from the fundamental frequency \(2\Omega\). To leading order, the recoil correction takes the form, \(\omega_{\pm}\approx 2\Omega\left[1\mp\frac{n\Omega}{m_{r}}\right]\), see Fig. 1 below for the frequency splitting applied to a binary black hole merger comprised of masses \(m_{1}=85m_{\odot}\) and \(m_{2}=66m_{\odot}\) with \(m_{\odot}\) being the standard solar mass. Note, we will use these parameters throughout the rest of the manuscript so as to model the gravitational wave observation GW190521 [29]. In order to integrate the subsequent delta functions, we will make use of the following Jacobians; \(\delta_{-2}^{\pm}\Rightarrow\left[1\pm\frac{4n\Omega}{m_{r}}\right]^{1/2}\). Combining our pieces together, our gravitational wave power with kinetic recoil will be given by,
\[\mathcal{P}_{r} =\frac{4}{5}\mu^{2}(a\Omega)^{4}\int d\omega\omega^{2}\left[ \delta(\omega+\frac{n\omega^{2}}{2m_{r}}-2\Omega)+\delta(\omega-\frac{n\omega ^{2}}{2m_{r}}-2\Omega)\right]\] \[=\frac{4}{5}\mu^{2}(a\Omega)^{4}\int d\omega\omega^{2}\left[ \frac{\delta(\omega-\omega_{+})}{\left[1+\frac{4n\Omega}{m_{r}}\right]^{1/2}} +\frac{\delta(\omega-\omega_{-})}{\left[1-\frac{4n\Omega}{m_{r}}\right]^{1/2}}\right]\] \[=\frac{4}{5}\mu^{2}(a\Omega)^{4}\left[\frac{\frac{m_{r}^{2}}{n^{2 }}\left[-1+\left[1+\frac{4n\Omega}{m_{r}}\right]^{1/2}\right]^{2}}{\left[1+ \frac{4n\Omega}{m_{r}}\right]^{1/2}}+\frac{\frac{m_{r}^{2}}{n^{2}}\left[1- \left[1-\frac{4n\Omega}{m_{r}}\right]^{1/2}\right]^{2}}{\left[1-\frac{4n\Omega }{m_{r}}\right]^{1/2}}\right]. \tag{24}\]
This is our expression for the power radiated with recoil. We can simplify the above expression by defining the recoil enhancement, \(f_{r}(\Omega)\). This will allow us to better understand how it relates to the Peters-Mathews equation. As such, we will have
\[\mathcal{P}_{r} =\frac{32}{5}\mu^{2}a^{4}\Omega^{6}f_{r}(\Omega)\] \[f_{r}(\Omega) =\frac{m_{r}^{2}}{8n^{2}\Omega^{2}}\left[\frac{\left[-1+\left[1+ \frac{4n\Omega}{m_{r}}\right]^{1/2}\right]^{2}}{\left[1+\frac{4n\Omega}{m_{r} }\right]^{1/2}}+\frac{\left[\;1-\left[1-\frac{4n\Omega}{m_{r}}\right]^{1/2} \right]^{2}}{\left[1-\frac{4n\Omega}{m_{r}}\right]^{1/2}}\right]. \tag{25}\]
If we expand for small \(\frac{4n\Omega}{m_{r}}\), we find the leading-order recoil correction to be \(\mathcal{P}_{r}=\frac{32}{5}\mu^{2}a^{4}\Omega^{6}\left[1+15\frac{n^{2}\Omega^ {2}}{m_{r}^{2}}\right]\). Note, the above power formula applies to the average power radiated in each period. As such, the number of gravitons emitted, which determines the gravitational wave amplitude, will also be taken to be the number emitted during each period [30], \(n=\frac{\pi}{\Omega^{2}}\mathcal{P}\). As an estimate, we will use the Peters-Mathews result, i.e. without recoil, in this expression for \(n\). Thus we will have,
\[n=\frac{32\pi}{5}\mu^{2}a^{4}\Omega^{4}. \tag{26}\]
We must also comment on the fact that the recoil correction is purely classical. Although the graviton number, with physical constants reinstated, \(n=\frac{32\pi}{5}\frac{G}{c^{5}\hbar}\mu^{2}a^{4}\Omega^{4}\), contains an explicit factor of \(\hbar^{-1}\), our recoil term, \(f_{r}(\Omega)\), only involves the combination \(\frac{n\hbar\Omega}{m_{r}c^{2}}\). The factor of \(\hbar\) in this combination cancels the \(\hbar^{-1}\) in the graviton number. Thus we have a purely classical expression for recoil. Finally, to better understand the effect that recoil will have on a gravitational wave observation, let us turn to the time dependence of the frequency or "chirp". Using Kepler's law, \(\Omega^{2}a^{3}=(m_{1}+m_{2})\), and the gravitational energy, \(E=-\frac{m_{1}m_{2}}{2a}\), we can determine the change in frequency during the inspiral. As such, we will have the following time dependence in the frequency of our gravitational wave emission,
\[\frac{d\Omega}{dt}=\frac{96}{5}\frac{G^{5/3}}{c^{5}}\frac{m_{1}m_{2}}{(m_{1}+ m_{2})^{1/3}}\Omega^{11/3}f_{r}(\Omega). \tag{27}\]
As in the power radiated, we have the standard expression for the chirp along with the recoil correction. This can be integrated numerically to determine the frequency chirp during inspiral both with and without recoil.
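As a minimal sketch of this numerical integration (our own illustration, not the authors' implementation), the snippet below evaluates Eqn. (27) in SI units using the leading-order recoil enhancement \(f_{r}\approx 1+15n^{2}\Omega^{2}/m_{r}^{2}\); the GW190521-like masses follow the text, while the orbital-frequency band and grid resolution are illustrative choices.

```python
import numpy as np

# SI constants and GW190521-like parameters quoted in the text (illustrative sketch only)
G, c = 6.674e-11, 2.998e8
MSUN = 1.989e30
m1, m2 = 85 * MSUN, 66 * MSUN
M, mu = m1 + m2, m1 * m2 / (m1 + m2)
m_r = 0.94 * M                                   # remnant ("recoil") mass

def eps(Om):
    # dimensionless recoil parameter n*hbar*Omega/(m_r c^2); hbar cancels, cf. Sec. IV
    return 32 * np.pi / 5 * G / c**7 * mu**2 * (G * M)**(4 / 3) * Om**(7 / 3) / m_r

def dOmega_dt(Om, recoil=True):
    # Eq. (27) with the leading-order recoil enhancement f_r ~ 1 + 15 eps^2
    f_r = 1.0 + 15.0 * eps(Om)**2 if recoil else 1.0
    return 96 / 5 * G**(5 / 3) / c**5 * m1 * m2 / M**(1 / 3) * Om**(11 / 3) * f_r

# time spent sweeping the orbital frequency across a band: t = int dOmega / (dOmega/dt)
Om = np.linspace(2 * np.pi * 2.0, 2 * np.pi * 30.0, 200_000)     # rad/s, illustrative band
for recoil in (False, True):
    integrand = 1.0 / dOmega_dt(Om, recoil)
    t = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(Om))   # trapezoidal rule
    print(f"recoil={recoil}: chirp time across this band = {t:.4f} s")
```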
## V Kinematics of recoil
In order to analyze the kinematics of the inspiral event, let us turn to the power radiated by the binary system. The Peters-Mathews result gives the power radiated, \(\mathcal{P}_{PM}\), by the system without recoil present. By comparing this to the power radiated when we include the recoil correction, \(\mathcal{P}_{r}\), we can estimate the amount of power that goes into accelerating the binary remnant, \(\mathcal{P}_{a}=\mathcal{P}_{r}-\mathcal{P}_{pm}=\mathcal{P}_{pm}[f_{r}( \Omega)-1]\). This power, \(\frac{dE}{dt}\), is then the change in kinetic energy of the remnant, i.e. \(\mathcal{P}_{a}=m_{r}v\frac{dv}{dt}\). This can be integrated to yield the recoil velocity,
Figure 1: The measured gravitational wave frequency spread, relative to the orbital frequency \(\Omega\) of the binary, for \(m_{1}=85m_{\odot}\) and \(m_{2}=66m_{\odot}\) with the recoil mass \(m_{r}=0.94(m_{1}+m_{2})\). These parameters are intended to examine the observed merger GW190521 [29]. The final orbital frequency, \(\sim 800\) Hz, corresponds, via Kepler’s law, to a final separation determined by the Schwarzschild radii of the two initial masses. Presented are the split frequencies, Eqn. (23), for the recoil “decay”, \(\omega_{-}\), and “excitation”, \(\omega_{+}\), along with their approximations, \(\tilde{\omega}_{-}\) and \(\tilde{\omega}_{+}\).
\[v_{r} = \left[\frac{2}{m_{r}}\int\mathcal{P}_{pm}\left[f_{r}(\Omega)-1 \right]dt\right]^{1/2} \tag{28}\] \[\approx \left[\frac{30}{m_{r}^{3}}\int\mathcal{P}_{pm}n^{2}(\Omega) \Omega^{2}dt\right]^{1/2}\]
Here we have made use of the first order correction to the recoil, \(f_{r}(\Omega)-1=15\frac{n^{2}\Omega^{2}}{m_{r}^{2}}\). We can also estimate the recoil velocity based on the spread in the gravitational wave frequencies measured at Earth. This spread is given by the difference between the two frequencies, \(\Delta\omega=\omega_{-}-\omega_{+}\approx\frac{4n\Omega^{2}}{m_{r}}\). From this, we have our recoil correction \(f_{r}(\Omega)-1=\frac{15}{4}\left(\frac{\Delta\omega}{\omega_{0}}\right)^{2}\). Note, here we defined the fundamental frequency \(\omega_{0}=2\Omega\). Since the presence of recoil should only manifest at the very end of the inspiral event, we can take \(\frac{\Delta\omega}{\omega_{0}}\) to be constant throughout the integration and only consider the contribution from the last few orbits at peak frequency; this criterion will then be used to define the recoil time, \(t_{r}=\frac{1}{\omega_{0}}\). Then, the integral over the power will yield the total energy radiated away scaled by the ratio of the recoil time to the total time, \(E_{r}\frac{t_{r}}{t_{tot}}\). As such, we will then have the following final velocity,
\[v_{r}=\sqrt{\frac{15}{2}\frac{E_{r}}{m_{r}}\frac{1}{\omega_{0}t_{tot}}}\frac{ \Delta\omega}{\omega_{0}}. \tag{29}\]
Note that, modulo binding energy, the total energy radiated and remnant mass will obey the relation, \(m_{tot}=E_{rad}+m_{r}\). Also, based on the catalog of gravitational wave observations, the vast majority of energy is radiated away during the final inspiral event [8] and we thus take the total time of emission to only be about \(t_{tot}\sim 0.5\) s. The utility of the above equation is that, for the measured chirps of gravitational wave signals, the recoil velocity can be inferred from the ratio of the frequency broadening, \(\Delta\omega\), to the frequency at maximum, \(\omega_{0}\). This of course can only occur if the frequency spread due to recoil is larger than all other sources of broadening in the system, e.g. harmonics due to eccentricity, spin, and/or tidal effects. Using the same methodologies we can also examine the forces necessary to impart the final state velocity upon the remnant. From the same examination of the power imparted into the recoil, we have \(\frac{dE}{dt}=v_{r}F_{r}\). As such, our force is directly proportional to the Peters-Mathews power and is given by
\[F_{r} = \frac{\mathcal{P}_{PM}}{v_{r}}\left[f_{r}(\Omega)-1\right] \tag{30}\] \[= \sqrt{\frac{15}{8}\frac{m_{r}}{E_{r}}\omega_{0}t_{tot}}\left( \frac{\Delta\omega}{\omega_{0}}\right)\mathcal{P}_{pm}.\]
Then, if we take the power from Peters-Mathews at maximum frequency, \(\omega_{0}\), along with the final state radius being determined by the Keplerian radius at peak frequency of the system, \(a=\frac{(m_{1}+m_{2})^{1/3}}{(\omega_{0}/2)^{2/3}}\), then we have \(\mathcal{P}_{PM}=\frac{1}{2^{10/3}}\frac{32}{5}\frac{(m_{1}m_{2})^{2}}{(m_{1} +m_{2})^{2/3}}\omega_{0}^{10/3}\). This gives us an expression for the maximum recoil force imparted on the remnant which can be inferred from the chirp signal. Thus,
\[F_{r}=\frac{1}{2^{10/3}}\frac{32}{5}\sqrt{\frac{15}{8}\frac{m_{r}}{E_{r}} \omega_{0}t_{tot}}\left(\frac{\Delta\omega}{\omega_{0}}\right)\frac{(m_{1}m_{ 2})^{2}}{(m_{1}+m_{2})^{2/3}}\omega_{0}^{10/3}. \tag{31}\]
As an example, if we examine the gravitational wave signal from event GW190521 [29], we see the final state frequency is about \(\omega_{0}\sim 70\)\(s^{-1}\). Then, using Eqn. (23), we find our frequency spread to be \(\Delta\omega\sim 0.036\)\(s^{-1}\). Using \(m_{r}=0.94(85m_{\odot}+66m_{\odot})\) and \(E_{r}=0.06(85m_{\odot}+66m_{\odot})\), we then find the remnant velocity and force given by \(v_{r}=18.03\) km/s and \(F_{r}=4.04\times 10^{36}\) N or \(F_{r}=3.3\times 10^{-8}\)\(F_{p}\), respectively. As such, we find a recoil velocity which, although rather large, is most likely not strong enough to eject the remnant from the host galaxy. Interestingly enough, the force imparted on the remnant to yield such a velocity is on the order of 30 nano-Planck force, see Figures 2 and 3 below for plots of the velocities and forces for the same mass parameters as a function of binary orbital frequency. Note, these calculations were done using the approximations from Eqns. (29) and (31) which depend on the energy radiated, \(E_{r}\), and should only be considered as an upper bound since we did not take into account effects such as the binding energy. Using the full formulae, we find \(v_{r}=16.1\) km/s and \(F_{r}=3.7\times 10^{-8}\)\(F_{p}\), which demonstrates the accuracy of the approximations employed.
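As a quick numerical cross-check of Eqns. (29) and (31), the sketch below evaluates the recoil velocity and force for the parameters quoted above. The explicit factors of \(c\) that reinstate SI units are our own bookkeeping and are not spelled out in the text; with them, the sketch should reproduce values close to the quoted 18 km/s and \(4\times 10^{36}\) N.

```python
import numpy as np

# GW190521-like parameters quoted in the text, in SI units
G, c = 6.674e-11, 2.998e8
MSUN = 1.989e30
m1, m2 = 85 * MSUN, 66 * MSUN
M = m1 + m2
m_r, E_r = 0.94 * M, 0.06 * M                     # remnant mass and radiated energy (mass units)
omega0, domega, t_tot = 70.0, 0.036, 0.5          # peak GW frequency, spread, emission time

# Eq. (29): recoil velocity (E_r/m_r is a pure mass ratio, so only one factor of c is needed)
v_r = c * np.sqrt(15 / 2 * (E_r / m_r) / (omega0 * t_tot)) * (domega / omega0)

# Peters-Mathews power at the Keplerian radius for peak frequency, then Eq. (31)
P_pm = 32 / 5 / 2**(10 / 3) * G**(7 / 3) / c**5 * (m1 * m2)**2 / M**(2 / 3) * omega0**(10 / 3)
F_r = P_pm / c * np.sqrt(15 / 8 * (m_r / E_r) * omega0 * t_tot) * (domega / omega0)

F_planck = 1.21e44
print(f"v_r = {v_r / 1e3:.1f} km/s, F_r = {F_r:.2e} N = {F_r / F_planck:.1e} F_p")
```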
Figure 3: The recoil force imparted on a binary system [29] with \(m_{1}=85m_{\odot}\) and \(m_{2}=66m_{\odot}\) and recoil mass \(m_{r}=0.94(m_{1}+m_{2})\) as a function of the maximum binary orbital frequency. \(F_{max}\) is the upper bound on the force for the case when all energy radiated by the system goes into accelerating the remnant. We also compare the forces imparted during the recoil event to the Planck force, \(F_{p}=1.21\times 10^{44}\) N.
Figure 2: The recoil velocity for a binary system [29] with \(m_{1}=85m_{\odot}\) and \(m_{2}=66m_{\odot}\) and recoil mass \(m_{r}=0.94(m_{1}+m_{2})\) as a function of the maximum binary orbital frequency. \(V_{max}\) is the upper bound on the velocity for the case when all energy radiated by the system goes into accelerating the remnant. Here, we compare the computed recoil velocity to the characteristic galactic escape velocity of \(V_{escape}=500\) km/s.
## VI Conclusions
In this manuscript we examined the emission of gravitational waves from binary inspiral using an Unruh-DeWitt detector coupled to gravitons. We successfully reproduced the Peters-Mathews equation as well as examined the effect of recoil on the gravitational wave frequency. We find a splitting of the fundamental frequency as a signature of radiation reaction. We also computed the final state velocity and forces present due to the recoil and find that the higher the peak frequency, the larger the remnant velocity and thus forces present. The typical forces imparted on the remnant are also on the order of \(\sim\)10 nano-Planck force.
###### Acknowledgements.
This work has been supported by the National Research Foundation of Korea under Grants No. 2017R1A2A2A05001422 and No. 2020R1A2C2008103.
|
2305.15168 | Three-dimensional modelling of the shock-turbulence interaction | The complex interaction between shocks and plasma turbulence is extremely
important to address crucial features of energy conversion in a broad range of
astrophysical systems. We study the interaction between a supercritical,
perpendicular shock and pre-existing, fully-developed plasma turbulence,
employing a novel combination of magnetohydrodynamic (MHD) and small-scale,
hybrid-kinetic simulations where a shock is propagating through a turbulent
medium. The variability of the shock front in the unperturbed case and for two
levels of upstream fluctuations is addressed. We find that the behaviour of
shock ripples, i.e., shock surface fluctuations with short (a few ion skin
depths, $d_i$) wavelengths, is modified by the presence of pre-existing
turbulence, which also induces strong corrugations of the shock front at larger
scales. We link this complex behaviour of the shock front and the shock
downstream structuring with the proton temperature anisotropies produced in the
shock-turbulence system. Finally, we put our modelling effort in the context of
spacecraft observations, elucidating the role of novel cross-scale,
multi-spacecraft measurements in resolving shock front irregularities at
different scales. These results are relevant for a broad range of astrophysical
systems characterised by the presence of shock waves interacting with plasma
turbulence. | Domenico Trotta, Oreste Pezzi, David Burgess, Luis Preisser, Xochitl Blanco-Cano, Primoz Kajdic, Heli Hietala, Timothy S. Horbury, Rami Vainio, Nina Dresing, Alessandro Retino', Maria Federica Marcucci, Luca Sorriso-Valvo, Sergio Servidio, Francesco Valentini | 2023-05-24T13:59:18Z | http://arxiv.org/abs/2305.15168v1 | # Three-dimensional modelling of the shock-turbulence interaction
###### Abstract
The complex interaction between shocks and plasma turbulence is extremely important to address crucial features of energy conversion in a broad range of astrophysical systems. We study the interaction between a supercritical, perpendicular shock and pre-existing, fully-developed plasma turbulence, employing a novel combination of magnetohydrodynamic (MHD) and small-scale, hybrid-kinetic simulations where a shock is propagating through a turbulent medium. The variability of the shock front in the unperturbed case and for two levels of upstream fluctuations is addressed. We find that the behaviour of shock ripples, i.e., shock surface fluctuations with short (a few ion skin depths, \(d_{i}\)) wavelengths, is modified by the presence of pre-existing turbulence, which also induces strong corrugations of the shock front at larger scales. We link this complex behaviour of the shock front and the shock downstream structuring with the proton temperature anisotropies produced in the shock-turbulence system. Finally, we put our modelling effort in the context of spacecraft observations, elucidating the role of novel cross-scale, multi-spacecraft measurements in resolving shock front irregularities at different scales. These results are relevant for a broad range of astrophysical systems characterised by the presence of shock waves interacting with plasma turbulence.
keywords: shock waves - turbulence - plasmas
## 1 Introduction
Collisionless shocks are fundamental components of our universe, crucial in reconstructing the properties of a broad range of astrophysical environments (Amato & Blasi, 2018; Brunetti & Jones, 2014). Generally speaking, shock waves convert directed flow energy (upstream) into heat and magnetic energy (downstream). In the collisionless case, a fraction of the available energy can be channeled into the production of energetic particles, a pivotal feature to understand many aspects of _in-situ_ and remote observations (Burgess & Schoier, 2015). Thus, collisionless shocks play a fundamental role in energy conversion in a variety of systems, ranging from solar flares (Woo & Armstrong, 1981) to interacting galaxy clusters (Bykov et al., 2019). While some aspects of energy conversion at shock waves are not fully understood despite decades of research, a picture invoking a complex shock behaviour is emerging (e.g., Treumann, 2009).
One of the most important parameters controlling shock structure and behaviour is the shock normal angle, i.e., the angle between the normal to the shock surface and the upstream magnetic field, \(\theta_{Bn}\). Shocks with \(\theta_{Bn}\)\(\lesssim\) 45\({}^{\circ}\) (i.e., for which the upstream magnetic field and the shock normal are well-aligned) are called quasi-parallel, while in the quasi-perpendicular case \(\theta_{Bn}\)\(\gtrsim\) 45\({}^{\circ}\). Other important parameters are the shock Alfvenic and sonic Mach numbers, defined as \(\rm M_{A}=v_{sh}/v_{A}\) and \(\rm M_{S}=v_{sh}/c_{s}\), respectively, and the plasma \(\beta=v_{th}^{2}/v_{A}^{2}\). Here, \(v_{sh}\) is the shock speed in the upstream flow frame, while \(v_{A}\), \(c_{s}\) and \(v_{th}\) are the Alfven, sound and thermal speed in the region upstream from the shock.
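For readers wishing to evaluate these quantities from upstream plasma measurements, a small illustrative helper is sketched below. It is not part of the paper's codebase; the thermal-speed convention \(v_{th}=\sqrt{2k_{B}T/m_{p}}\) and the proton-only plasma beta are assumptions of the sketch.

```python
import numpy as np

mu0, mp, kB = 4e-7 * np.pi, 1.673e-27, 1.381e-23

def shock_parameters(B, n, T, v_sh, n_hat):
    """B [T, vector], n [m^-3], T [K], v_sh [m/s, upstream-frame shock speed], n_hat [unit normal]."""
    Bmag = np.linalg.norm(B)
    v_A = Bmag / np.sqrt(mu0 * n * mp)                 # Alfven speed
    c_s = np.sqrt(5 / 3 * kB * T / mp)                 # sound speed (gamma = 5/3)
    v_th = np.sqrt(2 * kB * T / mp)                    # proton thermal speed
    theta_Bn = np.degrees(np.arccos(abs(np.dot(B, n_hat)) / Bmag))
    return dict(M_A=v_sh / v_A, M_S=v_sh / c_s, beta=v_th**2 / v_A**2, theta_Bn=theta_Bn)

# perpendicular-shock example: B along z, shock normal along x
print(shock_parameters(B=np.array([0.0, 0.0, 5e-9]), n=5e6, T=1e5,
                       v_sh=4.5e5, n_hat=np.array([1.0, 0.0, 0.0])))
```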
Shocks in the heliosphere are unique because they are accessible by _in-situ_ spacecraft exploration (Richter et al., 1985), thus providing the missing link to the remote observations of astrophysical systems. In this picture, the Earth's bow shock, resulting from the interaction between the supersonic solar wind and the Earth's magnetosphere, has become the most studied shock using direct observations (Formisano, 1979). More recently, the Magnetospheric MultiScale
mission (MMS, Burch et al., 2016) elucidated novel aspects of the overall energetics of the shock system (Schwartz et al., 2022). Other heliospheric shocks that can be observed _in-situ_ are interplanetary shocks, a consequence of solar activity such as, for example, coronal mass ejections (Kilpua et al., 2015; Blanco-Cano et al., 2016). Such studies highlight the importance of various kinds of shock irregularities for understanding how plasma is processed across a shock wave (e.g., Lobzin et al., 2007; Wilson et al., 2009; Kajdic et al., 2019; Trotta et al., 2023).
A particularly interesting kind of shock irregularity is shock rippling, i.e., surface fluctuations, recently observed _in-situ_ with MMS at the quasi-perpendicular Earth's bow shock (Johlander et al., 2016). We distinguish this small-scale rippling from larger scale perturbations of the shock front due to self-generated upstream waves being advected back at the shock, which are also important, especially at geometries departing from the perpendicular one (see, for example, Kajdic et al., 2019; Turc et al., 2023). Shock rippling in quasi-perpendicular geometries happens at supercritical (i.e., \(\mathrm{M_{A}}\gtrsim 3\)) shocks, where ion reflection at the shock front leads to the foot-ramp-overshoot structure (see Kivelson & Russell, 1995). Such structuring is characterised by highly anisotropic, non-thermal particle distributions in the foot and ramp, due to the presence of incident and reflected populations, often particularly challenging to observe in-situ. Earlier theoretical and numerical studies elucidated that the non-thermal distributions in the shock foot and ramp lead to shock ripples that have short wavelengths (a few ion skin depths) and propagate along the shock front at the Alfven speed of the overshoot (Lowe & Burgess, 2003; Burgess et al., 2016). Shock rippling was also proven to be crucial for efficient electron acceleration at shocks in a variety of astrophysical environments (Trotta & Burgess, 2019; Kang et al., 2019; Kobzar et al., 2021).
Another important feature of quasi-perpendicular shocks, consequence of the behaviour discussed above, is the presence of a strong perpendicular temperature anisotropy, routinely observed downstream of the quasi-perpendicular bow shock of Earth (Eastwood et al., 2015). The small-scale pattern of the temperature anisotropy typical of quasi-perpendicular shocks has also been investigated using numerical simulations (Burgess & Scholer, 2007; Preisser et al., 2020; Ofman et al., 2021). Numerical modelling is invaluable for understanding details of the shock dynamics that are often challenging to observe (e.g., Krasnoselskikh et al., 2002; Caprioli & Spitkovsky, 2014; Matsumoto et al., 2015; Gedalin et al., 2018).
A ubiquitous property of our universe is plasma turbulence (e.g., Lazarian et al., 2012), crucial for energy dissipation in collisionless plasmas (e.g., Matthaeus et al., 2015, 2020; Pezzi et al., 2021). Turbulence is also a fundamental phenomenon leading to particle acceleration, as shown by Fermi's early works (Fermi, 1949, 1954) and in decades of subsequent research (e.g., Vlahos et al., 2004; Kowal et al., 2012; Guo et al., 2021) (see also Khabarova et al., 2021; Pezzi et al., 2021, for a review). The shock-turbulence interaction is an important and often spectacular pathway to efficient energy conversion and particle acceleration (Zank et al., 2002; Guo et al., 2021), and the transport properties of shock accelerated particles have been shown to depend on the level of upstream fluctuations (Lario et al., 2022). Numerical simulations are particularly useful in addressing the complex interaction between shock waves and (pre-existing) plasma turbulence. Early efforts modelling shock waves propagating in an upstream medium perturbed with a prescribed set of fluctuations have shown that both the shock front behaviour and the production of energetic particles are influenced by the upstream conditions (Giacalone, 2005; Guo & Giacalone, 2015). The behaviour of energetic particles across turbulence-mediated shocks, and shocks interacting with trains of current sheets, was also investigated by Nakanotani et al. (2021, 2022), revealing enhanced particle energisation due to turbulence. Recently, Trotta et al. (2021) looked at the interaction between fully-developed turbulence and oblique shocks in two dimensions, finding enhanced particle transport in phase space in such an interaction, with pre-existing turbulence providing a source of strong upstream scattering for the shock-reflected particles. The important problem of how turbulent structures are transmitted across shock waves was also investigated with a combination of simulations and Earth's bow shock observations, revealing a magnetic helicity increase due to turbulent structures' compression at the shock (Trotta et al., 2022).
Figure 1: Overview of the simulations presented in this work, ordered for increasing level of perturbation \(\delta B/B_{0}=0\), 0.5 and 1 ((a), (b) and (c), respectively). In each volume, we plot the magnetic field magnitude for the \(z=0\), \(x=256\) and \(y=256\) planes. An isocontour for \(B>2B_{0}\) is shown in a subvolume, rendering the shock surface. Slices on the shock front and downstream are also shown together with some magnetic field lines integrated downstream. For the unperturbed case (a), streamlines of upstream bulk flow speed are also shown for reference (red arrows). All three renderings are done at simulation time \(\mathrm{T}\Omega_{\mathrm{ci}}=16\).
In this work, we address, in fully three-dimensional geometry, the interaction of a rippled, perpendicular shock front with fully-developed upstream turbulence. To this end, we employ a combination of MHD and small-scale, kinetic simulations with different pre-existing, upstream turbulence strength. The shock front dynamics are addressed, revealing a complex interplay in which ripples may
survive or get inhibited due to local perturbations. The temperature anisotropy across the shock transition is also studied, to see how the scenario in which a strong anisotropy generated at the shock ramp relaxes towards equilibrium downstream of the shock is modified by turbulent fluctuations. Finally, we show how a multi-scale, multi-spacecraft approach is needed to properly address the properties of the shock-turbulence system, in support for future missions such as HelioSwarm (Spence, 2019) and Plasma Observatory, a space mission proposal candidate to the next M7 call of the European Space Agency (Retino et al., 2022). The paper is organised as follows: in Section 2 the simulation methods are described; in Section 3 the results are presented and discussed, and Section 4 contains the conclusions of the work.
## 2 Methods
Our numerical simulations are carried out in two stages, as done in reduced, two-dimensional geometry in Trotta et al. (2021, 2022). In order to inspect the interaction of shock waves with the coherent structures of turbulence, first, MHD simulations are used to produce turbulent fields, which are then used in the second (main) stage of the simulations to perturb the initial condition of a hybrid Particle-In-Cell (PIC) shock simulation, obtaining a shock that propagates in a turbulent upstream plasma.
Three dimensional, compressible MHD simulations are used to generate fully-developed, decaying turbulence. To this purpose, a pseudo-spectral algorithm that adopts second-order Runge-Kutta scheme to advance in time the MHD equations was used. Such a code was recently extended to the fully three-dimensional configuration starting from a previous two-dimensional algorithm (Vasconez et al., 2015), already adopted to investigate, for example, the interaction of two counterpropagating Alfvenic wavepackets (Pezzi et al., 2017, 2017), and the parametric instability (Primaversa et al., 2019).
Two simulations of turbulence were performed, initialised with different levels of turbulence fluctuations, \(\delta B/B_{0}=0.5,1.0\), where \(\mathbf{B_{0}}=B_{0}\mathbf{\hat{z}}\) is the mean field, and \(\delta B\) is the rms level of the fluctuations. At the time instant in which turbulence is most intense in the MHD simulation, i.e. when \(\langle\left|\mathbf{j}\right|^{2}\rangle\) reaches its maximum value being \(\mathbf{j}=\nabla\times\mathbf{B}\), the output is stored to be used as an initial condition for the shock simulation, where magnetic field and ion bulk flow speed are perturbed, as done in two-dimensions in Trotta et al. (2021). In the MHD simulations, standard normalization has been adopted: time, space, and velocities are respectively scaled to the Alfven time \(t_{A}\), a generic length \(L_{A}\), and the Alfven speed \(\mathrm{v_{A}}=L_{A}/t_{A}\). The tri-periodic cubic box, of size \(L_{0}=2\pi L_{A}\), has been discretised with 256 gridpoints along each direction.
Shock simulations with perturbed and unperturbed (laminar) upstream conditions are then performed using the HYPSI code (e.g., Trotta et al., 2020). Here, protons are modelled as macroparticles and advanced using the standard PIC method (Birdsall & Langdon, 1991). The electrons, on the other hand, are modelled as a massless, charge-neutralizing fluid with an adiabatic equation of state. The HYPSI code is based on the Current Advance Method and Cyclic Leapfrog (CAM-CL) algorithm (Matthews, 1994). The shock is initiated by the injection method (Quest, 1985), in which the plasma flows in the \(x\)-direction with a defined (super-Alfvenic) velocity \(V_{\mathrm{in}}\). The right-hand boundary of the simulation domain acts as a reflecting wall, and at the left-hand boundary plasma is continuously injected. The simulation is periodic in the \(y\)- and \(z\)-directions. A shock is created as a consequence of reflection at the wall, and propagates in the negative \(x\)-direction. In the simulation frame, the (mean) upstream flow is along the shock normal. To ensure that the \(\nabla\cdot\mathbf{B}=0\) equation is satisfied in the non-periodic shock simulations, the perturbations go to zero at the simulation boundaries, and the perturbation introduced is therefore limited in space and time, due to the fact that freshly injected plasma at the left-hand side of the simulation is unperturbed.
In the hybrid simulations, distance is normalised to the ion inertial length \(d_{i}\equiv c/\omega_{pi}\), time to the inverse cyclotron frequency \(\Omega_{ci}{}^{-1}\), velocity to the Alfven speed \(\mathrm{v_{A}}\) (all referred to the unperturbed upstream state), and magnetic field and density to their unperturbed upstream values, \(B_{0}\) and \(n_{0}\), respectively. The nominal angle between the shock normal and the upstream magnetic field, \(\theta_{Bn}\), is \(90^{\circ}\), with the upstream magnetic field along the \(z\)-direction. We set the upstream flow speed to \(V_{\mathrm{in}}=4.5\mathrm{v_{A}}\), and the resulting Alfvenic Mach number of the shock is approximately \(M_{A}\sim 6\). The upstream ion distribution function is an isotropic Maxwellian and the ion \(\beta_{i}\) is 1 (typical of solar wind plasma (Wilson et al., 2018)). The simulation \(x-y-z\) domain is \(128\times 128\times 128\)\(d_{i}^{3}\). The spatial resolution used is \(\Delta x=\Delta y=\Delta z=0.5\)\(d_{i}\). The final time for the simulation is 20 \(\Omega_{ci}^{-1}\), and the time step for particle (ion) advance is \(\Delta t=0.01\)\(\Omega_{ci}^{-1}\). Substepping is used for the magnetic field advance, with an effective time step of \(\Delta t_{B}=\Delta t/10\). A small, nonzero resistivity is introduced in the magnetic induction equation and its value is set so that there are no excessive fluctuations at the grid scale. The number of particles per cell used is always greater than 50 (upstream).
Three simulations are presented in this work, all with the same nominal shock parameters: the unperturbed case \(\delta B/B_{0}\sim 0\) and the two perturbed cases \(\delta B/B_{0}=0.5\) and \(1.0\).
Figure 2: One-dimensional (reduced) magnetic field spectra during the shock-turbulence interaction, computed along the \(z\)-direction of the mean magnetic field and the \(y\)-direction, perpendicular to both the mean magnetic field and the shock normal (dashed and continuous line, respectively). The spectra are averaged in the shock upstream and downstream (left to right) for all the simulations (top to bottom), respectively in the regions \(x\in[75,90]d_{i}\) and \(x\in[105,120]d_{i}\). The grey dotted-dashed lines show examples of power-law scaling relevant to turbulent spectra.
## 3 Results and Discussion
### Perturbed shock simulations overview
In Figure 1, we present an overview of our simulations, showing three snapshots taken during the shock-turbulence interaction. In the magnetic field rendering for the unperturbed case (Figure 1(a)), it is possible to see the rippled shock front, a result compatible with previous simulations of perpendicular shocks interacting with a laminar upstream flow (e.g., Burgess et al., 2016). In this case, the downstream region also reveals shock-induced fluctuations, with the overshoot - undershoot structure typical of supercritical shocks being visible immediately behind the shock.
The presence of upstream turbulence induces strong modifications with respect to the unperturbed case. As can be seen in Figure 1(b) and (c), the shock front appears distorted in the presence of upstream turbulence, due to convection of the fluctuations through the shock front. A more complex downstream scenario is also observed. Interestingly, in the moderately perturbed case, shock rippling survives the presence of upstream turbulence, and keeps operating at the distorted shock front. Finally, in the strongly perturbed case, the interplay between ripples and shock distortions due to turbulence becomes even more complex.
Figure 3: Two-dimensional slices of the magnetic field magnitude in the \(x-z\), \(x-y\) and \(y-z\) planes (top to bottom) for the three simulations at time \(\mathrm{T}\Omega_{\mathrm{ci}}=14\), organised with increasing level of turbulent fluctuations (left to right). The magenta arrows on the right-hand side of the figure display the mean magnetic field direction.
We further characterise the turbulent shock environments with the
magnetic field spectral density in the shock upstream and downstream, for all cases. Figure 2 shows one-dimensional magnetic field spectra for the \(z\)- and \(y\)-directions, parallel and perpendicular to the mean magnetic field, respectively. For all the simulations, the spectra have been computed by using Fast Fourier Transforms. One-dimensional spectra along the \(z\)-direction are computed averaging over the \(y\) direction (and vice versa for the spectrum in the \(y\)-direction). Further averaging is performed along the nominal shock normal (\(x\)) direction. To this end, the upstream and downstream regions have been defined by the conditions \(75\,d_{i}<x<90\,d_{i}\) and \(105\,d_{i}<x<120\,d_{i}\), respectively, at simulation time \(\mathrm{T}\Omega_{\mathrm{ci}}=14\), when the average shock position is of \(95\,d_{i}\). Therefore, the shock front highlighted in Figure 1 is excluded from this diagnostic, focusing on the effect of the shock passage in the processing of turbulence.
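A minimal sketch of this reduced-spectrum diagnostic is given below. The array layout (a field component stored as `B[x, y, z]` on the uniform 0.5 \(d_{i}\) grid) and the spectral normalisation are our assumptions, not taken from the HYPSI output format.

```python
import numpy as np

def reduced_spectrum_z(B, x, x_range, dx=0.5):
    """1D spectrum along z, averaged over y and over an x-window (e.g. upstream/downstream)."""
    mask = (x >= x_range[0]) & (x <= x_range[1])
    sub = B[mask]                                    # shape (nx_sel, ny, nz)
    Bk = np.fft.rfft(sub, axis=2)                    # FFT along z for every (x, y) pencil
    power = np.abs(Bk)**2 / sub.shape[2]             # simple (illustrative) normalisation
    spec = power.mean(axis=(0, 1))                   # average over the x-window and over y
    kz = 2 * np.pi * np.fft.rfftfreq(sub.shape[2], d=dx)
    return kz, spec

# usage sketch: Bz is a (256, 256, 256) array, x = np.arange(256) * 0.5 (in d_i)
# kz, Ez = reduced_spectrum_z(Bz, x, x_range=(75, 90))    # upstream window at T*Omega_ci = 14
```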
In the unperturbed case (panels a and b of Figure 2), a spectrum of downstream fluctuations (blue lines) develops due to the shock passage, with a strong injection in the parallel spectrum that shows an energy bump at \(k_{z}d_{i}\sim 1\) associated with the ripples propagating parallel to the shock front and along the mean magnetic field (Figure 2(b)). The other panels of Figure 2 show how turbulence is affected by the shock crossing, with two major effects: (i) the increase of the level of turbulent fluctuations, and (ii) the isotropisation of turbulent energy. The upstream spectra (panels \(c\) and \(e\), red lines) show anisotropies in the \(k_{y}-k_{z}\) directions, a well-known feature of MHD turbulence (Shebalin et al., 1983). In both cases, the perpendicular spectrum presents a short Kolmogorov-like scaling (\(k^{-5/3}\)) at small wavevectors, this being limited by the dynamical range of underlying MHD simulations, while the parallel spectrum has smaller power and nearly no power-law scaling. At sub-ion scales, both spectra are steeper (\(\sim k^{-2.8}\)), indicating energy dispersion and dissipation. In panels c-f, the grey lines indicate typical plasma turbulence power-laws (e.g. Chen, 2016), shown for reference. This spectral behaviour is compatible with typical solar wind turbulence observations (e.g. Chen et al., 2014) and previous kinetic simulations (e.g., Perrone et al., 2012; Franci et al., 2018). The analysis of the downstream spectra (panels \(d\) and \(f\), blue lines) shows that the overall level of fluctuations increases due to the shock compression (notice the different range in y-axis of left and right panels in Fig. 2) (Pifta et al., 2017; Zhao, L.-L. et al., 2021). The downstream spectra show a behaviour compatible with observations of turbulence in the terrestrial magnetosheath (Huang et al., 2017), with the absence of a Kolmogorov scaling, replaced by an energy-containing \(k^{-1}\) range, followed by a transition to a marked steepening at sub-ion scales (Sahraoui et al., 2020). The spectral anisotropy is greatly reduced (Figure 2(d,f)), in particular for the intermediate case of turbulence strength (\(\delta B/B_{0}\sim 0.5\)). This may be due either to an isotropisation effect induced by the shock crossing or to the interplay between pre-existing fluctuations and shock-induced fluctuations.
### Shock front behaviour
In this section, we discuss the details of the observed shock front behaviour. Figure 3 shows two-dimensional slices of the shock-turbulence interaction simulations. The shock rippling is the predominant feature of the unperturbed shock front, with magnetic field fluctuations along the shock front showing at typical spatial scales of some \(d_{i}\). The ripples propagate along the shock front in the mean magnetic field direction, as elucidated in Burgess et al. (2016). In the top left panel of Figure 3, it is possible to appreciate how shock rippling participates in the shock overshoot-undershoot structuring, namely as a rapidly fluctuating feature visible in the plane containing the mean magnetic field, superimposed to the large scale structuring observed from the shock front and in the downstream region.
The upstream turbulence has the major effect of introducing shock front irregularities at the scales where the turbulent cascade is operating, clearly seen as shock front undulations in the perturbed cases, happening at larger scales than the self-induced shock rippling. It is important to note that the shock front irregularities introduced by the turbulence do not depend on the shock front behaviour. This represents a fundamental difference with respect to other cases where shock front corrugation is observed as a result of self-generated upstream waves/fluctuations, also generating shock front distorsion (see Kajdic et al., 2021; Turc et al., 2023). As hinted in the discussion above, small-scale shock rippling is clearly present in the moderately perturbed case, as it can be seen for the shock front in the \(\delta B/B_{0}\sim 0.5\) case. However, due to the changes in the mean magnetic field at turbulent fluctuation scales, their propagation becomes more complex along the shock front, in a scenario in which different "patches" of the shock front have ripples with different orientations, with potential implications for efficient particle acceleration. Furthermore, while ripples survive at the shock front, we note that the region downstream of the shock becomes much more complex than in the laminar case, due to the variability introduced by the turbulent fluctuations and the irregularity in the shock front. Finally, such complexity is further enhanced in the strongly perturbed case, where a highly dynamic shock front is observed. The signature of shock rippling becomes increasingly hard to disentangle with respect to other irregularities at play in the shock front.
Figure 4: Departure from the nominal shock normal angle \(\Delta\theta_{\mathbf{B}x}\) along the shock front in all simulation cases, at time \(\mathrm{T}\Omega_{\mathrm{ci}}=14\), for increasing level of turbulence strength (left to right). A PDF of such values is shown in the right panel.
We also note that, due to turbulent structures being transmitted
from upstream to downstream, the perturbed cases allow for larger amplitude depletions in magnetic field magnitude downstream, a feature consistent with studies carried out in reduced, two-dimensional geometry (Nakanotani et al., 2022). The three-dimensional behaviour of the transmitted turbulent structures and their importance as extra sources of energetic particles beyond energisation at the shock front through turbulent acceleration mechanisms (Drake et al., 2006; Comisso and Sironi, 2022) is an extremely interesting topic, which will be the subject of further investigation.
We further investigate the shock front behaviour by analysing the departures from the expected shock normal angle along the shock front. Given the simulation setup (see Section 2), the nominal shock \(\theta_{Bn}\) for the shocks simulated here is \(\theta_{Bn}=\theta_{Bx}=90^{\circ}\). Due to its three-dimensional structure, particularly important in the perturbed cases, we define the shock front position as where \(B>3B_{0}\) (following a similar procedure presented in Kajdic et al. (2019)). The local shock normal angle is then computed as \(\theta_{Bx}(y,z)=\cos^{-1}(B_{x}(y,z)/B(y,z))\).
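The local-geometry diagnostic can be sketched as follows; the field layout `B[x, y, z, component]` (in units of \(B_{0}\)) and the first-crossing definition of the front along \(x\) are assumptions of this illustration.

```python
import numpy as np

def local_theta_Bx(B, threshold=3.0):
    """Local angle theta_Bx(y, z) = acos(Bx/|B|), in degrees, on the B > 3 B_0 shock surface."""
    Bmag = np.linalg.norm(B, axis=-1)
    above = Bmag > threshold                         # B > 3 B_0 marks the shock/downstream region
    i_front = np.argmax(above, axis=0)               # first index along x where the condition holds
    valid = above.any(axis=0)
    ny, nz = Bmag.shape[1:]
    jj, kk = np.meshgrid(np.arange(ny), np.arange(nz), indexing='ij')
    bx = B[i_front, jj, kk, 0]
    bm = Bmag[i_front, jj, kk]
    theta = np.degrees(np.arccos(np.clip(bx / bm, -1.0, 1.0)))
    return np.where(valid, theta, np.nan)

# departure from the nominal perpendicular geometry, as in Fig. 4:
# dtheta = local_theta_Bx(B) - 90.0
```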
Such analysis is shown in Figure 4, where the local values of \(\theta_{Bx}\) are displayed. These change rapidly in the unperturbed case, with strong departures (up to about \(40^{\circ}\)) from the nominal value due to shock rippling, consistent with what was previously shown in Trotta & Burgess (2019). When upstream turbulence is included, the picture significantly changes. Departures from the nominal shock geometry happen over a wider range of spatial scales, introduced by the turbulence, with important implications on the interplay between
the shock and its surroundings. In the turbulent cases, the small-scale ripples appear to induce weaker changes in the local shock geometry at short wavelengths (see the Probability Density Function in Figure 4), due to the upstream mixing introduced by the turbulence. This result is extremely interesting and relevant when addressing the dynamics of upstream particles interacting with different portions of the shock front showing different local geometries over a variety of scales.
Figure 5: Two-dimensional slices of the proton temperature anisotropy \(\log(\mathrm{T}_{\perp}/\mathrm{T}_{||})\), in the \(x-z\), \(x-y\) and \(y-z\) planes (top to bottom), for the three simulations at time \(\mathrm{T}\Omega_{\mathrm{ci}}=14\), organised with increasing level of turbulent fluctuations from left to right (as done in Figure 3 for the magnetic field). The black line shows the shock front position.
### Temperature anisotropies
Shock rippling is a consequence of the perpendicular temperature anisotropy driven in the shock foot by the reflected protons (Winske & Quest, 1988). It is therefore natural to study such temperature anisotropies in the simulations, addressing their relation with the observed shock irregularities.
Such analysis is carried out in Figure 5, where two-dimensional slices of the simulation domain for the quantity \(\log(\mathrm{T}_{\perp}/\mathrm{T}_{||})\) are shown, in the same format and time as Figure 3 for the magnetic field. The shock front position has been calculated with the same criterion used for Figure 4 (see Section 3.2). Here, the parallel and perpendicular temperatures have been computed by projecting the proton temperature tensor along the local magnetic field in the simulations. In the unperturbed case, the typical scenario for the supercritical perpendicular shock is recovered, with the presence of a strong perpendicular temperature anisotropy (\(\mathrm{T}_{\perp}/\mathrm{T}_{||}>1\)) at the shock front (see the left panels of Figure 5), relaxing in the downstream region. It is possible to identify oscillations in the temperature anisotropies, happening at wavelengths that increase with the distance from the shock (Lu & Wang, 2006; Preisser et al., 2020). We note that far downstream of the shock, plasma has not yet relaxed to an isotropic configuration, due to the limited size of the simulation domain. However, the main focus of this study is the shock front behaviour in response to upstream turbulence, and therefore the interesting study of asymptotic behaviour of the temperature anisotropy, and the associated instabilities (Hellinger et al., 2006; Kim et al., 2021) in presence of pre-existing turbulence is beyond scope.
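A compact sketch of this anisotropy diagnostic is given below; the temperature-tensor array layout and the base-10 logarithm are assumptions of the illustration.

```python
import numpy as np

def parallel_perpendicular_T(T, B):
    """Project a stacked temperature tensor T[..., 3, 3] onto the local field B[..., 3]."""
    b = B / np.linalg.norm(B, axis=-1, keepdims=True)             # unit field direction
    T_par = np.einsum('...i,...ij,...j->...', b, T, b)            # b^T T b
    T_perp = 0.5 * (np.trace(T, axis1=-2, axis2=-1) - T_par)      # average of the two perpendicular directions
    return T_par, T_perp

# anisotropy map as in Fig. 5 (base-10 logarithm chosen here for illustration):
# T_par, T_perp = parallel_perpendicular_T(T_proton, B)
# log_aniso = np.log10(T_perp / T_par)
```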
When the shock propagates through turbulent media, many interesting features arise. It can be seen that the shock does not propagate anymore in an isotropic medium. Along the (distorted) shock front, a strong perpendicular temperature anisotropy is found, but the structuring seen in the unperturbed case is modified by the turbulent fluctuations, as it can be seen, for example, in the \(\delta B/B_{0}\sim 0.5\) case. The pre-existing fluctuations, together with the strongly distorted shock geometry allow for regions of parallel temperature anisotropy along the shock front, upstream of it and in the close downstream region (see the right-hand panels of Figure 5), an important aspect of the shock - turbulence interaction. Such a complexity in temperature anisotropy explains the modified rippling found in the magnetic field analysis.
Another crucial feature emerging from Figure 5 is the difference in the shock downstream regions for increasing levels of turbulence. In particular, comparing the \(\delta B/B_{0}\sim 0\) and \(\delta B/B_{0}\sim 1\) cases, we find that the shock downstream region in the strongly perturbed case appears more "isotropic" than the unperturbed case, that is, large regions of temperature isotropy are found downstream of the strongly perturbed shock.
To make this point more quantitative, we studied the PDF of the temperature anisotropy in \(y-z\) planes (parallel to the shock front) as a function of the distance from the shock in the three cases, shown in Figure 6. Here, PDFs with different colors are collected at different distances from the shock (which is at zero), while the vertical magenta line indicates \(\mathrm{T}_{\perp}/\mathrm{T}_{||}=1\). Many interesting features are revealed by this analysis. First of all, the largest values for the perpendicular temperature anisotropy are achieved in the unperturbed case and in the vicinity of the shock front (top panel of Figure 6). Then, due to the increasing turbulence strength, in the most turbulent case the PDFs are closest to isotropy downstream, as hinted in the discussion above. Thus, when pre-existing turbulence is strong, the out-of-equilibrium configurations induced by the shock front decay faster (i.e., closer to the shock front) than in the laminar upstream case. Finally, it may also be noted that for stronger turbulence, configurations of parallel temperature anisotropy become increasingly probable, due to the pre-existing population of fluctuations being transmitted across the shock front and also due to the strong local geometry changes induced by the turbulent fluctuations. Consequently, the probability of having, locally, populations of backstreaming ions becomes larger for larger upstream turbulent strength.
Figure 6: Temperature anisotropy PDFs, computed in \(y-z\) planes at simulation time \(\mathrm{T}\Omega_{\mathrm{ci}}=14\) (as in Figure 5) for different distances from the shock front (colors) for cases with increasing level of upstream turbulence (top to bottom). The vertical magenta line marks the isotropic configuration \(\mathrm{T}_{\perp}/\mathrm{T}_{||}=1\).
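A possible implementation of this PDF diagnostic is sketched below; bin choices and array names are illustrative.

```python
import numpy as np

def anisotropy_pdfs(aniso, x, x_shock, offsets, bins=np.logspace(-1, 1, 60)):
    """aniso[x, y, z] = T_perp/T_par; returns one normalised histogram per distance from the shock."""
    pdfs = {}
    for off in offsets:
        i = np.argmin(np.abs(x - (x_shock + off)))        # nearest y-z plane to the requested distance
        hist, edges = np.histogram(aniso[i].ravel(), bins=bins, density=True)
        pdfs[off] = (edges, hist)
    return pdfs

# e.g. pdfs = anisotropy_pdfs(T_perp / T_par, x, x_shock=95.0, offsets=[2, 5, 10, 20])
```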
### Virtual spacecraft observations
Numerical simulations are a crucial tool to advance our knowledge of spacecraft observations and to assist the design of new missions, owing to the possibility of generating synthetic, virtual spacecraft measurements (Valentini et al., 2016; Perri et al., 2017; Pecora et al., 2023). In this subsection, we discuss an example of such a study, applied to the interaction between the shock and pre-existing turbulence.
An emerging picture from the current work is that, when studying the shock propagation in a turbulent medium, shock ripples happening at the short wavelengths of \(d_{i}\) are modulated by the turbulent fluctuations at larger scales, in a complex scenario for the shock front where different portions have different local geometry and environments. From a spacecraft measurements perspective, resolving simultaneously the short and long wavelength fluctuations present in cross-scale systems such as the one here described is extremely challenging. Such a challenge is inspiring new multi-scale, multi-spacecraft missions, such as HelioSwarm (Spence, 2019), devoted to analyze plasma turbulence in the solar wind up to sub-ion scales, and Plasma Observatory (Retino et al., 2022), mostly focused on unveiling the fundamental mechanisms responsible for particle acceleration in the near-Earth environment including shocks and jets.
To this end, we elucidate what would be observed by the Plasma Observatory constellation in our simulation domain. In Figure 7 we show renderings of the computational domains with seven virtual spacecraft arranged as two tetrahedra sharing one vertex, with separations of 3 (green) and 30 \(d_{i}\) (purple), corresponding to ion and fluid scales, respectively. For the purpose of these synthetic observations, we report the proxy for the proton heating, \(\mathrm{T_{p}/T_{upstream}}\), along a two-dimensional slice of the simulation domain showing the shock front (Panels (b)-(e)). The virtual spacecraft measurements of magnetic field at short separation show the difference in the magnetic field increase observed due to shock rippling, as can be seen comparing the P1-3 with the P4 plots in Figure 7(c) and (g). The process of ion heating is highly structured at fluid scales, depending on several parameters such as the local magnetic field, which can also be measured at ion scales. While the closely spaced tetrahedron resolves the shock ripples, an important local property of the shock front, the tetrahedron with the larger spacing resolves the larger-scale shock front irregularity due to turbulence, as seen in the bottom panel of Figure 7(c).
Resolving such complex features of shock front variability would also be invaluable to advance our knowledge of particle acceleration at shocks. Indeed, with such multi-spacecraft measurements at different scales, it would be possible to understand which portions of the shock front are the most efficient at energising particles, distinguishing between processes such as shock rippling operating at small scales and larger-scale fluctuations possibly due to pre-existing turbulence, for example through the measurement of the departure from an average shock geometry as done for the simulations in Figure 4. This theme is extremely relevant for particle acceleration at the Earth's bow shock (e.g., Sundberg et al., 2016; Lindberg et al., 2022) as well as for other systems, such as interplanetary shocks (see Lario et al., 2008, for example).
Figure 7: (a)-(d) Magnetic field rendering highlighting the shock front in the perturbed and laminar cases and the close upstream with seven virtual spacecraft arranged as two tetrahedra spaced at 3 and 30 \(d_{i}\) (green and purple, respectively). (b)-(e) Color map showing proton heating \(\mathrm{T_{p}/T_{upstream}}\) along the shock front. (d)-(f) Virtual spacecraft observations along the shock normal direction for magnetic field and proton heating performed with the spacecraft tetrahedra spaced at 3 and 30 \(d_{i}\) (green and purple, respectively).
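A simple way to emulate such a constellation in the simulation volume is sketched below; the tetrahedron construction, anchor position and nearest-grid-point sampling are our illustrative choices rather than the Plasma Observatory design.

```python
import numpy as np

def tetrahedron(shared_vertex, edge):
    """Regular tetrahedron with the given edge length, with its first vertex pinned at shared_vertex."""
    verts = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)
    verts *= edge / np.sqrt(8.0)                      # rescale so that every edge has length `edge`
    return shared_vertex + (verts - verts[0])

def sample_field(F, positions, dx=0.5):
    """Nearest-grid-point sampling of a gridded field F[x, y, z, ...] at positions given in d_i."""
    idx = np.round(np.asarray(positions) / dx).astype(int)
    idx = np.clip(idx, 0, np.array(F.shape[:3]) - 1)
    return np.array([F[i, j, k] for i, j, k in idx])

anchor = np.array([90.0, 64.0, 64.0])                 # shared vertex, just upstream of the front (d_i)
probes = np.vstack([tetrahedron(anchor, 3.0), tetrahedron(anchor, 30.0)[1:]])   # 7 spacecraft in total
# B_probes = sample_field(B, probes)                  # B: (nx, ny, nz, 3) array in units of B_0
```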
## 4 Conclusions
In this work, we studied the interaction between supercritical, perpendicular shocks and fully developed, pre-existing plasma turbulence. We employ a novel simulation model, in which MHD and hybrid kinetic simulations are combined to obtain a collisionless shock wave propagating into an upstream characterised by fully developed turbulence. Our method builds onto previous studies in reduced dimensionality (Trotta et al., 2021, 2022b), and is complementary to other methods looking at other interesting aspects of shock-turbulence interaction both in local configurations (Guo and Giacalone, 2012; Nakanotani et al., 2021, 2022) and in global setups looking, for example, at planetary magnetospheres (Behar et al., 2022).
The behaviour of a perpendicular, supercritical shock was studied in the unperturbed case and for two different levels of upstream turbulence, \(\delta B/B_{0}\sim 0\), 0.5 and 1, respectively. In the unperturbed case, shock rippling due to the perpendicular proton anisotropy driven by the reflected ion population is recovered, an important feature of perpendicular shocks, as studied in previous theoretical and numerical works (Hellinger et al., 1996; Burgess et al., 2016), and observed at the Earth's bow shock with closely-separated spacecraft constellations (Gingell et al., 2017; Johlander et al., 2018).
By coupling turbulent fields generated through compressible MHD simulations and hybrid kinetic simulations, for the first time in fully three-dimensional geometry, we addressed how turbulence is processed upon the shock crossing, with two interesting effects being observed: (i) an increase in the level of fluctuations due to the compression at the shock, and (ii) isotropisation of the magnetic field spectra in the close downstream. This may have important implications for the study of the nature of fluctuations associated with shock waves and their role in efficient particle acceleration, in particular for extra particle acceleration important in the shock downstream (Zank et al., 2015; Preisser et al., 2020; Trotta et al., 2020b). Further, interesting details of turbulence transmission across the shock, such as the study of the Yaglom law (Sorriso-Valvo et al., 2019), will be the object of a separate forthcoming work. Another important feature not studied here is the asymptotic behaviour of turbulence far downstream of the shock transition, for which simulations with larger domains and longer evolution times would be needed.
Concerning the shock transition when pre-existing turbulence is present, we discovered several interesting features. First of all, the shock front responds to upstream turbulence with corrugations following the turbulent field, an important feature that cannot be recovered considering only the fluctuations that are self-generated by the shock. In the moderately turbulent case \(\delta B/B_{0}\sim 0.5\), we still recover a rippled shock front, with ripples being modulated by the MHD-scale fluctuations. Such a behaviour may be important to understand the properties of shock accelerated particles interacting with such rippled portions of the shock front. For stronger perturbations, rippling becomes less prominent and the shock front is strongly distorted by the incoming turbulence. We found that, in the unperturbed case with the strongest rippled signature, the strongest local departures from the nominal shock geometry are achieved, with fluctuations happening over short \(\sim d_{i}\) wavelengths, while in the perturbed case such departures from the nominal shock geometry are modulated over larger spatial scales. This has important implications with respect to observations, where such a variability may be important when looking at spacecraft crossing and inferring local shock parameters (Koval and Szabo, 2008; Trotta et al., 2022a).
To explain the variability in shock surface fluctuations and the different behaviour of shock rippling, we studied the proton temperature anisotropies in the simulations. We found that the presence of upstream turbulence introduces further complexity in the shock system, with the result of accelerating the processes restoring the equilibrium downstream of the shock. Analysis of the temperature anisotropy along the (perturbed) shock fronts is consistent with the picture of modified shock rippling in the presence of turbulence, suggesting a complex scenario for proton heating across shock waves.
This study has important implications for the theme of energy conversion at perpendicular shocks in various space and astrophysical settings where the role of pre-existing upstream turbulence is often neglected, though it is important to note that the scales simulated here are much smaller than those relevant in such systems, due to computational limitations. We also note that the behaviour of the shock rippling at short wavelengths and the shock front corrugation due to turbulence happening at larger scales cannot be simultaneously resolved by closely spaced spacecraft constellations, motivating cross-scale missions of the future such as Plasma Observatory. Thus, our modelling effort provides important input for future mission design, constraining the spacecraft constellations required to capture the complexity of the shock-turbulence interaction.
## Acknowledgements
This work has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 101004159 (SERPENTINE, www.serpentine-h2020.eu). Part of this work was performed using the DiRAC Data Intensive service at Leicester, operated by the University of Leicester IT Services, which forms part of the STFC DiRAC HPC Facility (www.dirac.ac.uk), under the project "dp031 Turbulence, Shocks and Dissipation in Space Plasmas". MHD simulations have been performed on the Newton HPC cluster at the University of Calabria, supported by "Progetto STAR 2-PIRO1 00008" (Italian Ministry of University and Research). L.S.-V. is supported by the Swedish Research Council (VR) Research Grant N. 2022-03352. D.B. is supported by STFC grants ST/T00018X/1 and ST/X000974/1. H.H. is supported by the Royal Society University Research Fellowship URFR1180671. N.D. is grateful for support by the Academy of Finland (SHOCKSEE, grant No. 346902). L.P. is supported by the Austrian Science Fund (FWF): P 33285-N. XBC is supported by PAPIIT DGAPA grant IN110921.
## Data availability
The simulation datasets used for the analyses in this work can be found and freely downloaded here: [https://doi.org/10.5281/zenodo.7964045](https://doi.org/10.5281/zenodo.7964045). The authors will share further datasets from the simulations upon request.
|
2304.05052 | Dynamics of an atom cavity field system in interacting Fock space | In this paper, we investigate one-time passing of a $V$-type three-level atom
through a single-mode interacting field in a cavity. We extend the idea of
elementary Jaynes-Cummings model by assuming that the field vector belongs to
interacting Fock space. In the process, we arrive at a state vector which will
be analyzed to study the nonclassicality of the evolved state of the system. | P. K. Das, Arpita Chatterjee | 2023-04-11T08:21:12Z | http://arxiv.org/abs/2304.05052v1 | # Dynamics of an atom cavity field system in interacting Fock space
###### Abstract
In this paper, we investigate one-time passing of a \(V\)-type three-level atom through a single-mode interacting field in a cavity. We extend the idea of elementary Jaynes-Cummings model by assuming that the field vector belongs to interacting Fock space. In the process, we arrive at a state vector which will be analyzed to study the nonclassicality of the evolved state of the system.
Keywords: interacting Fock space; \(V\)-type atom; nonclassicality; Mandel's \(Q^{M}\); squeezing
Footnote †: journal: Int. J. Theo. Phys.
## 1 Introduction
A manifold of nonclassical features of quantum light has received notable attention in the quantum optics community [1; 2; 3] for a number of reasons. For example, a squeezed state can be used to reduce the noise level in one of the phase-space quadratures below the quantum limit [4]. Squeezed states are also used in continuous-variable quantum cryptography [5], teleportation of coherent states [6], etc. Specifically, in the LIGO experiment, a squeezed vacuum state has been successfully used for the detection of gravitational waves by reducing
the noise [7; 8; 9]. Entangled states produced in the down-conversion process can be employed to test fundamental aspects of quantum physics, such as nonlocality [10]. These states have proved to be useful for various quantum information tasks such as quantum teleportation, dense coding, quantum cryptography etc. [11; 12; 13]. Photon anti-bunching can be visualized as a state of the light field in which photons prefer to travel alone rather than in a group [14]. Antibunched light exhibits nonclassicality [15]. Sunlight and light used at home are in a bunched state, whereas some light sources (e.g. lasers) show no preference either for travelling alone or for travelling in the company of other photons [16]. Such a state of light is considered a coherent state [17]. Anti-bunching is used for characterizing single-photon sources [18], which are essential for the realization of various schemes for secure quantum communication. Moreover, by employing nonclassical light sources [19], the performance of optical technologies such as metrology, communication and imaging can be improved beyond the limitations of classical physics. In another aspect, the preparation of quantum entangled states through cavity QED is a subject of intense theoretical and experimental studies. Analyzing such states provides insight into the fundamentals of quantum mechanics. Manipulation of a light field at the single-photon level [20] provides a basis for important applications in quantum information science. A desired field state can be obtained by applying two elementary operations on a single-mode field [21]. Using photon addition or subtraction, or their superposition, one can generate a suitable nonclassical state from any classical state, which is very useful in quantum information processing. For example, both photon-subtracted and photon-added squeezed states have been suggested to improve the fidelity of continuous-variable teleportation [22]. Thus, creating and handling nonclassical correlations in complex atomic systems coupled with radiation fields is one of the most challenging aspects of quantum information theory (QIT). In this context, we consider a three-level atomic system for studying the quantum features of the semiclassical atom-field system.
The Jaynes-Cummings model helps us to understand the interaction of a single atom with a high-quality cavity [23; 24; 25]. The interaction of an atom and a laser beam in a cavity, performed close to one of the atomic resonances, acts as a source of light emission with a rich set of spectral and temporal features. Temporally, the emitted light shows anti-bunching in its second-order correlation. Spectrally, as the laser intensity increases, the emitted light develops symmetric side lobes around the central excitation frequency. This paper is also motivated by the fact that Fock states and superpositions of Fock states can be produced in cavity QED by using resonant interactions of two- or three-level atoms, one at a time, with a cavity mode. The production of a two-photon state in a high-\(Q\) cavity has also been reported [26]. A recent paper has described a deformed atom-cavity field system constructed from the standard Jaynes-Cummings model by transforming the field operators and adding a nonlinear Kerr-like medium [27]. The zero space-time dimensional case of the interacting Fock space corresponds to a non-linear deformation of the usual (Boson, Fermion or q-deformed) one-mode Fock spaces [28]. The concept of
quantum entanglement in the stochastic limit goes beyond the familiar notion of superposition and leads to the conclusion that under appropriate physical situations, nonlinearly interacting quantum system cannot be separated even at a kinematical level and behave as a single new quantum object satisfying new types of commutation relations and therefore new statistics [29]. The mathematical counterpart of this qualitative statement moves towards the formulation of interacting Fock space that has been developed for describing the state space of the interacting systems. Interacting Fock space is a generalized algebraic construction used in quantum mechanics to build the quantum state space of a variable or unknown number of identical particles from a single particle Hilbert space \(H\). For example, a so-called one-mode interacting Fock space is \(H=\mathbb{C}\).
The nonclassical properties [30] of three-level atomic systems have been well studied in quantum optics for understanding quantum-coherence phenomena such as electromagnetically induced transparency (EIT) [31], lasing without inversion [32], and coherent trapping [33]. Three-level atoms interacting with low-strength driving fields, similar to EIT systems, have been used to generate entangled two-mode photon states which can be suitably manipulated to yield desired correlations [34]. The passage of two \(V\)-type three-level atoms through a cavity field has been shown to transform the classical cavity field into a nonclassical one [35]. The knowledge of nonclassical correlations carried by emitted photons in atomic systems may prove immensely useful in designing future QIT systems for communications and computations. However, the nonclassical nature of an atom-field structure in interacting Fock space (a weighted Hilbert space with weights \(\{\lambda_{n}\}\)) is less reported, which motivated us to study the dynamics of nonclassicality parameters in interacting Fock space.
In this paper, we consider an interacting one-mode field which interacts with the atom in a cavity by letting a \(V\)-type atom pass through it. After tracing out the atomic part from the generated atom-field system, we obtain the field left in the cavity and explore its nonclassical properties.
We begin by describing the basic idea of the one-mode interacting Fock space. Then we give the time-dependent state of a system containing a \(V\)-type three-level atom [2; 3] which interacts with a single mode of the interacting field. In subsequent sections we demonstrate the nonclassicality of the evolved state with the help of Mandel's \(Q\) parameter and the squeezing properties of the radiation field. Lastly, we conclude.
## 2 Basic preliminaries and notations
As a vector space, the one-mode interacting Fock space \(\Gamma(\mbox{$I\!\!\!C$})\) is defined as the orthogonal sum
\[\Gamma(\mbox{$I\!\!\!C$})=\bigoplus_{n=0}^{\infty}\mbox{$I\!\!\!C$}|n\rangle \tag{1}\]
for any \(n\in\mbox{$I\!\!N$}\) where \(\mbox{$I\!\!\!C$}|n\rangle\) is called the \(n\)-particle subspace. The different \(n\)-particle subspaces are orthogonal, that is, the sum in (1) is orthogonal. The square of the semi-norm of the vector \(|n\rangle\) is given by
\[\langle n|n\rangle=\lambda_{n} \tag{2}\]
where \(\lambda_{n}\) is a real number with \(\lambda_{n}\geq 0\) for each \(n\in\mbox{$I\!\!N$}\), and if for some \(n\) we have \(\lambda_{n}=0\), then \(\lambda_{m}=0\) for all \(m\geq n\). After taking the quotient, the semi-norm in (2) becomes a norm which makes \(\Gamma(\mbox{$I\!\!\!C$})\) a pre-Hilbert space. In the following we will consider its completion, which, with an abuse of notation, will also be denoted by \(\Gamma(\mbox{$I\!\!\!C$})\).
An arbitrary vector \(f\) in \(\Gamma(\mbox{$I\!\!\!C$})\) is given by
\[f\equiv c_{0}|0\rangle+c_{1}|1\rangle+c_{2}|2\rangle+\ldots+c_{n}|n\rangle+\ldots\]
for any \(n\in\mbox{$I\!\!N$}\) with \(\|f\|=(\sum_{n=0}^{\infty}|c_{n}|^{2}\lambda_{n})^{1/2}<\infty\).
We now consider the following actions on \(\Gamma(\mbox{$I\!\!\!C$})\) :
\[A^{\dagger}|n\rangle = |n+1\rangle\] \[A|n+1\rangle = \frac{\lambda_{n+1}}{\lambda_{n}}|n\rangle\]
\(A^{\dagger}\) is called the _creation operator_ and its adjoint \(A\) is called the _annihilation operator_.
The commutation relation takes the form
\[[A,A^{\dagger}]=\frac{\lambda_{N+1}}{\lambda_{N}}-\frac{\lambda_{N}}{\lambda_{ N-1}}\]
where \(N\) is the number operator defined by \(N|n\rangle=n|n\rangle\).
In a paper [41], we have proved that the set \(\left\{\left|\frac{n}{\sqrt{\lambda_{n}}}\right\rangle,n=0,1,2,3,\ldots\right\}\) forms a complete orthonormal set and the solution of the following eigenvalue equation
\[Af_{\alpha}=\alpha f_{\alpha}\]
is given by
\[f_{\alpha}=\psi(|\alpha|^{2})^{-1/2}\sum_{n=0}^{\infty}\frac{\alpha^{n}}{ \lambda_{n}}|n\rangle\]
where \(\psi(|\alpha|^{2})=\sum_{n=0}^{\infty}\frac{|\alpha|^{2n}}{\lambda_{n}}\). We call \(f_{\alpha}\) a **coherent vector** in \(\Gamma(\mbox{$I\!\!\!C$})\).
Now, we observe that
\[AA^{\dagger}=\frac{\lambda_{N+1}}{\lambda_{N}},\ \ \ A^{\dagger}A=\frac{ \lambda_{N}}{\lambda_{N-1}}\]
We further observe that \(\left(\frac{\lambda_{N+1}}{\lambda_{N}}-\frac{\lambda_{N}}{\lambda_{N-1}}\right)\) commutes with both \(A^{\dagger}A\) and \(AA^{\dagger}\).
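To make these operator actions concrete, the following short numerical sketch (an illustration added here, not part of the original derivation) builds truncated matrix representations of \(A\) and \(A^{\dagger}\) in the orthonormal basis \(\{|n\rangle/\sqrt{\lambda_{n}}\}\) for a chosen set of weights \(\lambda_{n}\) and checks the diagonal form of the commutator; the function and variable names are illustrative.

```python
import math
import numpy as np

def ladder_ops(lam):
    """Truncated matrices of A and A^dagger in the orthonormal basis
    |n>/sqrt(lambda_n), given a list of weights lambda_n."""
    dim = len(lam)
    A = np.zeros((dim, dim))
    for n in range(dim - 1):
        # A (|n+1>/sqrt(lam_{n+1})) = sqrt(lam_{n+1}/lam_n) (|n>/sqrt(lam_n))
        A[n, n + 1] = np.sqrt(lam[n + 1] / lam[n])
    return A, A.T  # for these real matrices the adjoint is the transpose

lam = [float(math.factorial(n)) for n in range(8)]   # lambda_n = n!
A, Adag = ladder_ops(lam)
comm = A @ Adag - Adag @ A
# Away from the truncation edge the diagonal reproduces
# lambda_{N+1}/lambda_N - lambda_N/lambda_{N-1} (equal to 1 for lambda_n = n!).
print(np.round(np.diag(comm)[:5], 6))
```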
## 3 Time Evolution of State Vector
The scheme of the \(V\)-type three-level atomic system consists of two allowed transitions
\[|a\rangle\leftrightarrow|c\rangle\,\,\mbox{and}\,\,\,|b\rangle\leftrightarrow|c\rangle\]
where \(|a\rangle\), \(|b\rangle\) and \(|c\rangle\) are the excited, intermediate and ground states, respectively. Both transitions couple to the same quantized cavity mode. In the rotating-wave approximation, the Hamiltonian is described by
\[H=H_{0}+H_{1}, \tag{3}\]
where taking \(\hbar=1\),
\[H_{0}=\omega_{a}|a\rangle\langle a|+\omega_{b}|b\rangle\langle b|+\omega_{c}|c \rangle\langle c|+\gamma A^{\dagger}A,\]
and
\[H_{1}=g_{1}A|a\rangle\langle c|+g_{1}A^{\dagger}|c\rangle\langle a|+g_{2}A|b \rangle\langle c|+g_{2}A^{\dagger}|c\rangle\langle b|.\]
Here \(A^{\dagger}\) and \(A\) are, respectively, the creation and annihilation operators for the field of frequency \(\gamma\). \(|i\rangle(i=a,b,c)\) is the eigenstate of the atom with eigenfrequency \(\omega_{i}\), and \(g_{1},\,g_{2}\) are the corresponding coupling constants. We assume the coupling constants to be real throughout the paper.
In the interaction picture, the state vector of this atom-field coupling system at time \(t\) can be described by
\[|\psi(t)\rangle=\sum_{n}\left(C_{a,n-1}\left|a,\frac{n-1}{\sqrt{\lambda_{n-1}}}\right\rangle+C_{b,n-1}\left|b,\frac{n-1}{\sqrt{\lambda_{n-1}}}\right\rangle+C_{c,n}\left|c,\frac{n}{\sqrt{\lambda_{n}}}\right\rangle\right) \tag{4}\]
In the interaction picture, the Hamiltonian (3) is given by
\[\begin{array}{l}V=g_{1}e^{i\triangle_{1}t}A|a\rangle\langle c|+g_{1}A^{ \dagger}e^{-i\triangle_{1}t}|c\rangle\langle a|+g_{2}e^{i\triangle_{2}t}A|b \rangle\langle c|\\ \\ \hskip 14.226378pt+g_{2}A^{\dagger}e^{-i\triangle_{2}t}|c\rangle\langle b| \end{array} \tag{5}\]
where
\[\Delta_{1}=\omega_{a}-\omega_{c}-\gamma\left(\frac{\lambda_{n+1}}{\lambda_{n}}-\frac{\lambda_{n}}{\lambda_{n-1}}\right),\]
\[\Delta_{2}=\omega_{b}-\omega_{c}-\gamma\left(\frac{\lambda_{n+1}}{\lambda_{n}}-\frac{\lambda_{n}}{\lambda_{n-1}}\right).\]

Figure 1: Energy diagram of a \(V\)-shaped three-level atom interacting with one quantized cavity mode.
On solving the Schrodinger equation \(i\hbar\frac{\partial}{\partial t}|\psi(t)\rangle=V|\psi(t)\rangle\) with help of (4) and (5) and assuming \(\Delta_{1}=\Delta_{2}=\Delta^{\prime}\), we get the equations of motion for probability amplitudes as
\[\left.\begin{array}{l}\dot{C}_{a,n-1}=-ig_{1}\sqrt{\frac{\lambda_{n}}{\lambda _{n-1}}}e^{i\triangle^{{}^{\prime}}t}C_{c,n},\\ \\ \dot{C}_{b,n-1}=-ig_{2}\sqrt{\frac{\lambda_{n}}{\lambda_{n-1}}}e^{i\triangle^{ {}^{\prime}}t}C_{c,n},\\ \\ \dot{C}_{c,n}\quad=-ig_{1}\sqrt{\frac{\lambda_{n}}{\lambda_{n-1}}}e^{-i \triangle^{{}^{\prime}}t}C_{a,n-1}-ig_{2}\sqrt{\frac{\lambda_{n}}{\lambda_{n- 1}}}e^{-i\triangle^{{}^{\prime}}t}C_{b,n-1}\end{array}\right\} \tag{6}\]
where we assume
\[\Delta^{{}^{\prime}}=\omega_{a}-\omega_{c}-\gamma\left(\frac{\lambda_{n+1}}{ \lambda_{n}}-\frac{\lambda_{n}}{\lambda_{n-1}}\right)\]
\[=\omega_{b}-\omega_{c}-\gamma\left(\frac{\lambda_{n+1}}{\lambda_{n}}-\frac{ \lambda_{n}}{\lambda_{n-1}}\right)\]
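Before specializing to the resonant case treated below, the coupled amplitude equations (6) can also be integrated numerically. The sketch that follows is a minimal illustration for a single \(n\)-sector with arbitrary illustrative parameter values (it is not taken from the paper), using a simple fourth-order Runge-Kutta step.

```python
import numpy as np

g1, g2 = 1.0, 1.0          # coupling constants (illustrative)
delta = 0.0                # detuning Delta'
ratio = 2.0                # lambda_n / lambda_{n-1} for the chosen sector
root = np.sqrt(ratio)

def rhs(t, y):
    """Right-hand side of Eq. (6) for the amplitudes (C_a, C_b, C_c)."""
    Ca, Cb, Cc = y
    phase = np.exp(1j * delta * t)
    dCa = -1j * g1 * root * phase * Cc
    dCb = -1j * g2 * root * phase * Cc
    dCc = -1j * root * np.conj(phase) * (g1 * Ca + g2 * Cb)
    return np.array([dCa, dCb, dCc])

# Atom initially in (|a> + |b>)/sqrt(2), cavity amplitude F_n = 1 in this sector.
y = np.array([1 / np.sqrt(2), 1 / np.sqrt(2), 0.0], dtype=complex)
t, dt = 0.0, 1e-3
for _ in range(5000):                      # RK4 steps up to t = 5
    k1 = rhs(t, y)
    k2 = rhs(t + dt / 2, y + dt / 2 * k1)
    k3 = rhs(t + dt / 2, y + dt / 2 * k2)
    k4 = rhs(t + dt, y + dt * k3)
    y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt
print("populations:", np.abs(y) ** 2, "norm:", np.sum(np.abs(y) ** 2))
```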
If the atom is initially in the state \(|\psi_{a}(0)\rangle=\cos\frac{\alpha}{2}|a\rangle+\sin\frac{\alpha}{2}e^{-i \psi}|b\rangle\) which means that the atom is in the coherent superposition state of its eigenkets \(|a\rangle\) and \(|b\rangle\), and the field is in the superposition of the photon number states at time \(t=0\), \(|\psi_{f}(0)\rangle=\sum_{n}F_{n}|\frac{n}{\sqrt{\lambda_{n}}}\rangle\) with \(\sum_{n}|F_{n}|^{2}=1\), then the state vector of the total system at \(t=0\) can be described as
\[|\psi(0)\rangle=\sum_{n}\left(\cos\frac{\alpha}{2}F_{n-1}\left|a,\frac{n-1}{ \sqrt{\lambda_{n-1}}}\right\rangle+\sin\frac{\alpha}{2}e^{-i\psi}F_{n-1}\left| b,\frac{n-1}{\sqrt{\lambda_{n-1}}}\right\rangle\right)\]
With this initial condition, solving (6) we get
\[C_{c,n}(t)=B_{1}\{e^{-i(\triangle^{{}^{\prime}}/2+\beta)t}-e^{-i(\triangle^{ {}^{\prime}}/2-\beta)t}\}\]
where
\[B_{1}=\frac{g_{1}\sqrt{\frac{\lambda_{n}}{\lambda_{n-1}}}\cos\frac{\alpha}{2}F _{n}+g_{2}\sqrt{\frac{\lambda_{n}}{\lambda_{n-1}}}\sin\frac{\alpha}{2}e^{-i \psi}F_{n}}{2\beta}\]
and
\[\beta^{2}=\Delta^{{}^{\prime}2}/4+(g_{1}^{2}+g_{2}^{2})\frac{\lambda_{n}}{ \lambda_{n-1}}\]
where \(\beta\) is associated with the frequency of the atomic Rabi oscillation. Similarly we get
\[C_{a,n-1}(t)=-g_{1}\sqrt{\frac{\lambda_{n}}{\lambda_{n-1}}}B_{1}\left[\frac{e ^{i(\Delta^{{}^{\prime}}/2+\beta)t}-1}{(\Delta^{{}^{\prime}}/2+\beta)}-\frac{ e^{i(\Delta^{{}^{\prime}}/2-\beta)t}-1}{(\Delta^{{}^{\prime}}/2-\beta)} \right]+\cos\frac{\alpha}{2}F_{n-1}\]
and
\[C_{b,n-1}(t)=-g_{2}\sqrt{\frac{\lambda_{n}}{\lambda_{n-1}}}B_{1}\left[\frac{e^{i( \Delta^{{}^{\prime}}/2+\beta)t}-1}{(\Delta^{{}^{\prime}}/2+\beta)}-\frac{e^{i( \Delta^{{}^{\prime}}/2-\beta)t}-1}{(\Delta^{{}^{\prime}}/2-\beta)}\right]+\sin \frac{\alpha}{2}e^{-i\psi}F_{n-1}\]
Substituting the values of \(C_{c,n}(t)\), \(C_{a,n-1}(t)\) and \(C_{b,n-1}(t)\) in (4) we can obtain the state vector of the system at time \(t\) in the interaction picture.
At this stage we assume that
\[\alpha=90^{\circ},\quad\psi=0,\quad\text{and}\quad F_{n}\approx F_{n-1}\]
This reduces the coefficients of (4) into
\[\left.\begin{array}{l}C_{c,n}(t)=B_{1}\{e^{-i(\Delta^{{}^{\prime}}/2+\beta)t }-e^{-i(\Delta^{{}^{\prime}}/2-\beta)t}\}\\ \\ C_{a,n-1}(t)=-g_{1}\sqrt{\frac{\lambda_{n}}{\lambda_{n-1}}}B_{1}\left[\frac{e^ {i(\Delta^{{}^{\prime}}/2+\beta)t}-1}{(\Delta^{{}^{\prime}}/2+\beta)}-\frac{e ^{i(\Delta^{{}^{\prime}}/2-\beta)t}-1}{(\Delta^{{}^{\prime}}/2-\beta)}\right] +\frac{1}{\sqrt{2}}F_{n}\\ \\ C_{b,n-1}(t)=-g_{2}\sqrt{\frac{\lambda_{n}}{\lambda_{n-1}}}B_{1}\left[\frac{e ^{i(\Delta^{{}^{\prime}}/2+\beta)t}-1}{(\Delta^{{}^{\prime}}/2+\beta)}-\frac{e ^{i(\Delta^{{}^{\prime}}/2-\beta)t}-1}{(\Delta^{{}^{\prime}}/2-\beta)}\right] +\frac{1}{\sqrt{2}}F_{n}\\ \end{array}\right\} \tag{7}\]
with
\[B_{1}=\sqrt{\frac{\lambda_{n}}{\lambda_{n-1}}}\frac{F_{n}(g_{1}+g_{2})}{2 \sqrt{2}\beta}\text{ \ and \ }\beta^{2}=\Delta^{{}^{\prime}}{}^{2}/4+(g_{1}^{2}+g_{2}^{2})\frac{\lambda_{n}} {\lambda_{n-1}}\]
We assume that the atom enters the cavity with the initial state
\[|\psi(0)\rangle=\sum_{n}\frac{1}{\sqrt{2}}F_{n-1}\left(\left|a,\frac{n-1}{ \sqrt{\lambda_{n-1}}}\right\rangle+\left|b,\frac{n-1}{\sqrt{\lambda_{n-1}}} \right\rangle\right)\]
and after the evolution for time \(t_{1}\), the state vector of the considered atom-field system becomes
\[\left|\psi(t_{1})\right\rangle=\sum_{n}\left[C_{a,n-1}(t_{1})\left|a,\frac{n-1}{\sqrt{\lambda_{n-1}}}\right\rangle+C_{b,n-1}(t_{1})\left|b,\frac{n-1}{\sqrt{\lambda_{n-1}}}\right\rangle+C_{c,n}(t_{1})\left|c,\frac{n}{\sqrt{\lambda_{n}}}\right\rangle\right] \tag{8}\]
Further assuming \(g_{1}=g_{2}=g\) with zero detuning and \(\Delta^{{}^{\prime}}=0\), the system evolves to \(|\psi(t_{1})\rangle\), given by (8), where
\[\left.\begin{array}{l}C_{c,n}(t_{1})=-iF_{n}\sin\beta t_{1}\\ \\ C_{a,n-1}(t_{1})=-\frac{1}{\sqrt{2}}F_{n}(\cos\beta t_{1}-2)\\ \\ C_{b,n-1}(t_{1})=-\frac{1}{\sqrt{2}}F_{n}(\cos\beta t_{1}-2)\\ \end{array}\right\} \tag{9}\]
The state vector \(\left|\psi(t_{1})\right\rangle\) describes the time evolution of the whole atom-field system, but we now concentrate on some statistical properties of the single-mode cavity field. The field left inside the cavity after the atom departs is obtained by tracing out the atomic part of \(\left|\psi(t_{1})\right\rangle\) as
\[\left|\psi(t_{1})\right\rangle_{f}=Tr_{a}[\left|\psi(t_{1})\right\rangle], \tag{10}\]
where we have used the subscript \(a\left(f\right)\) to denote the atom (field).
This \(\left|\psi(t_{1})\right\rangle_{f}\) will be used throughout the next section to determine the statistical properties of the field left in the cavity.
## 4 Statistical properties of the radiation field
In this section we investigate two nonclassical effects, namely sub-Poissonian photon statistics and quadrature squeezing of the radiation field.
### Sub-Poissonian photon statistics
The Mandel parameter \(Q^{M}\) characterizes nonclassicality through the photon number distribution of a quantum state [44; 45]. It is defined as
\[Q^{M}\equiv\frac{\langle n^{(2)}\rangle}{\langle n\rangle}-\langle n\rangle \tag{11}\]
where
\[\langle n^{(2)}\rangle={}_{f}\langle\psi(t_{1})|A^{\dagger}A^{\dagger}AA|\psi (t_{1})\rangle_{f}\text{ and }\langle n\rangle={}_{f}\langle\psi(t_{1})|A^{\dagger}A|\psi(t_{1}) \rangle_{f}.\]
Negative values of the \(Q^{M}\) parameter indicate negativity of the \(P\) function and thus provide a witness of nonclassicality. For Poissonian statistics it is 0, while for sub-Poissonian (super-Poissonian) photon statistics it takes negative (positive) values.
Using
\[A^{\dagger}A\left|\frac{n}{\sqrt{\lambda_{n}}}\right\rangle=\frac{\lambda_{n} }{\lambda_{n-1}}\left|\frac{n}{\sqrt{\lambda_{n}}}\right\rangle,\]
and (10), we have calculated the analytical expressions for the first and second order moments as
\[\langle A^{\dagger}A\rangle=\sum_{n}2C_{a,n-1}(t_{1})\bar{C}_{a,n-1}(t_{1}) \frac{\lambda_{n-1}}{\lambda_{n-2}}+\sum_{n}C_{c,n}(t_{1})\bar{C}_{c,n}(t_{1}) \frac{\lambda_{n}}{\lambda_{n-1}}\]
and
\[\langle A^{\dagger}A^{\dagger}AA\rangle=\sum_{n}2C_{a,n-1}(t_{1})\bar{C}_{a,n -1}(t_{1})\frac{\lambda_{n-1}}{\lambda_{n-3}}+\sum_{n}C_{c,n}(t_{1})\bar{C}_{c,n}(t_{1})\frac{\lambda_{n}}{\lambda_{n-2}}\]
Thus
\[Q^{M} = \frac{\langle A^{\dagger}A^{\dagger}AA\rangle}{\langle A^{\dagger}A \rangle}-\langle A^{\dagger}A\rangle\] \[= \frac{\sum_{n}2C_{a,n-1}(t_{1})\bar{C}_{a,n-1}(t_{1})\frac{ \lambda_{n-1}}{\lambda_{n-3}}+\sum_{n}C_{c,n}(t_{1})\bar{C}_{c,n}(t_{1})\frac{ \lambda_{n}}{\lambda_{n-2}}}{\sum_{n}2C_{a,n-1}(t_{1})\bar{C}_{a,n-1}(t_{1}) \frac{\lambda_{n-1}}{\lambda_{n-2}}+\sum_{n}C_{c,n}(t_{1})\bar{C}_{c,n}(t_{1}) \frac{\lambda_{n}}{\lambda_{n-1}}}\] \[-\sum_{n}2C_{a,n-1}(t_{1})\bar{C}_{a,n-1}(t_{1})\frac{\lambda_{n- 1}}{\lambda_{n-2}}-\sum_{n}C_{c,n}(t_{1})\bar{C}_{c,n}(t_{1})\frac{\lambda_{n }}{\lambda_{n-1}}\]
Substituting \(C_{c,n}(t_{1})\) and \(C_{a,n-1}(t_{1})\) from (9), we get
\[Q^{M}=\frac{A+B}{C+D}-\left[\sum_{n}|F_{n}|^{2}(\cos\beta t_{1}-2)^{2}\frac{ \lambda_{n-1}}{\lambda_{n-2}}+\sum_{n}|F_{n}|^{2}\sin^{2}\beta t_{1}\frac{ \lambda_{n}}{\lambda_{n-1}}\right]\]
with
\[A=\sum_{n}|F_{n}|^{2}(\cos\beta t_{1}-2)^{2}\frac{\lambda_{n-1}}{\lambda_{n-3}}\]
\[B=\sum_{n}|F_{n}|^{2}\sin^{2}\beta t_{1}\frac{\lambda_{n}}{\lambda_{n-2}}\]
\[C=\sum_{n}|F_{n}|^{2}(\cos\beta t_{1}-2)^{2}\frac{\lambda_{n-1}}{\lambda_{n-2}}\]
\[D=\sum_{n}|F_{n}|^{2}\sin^{2}\beta t_{1}\frac{\lambda_{n}}{\lambda_{n-1}}\]
If the radiation field is initially in a coherent state [3], then \(F_{n}(0)=\exp(-\bar{n}/2)\frac{\bar{n}^{n/2}e^{i\zeta n}}{\sqrt{n!}}\). Substituting \(F_{n}(0)\), assuming \(\beta t_{1}\equiv\theta_{1}\) and finally taking \(t_{1}=t\) so that \(\theta_{1}=\theta\) with \(\theta=\sqrt{2\frac{\lambda_{n}}{\lambda_{n-1}}}gt\), we get
\[Q^{M}=\frac{A^{{}^{\prime}}+B^{{}^{\prime}}}{C^{{}^{\prime}}+D^{{}^{\prime}}}-e ^{-\bar{n}}\left[\sum_{n}\frac{\lambda_{n-1}}{\lambda_{n-2}}\frac{\bar{n}^{n}} {n!}(\cos\theta-2)^{2}+\sum_{n}\frac{\lambda_{n}}{\lambda_{n-1}}\frac{\bar{n}^ {n}}{n!}(\sin\theta)^{2}\right] \tag{12}\]
where
\[A^{{}^{\prime}}=e^{-\bar{n}}\sum_{n}\frac{\lambda_{n-1}}{\lambda_{n-3}}\frac{ \bar{n}^{n}}{n!}(\cos\theta-2)^{2}\]
\[B^{{}^{\prime}}=e^{-\bar{n}}\sum_{n}\frac{\lambda_{n}}{\lambda_{n-2}}\frac{ \bar{n}^{n}}{n!}(\sin\theta)^{2}\]
\[C^{{}^{\prime}}=e^{-\bar{n}}\sum_{n}\frac{\lambda_{n-1}}{\lambda_{n-2}}\frac{ \bar{n}^{n}}{n!}(\cos\theta-2)^{2}\]
\[D^{{}^{\prime}}=e^{-\bar{n}}\sum_{n}\frac{\lambda_{n}}{\lambda_{n-1}}\frac{ \bar{n}^{n}}{n!}(\sin\theta)^{2}\]
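For numerical orientation, the sketch below evaluates Eq. (12) directly. It is an illustration added here rather than the code used for the figures, and it assumes that terms whose weight indices would be negative are simply dropped from the sums.

```python
import math

def q_mandel(gt, lam, nbar=0.5, nmax=30):
    """Mandel Q^M of Eq. (12) for an initial coherent field with mean photon
    number nbar; lam(n) returns the weight lambda_n."""
    num = den = 0.0
    for n in range(1, nmax):
        w = math.exp(-nbar) * nbar ** n / math.factorial(n)
        theta = math.sqrt(2.0 * lam(n) / lam(n - 1)) * gt
        c2 = (math.cos(theta) - 2.0) ** 2
        s2 = math.sin(theta) ** 2
        if n >= 3:
            num += w * (lam(n - 1) / lam(n - 3) * c2 + lam(n) / lam(n - 2) * s2)
        elif n == 2:
            num += w * lam(n) / lam(n - 2) * s2
        if n >= 2:
            den += w * (lam(n - 1) / lam(n - 2) * c2 + lam(n) / lam(n - 1) * s2)
        else:
            den += w * lam(n) / lam(n - 1) * s2
    return num / den - den

lam_fact = lambda n: float(math.factorial(n))         # lambda_n = n!
lam_sq = lambda n: float(math.factorial(n)) ** 2      # lambda_n = (n!)^2
for gt in (0.0, 5.0, 25.0):
    print(gt, round(q_mandel(gt, lam_fact), 3), round(q_mandel(gt, lam_sq), 3))
```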
In Fig. 2, the dependence of \(Q^{M}\) using (12) is shown over the scaled time \(gt\), for an initial single-mode coherent field with \(\bar{n}=0.5\). For simplicity, we have
considered \(g_{1}=g_{2}=g\) in all of our numerical calculations. Negative values of the \(Q^{M}\) parameter indicate negativity of the \(P\) function and hence witness nonclassicality. When \(\lambda_{n}=n!\), the \(Q^{M}\) parameter fluctuates between positive and negative values, which illustrates the sub-Poissonian nature of the state. For \(gt\approx 25\), the cavity field attains the most negative \(Q^{M}\) value (\(\approx-0.3\)), demonstrating the nonclassical behaviour of the radiation field [cf. Fig. 2(a)]. For \(\lambda_{n}=(n!)^{2}\), the \(Q^{M}\) function oscillates but always remains positive; in this case, Mandel's parameter fails to identify the nonclassicality of the cavity field. When \(\lambda_{n}\) takes the value \([n]=\frac{1-q^{n}}{1-q}\) with \(q=0.5\), \(Q^{M}\) settles almost entirely into sub-Poissonian statistics. That is, the nonclassical behaviour of the radiation field depends on the choice of \(\lambda_{n}\).
So far we have considered a basic model consisting of a \(V\)-type three-level atom passing through a high-\(Q\) cavity and interacting with a single-mode coherent field contained in that cavity. Although this simple atom plus single-photonic-mode system is powerful and intuitive, it is quite challenging to realize in practice [46], because, in reality, there are typically numerous modes that all interact with an atom. These modes make up a continuous spectrum of vacuum fluctuations, each of which attempts to drive Rabi flopping of the excited atom. The interfering sum of this continuum of probability amplitudes gives rise to the more commonly observed exponential decay of an excited atom, which dissipates its energy irreversibly into the environment. Thus, demonstrating a system in which the coupling strength between the atom and a particular chosen photonic mode (e.g., a single mode defined by an optical fiber) is much stronger than all other dissipative
channels such as the vacuum/environment is a great challenge. Also, when a radiation field propagates through the environment, it inevitably interacts with its surroundings, which causes decoherence [47]. Since it is impossible to perfectly isolate a quantum system from its environment, decoherence effects are more or less unavoidable. It is well known that decoherence deteriorates the degree of nonclassicality of optical fields [48]. Here, we introduce only the cavity decay and study the effect of cavity loss on the nonclassicality witnesses under consideration. Assuming that no photon actually leaks out of the cavity, the evolution of the system is governed by the non-Hermitian Hamiltonian [49; 50]
\[H_{\mathrm{loss}}=H-\frac{ik}{2}A^{\dagger}A, \tag{13}\]
where \(k\) is the cavity decay rate and \(H\) is given by (3). Solving Schrodinger equation with respect to \(H_{\mathrm{loss}}\) and assuming \(\Delta^{\prime}=0\), \(g_{1}=g_{2}=g\), the state vector of this atom-field coupled system at time \(t_{2}\) can be described by \(|\psi^{\prime}(t_{2})\rangle\) with
\[\begin{split} C^{\prime}_{c,n}(t_{2})&=-2B_{1}e^{- \frac{kt_{2}}{4}}\sin\beta^{\prime}t_{2}\\ C^{\prime}_{a,n-1}(t_{2})&=-g\sqrt{\frac{\lambda_ {n}}{\lambda_{n-1}}}B_{1}\left[\frac{1}{\beta^{\prime 2}+\frac{k^{2}}{4}}(\cos \beta^{\prime}t_{2}-2)+\frac{k^{2}}{\beta^{\prime 2}+\frac{k^{2}}{4}}\sin \beta^{\prime}t_{2}\right]\\ &\quad+\frac{1}{\sqrt{2}}F_{n}\\ C^{\prime}_{b,n-1}(t_{2})&=-g\sqrt{\frac{\lambda_ {n}}{\lambda_{n-1}}}B_{1}\left[\frac{1}{\beta^{\prime 2}+\frac{k^{2}}{4}}(\cos \beta^{\prime}t_{2}-2)+\frac{k^{2}}{\beta^{\prime 2}+\frac{k^{2}}{4}}\sin \beta^{\prime}t_{2}\right]\\ &\quad+\frac{1}{\sqrt{2}}F_{n}\end{split} \tag{14}\]
with
\[B_{1}=\sqrt{\frac{\lambda_{n}}{\lambda_{n-1}}}\frac{F_{n}\,g}{\sqrt{2}\beta^{ \prime}}\ \ \text{and}\ \ \beta^{\prime 2}=2g^{2}\left(\frac{\lambda_{n}}{\lambda_{n-1}}\right)-\frac{k^{2}}{16}\]
By using \(|\psi^{\prime}(t_{2})\rangle\) given in (14), the Mandel parameter after taking the cavity loss into account, \(Q_{\mathrm{loss}}^{M}\), can easily be obtained from (11). In Fig. 3, we show the variation of \(Q_{\mathrm{loss}}^{M}\) for cavity decay rate \(k=0.1\); the other parameter values remain the same as in Fig. 2. We notice that for all choices of \(\lambda_{n}\), namely \(n!,\ (n!)^{2},\ [n]\), the Mandel parameter approaches non-negative values more quickly than in the case without decay. That means the presence of cavity decay notably reduces the nonclassicality of the radiation field. How the nonclassicality of the cavity field is affected by higher values of \(k\) remains a topic for further investigation.
### Squeezing properties of the radiation field
A general class of minimum-uncertainty states is known as squeezed states [51]. A squeezed state may have less noise in one quadrature than a coherent state; to satisfy the requirements for being a minimum-uncertainty state, the noise in the other quadrature must then be greater than that of a coherent state. That means, if the fluctuation of the radiation field \(\triangle X\) in a quadrature \(X\) goes below the square root of the uncertainty product, the fluctuation \(\triangle Y\) in the other quadrature \(Y\) must be greater than it, and vice-versa [1]. To analyze the squeezing properties of the radiation field [52; 53], we introduce two Hermitian quadrature operators
\[X=A+A^{\dagger},\;\;\;Y=-i(A-A^{\dagger})\]
These two quadrature operators satisfy the commutation relation
\[[X,Y]=2i\left(\frac{\lambda_{N+1}}{\lambda_{N}}-\frac{\lambda_{N}}{\lambda_{N- 1}}\right).\]
and thus the corresponding uncertainty relation is
\[\langle(\Delta X)^{2}\rangle\langle(\Delta Y)^{2}\rangle\geq\left(\frac{ \lambda_{N+1}}{\lambda_{N}}-\frac{\lambda_{N}}{\lambda_{N-1}}\right)^{2}.\]
A state is said to be squeezed if either \(\langle(\Delta X)^{2}\rangle\) or \(\langle(\Delta Y)^{2}\rangle\) is less than \(\left(\frac{\lambda_{N+1}}{\lambda_{N}}-\frac{\lambda_{N}}{\lambda_{N-1}}\right)\). To review the principle of squeezing, we consider an appropriate quadrature operator
\[X_{\theta}=X\cos\theta+Y\sin\theta=Ae^{-i\theta}+A^{\dagger}e^{i\theta}\]
which gives
\[\begin{array}{l}\langle(\Delta X_{\theta})^{2}\rangle=(\langle A^{2}\rangle-\langle A\rangle^{2})e^{-2i\theta}+({\langle{A^{\dagger}}^{2}\rangle}-\langle A^{\dagger}\rangle^{2})e^{2i\theta}+\langle AA^{\dagger}\rangle\\ \qquad\qquad-\langle A\rangle\langle A^{\dagger}\rangle+\langle A^{\dagger}A\rangle-\langle A^{\dagger}\rangle\langle A\rangle\end{array}\]
After observing \(\langle A\rangle=\overline{\langle A^{\dagger}\rangle}\), we obtain
\[\langle:(\Delta X_{\theta})^{2}:\rangle=\bar{\zeta}e^{-2i\theta}+\zeta e^{2i \theta}+2\langle A^{\dagger}A\rangle-2|\langle A^{\dagger}\rangle|^{2}\]
where \(\zeta={\langle{A^{\dagger}}^{2}\rangle}-\langle A^{\dagger}\rangle^{2}\). Finally we have
\[\begin{array}{l}S_{\rm opt}=\langle:(\Delta X_{\theta})^{2}:\rangle_{\rm min }\\ =-2|{\langle{A^{\dagger}}^{2}\rangle}-\langle A^{\dagger}\rangle^{2}|+2 \langle A^{\dagger}A\rangle-2|\langle A^{\dagger}\rangle|^{2}\end{array} \tag{15}\]
The expectations \({\langle{A^{\dagger}}^{2}\rangle}\) and \(\langle A^{\dagger}\rangle\) with respect to the state vector \(|\psi(t_{1})\rangle\) (8) can be calculated as
\[{\langle{A^{\dagger}}^{2}\rangle}=2\sum_{n}\sqrt{\frac{\lambda_{n+1}}{\lambda_ {n-1}}}C_{a,n-1}\bar{C}_{a,n-1}+\sum_{n}\sqrt{\frac{\lambda_{n+2}}{\lambda_{n} }}C_{c,n}\bar{C}_{c,n} \tag{16}\]
and
\[\langle A^{\dagger}\rangle=2\sum_{n}\sqrt{\frac{\lambda_{n}}{\lambda_{n-1}}}C_ {a,n-1}\bar{C}_{a,n-1}+\sum_{n}\sqrt{\frac{\lambda_{n+1}}{\lambda_{n}}}C_{c,n }\bar{C}_{c,n} \tag{17}\]
Substituting (16) and (17) in (15), we have obtained an expression of \(S_{\rm opt}\) for initial coherent state \(F_{n}(0)=\exp(-\bar{n}/2)\frac{\bar{n}^{n/2}e^{i\zeta n}}{\sqrt{n!}}\). We have investigated the possibility of observing squeezing analytically and plotted \(S_{\rm opt}\) as a function of scaled time \(gt\) for \(\bar{n}=0.3\), by choosing \(\lambda_{n}\sim n!,(n!)^{2}\) and \([n]!\) with \([n]=(1-q^{n})/(1-q),0<q<1\), respectively.
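A corresponding numerical sketch for the optimal quadrature variance is given below; again this is an added illustration that evaluates Eqs. (15)-(17) with the coefficients (9), dropping terms with negative weight indices, and is not the code behind Figs. 4 and 5.

```python
import math

def s_opt(gt, lam, nbar=0.3, nmax=30):
    """S_opt of Eq. (15) using the expectation values (16)-(17) for an
    initial coherent field with mean photon number nbar."""
    a_dag_sq = a_dag = n_op = 0.0
    for n in range(1, nmax):
        Fn2 = math.exp(-nbar) * nbar ** n / math.factorial(n)    # |F_n|^2
        theta = math.sqrt(2.0 * lam(n) / lam(n - 1)) * gt
        ca2 = 0.5 * Fn2 * (math.cos(theta) - 2.0) ** 2            # |C_{a,n-1}|^2
        cc2 = Fn2 * math.sin(theta) ** 2                          # |C_{c,n}|^2
        a_dag_sq += 2 * math.sqrt(lam(n + 1) / lam(n - 1)) * ca2 \
                    + math.sqrt(lam(n + 2) / lam(n)) * cc2
        a_dag += 2 * math.sqrt(lam(n) / lam(n - 1)) * ca2 \
                 + math.sqrt(lam(n + 1) / lam(n)) * cc2
        if n >= 2:
            n_op += 2 * ca2 * lam(n - 1) / lam(n - 2)
        n_op += cc2 * lam(n) / lam(n - 1)
    return -2 * abs(a_dag_sq - a_dag ** 2) + 2 * n_op - 2 * a_dag ** 2

lam_fact = lambda n: float(math.factorial(n))
for gt in (0.0, 2.0, 5.0):
    print(gt, round(s_opt(gt, lam_fact), 4))   # negative values signal squeezing
```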
Fig. 4 shows the first-order squeezing in terms of the scaled time \(gt\) for different \(\lambda_{n}\). In all cases squeezing is seen, which is clear evidence of the nonclassical nature of the radiation field. In Fig. 5, we illustrate the squeezing parameter with cavity decay rate \(k=0.5\). It is clear that the magnitude of squeezing decreases in the presence of cavity decay. It is also observed that with this amount of decay, squeezing still occurs while the negativity of \(Q_{\rm loss}^{M}\) disappears [see Fig. 3]. That means the squeezing parameter performs better than Mandel's \(Q^{M}\) in detecting the nonclassical character of the radiation field when cavity loss arises.
## 5 Conclusion
In this paper, we have studied an atom-cavity field interaction in interacting Fock space. After giving a brief introduction to the interacting Fock space, we found the state vector for the atom-field coupling. The radiation field becomes super-Poissonian when \(\lambda_{n}=(n!)^{2}\). The behaviour of Mandel's \(Q^{M}\) also reveals that the state remains almost nonclassical when \(\lambda_{n}=[n]\). We have
also plotted the normal squeezing against the scaled time \(gt\) and shown that squeezing occurs for different \(\lambda_{n}\) as \(gt\) varies. We have also checked the effect of the cavity decay rate on these nonclassicality witnesses. Our results establish that the cavity decay rate exerts substantial control over reducing the nonclassicality of the cavity field. This work clearly demonstrates the dynamics of a three-level atom-cavity interaction in interacting Fock space and the effects of different parametric values of \(\lambda_{n}\) on Mandel's \(Q^{M}\) and normal squeezing.

Figure 4: (Color online) Squeezing as a function of \(gt\) for \(\alpha_{0}=0.3\) and for different values of \(\lambda_{n}\) such that \((a)\)\(\lambda_{n}=n!\); \((b)\)\(\lambda_{n}=(n!)^{2}\) and \((c)\)\(\lambda_{n}=[n]\).
The atom-cavity model is an extremely useful model from the perspective of quantum information theory (QIT) and quantum computation. The nonclassical interaction between the atomic levels and the cavity modes gives rise to entanglement, which is the primary resource for performing various QIT tasks such as superdense coding, quantum teleportation and quantum key distribution [54; 55]. Another important aspect of studying such models is the distinct possibility of generating these bipartite interactions in the laboratory for investigating and applying the correlated system in future quantum tasks. Therefore, understanding the basic dynamics of nonclassicality and the related controlling factors in a simple generalized atom-field interaction is of utmost importance for the realization of such quantum systems.
|
2306.04535 | PromptAttack: Probing Dialogue State Trackers with Adversarial Prompts | A key component of modern conversational systems is the Dialogue State
Tracker (or DST), which models a user's goals and needs. Toward building more
robust and reliable DSTs, we introduce a prompt-based learning approach to
automatically generate effective adversarial examples to probe DST models. Two
key characteristics of this approach are: (i) it only needs the output of the
DST with no need for model parameters, and (ii) it can learn to generate
natural language utterances that can target any DST. Through experiments over
state-of-the-art DSTs, the proposed framework leads to the greatest reduction
in accuracy and the best attack success rate while maintaining good fluency and
a low perturbation ratio. We also show how much the generated adversarial
examples can bolster a DST through adversarial training. These results indicate
the strength of prompt-based attacks on DSTs and leave open avenues for
continued refinement. | Xiangjue Dong, Yun He, Ziwei Zhu, James Caverlee | 2023-06-07T15:41:40Z | http://arxiv.org/abs/2306.04535v1 | # PromptAttack: Probing Dialogue State Trackers with Adversarial Prompts
###### Abstract
A key component of modern conversational systems is the Dialogue State Tracker (or DST), which models a user's goals and needs. Toward building more robust and reliable DSTs, we introduce a prompt-based learning approach to automatically generate effective adversarial examples to probe DST models. Two key characteristics of this approach are: (i) it only needs the output of the DST with no need for model parameters, and (ii) it can learn to generate natural language utterances that can target any DST. Through experiments over state-of-the-art DSTs, the proposed framework leads to the greatest reduction in accuracy and the best attack success rate while maintaining good fluency and a low perturbation ratio. We also show how much the generated adversarial examples can bolster a DST through adversarial training. These results indicate the strength of prompt-based attacks on DSTs and leave open avenues for continued refinement.
## 1 Introduction
Task-oriented dialogue systems aim to help users with tasks through a natural language conversation. Example tasks include booking a hotel or completing a do-it-yourself project. A key component for enabling a high-quality task-oriented dialogue system is the _Dialogue State Tracker_ (or DST) which plays an important role in understanding users' goals and needs Wu et al. (2019); Hosseini-Asl et al. (2020); Li et al. (2021); Dai et al. (2021); Feng et al. (2021); Zhao et al. (2021); Balaraman et al. (2021). For example in Figure 0(a), given the user utterance "I am looking for a cheap restaurant in the center of the city", the DST extracts the user's preference for booking a restaurant, which is typically represented as slot-value pairs such as (restaurant-price range, cheap) and (restaurant-area, center). The current state of the conversation is a primary driver of the subsequent dialogue components (e.g., what is the next action to take? what is the appropriate response to generate?).
For a conversational system designer, it is critical that a deployed DST be robust and reliable, even in the presence of a wide variety of user utterances. Many of these systems are trained over previous user utterances and so may have only limited coverage of the space of these utterances. Further, beyond these benign users, there is also a long history of spammers, trolls, and malicious users who aim to intentionally undermine deployed systems.
Indeed, recent work has demonstrated that careful construction of adversarial examples can cause failures in the DST Li et al. (2021); Liu et al. (2021), leading to incorrect slot-value pairs and degraded user experience. These approaches, however, are mainly hand-crafted or based on heuristics. As a result, there is a research gap in learning-based methods for probing DSTs centered around three key questions: (i) How can we systematically learn effective adversarial examples? (ii) What impact do such discovered examples have on the quality of state-of-the-art DSTs? and (iii) Can we
build more robust DSTs even in the presence of such adversarial examples? Further compounding these questions are the inherent challenges of adversarial examples in the context of a DST: that is, the examples should preserve the semantics of a non-adversarial input while leading to an incorrect prediction _even in the presence of the correct slot-value in the adversarial input_ as illustrated in Figure 0(b). For example, an adversarial example based on the user utterance "I am looking for a cheap restaurant" that maps to the slot-value pair (restaurant-price range, cheap) should preserve the user intent for "cheap" while leading to the incorrect prediction (restaurant-price range, expensive).

Figure 1: Dialogue examples and adversarial examples.
Hence, in this paper, we propose a novel prompt-based learning approach called _PromptAttack_ to automatically generate effective adversarial examples to probe DST models. Our approach builds on recent advances in prompt learning, which has demonstrated a strong ability in probing knowledge in pre-trained language models for many NLP tasks Gao et al. (2021); Li and Liang (2021); Liu et al. (2021); Zhu et al. (2022). Concretely, we first show how to find effective adversarial prompts in both a discrete and a continuous setting. In both cases, our approach needs only the output of the DST (e.g., (restaurant-price range, cheap)) with no need for model parameters or other model details. Second, we use the adversarial prompts to generate adversarial examples via a mask-and-filling protocol, resulting in natural language utterances that can be targeted at any DST. As a result, such a prompt-based attack can be widely applied.
Through experiments over four state-of-the-art DSTs and versus competitive baselines, we find that the prompt-based framework leads to the greatest reduction in accuracy for all DSTs, ranging from a 9.3 to 31.0 loss of accuracy of the DST making a correct slot-value prediction. Further, we observe that PromptAttack results in the best attack success rate (that is, how many of the adversarial examples lead to incorrect predictions). Moreover, the generated adversarial examples maintain good fluency and low perturbation ratio, evidence that they are close to legitimate non-adversarial user inputs. We also show how such a prompt-based attack can be used to bolster a DST by augmenting the original training data with adversarial examples, leading to a significant increase in accuracy (from 61.3 to 67.3). These and other results indicate the strength of prompt-based attacks on DSTs and leave open avenues for continued refinement.1
Footnote 1: Our code is publicly available at [https://github.com/dongxiangjue/PromptAttack](https://github.com/dongxiangjue/PromptAttack).
## 2 Related Work
Adversarial examples have been widely explored to investigate the robustness of models Goodfellow et al. (2015). Recent work in the NLP domain has targeted tasks like text classification and inference Pruthi et al. (2019); Ren et al. (2019); Morris et al. (2020); Jin et al. (2020); Li et al. (2020); Yang et al. (2022); Lei et al. (2022), reading comprehension Jia and Liang (2017); Bartolo et al. (2021), named entity recognition Simoncini and Spanakis (2021), and machine translation Belinkov and Bisk (2018). These works typically aim to construct examples that are imperceptible to human judges while misleading the underlying model to make an incorrect prediction, while also maintaining good fluency and semantic consistency with original inputs Li et al. (2020). Only a few works have begun to explore adversarial examples in DSTs like CoCo Li et al. (2021), which aims to test the robustness of models by creating novel and realistic conversation scenarios. They show that DST models are susceptible to both unseen slot values generated from in and out of the slot domain. Liu et al. (2021) propose a model-agnostic toolkit to test the robustness of task-oriented dialogue systems in terms of three aspects: speech characteristics, language variety, and noise perturbation. The adversarial examples are based on heuristics and it is unclear how to adapt such an approach to new victim models effectively without more hand-crafted templates. In contrast, we explore in this paper the potential of a learning-based approach to generate effective adversarial examples.
Prompt learning is a recently proposed paradigm for using prompts to better probe and adapt large pre-trained language models (PLMs) to a variety of NLP tasks, e.g., text classification and inference Gao et al. (2021); Yang et al. (2022); Wang et al. (2022), factual probing Zhong et al. (2021), summarization Li and Liang (2021), and dialogue systems Madotto et al. (2021); Lee et al. (2021); Zhu et al. (2022); Yang et al. (2023). With the increase in the size of PLMs, prompt learning has been shown to be parameter-efficient Liu et al. (2021); He et al. (2022); Lu et al. (2023). There are two types of prompts: discrete (or hard) prompts and continuous (or soft) prompts. Discrete prompts are human-designed text strings Brown et al. (2020) while continuous prompts are continuous embeddings. Soft prompts, proposed by Lester et al. (2021), prepend a sequence of continuous vectors to the input, freeze the language model parameters, and then back-propagate the error during tuning. In this paper, we explore both approaches in the design of our prompt-based attack framework.
Recent works have begun to explore how prompts can be helpful in exposing fundamental flaws in large language models. Yang et al. (2022) show how to manually design prompts to flip the output of a model for classification tasks. However, it is time-consuming to design and find the prompts that are most effective for generating adversarial examples capable of successfully attacking victim models. It remains an open question how to leverage prompt learning to uncover effective adversarial prompts.
## 3 PromptAttack
Our prompt-based learning approach proceeds in two stages. First, our goal is to identify adversarial prompts that can effectively probe a DST to reveal gaps in its robustness. In the second, we use these prompts to create adversarial examples that can attack DSTs successfully while maintaining good fluency. Figure 2 shows an overview of the proposed approach. In the following, we first formalize DSTs and the problem of probing a DST. Then, we introduce the details of PromptAttack.
### Task Formulation
Dst Task.Let \(C_{T}=\{(r_{1},u_{1}),\ldots,(r_{T},u_{T})\}\) represent a \(T\)-turn dialogue, where \(r_{i}\) and \(u_{i}(1\leq i\leq T)\) are the system response and user utterance at the \(i\)-th turn, respectively. Each turn \((r_{i},u_{i})\) contains several slots (e.g., arrive by, leave at) in a specific domain (e.g., taxi), where we denote the \(N\) domain-slot pairs as \(S=\{s_{1},\ldots,s_{N}\}\). At turn t, we denote current user utterance \(u_{t}\) and previous dialogue context \(C_{t}=\{(r_{1},u_{1}),\ldots,(r_{t-1},u_{t-1}),r_{t}\}\). A DST model aims to extract the dialogue belief state \(B_{t}=\{(s_{1},v_{1}),\ldots,(s_{N},v_{N})\}\) for \(u_{t}\), where \(v_{j}\) is the associated value for each slot \(s_{j}(1\leq j\leq N)\). For example, given a dialogue ("\(\ldots\) _I am looking for expensive Mediterranean food._"), the DST model aims to extract expensive for the slot restaurant-price range and Mediterranean for the slot restaurant-food.
Attacking a DST.Given dialogue history \(C_{t}\), current user utterance \(u_{t}\), and dialogue belief states \(B_{t}\), the purpose of an adversarial attack on a DST is to intentionally perturb the original user utterance \(u_{t}\) to get an adversarial example \(u_{t}^{{}^{\prime}}\) with the two following characteristics: (i) it should mislead the DST model \(f\) to incorrectly predict \(B_{t}^{{}^{\prime}}\), and (ii) it should be fluent in grammar and consistent with the semantics of the original utterance \(u_{t}\) by keeping the slot-value-related information in \(u_{t}\) unchanged. If the adversary can achieve \(f(u_{t}^{{}^{\prime}})=B_{t}^{{}^{\prime}}\), we say the adversarial example \(u_{t}^{{}^{\prime}}\) attacks \(f\) successfully.
### Finding Adversarial Prompts
We begin by focusing on the first stage of PromptAttack: how to find the most effective adversarial prompts. We explore both discrete prompts (as illustrated in Figure 1(a)) and continuous prompts (as illustrated in Figure 1(b)). A discrete prompt approach is a human-designed natural language prompt that is easy to interpret. We pair this with
a treatment of continuous prompts that have more representation capacity.

Figure 2: Overview of PromptAttack.
Discrete Prompt Construction.To begin with, how can we design discrete prompts? For the DST task, it is time-consuming to manually design sentences containing values that are opposite to the ground truth values for each slot as adversarial prompts. Thus, we apply an intuitive template derived from belief states as an adversarial prompt template: "belief states: [s] = [v];". First, we use the DST model to extract value \(v_{i}\) for each slot \(s_{i}\) in \(u_{t}\). If \(v_{i}\) is not empty, the corresponding slot name \(s_{i}\) is filled in [s]. Then we pick a random value \(v_{i}^{{}^{\prime}}\) from a predefined in-domain Slot-Value Dictionary (Li et al., 2021) where \(v_{i}^{{}^{\prime}}\) and \(v_{i}\) are under the same slot \(s_{i}\). The new random value \(v_{i}^{{}^{\prime}}\) is used to fill the [v] in the template. Thus, the adversarial prompt becomes "belief states: \(s_{i}\) = \(v_{i}^{{}^{\prime}}\);". As in Figure 2, given \(u_{t}\) ("I am looking for cheap food."), the predicted \(B_{t}\) is {(_restaurant-price range, cheap_}), then the adversarial prompt is _"belief states: restaurant-price range = expensive"_, where "expensive" is a random value that is different from the predicted value "cheap".
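A minimal sketch of this template filling is shown below; the slot-value dictionary entries and helper names are illustrative placeholders and are not taken from the released code.

```python
import random

# Hypothetical in-domain slot-value dictionary (illustrative entries only).
SLOT_VALUE_DICT = {
    "restaurant-price range": ["cheap", "moderate", "expensive"],
    "restaurant-area": ["centre", "north", "south", "east", "west"],
}

def build_discrete_prompt(predicted_belief_state):
    """Fill the template 'belief states: [s] = [v];' with a random in-domain
    value that differs from the value predicted by the victim DST."""
    parts = []
    for slot, value in predicted_belief_state.items():
        if not value or value == "none":
            continue
        candidates = [v for v in SLOT_VALUE_DICT.get(slot, []) if v != value]
        if candidates:
            parts.append(f"belief states: {slot} = {random.choice(candidates)};")
    return " ".join(parts)

# Victim DST prediction for "I am looking for cheap food."
print(build_discrete_prompt({"restaurant-price range": "cheap"}))
# e.g. -> belief states: restaurant-price range = expensive;
```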
Such a template does not have access to true slot-value pairs of the test set and only utilizes the predictions from the victim models. Since the discrete prompts are human-designed, they are more human-readable and easier to interpret. However, to obtain a prompt for each input, victim models must be queried multiple times, which may be unrealistic in some scenarios. Hence, we take the next step to search for better prompts in the embedding space of the model. Specifically, we directly optimize the continuous input embedding space through continuous prompt tuning to find the adversarial prompt vectors that are most effective.
**Continuous Prompt Tuning.** Continuous prompts are input-agnostic sequences of embeddings with tunable parameters that are optimized directly in the continuous embedding space of the model, as shown in Figure 2. In our task, the length of continuous prompt \(\textbf{p}_{att}\) is \(m\), denoted as \(\textbf{p}_{att}=\textbf{p}_{1}\ldots\textbf{p}_{m}\) where each \(\textbf{p}_{i}\in\mathbb{R}^{d}(1\leq i\leq m)\) is a dense vector with the same dimension \(d\) as the DST's input embedding (e.g., 768 for TripPy). Given the initialization of \(\textbf{p}_{att}\), we concatenate it with the representation of user utterance \(\textbf{e}_{u}\) and update it by keeping all other model parameters fixed and optimize the loss of the training set. To find the adversarial prompts \(\textbf{p}_{att}\) that could lead DST models \(f\) to wrong predictions \(B_{t}^{{}^{\prime}}\) effectively, we maximize the loss for the ground truth belief states \(B_{t}\) for all user utterance in the training set with the following objective:
\[\operatorname*{arg\,max}_{\textbf{p}_{att}}\ \mathbb{E}_{\textbf{u}\sim\mathcal{U}} \left[\mathcal{L}\left(B_{t},f\left(\textbf{p}_{att};\textbf{e}_{u}\right) \right)\right],\]
where \(\mathcal{U}\) are user utterances and \(\mathcal{L}\) is the loss function of the DST task. By maximizing the loss for the ground truth belief states we aim to find prompts that force the model to make the most wrong predictions by pushing far apart from the ground truth, like guessing "expensive" instead of "cheap" for \(u_{t}\) ("I am looking for cheap food.").
In addition, we explore an alternative tuning objective - minimizing the loss. We replace all the non-empty values in \(B_{t}\) to empty (e.g., (restaurant-price range, expensive) changes to (restaurant-price range, none)) and then minimize the loss:
\[\operatorname*{arg\,min}_{\textbf{p}_{att}}\ \mathbb{E}_{\textbf{u}\sim\mathcal{U}} \left[\mathcal{L}\left(B_{t}^{{}^{\prime}},f\left(\textbf{p}_{att};\textbf{e} _{u}\right)\right)\right],\]
where \(B_{t}^{{}^{\prime}}\) is the set of target belief states. Different from our previous tuning objective, here we aim to find prompts that force the model to fail to extract the correct value for the slot from user utterances. For example, the DST will fail to extract "cheap" for slot _price range_ in \(u_{t}\) ("I am looking for cheap food.") and thus the predicted belief states will become (restaurant-price range, none).
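The following PyTorch sketch illustrates the loss-maximisation objective with a toy frozen classifier standing in for the victim DST; the model, dimensions, token ids and learning rate are placeholders rather than the TripPy setup, and the loss-minimisation variant is obtained by replacing the gold labels with empty-value targets and flipping the sign of the update.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Toy frozen "DST": embeds token ids and classifies one slot's value from the
# mean-pooled sequence.  Only the prompt vectors receive gradient updates.
embed = nn.Embedding(1000, 32)
classifier = nn.Linear(32, 5)                    # 5 candidate values for a slot
for p in list(embed.parameters()) + list(classifier.parameters()):
    p.requires_grad_(False)

m, d = 10, 32                                    # prompt length, embedding size
prompt = nn.Parameter(torch.randn(m, d) * 0.02)  # p_att, the only trainable tensor
optimizer = torch.optim.Adam([prompt], lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

utterance_ids = torch.tensor([[12, 47, 300, 8, 99]])   # toy token ids
gold_value = torch.tensor([3])                          # ground-truth value id

for step in range(100):
    e_u = embed(utterance_ids)                          # (1, n, d)
    x = torch.cat([prompt.unsqueeze(0), e_u], dim=1)    # prepend p_att to e_u
    logits = classifier(x.mean(dim=1))
    loss = loss_fn(logits, gold_value)
    optimizer.zero_grad()
    (-loss).backward()        # gradient ascent: push the frozen model to be wrong
    optimizer.step()
```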
### Adversarial Example Construction
Next, we focus on the second stage of Prompt-Attack: how can we use these prompts to create adversarial examples that can attack DSTs successfully while maintaining good fluency? After obtaining the adversarial prompts, we use them to generate adversarial examples via mask-and-filling (Li et al., 2021; Yang et al., 2022; Lei et al., 2022) by pre-trained masked language models. Specifically, we tokenize user utterance \(u_{t}\) to a list of tokens, \(u_{t}=[w_{u}^{1},w_{u}^{2},\ldots,w_{u}^{n}]\). Then we randomly mask tokens that are not values in \(B_{t}\), slot-related words, or stopwords with a special token [MASK] and denote the masked \(u_{t}\) as \(u_{t}^{m}=[w_{u}^{1},\texttt{[MASK]},\ldots,w_{u}^{n}]\). Shown in Figure 2, we concatenate the adversarial prompts and the masked utterance \(\textbf{u}_{t}^{m}\) and use a masked language model \(\mathcal{M}\) to predict masked text pieces and generate the perturbations based on surrounded context. As shown in Table 1, for discrete prompt \(\textbf{p}_{att}^{d}\), the input for \(\mathcal{M}\) would be
the concatenation of \(\textbf{p}^{d}_{att}\) and \(\textbf{u}^{m}_{t}\) while for continuous prompt \(\textbf{p}^{c}_{att}\), the input would be the concatenation of \(\textbf{p}^{c}_{att}\) and embedding of masked user utterance \(\textbf{e}^{1}_{u}\)[MASK] \(\textbf{e}^{n}_{u}\). Hence, with \(\textbf{p}_{att}\) and the capability of MLM, the model \(\mathcal{M}\) will fill in the blanks with context-consistent tokens which can keep the sentence fluency while maximizing the risk of the DST making wrong predictions, denoted as \(P(\texttt{[MASK]}=w|\textbf{p}_{att};\textbf{u}^{m}_{t})\), where \(w\) is the generated perturbation. After filling [MASK] with \(w\) and removing \(\textbf{p}_{att}\), the filled user utterances are used as adversarial examples to attack victim models.
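As a rough illustration of the mask-and-fill step, the sketch below uses the Hugging Face fill-mask pipeline with a BERT masked language model; masking a single token and taking the top prediction are simplifications of the procedure described above, and the protected-word list is a hand-crafted stand-in for the slot-value and stopword filtering.

```python
import random
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")   # masked LM M
MASK = fill.tokenizer.mask_token

def mask_and_fill(utterance, adv_prompt, protected):
    """Mask one random non-protected token, prepend the adversarial prompt,
    and let the masked LM propose a context-consistent replacement."""
    tokens = utterance.split()
    maskable = [i for i, w in enumerate(tokens) if w.lower() not in protected]
    idx = random.choice(maskable)
    tokens[idx] = MASK
    best = fill(adv_prompt + " " + " ".join(tokens))[0]   # top prediction
    tokens[idx] = best["token_str"]
    return " ".join(tokens)                                # prompt is discarded

protected = {"cheap", "restaurant", "price", "i", "am", "a", "for", "."}
adv_prompt = "belief states: restaurant-price range = expensive;"
print(mask_and_fill("I am looking for a cheap restaurant .", adv_prompt, protected))
```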
## 4 Experimental Setup
Our experiments are designed to test the effectiveness of the proposed prompt-based approach to attack DST models. We structure the experiments around four research questions: **RQ1**: Are adversarial examples learned by PromptAttack effective and transferable? And how do these examples compare against baseline (non-prompt) approaches? **RQ2**: Are the generated adversarial examples of good quality? That is, are they fluent with a low perturbation ratio? **RQ3**: What impact do the design choices of PromptAttack have, i.e., the ratio of perturbed tokens and prompt length? **RQ4**: And finally, can the generated adversarial examples be used to improve the performance of current DST models to improve their robustness?
### Dataset
We evaluate our methods on the widely used and challenging multi-domain dialogue dataset, MultiWOZ 2.1 Eric et al. (2020),2 which contains over 10,000 dialogues spanning seven domains. Following existing work Li et al. (2021); Lee et al. (2021); Yang et al. (2022), we keep five domains (train, taxi, restaurant, hotel, and attraction) with 30 domain-slot pairs and follow the standard train/validation/test split.
Footnote 2: github.com/budzianowski/multiwoz, MIT License.
### Evaluation Metrics
We evaluate the proposed methods with a standard set of metrics Jin et al. (2020); Li et al. (2020, 2021); Simoncini and Spanakis (2021): **Joint goal accuracy (JGA):** the average accuracy of predicting all (domain-slot, value) pairs in a turn correctly. **Attack success rate (ASR):** the proportion of generated adversarial examples that successfully mislead model predictions. **Perturbation ratio (PER):** the percentage of perturbed tokens in the sentence. Each replace action accounts for one token perturbed. A lower perturbation ratio indicates more semantic consistency Li et al. (2020). **Perplexity (PPL):** a metric to evaluate the fluency of sentences. We calculate the perplexity of adversarial examples through GPT-2 Radford et al. (2019). PPL is calculated across all the adversarial examples. A lower PPL score indicates higher fluency and naturalness of the adversarial examples.
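For concreteness, simple reference implementations of the first three metrics are sketched below (perplexity additionally requires scoring with GPT-2). This is one reasonable reading of the definitions; details such as whether ASR is computed only over turns the model originally predicted correctly may differ from the evaluation code actually used.

```python
def joint_goal_accuracy(preds, refs):
    """JGA: fraction of turns whose full set of (slot, value) pairs is exact."""
    return sum(p == r for p, r in zip(preds, refs)) / len(refs)

def attack_success_rate(orig_preds, adv_preds, refs):
    """ASR: among turns predicted correctly on the original input, the share
    that becomes wrong on the adversarial input."""
    flipped = attacked = 0
    for o, a, r in zip(orig_preds, adv_preds, refs):
        if o == r:
            attacked += 1
            flipped += int(a != r)
    return flipped / attacked if attacked else 0.0

def perturbation_ratio(original, adversarial):
    """PER: fraction of whitespace tokens that were changed."""
    o, a = original.split(), adversarial.split()
    return sum(x != y for x, y in zip(o, a)) / max(len(o), 1)

ref = [{("restaurant-price range", "cheap")}]
print(joint_goal_accuracy([{("restaurant-price range", "cheap")}], ref))  # 1.0
print(perturbation_ratio("I am looking for cheap food .",
                         "I am searching for cheap food ."))             # ~0.14
```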
### Baseline Methods
We compare our methods with strong baselines capable of attacking a DST. **TP** and **SD** are two methods that keep the dialogue act labels unchanged and are implemented with the LAUG toolkit Liu et al. (2021). For a fair comparison, we do not apply slot value replacement, which would modify the slot values in the original utterances. **TP** (Text Paraphrasing) uses SC-GPT Peng et al. (2020) to generate a new utterance conditioned on the original dialogue acts as data augmentation. **SD** (Speech Disfluency) mimics the disfluency in spoken language by filling pauses ("um"), repeating the previous word, restarting by prepending a prefix "I just" before the original user utterance, and repairing by inserting "sorry, I mean" between a random slot value and the original slot value Liu et al. (2021). **SC-EDA** Liu et al. (2021) injects word-level perturbations by synonym replacement, random insertions, swaps, and deletions without changing the true belief states. **BERT-M** is introduced in this paper as another baseline method. First, we randomly mask tokens that are not slot-value related and not stopwords. Then, we use BERT Devlin et al. (2019) to generate perturbations based on the top-\(K\) predictions via mask-and-filling, where in our experiments \(K=20\). We sort the top 20 tokens by their probability scores and pick the one with the lowest probability to fill the masked position. The filled user utterance is regarded as an adversarial example.
### Victim Models
We choose the **TripPy** DST Heck et al. (2020) as our base model to train our adversarial prompts
\begin{table}
\begin{tabular}{c c} \hline \hline
**Method** & \(\textbf{p}_{att}\) + \(\textbf{u}^{m}_{t}\) (or \(\textbf{e}^{m}_{u}\)) \\ \hline PromptAttack\({}_{d}\) & belief states: [s] = [v]; \(\textbf{u}^{*}_{t}\) [MASK] \(\textbf{t}^{n}_{u}\) \\ PromptAttack\({}_{c}\) & \(\textbf{p}_{1}\)\(\textbf{p}_{2}\)\(\ldots\)\(\textbf{p}_{m}\) & \(\textbf{e}^{n}_{u}\) [MASK] \(\textbf{e}^{m}_{u}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Adversarial example generation for discrete prompts and continuous prompts.
since classification-based models have better performance and are more robust than generation-based models Liu et al. (2021). Demonstrating the susceptibility of TripPy to our adversarial examples can reveal the limitations of current DSTs, but we further explore the _transferability_ of the prompt-based attacks.
Transferability reflects the generalization of the attack methods, meaning that adversarial examples generated for one model can also effectively attack other models Zhang et al. (2020). Hence, we also evaluate the prompt-based approach learned over TripPy by targeting our adversarial examples on other popular DSTs: **TRADE**Wu et al. (2019), **SimpleTOD**Hosseini-Asl et al. (2020), and **CoCo**Li et al. (2021), one of the state-of-the-art models.3 Additional information about the implementations can be found in Appendix A.
Footnote 3: These models are fine-tuned on MultiWOZ 2.1 using code from CoCo ([https://github.com/salesforce/coco-dst](https://github.com/salesforce/coco-dst)) and follow the same post-processing strategy as CoCo. BSD 3-Clause License.
## 5 Experimental Results
Given this setup, we now investigate the four experimental research questions in turn.
### Attack Effectiveness (RQ1)
First, are the adversarial examples learned by PromptAttack effective? Table 2 summarizes the results for three versions of PromptAttack versus the baselines for the four different DSTs (TripPy, CoCo, SimpleTOD, and TRADE). We consider the discrete version of PromptAttack (denoted as \(\textbf{PromptAttack}_{d}\)) and two continuous versions: one is optimized by maximizing the training loss (denoted as \(\textbf{PromptAttack}_{cx}\)), while the other one is optimized by minimizing the loss (denoted as \(\textbf{PromptAttack}_{cn}\)).
Attack Performance. First, let's focus on the TripPy column. All versions of PromptAttack are learned over TripPy and then applied here so we can assess the susceptibility of a popular DST to adversarial examples. The four baselines lead to some degradation in accuracy (JGA), with SD performing best at a JGA of 56.5 (a 4.8 drop from the original DST).4 The three prompt-based learning approaches cause much stronger degradation, with drops of 7.7 to 9.3 points relative to the original. Our PromptAttack models significantly outperform SC-EDA, TP, and BERT-M, the methods that do not introduce new slot values into the adversarial examples, in terms of both JGA and ASR. Compared with the best of these three baselines, BERT-M, PromptAttack\({}_{cn}\) decreases JGA by a further 6.9 and increases ASR by 13.2. It also outperforms SD, the method that does introduce new slot values, by 4.5 JGA and 8.9 ASR. These observations demonstrate the attack effectiveness of the proposed PromptAttack methods over the baselines, regardless of whether the baselines introduce new slot values.
Footnote 4: We attribute this good attack performance to the fact that, although this method keeps the ground-truth slot-value labels unchanged, it prepends new slot values before the original slot values in the user utterance. This operation is effective because it can easily confuse the model about which slot values are the true ones. In contrast, our prompt-based approaches are designed to make very few changes and to avoid introducing new slot values.
Transferability. To test the transferability of the generated adversarial examples, we take the examples trained over TripPy and then use them to attack the other victim models CoCo, SimpleTOD, and TRADE. For CoCo and SimpleTOD, we see that PromptAttack outperforms all four baselines. Our best method PromptAttack\({}_{c}\) achieves
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Method** & **TripPy** & **CoCo** & **SimpleTOD** & **TRADE** \\ & **JGA** / \(\Delta\) / **ASR\(\uparrow\) & **JGA** / \(\Delta\) / **ASR\(\uparrow\) & **JGA** / \(\Delta\) / **ASR\(\uparrow\) & **JGA** / \(\Delta\) / **ASR\(\uparrow\) \\ \hline Original & 61.3 / - & 62.6 / - & - & 56.0 / - & - & 49.4 / - \\ \hline SC-EDA & 60.5 / - 0.8 / 1.9 & 61.9 / - 0.7 / 1.6 & 53.6 / - 2.4 / 9.5 & 48.8 / - 0.6 / 4.9 \\ TP & 60.3 / - 1.0 / 5.6 & 61.5 / - 1.1 / 4.7 & 52.6 / - 3.4 / 19.3 & 48.8 / - 0.6 / 14.1 \\ SD\({}^{*}\) & 56.5 / - 4.8 / 9.3 & 56.1 / - 6.5 / 11.4 & 38.8 / - 17.2 / 36.6 & **31.7 / -17.7 / 39.9** \\ BERT-M & 58.9 / - 2.4 / 5.0 & 60.1 / - 2.5 / 4.8 & 49.6 / - 6.4 / 16.4 & 45.9 / - 3.5 / 11.5 \\ \hline PromptAttack\({}_{d}\) & 53.6 / - 7.7 / 16.0 & 53.7 / - 8.9 / 16.9 & 38.9 / - 17.1 / 37.9 & 35.8 / - 13.6 / 34.0 \\ PromptAttack\({}_{cx}\) & 53.3 / - 8.0 / 16.3 & 54.1 / - 8.5 / 16.3 & **25.0 / -31.0 / 60.0** & 32.5 / - 13.7 / 24.1 \\ PromptAttack\({}_{cn}\) & **52.0 / - 9.3 / 18.2** & **52.8 / -9.8 / 18.4** & 37.4 / -18.6 / 40.6 & 35.8 / - 13.6 / 33.3 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Attack effectiveness results on MultiWOZ 2.1. **JGA** (%): joint goal accuracy; \(\Delta\) (%): the absolute difference between original JGA and JGA after attacking; **ASR** (%): attack success rate. \(\downarrow\) (\(\uparrow\)): denotes whether the lower (or higher) the better from an attack perspective. *: denotes the method that introduces new slot values. We highlight the **best** and the **second best** results.
52.8 and 25.0 JGA when attacking CoCo and SimpleTOD, showing better transferability than PromptAttack\({}_{d}\). For TRADE, PromptAttack\({}_{c}\) performs significantly better than the baselines that do not introduce new slot values. Specifically, PromptAttack\({}_{cx}\) shows a decrease of 10.2 and an increase of 20.0 in terms of JGA and ASR, respectively. In general, our PromptAttack methods show good transferability: the adversarial examples generated for one victim model can also be used to attack another model effectively.
### Adversarial Example Quality (RQ2)
Next, we examine whether the generated adversarial examples are of good quality. First, are they fluent with a low perturbation ratio? We automatically measure the perturbation ratio (PER) between the original input and adversarial examples, and the fluency by computing the perplexity (PPL). A lower perturbation ratio means fewer perturbed tokens in the original utterances, and a lower perplexity indicates better fluency. From Table 3 we observe that the PromptAttack methods achieve low perplexity and show good fluency with quite a low perturbation ratio. Specifically, our method PromptAttack\({}_{cn}\) (7.7%) achieves 169.0 PPL, showing better fluency than PromptAttack\({}_{cn}\) (28.1%) and the baselines. Although SC-EDA has a lower perturbation ratio than our PromptAttack\({}_{cn}\) (28.1%), it shows less attack effectiveness (Section 5.1) and worse fluency. Thus, there are trade-offs between perturbation ratio and attack effectiveness.
Second, do the adversarial examples preserve the semantics of the un-perturbed original sentences? That is, does an utterance asking for a cheap restaurant lead to an adversarial example that also asks for a cheap restaurant though tricking the DST to output expensive? To answer this question, we conduct a human evaluation on semantics preservation and grammatical correctness. We first shuffled 150 examples: 50 original un-perturbed sentences, 50 adversarial examples with a 7.7% perturbation ratio, and 50 with a 28.1% perturbation ratio (following the analysis in Section 5.3.1). Each of these adversarial examples successfully attacks the victim model, reducing its accuracy to 0. Following Jin et al. (2020); Li et al. (2020), we ask three human judges to rate how well a randomly chosen sentence preserves the semantics of the original sentence (_semantic_) and how grammatically correct the sentence is (_grammar_), each on a scale from 1 to 5. We report the average score across the three judges in Table 3.
As we can see, the semantic score and grammar score of the adversarial examples are close to the original ones. We find that when the perturbation is reasonable (around 8%), the semantics of the original sentence are preserved quite well (scoring 4.3 for adversarial examples). Further, the grammatical quality of the sentence is also maintained well (4.8 versus 4.4). Even as the perturbation ratio increases to approximately 28%, our approach continues to uphold good semantic preservation (3.3) while retaining satisfactory grammar quality (3.8). Overall, our method consistently generates high-quality adversarial examples by effectively preserving semantics, maintaining grammatical quality and fluency, and keeping a low perturbation ratio.
### Impact of PromptAttack Design (RQ3)
We now explore the impact of different settings on our proposed methods.
#### 5.3.1 Ratio of Perturbed Tokens
First, our prompt-based approach can control how many tokens to change in the user utterances, which gives it flexibility. Since the perturbation ratio reflects the semantic consistency between the original and adversarial examples, and since there is a trade-off between attack effectiveness and perturbation ratio, it is important to investigate how the ratio of perturbed tokens influences attack ability.
We take \(\max(1,perturbation\_ratio*l_{t})\) as the number of perturbed tokens, where \(l_{t}\) denotes the length of pre-processed utterances. We set the perturbation ratio of tokens that we could perturb to
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Method** & **PER\(\downarrow\)** & **PPL\(\downarrow\)** & **Semantic\(\uparrow\)** & **Grammar\(\uparrow\)** \\ \hline Original & - & 173.7 & - & 4.8 \\ \hline SC-EDA & 13.1 & 773.8 & 2.5 & 2.7 \\ TP & 74.4\({}^{\dagger}\) & 352.4 & 2.6 & **4.8** \\ SD* & 30.4\({}^{\dagger}\) & 270.4 & **4.3** & 4.1 \\ BERT-M & 28.1 & 221.3 & 2.8 & 4.3 \\ \hline Adv (7.7\%) & **7.7** & **169.0** & **4.3** & 4.4 \\ Adv (28.1\%) & 28.1 & 177.6 & 3.3 & 3.8 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Automatic evaluation and human evaluation results. **PER**: perturbation ratio; **PPL**: perplexity of generated adversarial examples representing fluency. \(\downarrow\) (\(\uparrow\)) denotes whether the lower (or higher) is the better. \({}^{\dagger}\): results are from original papers. * denotes the method that introduces new slot values. Adv (*): adversarial examples from PromptAttack\({}_{cn}\) with different perturbation ratios which lead the victim model’s accuracy to 0. We highlight the **best** and the **second** best results.
10%, 30%, 50%, 80%, and 100%, that is 7.7%, 10.2%, 15.2%, 22.6%, and 28.1% of the average length of all input examples. More data analysis can be found in Appendix B.
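As a small sketch of the budget formula above (our notation, with `l_t` the length of the pre-processed utterance):

```python
def num_perturbed_tokens(perturbation_ratio: float, l_t: int) -> int:
    # At least one token is always perturbed, per max(1, perturbation_ratio * l_t).
    return max(1, int(perturbation_ratio * l_t))

# e.g. a 13-token utterance at ratio 0.077 still gets one perturbed token.
assert num_perturbed_tokens(0.077, 13) == 1
```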
Table 4 shows the evaluation of attack performance and fluency of generated adversarial examples from PromptAttack\({}_{cx}\) and PromptAttack\({}_{cn}\). We observe that for these two methods, the more tokens we perturb, the lower JGA and higher ASR we get, showing better attack ability, which is consistent with our intuition. Thus, as the ratio of perturbed tokens increases, our proposed method PromptAttack achieves better attack performance while maintaining good fluency.
#### 5.3.2 Prompt Length
Next, we explore the effect of different continuous prompt lengths. Shorter prompts have fewer tunable parameters, which means that under the same training setting it is faster to optimize them and find the most effective adversarial prompts. We train continuous prompts of three lengths: 5, 10, and 15 tokens, using PromptAttack\({}_{cx}\). Table 5 shows that, for every prompt length, attack performance improves as the perturbation ratio increases. Under the same perturbation ratio, the model with the 5-token prompt achieves modestly lower JGA and higher ASR. For example, when the perturbation ratio is 28.1%, PromptAttack\({}_{cx}\) with the 5-token prompt obtains lower JGA than with the 10-token and 15-token prompts by 0.2 and 0.8, respectively, and higher ASR by 0.4 and 1.2, indicating slightly better attack performance.
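The comparison above hinges on the number of trainable parameters in the continuous prompt. The following generic prompt-tuning sketch (not the PromptAttack training code; the hidden size of 768 is an assumption for BERT-base-style encoders) shows how prompt length translates into parameter count.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """n trainable vectors prepended to the token embeddings of the victim's encoder."""
    def __init__(self, prompt_length: int, hidden_size: int = 768):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(prompt_length, hidden_size) * 0.02)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, hidden) -> (batch, prompt_length + seq_len, hidden)
        batch = token_embeddings.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, token_embeddings], dim=1)

for n in (5, 10, 15):   # the three prompt lengths compared in Table 5
    print(n, sum(p.numel() for p in SoftPrompt(n).parameters()))
```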
### Defense against Attack (RQ4)
Finally, we turn to the challenge of defending a DST in the presence of such adversarial examples. We aim to answer two questions: i) can our generated adversarial examples be used to improve the performance of current DST models? and ii) can our attack method bypass such a defense method?
One of the most effective approaches to increase the robustness of a model is adversarial training, which injects adversarial examples into the training data to increase model robustness intrinsically (Bai et al., 2021). Specifically, we first apply our attack methods on the original training dataset to generate adversarial examples. Then we re-train the TripPy model on the training set augmented by the adversarial training examples and evaluate the performance on original test set. As shown in Table 6, the new defended DST model improves JGA on the original test set from 61.3 to 67.3 by 6.0, which outperforms results reported by the state-of-the-art DST model CoCo (62.6) by 4.7. This encouraging result shows that adversarial examples from our attack method can be a good source for data augmentation.
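Schematically, the adversarial-training defense can be sketched as follows; `attack` and `train_dst` stand in for PromptAttack and the TripPy training loop, and the dictionary format is an assumption rather than the released data schema.

```python
def adversarial_training(train_set, attack, train_dst):
    """train_set: list of dicts, each with a 'user_utterance' field (assumed format)."""
    augmented = list(train_set)
    for ex in train_set:
        adv = attack(ex["user_utterance"])                   # adversarial rewrite of the user turn
        if adv is not None:
            augmented.append({**ex, "user_utterance": adv})  # same labels, perturbed utterance
    return train_dst(augmented)                              # re-train the DST on original + adversarial data
```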
To evaluate the robustness of such an augmented DST model against our proposed attack methods, we next test how well our adversarial examples perform. From Table 6 we observe that the attack methods still show strong attack ability on the new DST model. Thus, there is an opportunity to explore stronger defense methods to strengthen DSTs against such prompt-based attacks.
## 6 Conclusion
In this paper, we present a prompt-based learning approach that can generate effective adversarial
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & **JGA\({}_{d}\)** & **JGA\({}_{o}\)** & **ASR\({}_{d}\)** & **ASR\({}_{o}\)** \\ \hline Original & 67.3 & 61.3 & - & - \\ \hline SC-EDA & 66.5 & 60.5 & 1.8 & 1.9 \\ TP & 65.9 & 60.3 & 5.5 & 5.6 \\ SD\({}^{*}\) & 61.4 & 56.5 & 10.1 & 9.3 \\ BERT-M & 64.5 & 58.9 & 5.0 & 5.0 \\ \hline PromptAttack\({}_{d}\) & 60.0 & 55.8 & 12.6 & 11.3 \\ PromptAttack\({}_{cx}\) & 58.3 & 53.3 & 16.3 & 16.3 \\ PromptAttack\({}_{cn}\) & **56.8** & **52.0** & **18.5** & **18.2** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Defense results. d: defended DST model; o: original DST model.
\begin{table}
\begin{tabular}{c c|c c c c c} \hline \hline & & **7.7\%** & **10.2\%** & **15.2\%** & **22.6\%** & **28.1\%** \\ & & (1.0) & (1.5) & (2.3) & (3.5) & (4.4) \\ \hline \multirow{3}{*}{P\({}_{cx}\)} & JGA\(\downarrow\) & 59.0 & 58.1 & 56.6 & 55.1 & 53.3 \\ & ASR\(\uparrow\) & 4.6 & 6.3 & 9.5 & 12.8 & 16.3 \\ & PPL\(\downarrow\) & 159.4 & 155.9 & 157.5 & 167.0 & 175.5 \\ \hline \multirow{3}{*}{P\({}_{cn}\)} & JGA\(\downarrow\) & 58.9 & 58.1 & 56.4 & 54.0 & 52.0 \\ & ASR\(\uparrow\) & 4.9 & 6.2 & 9.7 & 14.4 & 18.2 \\ & PPL\(\downarrow\) & 169.0 & 173.7 & 177.1 & 172.2 & 177.6 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results of PromptAttack\({}_{cx}\) (P\({}_{cx}\)) and PromptAttack\({}_{cn}\) (P\({}_{cn}\)) with different perturbation ratios. (*) denotes the average number of perturbed tokens.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & & **P\({}_{5}\)** & & **P\({}_{10}\)** & & **P\({}_{15}\)** \\ & JGA\(\downarrow\) & ASR\(\uparrow\) & JGA\(\downarrow\) & ASR\(\uparrow\) & JGA\(\downarrow\) & ASR\(\uparrow\) \\ \hline
7.7\% & 59.0 & 4.6 & 59.2 & 4.3 & 59.3 & 4.6 \\
10.2\% & 58.1 & 6.3 & 58.5 & 5.9 & 58.5 & 6.0 \\
15.2\% & 56.6 & 9.5 & 57.0 & 8.8 & 57.2 & 8.8 \\
22.6\% & 55.1 & 12.8 & 55.3 & 12.6 & 55.7 & 12.3 \\
28.1\% & 53.3 & 16.3 & 53.5 & 15.9 & 54.1 & 15.1 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results of PromptAttack\({}_{cx}\) with different prompt lengths and perturbation ratios. **P\({}_{*}\)** denotes the prompt length.
examples for probing DST models. Through experiments over four state-of-the-art DSTs, our framework achieves the greatest reduction in accuracy with the best attack success rate. Moreover, the generated adversarial examples maintain good fluency and low perturbation ratio, evidence that they are close to legitimate non-adversarial user inputs. We also show our generated adversarial examples can bolster a DST by augmenting the original training data with adversarial examples. We find that both discrete and continuous adversarial prompts are capable of generating effective adversarial examples. Discrete prompts are more interpretable while continuous prompting allows us to search for optimal adversarial prompts more efficiently, and generates more effective adversarial examples.
## Limitations
The natural idea to improve robustness is to add adversarial examples to the training set and retrain the model. However, generating adversarial examples for a large training set can be very time-consuming. Thus, it would be interesting to explore more efficient methods that implicitly involve adversarial examples in the training process, e.g., Yang et al. (2022).
## Ethics Statement
The proposed methods could also be applied to natural language generation tasks, like dialogue response generation. The misuse of such methods may generate biased or offensive responses.
## Acknowledgements
We appreciate the authors of the CoCo paper for making their code accessible to the public and for taking the time to address our inquiries.
|
2307.13360 | An Axiomatic Theory for Reversible Computation | Undoing computations of a concurrent system is beneficial in many situations,
e.g., in reversible debugging of multi-threaded programs and in recovery from
errors due to optimistic execution in parallel discrete event simulation. A
number of approaches have been proposed for how to reverse formal models of
concurrent computation including process calculi such as CCS, languages like
Erlang, and abstract models such as prime event structures and occurrence nets.
However it has not been settled what properties a reversible system should
enjoy, nor how the various properties that have been suggested, such as the
parabolic lemma and the causal-consistency property, are related. We contribute
to a solution to these issues by using a generic labelled transition system
equipped with a relation capturing whether transitions are independent to
explore the implications between various reversibility properties. In
particular, we show how all properties we consider are derivable from a set of
axioms. Our intention is that when establishing properties of some formalism it
will be easier to verify the axioms rather than proving properties such as the
parabolic lemma directly. We also introduce two new properties related to
causal consistent reversibility, namely causal liveness and causal safety,
stating, respectively, that an action can be undone if (causal liveness) and
only if (causal safety) it is independent from all the following actions. These
properties come in three flavours: defined in terms of independent transitions,
independent events, or via an ordering on events. Both causal liveness and
causal safety are derivable from our axioms. | Ivan Lanese, Iain Phillips, Irek Ulidowski | 2023-07-25T09:30:21Z | http://arxiv.org/abs/2307.13360v2 | # An Axiomatic Theory for Reversible Computation
###### Abstract.
Undoing computations of a concurrent system is beneficial in many situations, e.g., in reversible debugging of multi-threaded programs and in recovery from errors due to optimistic execution in parallel discrete event simulation. A number of approaches have been proposed for how to reverse formal models of concurrent computation including process calculi such as CCS, languages like Erlang, prime event structures and occurrence nets. However it has not been settled what properties a reversible system should enjoy, nor how the various properties that have been suggested, such as the parabolic lemma and the causal-consistency property, are related. We contribute to a solution to these issues by using a generic labelled transition system equipped with a relation capturing whether transitions are independent to explore the implications between these properties. In particular, we show how they are derivable from a set of axioms. Our intention is that when establishing properties of some formalism it will be easier to verify the axioms rather than proving properties such as the parabolic lemma directly. We also introduce two new notions related to causal consistent reversibility, namely causal liveness and causal safety, stating, respectively, that an action can be undone if and only if it is independent from all the following ones. We show that both causal liveness and causal safety are derivable from our axioms.
_E-mail addresses_: [email protected], [email protected], [email protected].
_Key words and phrases_. Reversible Computation, Labelled Transition System with Independence, Causal Consistency, Causal Safety, Causal Liveness.
## 1. Introduction

There is widespread agreement in the literature about what properties characterise reversible computation in the sequential setting. Thus in reversible finite state automata [48], reversible cellular automata [22], reversible Turing machines [6] and reversible programming languages such as Janus [53] the main point is that the mapping from inputs to outputs is injective, and the reverse computation is deterministic.
Matters are less clear when it comes to reversible computation in the concurrent setting. Indeed, various reversible concurrent models have been studied, most notably in the areas of process calculi [14, 44, 29], event structures [50], Petri nets [4, 38] and programming languages such as Erlang [32].
A main result of this line of research is that the notion of reversibility most suited for concurrent systems is _causal-consistent reversibility_ (other notions are also used, e.g., to model biological systems [47]). According to an informal account of causal-consistent reversibility, any action can be undone provided that its consequences, if any, are undone beforehand. Following [14] this account is formalised using the notion of causal equivalent traces: two traces are causal equivalent if and only if they only differ for swapping independent actions, and inserting or removing pairs of an action and its reverse. According to [14, Section 3]
Backtracking an event is possible when and only when a causally equivalent trace would have brought this event as the last one
which is then formalised as the so called causal consistency (CC) [14, Theorem 1], stating that coinitial computations are causal equivalent if and only if they are cofinal. Our new proof of CC (Proposition 3.8) shows that it holds in essentially any reversible formalism satisfying the Loop Lemma (roughly, any action can be undone) and the Parabolic Lemma (roughly, any computation is equivalent to a backward computation followed by a forward one), and we believe that CC is insufficient on its own to capture the informal notion.
A formalisation closer to the informal statement above is provided in [32, Corollary 22], stating that a forward transition \(t\) can be undone after a derivation if and only if all its consequences, if any, are undone beforehand. We are not aware of other discussions trying to formalise such a notion, except for [46], in the setting of reversible event structures. In [46], a reversible event structure is _cause-respecting_ if an event cannot be reversed until all events it has caused have also been reversed; it is _causal_ if it is cause-respecting and a reversible event can be reversed if all events it has caused have been reversed [46, Definition 3.34].
We provide (Section 5) a novel definition of the idea above, composed of:
**Causal Safety (CS)::** an action cannot be reversed until any actions caused by it have been reversed;
**Causal Liveness (CL)::** we should allow actions to reverse in any order compatible with CS, not necessarily the exact inverse of the forward order.
We shall see that CC does not capture the same property as CS+CL (Examples 5.7, 5.8, 5.9), and that there are slightly different versions of CS and CL, which can all be proved under a small set of reasonable assumptions.
The main aim of this paper is to take an abstract model, namely labelled transition systems with independence equipped with reverse transitions (Section 2), and to show that the properties above (as well as others) can be derived from a small set of simple axioms (Sections 3, 4, 5, 6). This is in sharp contrast with the large part of works in the literature, which consider specific frameworks such as CCS [14], CCS with broadcast [40], CCB [23], \(\pi\)-calculus [13], higher-order \(\pi\)[29], Klaim [19], Petri nets [38], \(\mu\)Oz [35] and Erlang [32], and all give similar but formally unrelated proofs of the same main results. Such proofs will become instances of our general results. More precisely, our axioms will:
* exclude behaviours which are not compatible with causal-consistent reversibility (as we will discuss shortly);
* allow us to derive the main properties of reversible calculi which have been studied in the literature, such as CC (Proposition 3.8);
* hold for a number of reversible calculi which have been proposed, such as RCCS [14] and reversible Erlang [32] (Section 7).
Thus, when defining a new reversible formalism, one just has to check whether the axioms hold, and get for free the proofs of the most relevant properties. Notably, the axioms are normally easier to prove than the properties, hence the assessment of a reversible calculus gets much simpler.
As a reference, Table 1 lists the axioms and properties used in this paper.
In order to understand which kinds of behaviours are incompatible with a causal-consistent reversible setting, consider the following LTSs in CCS (see Figure 1):
* \(a.\mathbf{0}\xrightarrow{a}\mathbf{0}\)**, \(b.\mathbf{0}\xrightarrow{b}\mathbf{0}\)**::** from state \(\mathbf{0}\) one does not know whether to go back to \(a.\mathbf{0}\) or to \(b.\mathbf{0}\);
* \(a.\mathbf{0}+b.\mathbf{0}\xrightarrow{a}\mathbf{0}\)**, \(a.\mathbf{0}+b.\mathbf{0}\xrightarrow{b}\mathbf{0}\)**::** as above, but starting from the same process, hence showing that it is not enough to remember the initial configuration;
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline
**Acronym** & **Name** & **Defined in** & **Proved in** & **Using** \\ \hline
SP & Square Property & Def. 3.1 & Axiom & - \\
BTI & Backward Transitions are Independent & Def. 3.1 & Axiom & - \\
WF & Well-Founded & Def. 3.1 & Axiom & - \\
PCI & Propagation of Coinitial Independence & Def. 4.2 & Axiom & implied by LG or CLG \\
IRE & Independence Respects Events & Def. 5.3 & Axiom & implied by LG \\
CIRE & Coinitial Independence Respects Events & Def. 5.21 & Axiom & implied by IRE or CLG \\
BFCIRE & Backward-Forward CIRE & Def. 5.28 & Axiom & implied by CIRE \\
IEC & Independence of Events is Coinitial & Def. 5.10 & Axiom & \\
CLG & Coinitial Label-Generated & Def. 6.9 & Str. Ax. & \\
LG & Label-Generated & Def. 6.11 & Str. Ax. & - \\
IC & Independence is Coinitial & Def. 6.1 & Str. Ax. & implied by CLG \\ \hline
PL & Parabolic Lemma & Def. 3.3 & Prop. 3.4 & BTI, SP \\
CC & Causal Consistency & Def. 3.7 & Prop. 3.8 & WF, PL \\
UT & Unique Transition & Def. 3.11 & Cor. 3.12 & CC \\
BLD & Backward Label Determinism & Def. 4.5 & Prop. 4.6 & SP, BTI, PCI \\
ID & Independence of Diamonds & Def. 4.9 & Prop. 4.10 & BTI, PCI \\
NRE & No Repeated Events & Def. 4.18 & Prop. 4.21 & Pre-rev. \\
RPI & Reversing Preserves Independence & Def. 5.11 & Prop. 5.12 & SP, PCI, IRE, IEC \\
CS\({}_{t}\) & Causal Safety & Def. 5.1 & Thm. 5.5 & Pre-rev., IRE \\
CL\({}_{t}\) & Causal Liveness & Def. 5.1 & Thm. 5.6 & Pre-rev., IRE \\
ECh & Event Coherence & Def. 5.13 & Prop. 5.14 & Pre-rev., (IRE or IEC) \\
CS\({}_{ci}\) & coinitial Causal Safety & Def. 5.19 & Thm. 5.20 & Pre-rev. \\
CL\({}_{ci}\) & coinitial Causal Liveness & Def. 5.19 & Thm. 5.29 & Pre-rev., BFCIRE \\
CS\({}_{<}\) & ordered Causal Safety & Def. 5.37 & Prop. 5.39 & Pre-rev. \\
CL\({}_{<}\) & ordered Causal Liveness & Def. 5.37 & Prop. 5.39 & Pre-rev., BFCIRE \\ \hline
\end{tabular}
\end{table}
Table 1. Axioms and properties for causal reversibility. ‘Str. Ax.’ abbreviates ‘Structural Axiom’ and ‘Pre-rev.’ abbreviates ‘Pre-reversible’, namely SP, BTI, WF, PCI (cf. Def. 4.3).
* \(P\xrightarrow{a}P\) **where \(P=a.P\)**::** one can go back forever, against the idea that a state models a process reachable after a finite computation.
We remark that all such behaviours are perfectly reasonable in CCS, and they are dealt with in the reversible setting by adding history information about past actions. For example, in the first case one could remember the initial state, in the second case both the initial state and the action taken, and in the last case the number of iterations that have been performed.
The paper is organised as follows. The next section introduces labelled transition systems with independence (LTSIs). Three basic axioms for reversibility (SP, BTI and WF) are defined in Section 3, and are used to prove the Parabolic Lemma and Causal Consistency. Events are defined in Section 4, where another basic axiom (PCI) is formulated. In Section 5 we discuss and define CS and CL properties, and introduce three further basic axioms (IRE, CIRE and IEC) that are used to prove them. We consider three versions of CS and CL: those based on independence of transitions, on independence of events, and on ordering of events, and we study their relationships. Section 6 considers two structured forms of independence, namely independence defined on coinitial transitions only, and independence defined on labels only. Eight case studies of reversible formalisms are presented in Section 7, where we demonstrate that our basic axioms are very effective in proving the main reversibility properties. Section 8 discusses relations with other works in the literature. The final section contains concluding remarks and suggests potential future work.
This paper is an extended version of [33]. The paper has been fully restructured, and now includes a number of additional or refined results. Beyond this, it includes full proofs of our results, as well as additional case studies, examples and explanations. We remark that the preliminary results in [33] have already been exploited in [26, 2, 9, 24, 3, 1, 8], which can be seen as further case studies for our approach.
## 2. Labelled Transition Systems with Independence
We want to study reversibility in a setting as general as possible. Thus, we build on the core of the notion of _labelled transition system with independence_ (LTSI) [49, Definition 3.7]. However, while [49] requires a number of axioms on LTSI, we take the basic definition and explore what can be done by adding or not adding various axioms. Also, we extend LTSI with reverse transitions, since we study reversible systems. We first define labelled transition systems (LTSs).
We consider the LTS of the entire set of processes in a calculus, rather than the transition graph of a particular process and its derivatives, hence we do not fix an initial state.
**Definition 2.1**.: A _labelled transition system (LTS)_ is a structure \((\mathsf{Proc},\mathsf{Lab},\rightarrow)\), where \(\mathsf{Proc}\) is the set of states (or processes), \(\mathsf{Lab}\) is the set of action labels and \(\rightarrow\subseteq\mathsf{Proc}\times\mathsf{Lab}\times\mathsf{Proc}\) is a _transition relation_.
Figure 1. Irreversible transition systems in CCS.
We let \(P,Q,\ldots\) range over processes, \(a,b,c,\ldots\) range over labels, and \(t,u,v,\ldots\) range over transitions. We can write \(t:P\xrightarrow{a}Q\) to denote that \(t=(P,a,Q)\). We call \(a\)-transition a transition with label \(a\).
**Definition 2.2** (LTS with independence).: We say that \((\mathsf{Proc},\mathsf{Lab},\rightarrow,\iota)\) is an _LTS with independence_ (LTSI) if \((\mathsf{Proc},\mathsf{Lab},\rightarrow)\) is an LTS and \(\iota\) is an irreflexive symmetric binary relation on transitions.
In many cases (see Section 7), the notion of independence coincides with the notion of concurrency. However, this is not always the case. Indeed, concurrency implies that transitions are independent since they happen in different processes, but transitions taken by the same process can be independent as well. Think, for instance, of a reactive process that may react in any order to two events arriving at the same time, where the final result does not depend on the order of the reactions.
We shall assume that all transitions are reversible, so that the Loop Lemma [14, Lemma 6] holds. This does not hold in models of reversibility with control mechanisms [28] such as irreversible actions [15] or a rollback operator [27]. Nevertheless, when showing properties of models with controlled reversibility it has proved sensible to first consider the underlying models where all transitions are reversible, and then study how control mechanisms change the picture [19, 32]. The present work helps with the first step.
**Definition 2.3** (Reverse and combined LTS).: Given an LTS \((\mathsf{Proc},\mathsf{Lab},\rightharpoonup)\), let the _reverse LTS_ be \((\mathsf{Proc},\mathsf{Lab},\rightsquigarrow)\), where \(P\stackrel{{a}}{{\rightsquigarrow}}Q\) iff \(Q\stackrel{{a}}{{\rightharpoonup}}P\). It is convenient to combine the two LTSs (forward and reverse): let the reverse labels be \(\underline{\mathsf{Lab}}=\{\underline{a}:a\in\mathsf{Lab}\}\), and define the combined LTS to be \(\rightarrow\subseteq\mathsf{Proc}\times(\mathsf{Lab}\cup\underline{\mathsf{Lab}})\times\mathsf{Proc}\) by \(P\xrightarrow{a}Q\) iff \(P\stackrel{{a}}{{\rightharpoonup}}Q\) and \(P\xrightarrow{\underline{a}}Q\) iff \(P\stackrel{{a}}{{\rightsquigarrow}}Q\).
We stipulate that the union \(\mathsf{Lab}\cup\underline{\mathsf{Lab}}\) is disjoint. We let \(\alpha,\ldots\) range over \(\mathsf{Lab}\cup\underline{\mathsf{Lab}}\). For \(\alpha\in\mathsf{Lab}\cup\underline{\mathsf{Lab}}\), the _underlying_ action label \(\mathsf{und}(\alpha)\) is defined as \(\mathsf{und}(a)=a\) and \(\mathsf{und}(\underline{a})=a\). Let \(\underline{\underline{a}}=a\) for \(a\in\mathsf{Lab}\). Given \(t:P\xrightarrow{\alpha}Q\), let \(\underline{t}:Q\xrightarrow{\alpha}P\) be the transition which reverses \(t\). We define a labelling function \(\ell\) from transitions to \(\mathsf{Lab}\cup\underline{\mathsf{Lab}}\) by setting \(\ell((P,\alpha,Q))=\alpha\).
We let \(\rho,\sigma,\ldots\) range over finite sequences \(\alpha_{1}\ldots\alpha_{n}\), with \(\varepsilon\) representing the empty sequence. Given an LTS, a _path_ is a sequence of forward or reverse transitions of the form \(P_{0}\xrightarrow{\alpha_{1}}P_{1}\cdots\xrightarrow{\alpha_{n}}P_{n}\). We let \(r,s,\ldots\) range over paths. We may write \(r:P\xrightarrow{\rho}_{*}Q\) where the intermediate states are understood. On occasion we may refer to a path simply by its sequence of labels \(\rho\). The concatenation of paths \(r\) and \(s\) is written \(rs\). Given a path \(r:P\xrightarrow{\rho}_{*}Q\), the inverse path is \(\underline{r}:Q\xrightarrow{\rho}_{*}P\) where \(\underline{\varepsilon}=\varepsilon\) and \(\underline{\alpha\rho}=\underline{\rho}\ \underline{\alpha}\). The length of a path \(r\) (notated \(|r|\)) is the number of transitions in the path. Paths \(r:P\xrightarrow{\rho}_{*}Q\) and \(R\xrightarrow{\sigma}_{*}S\) are _coinitial_ if \(P=R\) and _cofinal_ if \(Q=S\). We say that a path is _forward-only_ if it contains no reverse transitions; similarly a path is _backward-only_ if it contains no forward transitions. Sometimes we let \(f,\ldots\) and \(b,\ldots\) range over forward-only and backward-only paths, respectively; it will be clear from the context whether \(b\) represents an action label or a path.
Let \((\mathsf{Proc},\mathsf{Lab},\rightarrow)\) be an LTS. The irreversible processes in \((\mathsf{Proc},\mathsf{Lab},\rightarrow)\) are \(\mathsf{Irr}=\{P\in\mathsf{Proc}:P\not\rightarrow\}\). A _rooted path_ is a path \(r:P\xrightarrow{\rho}_{*}Q\) such that \(P\in\mathsf{Irr}\).
In the following we consider LTSIs obtained by adding a notion of independence to combined LTSs as above. We call the result a _combined LTSI_.
_Remark 2.4_.: From now on, unless stated otherwise, we consider a combined LTSI \(\mathcal{L}=(\mathsf{Proc},\mathsf{Lab},\rightarrow,\iota)\). We will refer to it simply as an LTSI.
## 3. Basic Properties
In this section we show that most of the properties in the reversibility literature (see, e.g., [14, 44, 29, 32]), in particular the parabolic lemma and causal consistency, can be proved under minimal assumptions on the combined LTSI under analysis.
We formalise the minimal assumptions using three axioms, described below.
**Definition 3.1** (Basic axioms).: We say an LTSI \(\mathcal{L}\) satisfies:
**Square property (SP):**: if whenever \(t:P\xrightarrow{\alpha}Q\), \(u:P\xrightarrow{\beta}R\) with \(t\;\iota\;u\) then there are cofinal transitions \(u^{\prime}:Q\xrightarrow{\beta}S\) and \(t^{\prime}:R\xrightarrow{\alpha}S\);
**Backward transitions are independent (BTI):**: if whenever \(t:P\xrightarrow{a}Q\) and \(t^{\prime}:P\xrightarrow{b}Q^{\prime}\) and \(t\neq t^{\prime}\) then \(t\;\iota\;t^{\prime}\);
**Well-foundedness (WF):**: if there is no infinite reverse computation, i.e. we do not have \(P_{i}\) (not necessarily distinct) such that \(P_{i+1}\xrightarrow{a_{i}}P_{i}\) for all \(i=0,1,\ldots\).
WF can alternatively be formulated using backward transitions, but the current formulation makes sense also in non-reversible calculi (e.g., CCS), which can be used as a comparison. Let us discuss the intuition behind these axioms. SP takes its name from the Square Lemma, where it is proved for concrete calculi and languages in [14, 29, 32], and captures the idea that independent transitions can be executed in any order, that is they form commuting diamonds. SP can be seen as a sanity check on the chosen notion of independence. BTI generalises the key notion of backward determinism used in sequential reversibility (see, e.g., [48] for finite state automata and [53] for the imperative language Janus) to a concurrent setting. Backward determinism can be spelled as "two coinitial backward transitions do coincide". This can be generalised to "two coinitial backward transitions are independent". We will show in Proposition 7.10 that the two definitions are equivalent when no transitions are independent, which is the common setting in sequential computing. Note that BTI and SP together rule out examples \(a.\mathbf{0}\xrightarrow{a}\mathbf{0}\), \(b.\mathbf{0}\xrightarrow{b}\mathbf{0}\) as well as \(a.\mathbf{0}+b.\mathbf{0}\xrightarrow{a}\mathbf{0}\), \(a.\mathbf{0}+b.\mathbf{0}\xrightarrow{b}\mathbf{0}\) from the Introduction. Finally, WF means that we consider systems which have a finite past. That is, we consider systems starting from some initial state and then moving forward and back. WF rules out example \(P\xrightarrow{a}P\) where \(P=a.P\) from the Introduction.
Axioms SP and BTI are related to properties which are part of the definition of (occurrence) transition systems with independence in [49, Definitions 3.7, 4.1]. WF was used as an axiom in [43].
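As a small illustration of Definition 3.1 (ours, not part of the paper), the following sketch builds the combined LTSI of a single commuting square, namely two independent actions \(a\) and \(b\), and naively checks SP and BTI on it; reverse labels are encoded as strings prefixed with `~`, and independence is taken to relate distinct coinitial transitions with different underlying labels.

```python
from itertools import combinations

def rev(lbl):                       # underline / un-underline a label
    return lbl[1:] if lbl.startswith("~") else "~" + lbl

def und(lbl):                       # underlying forward label
    return lbl.lstrip("~")

# Forward transitions of a single a/b diamond, then close under reversal.
forward = {("P", "a", "Qa"), ("P", "b", "Qb"), ("Qa", "b", "S"), ("Qb", "a", "S")}
trans = forward | {(q, rev(a), p) for (p, a, q) in forward}

# Independence: distinct coinitial transitions with different underlying labels.
indep = {frozenset((t, u))
         for t, u in combinations(trans, 2)
         if t[0] == u[0] and und(t[1]) != und(u[1])}

def bti_holds():
    """BTI: distinct coinitial backward transitions are independent."""
    back = [t for t in trans if t[1].startswith("~")]
    return all(frozenset((t, u)) in indep
               for t, u in combinations(back, 2) if t[0] == u[0])

def sp_holds():
    """SP: every independent coinitial pair closes into a commuting square."""
    states = {p for (p, _, _) in trans} | {q for (_, _, q) in trans}
    for pair in indep:
        t, u = tuple(pair)
        if not any((t[2], u[1], s) in trans and (u[2], t[1], s) in trans for s in states):
            return False
    return True

print(bti_holds(), sp_holds())      # expected output: True True
```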
Using the minimal assumptions above we can prove relevant results from the literature. As a preliminary step, we define causal equivalence, equating computations differing only for swaps of independent transitions and simplification of a transition with its reverse.
**Definition 3.2** (Causal equivalence, cf. [14, Definition 9]).: Consider an LTSI satisfying SP. Let \(\approx\) be the smallest equivalence relation on paths closed under composition and satisfying:
1. (swap) if \(t:P\xrightarrow{\alpha}Q\), \(u:P\xrightarrow{\beta}R\) are independent, and \(u^{\prime}:Q\xrightarrow{\beta}S\), \(t^{\prime}:R\xrightarrow{\alpha}S\) (which exist by SP) then \(tu^{\prime}\approx ut^{\prime}\);
2. (cancellation) \(t\underline{t}\approx\varepsilon\) and \(\underline{t}t\approx\varepsilon\).
We first consider the Parabolic Lemma [14, Lemma 10], which states that each path is causal equivalent to a backward path followed by a forward path.
**Definition 3.3**.: **Parabolic Lemma (PL)**: for any path \(r\) there are forward-only paths \(s,s^{\prime}\) such that \(r\approx\underline{s}s^{\prime}\) and \(|s|+|s^{\prime}|\leq|r|\).
**Proposition 3.4**.: _Suppose an LTSI satisfies BTI and SP. Then PL holds._
Proof.: Suppose BTI and SP hold. Define a function on paths as follows: \(d(r)\) is the number of pairs of forward transitions \((t,u)\) such that \(t\) occurs to the left of \(\underline{u}\) in \(r\). Clearly \(r\) is parabolic iff \(d(r)=0\).
Suppose \(d(r)>0\). We show that there is \(s\approx r\) with \(|s|\leq|r|\) and \(d(s)<d(r)\). Since \(d(r)>0\), we have \(r=s_{1}t\underline{u}s_{2}\) with \(s_{1}:P\xrightarrow{\sigma_{1}}R\), \(t:R\xrightarrow{a}S\), \(\underline{u}:S\xrightarrow{b}T\) and \(s_{2}:T\xrightarrow{\sigma_{2}}Q\). If \(t=u\), then we obtain \(r=s_{1}t\underline{u}s_{2}\approx s_{1}s_{2}\). Clearly \(r\approx s_{1}s_{2}\) with \(|s_{1}s_{2}|<|r|\) and \(d(s_{1}s_{2})<d(r)\). So suppose \(t\neq u\). By BTI we have \(\underline{t}\,\iota\,\underline{u}\). By SP there are \(S^{\prime}\) and transitions \(u^{\prime}:S^{\prime}\xrightarrow{b}R\), \(t^{\prime}:S^{\prime}\xrightarrow{a}T\). See Figure 2. Then \(\underline{t}\,\underline{u}^{\prime}\approx\underline{u}\,\underline{t}^{\prime}\). Hence, \(r=s_{1}t\underline{u}s_{2}\approx s_{1}t\underline{u}\,\underline{t}^{\prime}t ^{\prime}s_{2}\approx s_{1}t\underline{t}\,\underline{u}^{\prime}t^{\prime}s_ {2}\approx s\) as required. Given that \(|s_{1}\underline{u}^{\prime}\underline{t}^{\prime}s_{2}|=|r|\) and \(d(s_{1}\underline{u}^{\prime}t^{\prime}s_{2})=d(r)-1\) the thesis follows.
The proof of Proposition 3.4 is very similar to that of [14, Lemma 10] except that in the latter BTI is shown directly as part of the proof.
A corollary of PL is that if a process is reachable from an irreversible process, then it is also forwards reachable from it. In other words, making a system reversible does not introduce new reachable states but only allows one to explore forwards-reachable states in a different order. This is relevant, e.g., in reversible debugging of concurrent systems [18, 32], where one wants to find bugs that actually occur in forward-only computations.
**Corollary 3.5**.: _Suppose an LTSI satisfies PL. If a process \(P\) is reachable from some irreversible process \(Q\), then it is also forward reachable from \(Q\)._
Proof.: By hypothesis, there is some path \(r:Q\xrightarrow{}_{*}P\). Thanks to PL, there are forward-only paths \(s,s^{\prime}\) such that \(\underline{s}s^{\prime}:Q\xrightarrow{}_{*}P\). Since \(Q\) is irreversible, \(s=\varepsilon\), hence \(s^{\prime}:Q\xrightarrow{}_{*}P\) as desired.
When WF and PL hold, each process is reachable from a unique irreversible process.
**Proposition 3.6**.: _Suppose an LTSI satisfies WF and PL. For any process \(P\) there is a unique irreversible process \(I\) such that \(P\) is reachable from \(I\)._
Proof.: Let \(P\) be any process. We use WF to deduce that there is an irreversible process \(I\) such that \(P\) is (forward) reachable from \(I\) via some path \(r\). Suppose now that \(I^{\prime}\) is irreversible and there is a path \(r^{\prime}\) from \(I^{\prime}\) to \(P\). Then \(r^{\prime}\underline{r}:I^{\prime}\xrightarrow{}_{*}I\). By PL there are forward-only paths \(s,s^{\prime}\) such that \(\underline{s}s^{\prime}:I^{\prime}\xrightarrow{}_{*}I\). But since \(I\) and \(I^{\prime}\) are irreversible, both \(s=\varepsilon\) and \(s^{\prime}=\varepsilon\). Hence \(I^{\prime}=I\) as required.
We now move to causal consistency [14, Theorem 1].
**Definition 3.7**.: **Causal Consistency (CC)**: if \(r\) and \(s\) are coinitial and cofinal paths then \(r\approx s\).
Figure 2. Proof of Proposition 3.4, case \(t\neq u\).
Essentially, causal consistency states that history information allows one to distinguish computations which are not causal equivalent. Indeed, if two computations are cofinal, that is they reach the same final state (which includes the stored history information) then they need to be causal equivalent.
Causal consistency frequently includes the other direction, namely that coinitial causal equivalent computations are cofinal, meaning that there is no way to distinguish causal equivalent computations. This second direction follows easily from the definition of causal equivalence.
Notably, our proof of CC below is very much shorter than existing proofs, such as the one of [14, Theorem 1] for RCCS and the one of [32, Theorem 21] for reversible Erlang.
**Proposition 3.8**.: _Suppose an LTSI satisfies WF and PL. Then CC holds._
Proof.: Let \(r:P\xrightarrow{\rho}_{*}Q\) and \(r^{\prime}:P\xrightarrow{\rho^{\prime}}_{*}Q\). Using WF, let \(I,s\) be such that \(s:I\xrightarrow{\sigma}_{*}P\), \(I\in\mathsf{Irr}\). Now \(s\,r\,\underline{r^{\prime}}\,\underline{s}\) is a path from \(I\) to \(I\), and so by PL there are \(r_{1},r_{2}\) forward-only such that \(\underline{r_{1}}r_{2}\approx s\,r\,\underline{r^{\prime}}\,\underline{s}\). But \(I\in\mathsf{Irr}\) and so \(r_{1}=\varepsilon\) and \(r_{2}=\varepsilon\). Thus \(\varepsilon\approx s\,r\,\underline{r^{\prime}}\,\underline{s}\), so that \(sr\approx sr^{\prime}\) and \(r\approx r^{\prime}\) as required.
Causal equivalent computations are strongly related in terms of the number of transitions with a given label they contain.
**Proposition 3.9**.: _If \(r\approx s\) then for any action \(a\) the number of \(a\)-transitions in \(r\) is the same as in \(s\), where we count reverse transitions negatively._
Proof.: Straightforward, by induction on the derivation of \(r\approx s\).
_Remark 3.10_.: One consequence of Proposition 3.9 is that if \(r\approx s\) and \(r\) and \(s\) are both forward-only, then \(|r|=|s|\).
Causal consistency implies the unique transition property.
**Definition 3.11**.: **Unique Transition (UT)**: if either \(P\xrightarrow{a}Q\) and \(P\xrightarrow{b}Q\) or \(P\xrightarrow{\underline{a}}Q\) and \(P\xrightarrow{\underline{b}}Q\) then \(a=b\).
**Corollary 3.12**.: _If an LTSI satisfies CC then it satisfies UT._
Proof.: Since \(P\xrightarrow{a}Q\) and \(P\xrightarrow{b}Q\) are coinitial and cofinal, they are causal equivalent (the backward case is analogous). By Proposition 3.9 the count of each action must be the same, hence \(a=b\).
UT was shown in the forward-only setting of occurrence TSIs in [49, Corollary 4.4]; it was taken as an axiom in [43].
_Example 3.13_ (PL alone does not imply WF or CC).: Consider the LTSI with states \(P_{i}\) for \(i=0,1,\ldots\) and transitions \(t_{i}:P_{i+1}\xrightarrow{a}P_{i}\), \(u_{i}:P_{i+1}\xrightarrow{b}P_{i}\) with \(a\neq b\) and \(\underline{t_{i}}\ \iota\ \underline{u_{i}}\). BTI and SP hold. Hence PL holds by Proposition 3.4. However clearly WF fails. Also \(t_{i}\) and \(u_{i}\) are coinitial and cofinal, and \(a\neq b\), so that UT fails, and hence CC fails using Corollary 3.12. Note that the \(ab\) diamonds here have the same side states so are degenerate (cf. Lemma 4.7).
We have seen that SP is assumed when defining causal equivalence \(\approx\). Assuming SP, we give a diagram (Figure 3) to show implications between the remaining two axioms presented so far (BTI, WF) and the two main properties introduced so far (PL, CC). We remark that the implications shown are strict (reverse implication does not hold). We provide below counterexamples showing strictness of implications:
_Example 3.14_ (SP, WF and CC do not imply PL).: Consider the LTSI with states \(P,Q,R\) and transitions \(t:P\xrightarrow{a}R\), \(u:Q\xrightarrow{b}R\), with an empty independence relation. Then clearly BTI and PL fail. However SP, WF and CC (and therefore UT) hold.
For CC, note that we can use cancellation to reduce each path to a unique shortest normal form with respect to \(\approx\). There are various cases to check, depending on the initial and final states of the path. Let \(r:R\xrightarrow{\rho}R\) be any path from \(R\) to \(R\). If \(r\) is non-empty, it must be of the form either \(r=\underline{t}tr^{\prime}\) or \(r=\underline{u}ur^{\prime\prime}\). We can use cancellation to get either \(r\approx r^{\prime}\) or \(r\approx r^{\prime\prime}\). Iterating the argument we see that \(r\approx\varepsilon\). Now let \(r:P\xrightarrow{\rho}R\) be any path from \(P\) to \(R\). Then \(r=tr^{\prime}\) where \(r^{\prime}\) is a path from \(R\) to \(R\). Hence \(r\approx t\). Now let \(r:P\xrightarrow{\rho}P\) be any path from \(P\) to \(P\). Then \(r=tr^{\prime}\underline{t}\) where \(r^{\prime}\) is a path from \(R\) to \(R\). Hence \(r\approx t\underline{t}\approx\varepsilon\). Next let \(r:P\xrightarrow{\rho}Q\) be any path from \(P\) to \(Q\). Then \(r=tr^{\prime}\underline{u}\) where \(r^{\prime}\) is a path from \(R\) to \(R\). Hence \(r\approx t\underline{u}\). The remaining cases are similar. \(\diamond\)
_Example 3.15_ (SP, WF, PL and CC do not imply BTI).: Consider the LTSI with states \(P,Q,R,S\) and transitions \(t:P\xrightarrow{a}Q\), \(u:P\xrightarrow{b}R\), \(t^{\prime}:R\xrightarrow{a}S\) and \(u^{\prime}:Q\xrightarrow{b}S\), with \(t\ \iota\ u\). Then BTI fails for \(\underline{t^{\prime}}\) and \(\underline{u^{\prime}}\). However SP, WF and PL hold, and therefore CC also holds.
We show PL. As in the proof of Proposition 3.4, for a path \(r\) let \(d(r)\) be the number of pairs of forward transitions \((t,u)\) such that \(t\) occurs to the left of \(\underline{u}\) in \(r\). Then \(r\) is parabolic iff \(d(r)=0\).
Suppose \(d(r)>0\). We show that there is \(s\approx r\) with \(|s|\leq|r|\) and \(d(s)<d(r)\). Since \(d(r)>0\), we have \(r=s_{1}t^{\prime\prime}\underline{u^{\prime\prime}}s_{2}\). If \(t^{\prime\prime}=u^{\prime\prime}\), then we can use cancellation as in the proof of Proposition 3.4. So suppose \(t^{\prime\prime}\neq u^{\prime\prime}\). Since the target of \(t^{\prime\prime}\) must be the same as the source of \(u^{\prime\prime}\), the only possibilities are \(t^{\prime\prime}=t^{\prime}\), \(u^{\prime\prime}=u^{\prime}\) or dually \(t^{\prime\prime}=u^{\prime}\), \(u^{\prime\prime}=t^{\prime}\). We consider \(t^{\prime\prime}=t^{\prime}\), \(u^{\prime\prime}=u^{\prime}\); the other case is similar. So \(r=s_{1}t^{\prime}\underline{u^{\prime}}s_{2}\). Since \(t\ \iota\ u\) we have \(tu^{\prime}\approx ut^{\prime}\). Hence \(\underline{u}tu^{\prime}\underline{u^{\prime}}\approx\underline{u}ut^{\prime}\underline {u^{\prime}}\), and so \(\underline{u}t\approx t^{\prime}\underline{u^{\prime}}\). So \(r\approx s_{1}\underline{u}ts_{2}\) and \(d(s)=d(r)-1\), \(|s|=|r|\). \(\diamond\)
Figure 3. Implications between the main properties discussed in Section 3. We assume SP throughout.
_Example 3.16_ (SP and WF do not imply CC (or PL)).: Consider the LTSI of Example 3.15, but without \(t\ \iota\ u\). Clearly SP and WF hold. However CC fails, since there are paths \(tu^{\prime}\) and \(ut^{\prime}\) from \(P\) to \(S\), but \(tu^{\prime}\not\approx ut^{\prime}\). To see this, imagine that the four transitions of the diamond correspond to rotations around the centre of the diamond (see Figure 4). Measuring anti-clockwise rotation in radians we see that \(t\) and \(u^{\prime}\) each give a rotation of \(-\pi/2\), while \(u\) and \(t^{\prime}\) each yield \(+\pi/2\). Let us define the rotation of a path to be the sum of the rotations of its transitions. Path \(tu^{\prime}\) has rotation \(-\pi\) while \(ut^{\prime}\) has \(+\pi\). Since there are no independent transitions, the only operation of causal equivalence we can perform is cancellation (\(t\underline{t}\approx\varepsilon\) and \(\underline{t}t\approx\varepsilon\)). This clearly preserves the rotation of a path. Hence \(tu^{\prime}\not\approx ut^{\prime}\) as required.
PL does not hold either, otherwise CC would follow from Proposition 3.8. \(\diamond\)
_Example 3.17_ (SP, BTI and CC do not imply WF).: Consider the LTSI with states \(P_{i}\) for \(i=0,1,\ldots\) and transitions \(t_{i}:P_{i+1}\xrightarrow{a}P_{i}\). Clearly WF does not hold. However SP, BTI (and hence PL) hold; also CC (and hence UT) hold, noting that any path is causally equivalent to a path which is entirely forward or entirely reverse. \(\diamond\)
## 4. Events
In order to define and study causal safety and liveness (Section 5), we first need the concept of event.
**Definition 4.1** (Event, general definition).: Consider an LTSI. Let \(\sim\) be the smallest equivalence relation satisfying: if \(t:P\xrightarrow{\alpha}Q\), \(u:P\xrightarrow{\beta}R\), \(u^{\prime}:Q\xrightarrow{\beta}S\), \(t^{\prime}:R\xrightarrow{\alpha}S\), and \(t\;\iota\;u\), \(\underline{u}\;\iota\;t^{\prime}\), \(\underline{t^{\prime}}\;\iota\;\underline{u^{\prime}}\), \(u^{\prime}\;\iota\;\underline{t}\), and
* \(Q\neq R\) if \(\alpha\) and \(\beta\) are both forwards or both backwards;
* \(P\neq S\) otherwise;
then \(t\sim t^{\prime}\). The equivalence classes of transitions, written \([t]\) or \([P,\alpha,Q]\), are the _events_. We say that an event is _forward_ if it is the equivalence class of a forward transition; similarly for _reverse_ events. Given an event \(e=[t]\) we let \(\underline{e}=[\underline{t}]\). Also, we let \(\mathsf{und}(e)=e\) if \(e\) is forward, \(\mathsf{und}(e)=\underline{e}\) if \(e\) is backward.
Intuitively, events are the equivalence classes generated by equating transitions on the opposite sides of commuting squares. Events are introduced as a derived notion in an LTS with independence in [49], in the context of forward-only computation. We have changed their definition by using coinitial independence at all corners of the diamond, yielding rotational symmetry. This reflects our view that forward and backward transitions have equal status.
The labelling function \(\ell\) can be extended to \(\,\rightarrow\!/\sim\) since the label does not depend on the choice of the representative inside the equivalence class.
### Pre-reversible LTSIs
Our definition can be simplified if the LTSI, and independence in particular, are well-behaved. Thus, we now add a further axiom related to independence. This leads us to pre-reversible LTSIs.
**Definition 4.2**.: **Propagation of coinitial independence (PCI)1**: if \(t:P\xrightarrow{\alpha}Q\), \(u:P\xrightarrow{\beta}R\), \(u^{\prime}:Q\xrightarrow{\beta}S\) and \(t^{\prime}:R\xrightarrow{\alpha}S\) with \(t\;\iota\;u\), then \(u^{\prime}\;\iota\;\underline{t}\).
Figure 4. Rotations within a diamond (Example 3.16).
PCI states that independence is a property of commuting diamonds more than of their specific pairs of edges. Indeed, it allows independence to propagate around a commuting diamond.
**Definition 4.3** (Pre-reversible LTSI).: If an LTSI satisfies axioms SP, BTI, WF and PCI, we say that it is _pre-reversible_.
The name 'pre-reversible' indicates that we expect to require further axioms, but the present four are enough to ensure that LTSIs are well-behaved, with events compatible with causal equivalence (cfr. Lemma 4.12). Pre-reversible axioms are separated from further axioms by a dashed line in Table 1.
A first consequence of PCI is that coinitial transitions with mutually inverse labels are not independent.
**Lemma 4.4**.: _Suppose that an LTSI satisfies PCI. If \(t:P\xrightarrow{\alpha}Q\) and \(u:P\xrightarrow{\underline{\alpha}}R\) are coinitial transitions with mutually inverse labels, then \(t\) and \(u\) are not independent._
**Definition 4.8** (Event, simplified definition).: Consider a pre-reversible LTSI. Let \(\sim\) be the smallest equivalence relation satisfying: if \(t:P\xrightarrow{\alpha}Q\), \(u:P\xrightarrow{\beta}R\), \(u^{\prime}:Q\xrightarrow{\beta}S\), \(t^{\prime}:R\xrightarrow{\alpha}S\), and \(t\;\iota\;u\), then \(t\sim t^{\prime}\).
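For a finite LTSI the relation \(\sim\) of Definition 4.8 can be computed by closing under the square rule. The following Python sketch is purely illustrative and makes its own representational choices: transitions are triples (source, label, target) and the independence relation is a set of unordered pairs of transitions; neither encoding is part of the formal development.

```python
from itertools import product

def events(transitions, indep):
    """Equivalence classes of ~ (Definition 4.8) for a finite LTSI.
    `transitions`: iterable of triples (source, label, target);
    `indep`: set of frozensets {t, u} of independent transitions."""
    ts = list(transitions)
    parent = {t: t for t in ts}

    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]  # path compression
            t = parent[t]
        return t

    def union(t, u):
        parent[find(t)] = find(u)

    # For every square t, u, u', t' with t iota u, identify t ~ t' and u ~ u'.
    for t, u in product(ts, ts):
        if t[0] != u[0] or frozenset((t, u)) not in indep:
            continue
        for up, tp in product(ts, ts):
            if (up[0] == t[2] and up[1] == u[1] and
                    tp[0] == u[2] and tp[1] == t[1] and up[2] == tp[2]):
                union(t, tp)
                union(u, up)

    classes = {}
    for t in ts:
        classes.setdefault(find(t), set()).add(t)
    return list(classes.values())
```

A union-find structure suffices here because the generating pairs of the closure do not depend on the classes being constructed.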
We are now able to show independence of diamonds (ID), which can be seen as dual of SP.
**Definition 4.9**.: **Independence of Diamonds (ID)**: if we have a diamond \(t:P\xrightarrow{\alpha}Q\), \(u:P\xrightarrow{\beta}R\), \(u^{\prime}:Q\xrightarrow{\beta}S\) and \(t^{\prime}:R\xrightarrow{\alpha}S\), with
* \(Q\neq R\) if \(\alpha\) and \(\beta\) are both forwards or both backwards;
* \(P\neq S\) otherwise;
then \(t\;\iota\;u\).
**Proposition 4.10**.: _If an LTSI satisfies BTI and PCI then it satisfies ID._
Proof.: Suppose we have a diamond \(t:P\xrightarrow{\alpha}Q\), \(u:P\xrightarrow{\beta}R\), \(u^{\prime}:Q\xrightarrow{\beta}S\) and \(t^{\prime}:R\xrightarrow{\alpha}S\), with
* \(Q\neq R\) if \(\alpha\) and \(\beta\) are both forwards or both backwards;
* \(P\neq S\) otherwise.
We must show \(t\;\iota\;u\). There are various cases, depending on whether \(\alpha\) and \(\beta\) are forwards or backwards. If they are both forwards, then \(Q\neq R\). Hence \(\underline{t^{\prime}}\neq\underline{u^{\prime}}\) and by BTI we have \(\underline{t^{\prime}}\;\iota\;\underline{u^{\prime}}\). By PCI, \(u^{\prime}\;\iota\;\underline{t}\) and again by PCI \(t\;\iota\;u\) as required. Other cases are similar.
In the proof of the above proposition it must be the case that \(\mathsf{und}(\alpha)\neq\mathsf{und}(\beta)\), or else we get a contradiction using Lemma 4.4.
### Counting occurrences of events
We now consider the interaction between events and causal equivalence. We need some notation first.
**Definition 4.11**.: Let \(r\) be a path in an LTSI \(\mathcal{L}\) and let \(e\) be an event of \(\mathcal{L}\). Let \(\sharp(r,e)\) be the number of occurrences of transitions \(t\) in \(r\) such that \(t\in e\), minus the number of occurrences of transitions \(t\) in \(r\) such that \(t\in\underline{e}\). We define \(\sharp(r,e)\) by induction on the length of \(r\) as follows:
\[\sharp(\varepsilon,e) =0\] \[\sharp(tr,e) =\begin{cases}\sharp(r,e)+1&\text{if }[t]=e\\ \sharp(r,e)-1&\text{if }[t]=\underline{e}\\ \sharp(r,e)&\text{otherwise}\end{cases}\]
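The count \(\sharp(r,e)\) is directly computable; the sketch below mirrors the inductive clauses above, using the same illustrative tuple encoding as before, with a label written as a pair of a direction flag and an action name (a convention of our own, not part of the formal development).

```python
def reverse(t):
    """Inverse of a transition (source, (dir, a), target); dir is 'f' for
    forward and 'r' for reverse (an encoding chosen only for this sketch)."""
    p, (d, a), q = t
    return (q, ('r' if d == 'f' else 'f', a), p)

def occ(r, e):
    """#(r, e) of Definition 4.11: occurrences in the path r (a list of
    transitions) of members of the event e (a set of transitions), minus
    occurrences of members of the reverse event."""
    e_rev = {reverse(t) for t in e}
    return sum(1 if t in e else -1 if t in e_rev else 0 for t in r)
```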
We now show that \(\sharp(r,e)\) is invariant under causal equivalence.
**Lemma 4.12**.: _Let \(\mathcal{L}\) be a pre-reversible LTSI. Let \(r\approx s\). Then for each event \(e\) we have that \(\sharp(r,e)=\sharp(s,e)\)._
Proof.: We prove the thesis for \(r\) and \(s\) being derived by a single application of the axioms; the thesis will follow since equality is an equivalence relation.
If \(r=r_{1}tu^{\prime}r_{2}\) and \(s=r_{1}ut^{\prime}r_{2}\) then we have by definition that \(t\;\iota\;u\). Hence, \([t]=[t^{\prime}]\) and \([u]=[u^{\prime}]\) using Definition 4.8. The thesis follows.
If \(r=r_{1}t\underline{t}r_{2}\) and \(s=r_{1}r_{2}\) (the other case is analogous) then the contribution of \(t\) and \(\underline{t}\) to \(\sharp(r,[t])\) (as well as to \(\sharp(r,e)\) for \(t\notin e\)) is \(0\); hence the thesis follows.
Lemma 4.12 generalises what was shown for the forward-only setting in [49, Corollary 4.3].
**Proposition 4.13**.: _If an LTSI is pre-reversible, then for any rooted path \(r\) and any forward event \(e\) we have \(\sharp(r,e)\geq 0\)._
Proof.: Let \(r\) be a rooted path. Using PL (Proposition 3.4), we obtain a coinitial and cofinal path \(s\) with \(s\approx r\) consisting of a backward segment followed by a forward segment; since \(r\), and hence \(s\), starts at an irreversible state, the backward segment is empty and \(s\) is forward-only. Let \(e\) be any forward event. Clearly \(\sharp(s,e)\geq 0\). Hence \(\sharp(r,e)\geq 0\) by Lemma 4.12.
We can lift independence from transitions to events.
**Definition 4.14** (Coinitially independent events).: Let events \(e,e^{\prime}\) be _coinitially independent_, written \(e\ \mathsf{ci}\ e^{\prime}\), iff there are coinitial transitions \(t,t^{\prime}\) such that \([t]=e\), \([t^{\prime}]=e^{\prime}\) and \(t\ \iota\ t^{\prime}\).
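Checking coinitial independence of two events is an existential search over representatives. A minimal sketch, assuming the encoding used in the earlier sketches:

```python
def coinitially_independent(e1, e2, indep):
    """e1 ci e2 (Definition 4.14): some coinitial representatives t in e1 and
    u in e2 are independent.  `indep` is the set of unordered pairs of
    independent transitions."""
    return any(t[0] == u[0] and frozenset((t, u)) in indep
               for t in e1 for u in e2)
```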
**Lemma 4.15**.: _Assume an LTSI is pre-reversible. If \(e\ \mathsf{ci}\ e^{\prime}\) then we have also \(\underline{e}\ \mathsf{ci}\ e^{\prime}\)._
Proof.: Suppose that \(e\ \mathsf{ci}\ e^{\prime}\). Then there are coinitial \(t,u\) such that \([t]=e\), \([u]=e^{\prime}\) and \(t\ \iota\ u\). Use SP to complete a diamond with transitions \(t^{\prime}\sim t\), \(u^{\prime}\sim u\). By PCI we have \(\underline{t}\ \iota\ u^{\prime}\). Hence \(\underline{e}\ \mathsf{ci}\ e^{\prime}\) as required.
Thus in pre-reversible LTSIs, \(\mathsf{ci}\) is fully determined by its restriction to forward events: by Lemma 4.15, applied to either side as needed, \(e\ \mathsf{ci}\ e^{\prime}\) holds if and only if \(\mathsf{und}(e)\ \mathsf{ci}\ \mathsf{und}(e^{\prime})\) holds.
**Proposition 4.16**.: _Assume an LTSI is pre-reversible. Then \(\mathsf{ci}\) is irreflexive._
Proof.: Suppose for a contradiction that \(e\ \mathsf{ci}\ e\) for some event \(e\). By Lemma 4.15, we can assume that \(e\) is forward. Then there are coinitial transitions \(t,u\in e\) such that \(t\ \iota\ u\). We can use SP to complete a square with \(t^{\prime}\sim t\) and \(u^{\prime}\sim u\). This square is non-degenerate by Lemma 4.7. But now \(\underline{t}^{\prime}\) and \(\underline{u}^{\prime}\) are two distinct coinitial backward transitions with the same label, contradicting BLD (Proposition 4.6).
We can slightly strengthen the previous result as follows:
**Proposition 4.17**.: _Assume an LTSI is pre-reversible. If \(t:P\xrightarrow{\alpha}Q\) and \(u:R\xrightarrow{\beta}S\) with \([t]\ \mathsf{ci}\ [u]\) then \(\mathsf{und}(\alpha)\neq\mathsf{und}(\beta)\)._
Proof.: Similar to the proof of Proposition 4.16.
In pre-reversible LTSIs each event can occur at most once in a rooted path.
**Definition 4.18**.: **No repeated events (NRE)**: for any rooted path \(r\) and any forward event \(e\) we have \(\sharp(r,e)\leq 1\).
In order to prove NRE we need the following lemmas.
**Lemma 4.19** (Ladder Lemma).: _Assume an LTSI is pre-reversible. Suppose that \(t:P\xrightarrow{\alpha}Q\) and \(t^{\prime}:P^{\prime}\xrightarrow{\alpha}Q^{\prime}\) with \(t\sim t^{\prime}\). Then there is a path \(s\) from \(Q\) to \(Q^{\prime}\) such that for all \(u\) in \(s\) we have \([t]\ \mathsf{ci}\ [u]\)._
Proof.: By the definition of \(\sim\) there is a ladder of diamonds connecting \(t\) to \(t^{\prime}\). This gives a path \(s\) from \(Q\) to \(Q^{\prime}\). Take any \(u\) in \(s\), and consider the diamond containing \(u\). Let \(u^{\prime}\) be on the opposite side from \(u\), so that \(u^{\prime}\sim u\), and let \(t^{\prime\prime}\) be the rung of that diamond nearest to \(t\), so that \(t\sim t^{\prime\prime}\). We have \(t^{\prime\prime}\ \iota\ u^{\prime}\), and hence \([t]\ \mathsf{ci}\ [u]\) as required.
**Lemma 4.20**.: _Let \(\mathcal{L}\) be a pre-reversible LTSI. Suppose \(t:P\xrightarrow{\alpha}Q\) and \(t^{\prime}:P^{\prime}\xrightarrow{\alpha}Q^{\prime}\) with \(t\sim t^{\prime}\), and suppose \(r\) is a path from \(Q\) to \(Q^{\prime}\). Then \(\sharp(r,[t])=0\)._
Proof.: By Lemma 4.19 there is a path \(s\) from \(Q\) to \(Q^{\prime}\) such that for all \(u\) in \(s\) we have \([t]\ \mathsf{ci}\ [u]\). Let \(\ell(t)=\alpha\) and \(\ell(u)=\beta\). By Proposition 4.17 we have \(\mathsf{und}(\alpha)\neq\mathsf{und}(\beta)\). Hence \(\sharp(s,[t])=0\), and by Lemma 4.12, \(\sharp(r,[t])=0\) as required.
**Proposition 4.21**.: _If an LTSI is pre-reversible then it satisfies NRE._
Proof.: Let \(e\) be a forward event and \(r\) be a rooted path from \(I\) to \(R\), and suppose for a contradiction that \(\sharp(r,e)>1\). Using PL we can obtain a forward-only path \(r^{\prime}\) from \(I\) to \(R\) with \(r\approx r^{\prime}\). By Lemma 4.12, \(\sharp(r^{\prime},e)>1\). Suppose \(r^{\prime}\) contains \(t:P\xrightarrow{a}Q\) followed later by \(t^{\prime}:P^{\prime}\xrightarrow{a}Q^{\prime}\) where \(t,t^{\prime}\in e\). Let \(r^{\prime\prime}\) be the portion of \(r^{\prime}\) from \(Q\) to \(P^{\prime}\). By Lemma 4.20 applied to \(t,t^{\prime}\) and path \(r^{\prime\prime}t^{\prime}\) we have \(\sharp(r^{\prime\prime}t^{\prime},[t])=0\). This is a contradiction since \(r^{\prime\prime}\) is forward-only.
NRE was shown in the forward-only setting of occurrence transition systems with independence in [49, Corollary 4.6]. It was also shown in the reversible setting without independence in [43, Proposition 2.10].
_Example 4.22_.: Consider the LTSI in Figure 5. Independence holds only between coinitial transitions and is given by closing under BTI and propagating independence around the corners of diamonds as in PCI whenever possible. Note however that PCI does not hold, since we have coinitial independent \(a\) and \(\underline{a}\)-transitions, contradicting Lemma 4.4. As well as BTI, axioms SP and WF hold, so that CC holds. All \(a\)-transitions belong to the same event, and all \(b\)-transitions belong to the same event. We have rooted paths where the same event is repeated, contradicting NRE. Note also that BLD fails and that \(\mathfrak{ci}\) is reflexive. \(\diamond\)
### Polychotomy
We now show what we call _polychotomy_, which states that if forward events do not cause each other and are not in conflict, then they must be independent. This will help us to relate the different notions of causal safety and liveness (Section 5). We first define causality and conflict relations on forward events.
**Definition 4.23** (Causality relation on forward events).: Let \(\mathcal{L}\) be an LTSI. Let \(e,e^{\prime}\) be forward events of \(\mathcal{L}\). Let \(e\leq e^{\prime}\) iff for all rooted paths \(r\), if \(\sharp(r,e^{\prime})>0\) then \(\sharp(r,e)>0\). As usual \(e<e^{\prime}\) means \(e\leq e^{\prime}\) and \(e\neq e^{\prime}\). If \(e<e^{\prime}\) we say that \(e\) is a _cause_ of \(e^{\prime}\).
As expected, the causality relation is a partial ordering (i.e., a reflexive, transitive and antisymmetric relation).
**Lemma 4.24**.: _If an LTSI is pre-reversible then \(\leq\) is a partial ordering on events._
Proof.: Reflexivity and transitivity are immediate. For antisymmetry, suppose that \(e_{1}\leq e_{2}\) and \(e_{2}\leq e_{1}\), where \(e_{1},e_{2}\) are forward events. Then for all rooted \(r,\sharp(r,e_{1})>0\) iff \(\sharp(r,e_{2})>0\). Since the LTSI is pre-reversible, by Proposition 4.13, for all rooted \(r\), \(\sharp(r,e_{1})\geq 0\) and \(\sharp(r,e_{2})\geq 0\). Let \(r\) be a shortest rooted path such that \(\sharp(r,e_{1})>0\). We can use WF to show that \(r\) must exist. Then \(\sharp(r,e_{2})>0\). Also \(r=r^{\prime}t\), where \(\sharp(r^{\prime},e_{1})=0\) (otherwise \(r\) would not be a shortest path) and so \(\sharp(r^{\prime},e_{2})=0\). We see that both \([t]=e_{1}\) and \([t]=e_{2}\), showing that \(e_{1}=e_{2}\) as required.
In [52, 43], orderings on forward events have been defined using forward-only rooted paths; in fact, the definitions coincide for pre-reversible LTSIs.
Figure 5. The LTSI in Example 4.22.
**Definition 4.25** ([52, 43]).: Let \(\mathcal{L}\) be an LTSI. Let \(e,e^{\prime}\) be forward events of \(\mathcal{L}\). Let \(e\leq_{\mathsf{f}}e^{\prime}\) iff for all rooted forward-only paths \(r\), if \(\sharp(r,e^{\prime})>0\) then \(\sharp(r,e)>0\).
**Lemma 4.26**.: _For any LTSI, and any forward events \(e,e^{\prime}\), \(e\leq e^{\prime}\) implies \(e\leq_{\mathsf{f}}e^{\prime}\). If an LTSI is pre-reversible then \(e\leq_{\mathsf{f}}e^{\prime}\) implies \(e\leq e^{\prime}\)._
Proof.: Straightforward using PL and Lemma 4.12.
**Definition 4.27**.: Two forward events \(e,e^{\prime}\) are in _conflict_, written \(e\)\(\#\)\(e^{\prime}\), if there is no rooted path \(r\) such that \(\sharp(r,e)>0\) and \(\sharp(r,e^{\prime})>0\).
Much as for orderings, conflict on events has been defined previously using forward-only rooted paths [52, 43]; in fact, the definitions coincide for pre-reversible LTSIs. We omit the details.
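By Lemma 4.26 and the remark above, for pre-reversible LTSIs both \(\leq\) and \(\#\) can be computed from forward-only rooted paths alone. The sketch below does so for a finite LTSI whose forward part is acyclic; the names `roots` (the irreversible states) and `fwd` (the forward transitions) are our own, and the encoding is the same illustrative one as in the earlier sketches.

```python
def forward_rooted_paths(roots, fwd):
    """All forward-only rooted paths (including the empty one and every
    prefix), as tuples of transitions.  Assumes the forward part is finite
    and acyclic, so the enumeration terminates."""
    def extend(state, path):
        yield path
        for t in fwd:
            if t[0] == state:
                yield from extend(t[2], path + (t,))
    for s in roots:
        yield from extend(s, ())

def positive(path, e):
    """#(path, e) > 0; on a forward-only path this is just membership."""
    return any(t in e for t in path)

def leq(e1, e2, paths):
    """e1 <= e2 on forward events (Definition 4.23); by Lemma 4.26 it is
    enough to quantify over forward-only rooted paths."""
    return all(positive(p, e1) for p in paths if positive(p, e2))

def conflict(e1, e2, paths):
    """e1 # e2 (Definition 4.27), forward-only version."""
    return not any(positive(p, e1) and positive(p, e2) for p in paths)
```

The paths should be materialised once, e.g. `paths = list(forward_rooted_paths(roots, fwd))`, and then passed to `leq` and `conflict`.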
We can now introduce the main result of this section.
**Definition 4.28** (Polychotomy).: Let \(\mathcal{L}\) be a pre-reversible LTSI. We say that \(\mathcal{L}\) satisfies _polychotomy_ if whenever \(e,e^{\prime}\) are _forward_ events, then exactly one of the following holds:
1. \(e=e^{\prime}\);
2. \(e<e^{\prime}\);
3. \(e^{\prime}<e\);
4. \(e\#\)\(e^{\prime}\); or
5. \(e\)\(\mathsf{ci}\)\(e^{\prime}\).
**Proposition 4.29** (Polychotomy).: _Assume an LTSI is pre-reversible. Then polychotomy holds._
Proof.: Consider two forward events \(e\) and \(e^{\prime}\) which may or may not be equal.
We first check mutual exclusivity. Suppose \(e=e^{\prime}\). Then \(e<e\) is impossible by definition of \(<\). Also \(e\) cannot be in conflict with itself (we can use WF to show that there is at least one rooted path). Finally, \(e\)\(\mathsf{ci}\)\(e\) is impossible by Proposition 4.16. From now on we assume \(e\neq e^{\prime}\).
Next suppose \(e<e^{\prime}\). We can rule out \(e^{\prime}<e\) using Lemma 4.24.
Using Lemma 4.26, we know that \(e<_{\mathsf{f}}e^{\prime}\), hence there must be some rooted forward-only path with \(e\) followed by \(e^{\prime}\) (WF ensures at least one rooted path exists), and so \(e\) and \(e^{\prime}\) are not in conflict. Finally \(e\)\(\mathsf{ci}\)\(e^{\prime}\) implies that there are two coinitial transitions \(t\in e\), \(t^{\prime}\in e^{\prime}\) which are independent. Using SP to complete the square we see that \(e<e^{\prime}\) is impossible by NRE, which holds by Proposition 4.21.
Similarly we see that \(e^{\prime}<e\) implies that \(e\) and \(e^{\prime}\) are not in conflict and not independent.
Next suppose that \(e\)\(\#\)\(e^{\prime}\). If \(e\)\(\mathsf{ci}\)\(e^{\prime}\) then there are two coinitial transitions \(t\in e\), \(t^{\prime}\in e^{\prime}\) which are independent. Using SP to complete the square and WF we see that we have a rooted forward-only path containing occurrences of both \(e\) and \(e^{\prime}\) contradicting them being in conflict.
Suppose that none of (1)-(4) hold. We must show (5). Since \(e,e^{\prime}\) do not conflict, there is a rooted path \(r\) starting at some irreversible \(I\) such that \(\sharp(r,e)>0\) and \(\sharp(r,e^{\prime})>0\). If more than one such path exists, choose one of minimal length. W.l.o.g. suppose that \(r\) finishes with \(t^{\prime}\in e^{\prime}\) at \(P\). Since not \(e<e^{\prime}\), using Lemma 4.26 also \(e<_{\mathsf{f}}e^{\prime}\) does not hold; hence there is another forward-only path \(r^{\prime}\) from some irreversible \(I^{\prime}\) finishing with \(t^{\prime\prime}\in e^{\prime}\) at \(Q\) such that \(\sharp(r^{\prime},e)=0\). By Lemma 4.19 there is a path \(s\) from \(Q\) to \(P\) such that \(e^{\prime}\)\(\mathsf{ci}\)\([u]\) for every \(u\) in \(s\). Using Proposition 3.6 we deduce that \(I^{\prime}=I\). By CC \(r\approx r^{\prime}s\) and so by Lemma 4.12\(\sharp(s,e)>0\) and \(s\) must contain \(u\in e\), yielding \(e\)\(\mathsf{ci}\)\(e^{\prime}\) as required.
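Polychotomy can also be read as a classification procedure. The sketch below, which reuses `leq`, `conflict` and `coinitially_independent` from the earlier sketches, assigns to any pair of forward events of a finite pre-reversible LTSI exactly one of the five cases.

```python
def classify(e1, e2, paths, indep):
    """Polychotomy (Definition 4.28) for forward events e1, e2.  For a
    pre-reversible LTSI exactly one branch applies (Proposition 4.29)."""
    if e1 == e2:
        return 'equal'
    if leq(e1, e2, paths):
        return 'e1 < e2'
    if leq(e2, e1, paths):
        return 'e2 < e1'
    if conflict(e1, e2, paths):
        return 'conflict'
    if coinitially_independent(e1, e2, indep):
        return 'coinitially independent'
    return 'unclassified'  # unreachable when the LTSI is pre-reversible
```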
## 5. Causal Safety and Causal Liveness
In the literature, causal consistent reversibility is frequently informally described by saying that "a transition can be undone if and only if each of its consequences, if any, has been undone" (see, e.g., [32]). In this section we study this property, where the two implications will be referred to as _causal safety_ and _causal liveness_. We provide three different formalisations of such properties, based on independence of transitions (Section 5.1), independence of events (Section 5.2), and ordering of events (Section 5.3), and study their relationships. In Figure 6 we show the relationships between the various axioms and properties we shall study in this section and Section 6.
### CS and CL via Independence of Transitions
We first define causal safety and liveness using the independence relation.
**Definition 5.1**.: Let \(\mathcal{L}\) be an LTSI.
1. We say that \(\mathcal{L}\) is _causally safe (CS\({}_{\iota}\))_ if whenever \(t_{0}:P\xrightarrow{a}Q\), \(r:Q\xrightarrow{\rho}_{*}R\), \(\sharp(r,[t_{0}])=0\) and \(t_{0}^{\dagger}:S\xrightarrow{a}R\) with \(t_{0}\sim t_{0}^{\dagger}\), then \(\underline{t_{0}}\;\iota\;t\) for all \(t\) in \(r\) such that \(\sharp(r,[t])>0\).
2. We say that \(\mathcal{L}\) is _causally live (CL\({}_{\iota}\))_ if whenever \(t_{0}:P\xrightarrow{a}Q\), \(r:Q\xrightarrow{\rho}_{*}R\) and \(\sharp(r,[t_{0}])=0\) and \(\underline{t_{0}}\ \iota\ t\), for all \(t\) in \(r\) such that \(\sharp(r,[t])>0\), then we have \(t_{0}^{\dagger}:S\xrightarrow{a}R\) with \(t_{0}\sim t_{0}^{\dagger}\).
Figure 6. Implications between properties all assuming pre-reversible. Note that CS\({}_{\mathsf{ci}}\) holds for pre-reversible LTSIs (Theorem 5.20). All implications are strict, and there are no further implications between the sixteen properties shown, in view of the examples cited.
Properties \(\text{CS}_{\iota}\) and \(\text{CL}_{\iota}\) both consider a (forward) transition \(t_{0}:P\xrightarrow{a}Q\) followed by a path \(r\) where the number of occurrences in \(r\) of transitions that belong to the same event as \(t_{0}\) is zero. \(\text{CS}_{\iota}\) states that if after path \(r\) a transition \(t_{0}^{\dagger}\) can be undone, where \(t_{0}\) and \(t_{0}^{\dagger}\) belong to the same event, then the reverse of \(t_{0}\) is independent of all transitions \(t\) where the number of occurrences in \(r\) of the event of \(t\) is positive. Dually, \(\text{CL}_{\iota}\) requires that if the reverse of \(t_{0}\) is independent of all transitions whose events have a positive number of occurrences in \(r\), then it can be undone.
_Remark 5.2_.: In the definition of \(\text{CS}_{\iota}\) the condition that \(\sharp(r,[t_{0}])=0\) can be deduced from the other conditions using Lemma 4.20, provided that the LTSI is pre-reversible.
We use the reverse of \(t_{0}\) when considering independence from \(t\) because our axioms BTI, SP and PCI focus on _coinitial_ independence rather than independence of consecutive transitions in a trace. Take the simplest case where \(r\) is a single transition \(t:Q\xrightarrow{b}R\). First assume \(\underline{t_{0}}\ \iota\ t\); note that this is coinitial independence. We can use SP and PCI to get \(t_{0}^{\dagger}:S\xrightarrow{a}R\) with \(t_{0}\sim t_{0}^{\dagger}\), which is an example of causal liveness. Conversely, if we assume \(t_{0}^{\dagger}:S\xrightarrow{a}R\) with \(t_{0}\sim t_{0}^{\dagger}\), we can use BTI, SP, BLD and PCI to get a diamond with \(\underline{t_{0}}\ \iota\ t\), which is an example of causal safety.
Note that in the discussion above, to prove causal safety we also need to consider the case \(r=t\underline{t}t\). Since \([\underline{t}]\) has a negative number of occurrences, we only need to show that \(\underline{t_{0}}\ \iota\ t\), which can be proved as above. However, if we replaced the condition \(\sharp(r,[t])>0\) with \(\sharp(r,[t])\neq 0\), we would also need to show \(\underline{t_{0}}\ \iota\ \underline{t}\), which does not follow from the axioms above. Intuitively, requiring \(\underline{t_{0}}\ \iota\ \underline{t}\) would make little sense, since all the occurrences of \(\underline{t}\) could be cancelled against corresponding occurrences of \(t\). This is why we decided to require \(\sharp(r,[t])>0\).
We have just seen that existing axioms are sufficient to show \(\text{CS}_{\iota}\) and \(\text{CL}_{\iota}\) in the case where trace \(r\) consists of a single transition. However, existing axioms are not enough for general \(r\), as we will show in Examples 5.7 and 5.8. Thus, we introduce the following axiom, which states that independence does not depend on the choice of the representative inside an event.
**Definition 5.3**.: **Independence respects events (IRE)**: Whenever \(t\sim t^{\prime}\ \iota\ u\) we have \(t\ \iota\ u\).
IRE is one of the conditions in the definition of transition systems with independence [49, Definition 3.7].
IRE allows us to relate coinitial independence on events and independence on transitions.
**Lemma 5.4**.: _Assume an LTSI satisfies IRE. If \([t]\ \mathsf{ci}\ [u]\) then \(t\ \iota\ u\)._
Proof.: Immediate.
Together with the axioms for pre-reversibility, IRE is enough to show both \(\text{CS}_{\iota}\) and \(\text{CL}_{\iota}\).
**Theorem 5.5**.: _Let a pre-reversible LTSI satisfy IRE. Then it satisfies \(\text{CS}_{\iota}\)._
Proof.: Suppose \(t_{0}:P\xrightarrow{a}Q\), \(r:Q\xrightarrow{\rho}_{*}R\) and \(t_{0}^{\dagger}:S\xrightarrow{a}R\) with \(t_{0}\sim t_{0}^{\dagger}\). By Lemma 4.19 there is a path \(s\) from \(Q\) to \(R\) such that for all \(u\) in \(s\) we have \([t_{0}]\ \mathsf{ci}\ [u]\). We deduce by Lemmas 4.15 and 5.4 that for all \(u\) in \(s\) we have \(\underline{t_{0}}\ \iota\ u\). By CC, \(r\approx s\).
Take \(t\) in \(r\) such that \(\sharp(r,[t])>0\). Then \(\sharp(s,[t])>0\), thanks to Lemma 4.12. But then there is \(u\) in \(s\) such that \(u\sim t\). We have \(\underline{t_{0}}\ \iota\ u\) and so \(\underline{t_{0}}\ \iota\ t\), using IRE, as desired.
**Theorem 5.6**.: _Let a pre-reversible LTSI satisfy IRE. Then it satisfies CL\({}_{\iota}\)._
Proof.: Suppose \(t_{0}:P\xrightarrow{a}Q\), \(r:Q\xrightarrow{\rho}_{*}R\) and \(\sharp(r,[t_{0}])=0\) and \(t_{0}\ \iota\ t\), for all \(t\) in \(r\) such that \(\sharp(r,[t])>0\). We have to show that there is \(t_{0}^{\dagger}:S\xrightarrow{a}R\) with \(t_{0}\sim t_{0}^{\dagger}\).
Thanks to PL, there is \(T\) such that \(b:P\xrightarrow{\rho_{b}}_{*}T\) and \(f:T\xrightarrow{\rho_{f}}_{*}R\), with \(b\) backward and \(f\) forward. By CC, \(t_{0}r\approx bf\). Since \(\sharp(r,[t_{0}])=0\), thanks to Lemma 4.12 we have \(\sharp(bf,[t_{0}])=1\). As a consequence, there is a transition \(t_{0}^{\prime}:P^{\prime}\xrightarrow{a}Q^{\prime}\in[t_{0}]\) in \(f\). This \(t_{0}^{\prime}\) is in fact the unique transition in \([t_{0}]\) belonging to \(f\) by Proposition 4.21. Let \(f^{\prime}\) be the portion of \(f\) from \(Q^{\prime}\) to \(R\). If we can show that \(\underline{t_{0}^{\prime}}\ \iota\ t^{\prime\prime}\) for each transition \(t^{\prime\prime}\) in \(f^{\prime}\), then the thesis will follow by commuting \(t_{0}^{\prime}\) with all such transitions using SP and IRE.
By Lemma 4.19 there is a path \(s\) from \(Q\) to \(Q^{\prime}\) such that \([t_{0}]\ \mathsf{ci}\ [u]\) for all \(u\) in \(s\). By CC, \(r\approx sf^{\prime}\). Take any \(t^{\prime\prime}\) in \(f^{\prime}\). By Lemma 4.12, \(\sharp(r,[t^{\prime\prime}])=\sharp(s,[t^{\prime\prime}])+\sharp(f^{\prime},[t ^{\prime\prime}])\). If \(\sharp(s,[t^{\prime\prime}])<0\) then there is \(u\) in \(s\) such that \(u\sim\underline{t^{\prime\prime}}\). Now \([t_{0}]\ \mathsf{ci}\ [u]=[\underline{t^{\prime\prime}}]\). Therefore \([t_{0}]\ \mathsf{ci}\ [t^{\prime\prime}]\) by Lemma 4.15, and \(\underline{t_{0}}\ \iota\ t^{\prime\prime}\) by Lemma 5.4. Suppose instead \(\sharp(s,[t^{\prime\prime}])\geq 0\). Since \(\sharp(f^{\prime},[t^{\prime\prime}])>0\), we have \(\sharp(r,[t^{\prime\prime}])>0\). So there is \(u\) in \(r\) such that \(u\sim t^{\prime\prime}\), and by hypothesis \(\underline{t_{0}}\ \iota\ u\), so that \(\underline{t_{0}^{\prime}}\ \iota\ t^{\prime\prime}\) using IRE.
We now give examples of LTSIs which are pre-reversible and where CS\({}_{\iota}\) and CL\({}_{\iota}\) fail.
_Example 5.7_.: Consider the LTSI shown in Figure 7 including the dashed transitions. We add coinitial independence as given by BTI and PCI. The LTSI is pre-reversible. However CS\({}_{\iota}\) fails. We have \(t_{0}:P\xrightarrow{a}Q\), \(Q\xrightarrow{bc}_{*}R\) and \(t_{0}^{\dagger}:S\xrightarrow{a}R\) with \(t_{0}\sim t_{0}^{\dagger}\). If CS\({}_{\iota}\) held we could deduce that \(\underline{t_{0}}\ \iota\ (Q^{\prime},c,R)\), which is not the case. Similarly we see that IRE fails, since \(\underline{t_{0}}\sim(Q^{\prime},\underline{a},P^{\prime})\ \iota\ (Q^{\prime},c,R)\) but not \(\underline{t_{0}}\ \iota\ (Q^{\prime},c,R)\). Note, however, that CL\({}_{\iota}\) holds, since only transitions inside the same diamond are independent, and transitions on one side of the diamond are undone by the corresponding transition on the opposite side.
_Example 5.8_.: Consider the LTSI shown in Figure 7 excluding the dashed transitions. We add coinitial independence as given by BTI and PCI. We also add \((Q,\underline{a},P)\ \iota\ (Q^{\prime},c,R)\). The LTSI is pre-reversible. However CL\({}_{\iota}\) fails. We have \(t_{0}:P\xrightarrow{a}Q\), \(Q\xrightarrow{bc}_{*}R\) and \(\underline{t_{0}}\ \iota\ (Q,b,Q^{\prime})\), \(\underline{t_{0}}\ \iota\ (Q^{\prime},c,R)\). Clearly CL\({}_{\iota}\) fails, since we cannot reverse the \(a\)-transition at \(R\). IRE fails since \((Q^{\prime},\underline{a},P^{\prime})\sim\underline{t_{0}}\ \iota\ (Q^{\prime},c,R)\) but not \((Q^{\prime},\underline{a},P^{\prime})\ \iota\ (Q^{\prime},c,R)\). Note, however, that CS\({}_{\iota}\) holds since the only way to undo transitions is with transitions on the opposite side of the same diamond, and the path connecting them is the third side of the same diamond. Hence, the condition on independence holds thanks to BTI and PCI.
Examples 5.7 and 5.8 show that the stipulation of IRE cannot be omitted in the statements of Theorems 5.5 and 5.6, respectively. These examples also show that we cannot deduce CS\({}_{\iota}\) or CL\({}_{\iota}\) from CC, nor one from the other.
Figure 7. The LTSIs in Examples 5.7 and 5.8.
_Example 5.9_ (CS\({}_{\iota}\) and CL\({}_{\iota}\) do not imply CC).: Consider the LTSI with states \(P,Q,R,S\) and transitions \(t:P\xrightarrow{a}Q\), \(u:P\xrightarrow{b}R\), \(t^{\prime}:R\xrightarrow{a^{\prime}}S\) and \(u^{\prime}:Q\xrightarrow{b^{\prime}}S\), with empty independence relation. This is essentially the same as Example 3.16, except that we have disambiguated the transition labels, to reflect that the four transitions form four different events. Then CC does not hold, but we claim that both CS\({}_{\iota}\) and CL\({}_{\iota}\) hold.
CS\({}_{\iota}\): There are four possible cases to check, depending on the initial forward transition. Consider first \(t:P\xrightarrow{a}Q\) and some \(r:Q\xrightarrow{\rho}_{*}Q^{\prime}\), \(P^{\prime}\xrightarrow{a}Q^{\prime}\), where \(\sharp(r,[t])=0\) and \((P,a,Q)\sim(P^{\prime},a,Q^{\prime})\). Clearly \(P^{\prime}=P\) and \(Q^{\prime}=Q\). To verify CS\({}_{\iota}\) in this case, it is enough to show that \(\sharp(r,[u])=\sharp(r,[t^{\prime}])=\sharp(r,[u^{\prime}])=0\). Since \(r\) is a circuit, it enters each state as often as it leaves it. Furthermore, since \(\sharp(r,[t])=0\), \(r\) enters \(Q\) from \(P\) as often as it leaves \(Q\) towards \(P\). Hence \(r\) must enter \(Q\) from \(S\) as often as it leaves \(Q\) towards \(S\), meaning that \(\sharp(r,[u^{\prime}])=0\). We can similarly deduce that \(\sharp(r,[t^{\prime}])=0\) and \(\sharp(r,[u])=0\). The remaining three cases with initial transitions \(u\), \(t^{\prime}\) and \(u^{\prime}\) are similar to the case for \(t\).
CL\({}_{\iota}\): Again there are four cases to check, depending on the initial forward transition. Consider first \(t:P\xrightarrow{a}Q\) and some \(r:Q\xrightarrow{\rho}_{*}Q^{\prime}\) where \(\sharp(r,[t])=0\) and for all \(t^{\prime\prime}\) in \(r\) we have \(\sharp(r,[t^{\prime\prime}])\leq 0\) (indeed, if \(\sharp(r,[t^{\prime\prime}])>0\) we would require \(\underline{t}\)\(\iota\)\(t^{\prime\prime}\), which is false since the independence relation is empty, hence the condition for CL\({}_{\iota}\) would hold trivially). However, if \(\sharp(r,[t^{\prime\prime}])<0\) then there is \(t^{\prime\prime\prime}\) in \(r\) with \([t^{\prime\prime\prime}]=[\underline{t^{\prime\prime}}]\) (in this example actually \(t^{\prime\prime\prime}=\underline{t^{\prime\prime}}\)) and \(\sharp(r,[\underline{t^{\prime\prime}}])>0\), but, for the same reason as above, we cannot have \(\sharp(r,[\underline{t^{\prime\prime}}])>0\) since the independence relation is empty. Hence for each \(t^{\prime\prime}\) we have \(\sharp(r,[t^{\prime\prime}])=0\), which implies \(Q^{\prime}=Q\), since the net rotation (cfr. Figure 4) of each transition is zero, and so the net rotation of \(r\) is zero. The thesis follows trivially. The remaining three cases with initial transitions \(u\), \(t^{\prime}\) and \(u^{\prime}\) are similar to the case for \(t\). \(\diamond\)
The next axiom states that independence is fully determined by its restriction to coinitial transitions. It is related to axiom (E) of [49, page 325], but here we allow reverse as well as forward transitions.
**Definition 5.10** (**Independence of events is coinitial (IEC))**.: If \(t_{1}\ \iota\ t_{2}\) then \([t_{1}]\ \mathsf{ci}\ [t_{2}]\).
Thanks to previous axioms, independence behaves well w.r.t. reversing.
**Definition 5.11** (Reversing preserves independence (RPI)).: If \(t\)\(\iota\)\(t^{\prime}\) then \(\underline{t}\)\(\iota\)\(t^{\prime}\).
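IRE, IEC and RPI are universally quantified conditions over transitions, so on a finite LTSI they can be checked by direct enumeration. A sketch, reusing `reverse`, `coinitially_independent` and a list of events `evs` as produced by the earlier sketches:

```python
def event_of(t, evs):
    """The event [t], given the list of events `evs` (sets of transitions)."""
    return next(e for e in evs if t in e)

def ind(t, u, indep):
    return frozenset((t, u)) in indep

def satisfies_ire(ts, indep, evs):
    """IRE (Definition 5.3): t ~ t' iota u implies t iota u."""
    return all(ind(t, u, indep)
               for t in ts for tp in event_of(t, evs) for u in ts
               if ind(tp, u, indep))

def satisfies_iec(ts, indep, evs):
    """IEC (Definition 5.10): t1 iota t2 implies [t1] ci [t2]."""
    return all(coinitially_independent(event_of(t, evs), event_of(u, evs), indep)
               for t in ts for u in ts if ind(t, u, indep))

def satisfies_rpi(ts, indep):
    """RPI (Definition 5.11): t iota t' implies reverse(t) iota t'."""
    return all(ind(reverse(t), u, indep)
               for t in ts for u in ts if ind(t, u, indep))
```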
**Proposition 5.12**.: _If an LTSI satisfies SP, PCI, IRE, IEC then it also satisfies RPI._
Proof.: Suppose \(t\)\(\iota\)\(u\). We must show \(\underline{t}\)\(\iota\)\(u\). By IEC we have \(t^{\prime}\sim t\), \(u^{\prime}\sim u\) such that \(t^{\prime}\)\(\iota\)\(u^{\prime}\) and \(t^{\prime},u^{\prime}\) are coinitial. By SP there is a diamond \(t^{\prime},u^{\prime},t^{\prime\prime},u^{\prime\prime}\) with \(t^{\prime}\sim t^{\prime\prime}\), \(u^{\prime}\sim u^{\prime\prime}\). Then \(\underline{t^{\prime}}\)\(\iota\)\(u^{\prime\prime}\) using PCI. Then \(\underline{t}\sim\underline{t^{\prime}}\)\(\iota\)\(u^{\prime\prime}\sim u\) and so by IRE \(\underline{t}\)\(\iota\)\(u\) as required.
We can use IEC or IRE to show that transitions which are part of the same event cannot be independent.
**Definition 5.13** (Event Coherence (ECh)).: If \(t\sim t^{\prime}\) then \(t\) and \(t^{\prime}\) are not independent.
**Proposition 5.14**.: _If a pre-reversible LTSI satisfies either IRE or IEC then it also satisfies ECh._
Proof.: Assume for a contradiction that \(t\sim t^{\prime}\) and \(t\)\(\iota\)\(t^{\prime}\). First suppose that IRE holds. We deduce \(t\)\(\iota\)\(t\), contradicting irreflexivity of \(\iota\). Now suppose that IEC holds. Then \([t]\)ci\([t^{\prime}]\), and so \([t]\)ci\([t]\), contradicting irreflexivity of ci (Proposition 4.16).
All the axioms that we have introduced so far are independent, i.e. none is derivable from the remaining axioms.
The next example shows that IRE is not implied by other axioms.
_Example 5.15_.: Let \(t:P\xrightarrow{a}Q\), \(u:P\xrightarrow{b}R\), \(u^{\prime}:Q\xrightarrow{b}S\), \(t^{\prime}:R\xrightarrow{a}S\), with \(t\;\iota\;u\), \(\underline{u}\;\iota\;t^{\prime}\), \(\underline{t^{\prime}}\;\iota\;\underline{u^{\prime}}\), \(u^{\prime}\;\iota\;\underline{t}\), namely we have independence at all corners of the diamond. Here we have two forward events, labelled with \(a\) and \(b\) respectively. We have \(t^{\prime}\sim t\;\iota\;u\) but not \(t^{\prime}\;\iota\;u\), so that IRE fails. However axioms SP, BTI, WF, PCI and IEC hold. \(\diamond\)
The next example shows that IEC is not implied by other axioms.
_Example 5.16_.: Let \(t:P\xrightarrow{a}Q\), \(u:R\xrightarrow{b}S\), where all states are distinct, and let \(t\;\iota\;u\). Then IEC fails; however axioms SP, BTI, WF, PCI and IRE hold. \(\diamond\)
The counterexample above remains valid also if \(Q=R\), as shown below.
_Example 5.17_.: Let \(t:P\xrightarrow{a}Q\), \(u:Q\xrightarrow{b}S\), and let \(t\;\iota\;u\). Then IEC fails; however axioms SP, BTI, WF, PCI and IRE hold. \(\diamond\)
We can now prove the independence result.
**Proposition 5.18**.: _The axioms SP, BTI, WF, PCI, IRE, IEC are independent of each other._
Proof.: For each of the six axioms we give an LTSI which satisfies the other five axioms but not the axiom itself. In each case it is straightforward to check that the remaining axioms hold.
SP: Let \(t:P\xrightarrow{a}Q\) and \(u:P\xrightarrow{b}R\) with \(t\;\iota\;u\).
BTI: Let \(P\xrightarrow{a}R\) and \(Q\xrightarrow{b}R\) with an empty independence relation (Example 3.14).
WF: Let \(P_{i+1}\xrightarrow{a}P_{i}\) for \(i=0,1,\ldots\) with an empty independence relation.
PCI: Let \(t:P\xrightarrow{a}Q\), \(u:P\xrightarrow{b}R\), \(u^{\prime}:Q\xrightarrow{b}S\), \(t^{\prime}:R\xrightarrow{a}S\), with \(\underline{t^{\prime}}\;\iota\;\underline{u^{\prime}}\).
IRE: See Example 5.15.
IEC: See Example 5.16 or Example 5.17.
### CS and CL via Independent Events
We now introduce a second version of causal safety and liveness, which uses independence like CS\({}_{\iota}\) and CL\({}_{\iota}\), but on events rather than on transitions. More precisely, we use coinitial independence \(\mathsf{ci}\).
**Definition 5.19**.: Let \(\mathcal{L}=(\mathsf{Proc},\mathsf{Lab},\rightarrow,\iota)\) be an LTSI.
1. We say that \(\mathcal{L}\) is _coinitially causally safe_ (CS\({}_{\mathsf{ci}}\)) if whenever \(t_{0}:P\xrightarrow{a}Q\), \(r:Q\xrightarrow{\rho}_{*}R\), \(\sharp(r,[t_{0}])=0\) and \(t_{0}^{\dagger}:S\xrightarrow{a}R\) with \(t_{0}\sim t_{0}^{\dagger}\), then \([\underline{t_{0}}]\;\mathsf{ci}\;e\) for all events \(e\) such that \(\sharp(r,e)>0\).
2. We say that \(\mathcal{L}\) is _coinitially causally live_ (CL\({}_{\mathsf{ci}}\)) if whenever \(t_{0}:P\xrightarrow{a}Q\), \(r:Q\xrightarrow{\rho}_{*}R\) and \(\sharp(r,[t_{0}])=0\) and \([\underline{t_{0}}]\;\mathsf{ci}\;e\), for all events \(e\) such that \(\sharp(r,e)>0\), then we have \(t_{0}^{\dagger}:S\xrightarrow{a}R\) with \(t_{0}\sim t_{0}^{\dagger}\).
Note that in Definition 5.19 we operate at the level of events, rather than at the level of transitions as in Definition 5.1. Also note that we could replace \([\underline{t_{0}}]\;\mathsf{ci}\;e\) by \([t_{0}]\;\mathsf{ci}\;e\) using Lemma 4.15. We have used the former for compatibility with Definition 5.1.
**Theorem 5.20**.: _If an LTSI is pre-reversible then it satisfies CS\({}_{\mathsf{ci}}\)._
Proof.: Suppose \(t_{0}:P\xrightarrow{a}Q\), \(r:Q\xrightarrow{\rho}_{*}R\) and \(t_{0}^{\dagger}:S\xrightarrow{a}R\) with \(t_{0}\sim t_{0}^{\dagger}\). By Lemma 4.19 there is a path \(s\) from \(Q\) to \(R\) such that for all \(u\) in \(s\) we have \([t_{0}]\;\mathsf{ci}\;[u]\). By CC, \(r\approx s\).
Suppose that \(e\) is an event and \(\sharp(r,e)>0\). Then \(\sharp(s,e)>0\), thanks to Lemma 4.12. Hence there is \(u\) in \(s\) such that \([u]=e\). Since \([t_{0}]\;\mathsf{ci}\;[u]\), also \([t_{0}]\;\mathsf{ci}\;e\). Hence \([\underline{t_{0}}]\;\mathsf{ci}\;e\) using Lemma 4.15.
We now introduce a weaker version of axiom IRE (Definition 5.3).
**Definition 5.21** (**Coinitial IRE (CIRE))**.: If \([t]\) ci \([u]\) and \(t,u\) are coinitial then \(t\;\iota\;u\).
It is easy to see that IRE implies CIRE. By considering Example 5.15 we see that an LTSI can be pre-reversible and satisfy CIRE (and IEC) but not IRE. Also, CIRE is not sufficient to ensure ECh (Definition 5.13) holds, as shown by the next example.
_Example 5.22_.: Let \(t:P\xrightarrow{a}Q\), \(u:P\xrightarrow{b}R\), \(u^{\prime}:Q\xrightarrow{b}S\), \(t^{\prime}:R\xrightarrow{a}S\). We add independence between all pairs of distinct transitions drawn from \(t,u,t^{\prime},u^{\prime}\). We furthermore add those independent pairs derived from closing under RPI. We see that the LTSI is pre-reversible. It satisfies CIRE and RPI, but not ECh, since \(t\sim t^{\prime}\) and also \(t\;\iota\;t^{\prime}\). \(\diamond\)
The next example shows that notions of CS/CL based on independence on transitions and on coinitial independence of events are not equivalent.
_Example 5.23_.: Consider the LTSI in Figure 8. Independence is given by closing under BTI and PCI. Clearly WF and SP hold; hence the LTSI is pre-reversible and satisfies \(\text{CS}_{\text{ci}}\). There are three events, labelled \(a,b,c\), which are all independent of each other. Furthermore IEC holds, but not CIRE (noting that the leftmost \(b\) and \(c\) transitions are coinitial but not independent, while the corresponding events are coinitially independent thanks to the rightmost square). Also \(\text{CL}_{\text{ci}}\) fails: consider \(P\xrightarrow{a}Q\xrightarrow{b}R\), where \(a\) cannot be reversed at \(R\) even though \([Q\xrightarrow{a}P]\) ci \([Q\xrightarrow{b}R]\). Differently from \(\text{CS}_{\text{ci}}\), \(\text{CS}_{\iota}\) fails: e.g., from the leftmost corner one can do \(b\underline{c}\underline{b}\), reversing \(b\), but the inverse of the first \(b\)-transition is not independent with the \(c\)-transition. Differently from \(\text{CL}_{\text{ci}}\), \(\text{CL}_{\iota}\) holds: the only state at which any event that has occurred cannot be immediately reversed is \(R\). So we can restrict attention to instances of \(P^{\prime}\xrightarrow{a}Q^{\prime}\), \(r:Q^{\prime}\xrightarrow{\rho}_{*}R\). Furthermore \(r\) must finish with either \(Q\xrightarrow{b}R\) or the \(c\) transition to \(R\). These two transitions are not independent with any inverse \(a\) transition. Hence \(\text{CL}_{\iota}\) holds in these cases vacuously. \(\diamond\)
**Proposition 5.24**.: _Let \(\mathcal{L}\) be a pre-reversible LTSI. If \(\mathcal{L}\) satisfies \(\text{CS}_{\iota}\) and RPI then \(\mathcal{L}\) also satisfies CIRE._
Proof.: Assume that \(\mathcal{L}\) satisfies \(\text{CS}_{\iota}\). Suppose that \(t,u\) are coinitial transitions such that \([t]\;\text{ci}\;[u]\). We must show that \(t\;\iota\;u\). We can suppose that at least one of \(t\) and \(u\) is forward; otherwise we can obtain \(t\;\iota\;u\) from BTI. Without loss of generality, suppose that \(t:P\xrightarrow{a}Q\) is forward. Since \([t]\;\text{ci}\;[u]\), there are coinitial \(t^{\prime}:P^{\prime}\xrightarrow{a}Q^{\prime}\) and \(u^{\prime}\) such that \(t\sim t^{\prime}\;\iota\;u^{\prime}\sim u\). By SP we can complete a square containing \(t^{\prime},u^{\prime}\) and two further transitions \(t^{\prime\prime}\sim t^{\prime}\) and \(u^{\prime\prime}\sim u^{\prime}\) both with the same target \(R\).
By Lemma 4.19 there is a path \(s:Q\xrightarrow{\rho}_{*}Q^{\prime}\). Let \(r^{\prime}=\underline{t}\,u\,\underline{u}\,t\,s\) (a path from \(Q\) to \(Q^{\prime}\)), and consider the path \(r=r^{\prime}u^{\prime\prime}\) from \(Q\) to \(R\). We see that \(\sharp(r,[t])=0\), using Lemma 4.20 applied to \(t,t^{\prime\prime}\) and \(r\). Hence \(\text{CS}_{\iota}\) applies to \(t\) together with \(r\) and \(t^{\prime\prime}\). We deduce that \(\underline{t}\ \iota\ u_{1}\) for all \(u_{1}\) in \(r\) such that \(\sharp(r,[u_{1}])>0\). We see that \(\sharp(tr^{\prime},[u])=0\) using Lemma 4.20 applied to \(\underline{u},\underline{u}^{\prime\prime}\) and \(tr^{\prime}\). Noting that \(\mathsf{und}([t])\neq\mathsf{und}([u])\) by Proposition 4.17, we obtain \(\sharp(r,[u])=1\) and so \(\underline{t}\;\iota\;u\). We deduce \(t\;\iota\;u\) using RPI.

Figure 8. The LTSI in Example 5.23.
We cannot omit the assumption of RPI in Proposition 5.24, in view of the following example.
_Example 5.25_.: Consider the 'half cube' LTSI with transitions \(a,b,c\) in Figure 9. We add independence as given by BTI and PCI, and also between all pairs of transitions \(t,u\) where at least one of \(t,u\) is backward, and \(t\not\sim u\), \(t\not\sim\underline{u}\). Clearly RPI does not hold. The LTSI is pre-reversible, and IEC holds. CIRE does not hold; note that the \(a\) and \(b\)-events are independent, but after performing \(c\) there are coinitial \(a\) and \(b\)-transitions which are not independent. Both \(\mathrm{CL}_{\mathsf{ci}}\) and \(\mathrm{CL}_{\iota}\) hold: note that at any state, all events that have occurred can be reversed immediately. We have ensured that \(\mathrm{CS}_{\iota}\) holds, since all independence deducible from \(\mathrm{CS}_{\iota}\) must involve a backward transition \(\underline{t_{0}}\) and a transition \(u\) such that \(t_{0}\not\sim u\) and \(t_{0}\not\sim\underline{u}\). \(\diamond\)
We can characterise CIRE as being equivalent to coinitial transitions with a common derivative process being independent.
**Proposition 5.26**.: _Let \(\mathcal{L}\) be a pre-reversible LTSI. The following are equivalent:_
1. \(\mathcal{L}\) _satisfies CIRE;_
2. _If_ \(t:P\xrightarrow{\alpha}Q\)_,_ \(r:Q\xrightarrow{\rho}_{*}S\) _and_ \(u:P\xrightarrow{\beta}R\)_,_ \(s:R\xrightarrow{\sigma}_{*}S\) _where_ \(\mathsf{und}(\alpha)\neq\mathsf{und}(\beta)\) _and_ \(\sharp(r,[t])=\sharp(s,[u])=0\) _then_ \(t\;\iota\;u\)_._
Proof.: Assume (1). Let \(t:P\xrightarrow{\alpha}Q\), \(r:Q\xrightarrow{\rho}_{*}S\) and \(u:P\xrightarrow{\beta}R\), \(s:R\xrightarrow{\sigma}_{*}S\) where \(\mathsf{und}(\alpha)\neq\mathsf{und}(\beta)\) and \(\sharp(r,[t])=\sharp(s,[u])=0\). We must show \(t\;\iota\;u\). Since the LTSI is pre-reversible, polychotomy holds for events \([t]\) and \([u]\) (Proposition 4.29). We can exclude \([t]=[u]\) since \(\mathsf{und}(\alpha)\neq\mathsf{und}(\beta)\). There is a rooted path \(r_{0}\) from some irreversible \(I\) to \(P\). Since NRE holds (Proposition 4.21), \(\sharp(r_{0},[t])=\sharp(r_{0},[u])\). By considering the paths \(r_{0}t\) and \(r_{0}u\) we deduce that neither \([u]<[t]\) nor \([t]<[u]\) hold. By CC applied to \(tr\) and \(us\) we see that \(\sharp(r,[u])=1\). Hence \(r_{0}tr\) is a rooted path with \(\sharp(r_{0}tr,[t])=\sharp(r_{0}tr,[u])=1\), so that we can exclude \([t]\;\#\;[u]\). By polychotomy we conclude that \([t]\;\mathsf{ci}\;[u]\). Then \(t\;\iota\;u\) by CIRE.
Assume (2). Let \([t]\;\mathsf{ci}\;[u]\) where \(t:P\xrightarrow{\alpha}Q\) and \(u:P\xrightarrow{\beta}R\) are coinitial. We must show \(t\;\iota\;u\). First note that \(\mathsf{und}(\alpha)\neq\mathsf{und}(\beta)\) by Proposition 4.17. We have \(t\sim t^{\prime}\;\iota\;u^{\prime}\sim u\) where \(t^{\prime}:P^{\prime}\xrightarrow{\alpha}Q^{\prime}\) and \(u^{\prime}:P^{\prime}\xrightarrow{\beta}R^{\prime}\) are coinitial. By SP we have \(t^{\prime\prime}:R^{\prime}\xrightarrow{\alpha}S\) and \(u^{\prime\prime}:Q^{\prime}\xrightarrow{\beta}S\). By Lemma 4.19 we have \(r^{\prime}:Q\xrightarrow{\rho}_{*}Q^{\prime}\) such that for all \(u_{1}\) in \(r^{\prime}\) we have \([t]\;\mathsf{ci}\;[u_{1}]\), and \(s^{\prime}:R\xrightarrow{\sigma}_{*}R^{\prime}\) such that for all \(u_{2}\) in \(s^{\prime}\) we have \([u]\;\mathsf{ci}\;[u_{2}]\). Let \(r=r^{\prime}u^{\prime\prime}\) and \(s=s^{\prime}t^{\prime\prime}\). We have \(\sharp(r,[t])=\sharp(s,[u])=0\) using Lemma 4.20. Hence \(t\;\iota\;u\) as required, using the hypothesis.
Figure 9. LTSI of Examples 5.32 and 5.25
Notably, in the proof of (1) \(\Rightarrow\) (2), CIRE is only used in the last step. Hence, the result could be rephrased by stating that any pre-reversible LTSI satisfies (2), with a conclusion of \([t]\ \mathfrak{ci}\ [u]\) rather than \(t\ \iota\ u\).
The independence result in Proposition 5.18 holds also if we replace IRE by CIRE.
**Proposition 5.27**.: _The axioms SP, BTI, WF, PCI, CIRE, IEC are independent of each other._
Proof.: For each of the six axioms we need to give an LTSI which satisfies the other five axioms but not the axiom itself. Since IRE implies CIRE, for all axioms apart from CIRE we can reuse the examples given in the proof of Proposition 5.18. Example 5.23 provides an LTSI where CIRE fails and the remaining five axioms hold.
We can distinguish three mutually exclusive cases for CIRE (Definition 5.21):
* **forward case:** both transitions are forward;
* **backward-forward case:** one transition is backward, one is forward;
* **backward case:** both transitions are backward (implied by BTI).
The second case is particularly relevant for the characterisation of \(\mathrm{CL}_{\mathfrak{ci}}\); hence we state it as a separate axiom.
**Definition 5.28**.: **Backward-Forward CIRE (BFCIRE)**: if \(t:P\xrightarrow{a}Q\) and \(u:Q\xrightarrow{b}R\) and \([\underline{t}]\ \mathfrak{ci}\ [u]\) then \(\underline{t}\ \iota\ u\).
Thus BFCIRE is just CIRE specialised to the case where one of the coinitial transitions is backward and one is forward. It has some similarity with one of the properties of transition systems with independence in [41] and [49, Definition 4.1], and Sideways Diamond properties in [43, 1]. However, all of these properties state that if two consecutive forward transitions are independent then they are two sides of a commuting diamond.
Analogously to what was done in Theorem 5.6 for \(\mathrm{CL}_{\iota}\), we give below conditions for ensuring \(\mathrm{CL}_{\mathfrak{ci}}\). Notably, here BFCIRE is necessary and sufficient, while for \(\mathrm{CL}_{\iota}\) we required IRE, which was sufficient but not necessary.
**Theorem 5.29**.: _Let \(\mathcal{L}\) be a pre-reversible LTSI. Then the following are equivalent:_
1. \(\mathcal{L}\) _satisfies BFCIRE;_
2. \(\mathcal{L}\) _satisfies_ \(\mathrm{CL}_{\mathfrak{ci}}\)_._
Proof.: (1) \(\Rightarrow\) (2) Suppose \(t_{0}:P\xrightarrow{a}Q\), \(r:Q\xrightarrow{\rho}_{*}R\) and \(\sharp(r,[t_{0}])=0\) and \([\underline{t_{0}}]\ \mathfrak{ci}\ e\), for all \(e\) such that \(\sharp(r,e)>0\). We have to show that there is \(t_{0}^{\dagger}:S\xrightarrow{a}R\) with \(t_{0}\sim t_{0}^{\dagger}\).
Thanks to PL, there is \(T\) such that \(b:P\xrightarrow{\rho_{b}}_{*}T\) and \(f:T\xrightarrow{\rho_{f}}_{*}R\), with \(b\) backward and \(f\) forward. By CC, \(t_{0}r\approx bf\). Since \(\sharp(r,[t_{0}])=0\), thanks to Lemma 4.12 we have \(\sharp(bf,[t_{0}])=1\). As a consequence, there is a transition \(t_{0}^{\prime}:P^{\prime}\xrightarrow{a}Q^{\prime}\in[t_{0}]\) in \(f\) (which is unique by Proposition 4.21). Let \(f^{\prime}\) be the portion of \(f\) from \(Q^{\prime}\) to \(R\).
If we can show that \([\underline{t_{0}}]\ \mathfrak{ci}\ [t^{\prime\prime}]\) for each transition \(t^{\prime\prime}\) in \(f^{\prime}\), then the thesis will follow by commuting \(t_{0}^{\prime}\) with all such transitions using SP and BFCIRE.
By Lemma 4.19 there is a path \(s\) from \(Q\) to \(Q^{\prime}\) such that \([t_{0}]\ \mathfrak{ci}\ [u]\) for all \(u\) in \(s\). By CC, \(r\approx sf^{\prime}\). Take any \(t^{\prime\prime}\) in \(f^{\prime}\). By Lemma 4.12, \(\sharp(r,[t^{\prime\prime}])=\sharp(s,[t^{\prime\prime}])+\sharp(f^{\prime},[t^{\prime\prime}])\). If \(\sharp(s,[t^{\prime\prime}])<0\) then there is \(u\) in \(s\) such that \(u\sim\underline{t^{\prime\prime}}\). Now \([t_{0}]\ \mathfrak{ci}\ [u]=[\underline{t^{\prime\prime}}]\), and so \([t_{0}]\ \mathfrak{ci}\ [t^{\prime\prime}]\) using Lemma 4.15. So suppose \(\sharp(s,[t^{\prime\prime}])\geq 0\). Since \(\sharp(f^{\prime},[t^{\prime\prime}])>0\), we have \(\sharp(r,[t^{\prime\prime}])>0\). So there is \(u\) in \(r\) such that \(u\sim t^{\prime\prime}\), and by hypothesis \([\underline{t_{0}}]\ \mathfrak{ci}\ [u]\), so that \([\underline{t_{0}}]\ \mathfrak{ci}\ [t^{\prime\prime}]\).
(2) \(\Rightarrow\) (1) Suppose that \(t_{0}:P\xrightarrow{a}Q\) and \(u:Q\xrightarrow{b}R\) and \([\underline{t_{0}}]\ \mathfrak{ci}\ [u]\). Clearly \(\sharp(u,[t_{0}])=0\). By \(\mathrm{CL}_{\mathfrak{ci}}\) we have \(t_{0}^{\dagger}:S\xrightarrow{a}R\) with \(t_{0}\sim t_{0}^{\dagger}\). Using BTI and SP we can complete a square
starting with \(\underline{u}\) and \(\underline{t}_{0}^{\dagger}\). Using BLD this square must include \(t_{0}\). Using PCI we see that \(\underline{t}_{0}\)\(\iota\)\(u\) as required.
CL\({}_{\mathsf{ci}}\) (and BFCIRE) do not imply CIRE, as shown by Example 5.25.
**Lemma 5.30**.: _Let a pre-reversible LTSI satisfy CS\({}_{\iota}\). Then it satisfies BFCIRE._
Proof.: Suppose \(t_{0}:P\xrightarrow{a}Q\) and \(u:Q\xrightarrow{b}R\) and \([\underline{t}_{0}]\)\(\mathsf{ci}\)\([u]\). We must show that \(\underline{t}_{0}\)\(\iota\)\(u\).
By Lemma 4.15\([t_{0}]\)\(\mathsf{ci}\)\([u]\) and so there are coinitial \(t_{0}^{\prime}:P^{\prime}\xrightarrow{a}Q^{\prime}\) and \(u^{\prime}:P^{\prime}\xrightarrow{b}R^{\prime}\) with \(t_{0}\sim t_{0}^{\prime}\)\(\iota\)\(u^{\prime}\sim u\). Using SP we can complete a square with \(t_{0}^{\dagger}:R^{\prime}\xrightarrow{a}S^{\prime}\) and \(u^{\prime\prime}:Q^{\prime}\xrightarrow{b}S^{\prime}\). By Lemma 4.19 applied to \(u\) and \(u^{\prime\prime}\) we have a path \(s\) from \(R\) to \(S^{\prime}\). Let \(r=us\). Then \(\sharp(r,[t_{0}])=0\) using Lemma 4.20. Also \(\sharp(s,[u])=0\) using Lemma 4.20, so that \(\sharp(r,[u])>0\). By CS\({}_{\iota}\) applied to \(t_{0},t_{0}^{\dagger}\) and \(r\) we deduce \(\underline{t}_{0}\)\(\iota\)\(u\) as required.
Perhaps surprisingly, we can now relate safety with independence of transitions to liveness with independence of events.
**Proposition 5.31**.: _Let a pre-reversible LTSI satisfy CS\({}_{\iota}\). Then it satisfies CL\({}_{\mathsf{ci}}\)._
Proof.: By Lemma 5.30 and Theorem 5.29.
CL\({}_{\mathsf{ci}}\) (and BFCIRE) do not imply CS\({}_{\iota}\), as shown by the next example.
_Example 5.32_.: Consider the 'half cube' LTSI with transitions \(a,b,c\) in Figure 9. We add independence as given by BTI and PCI. The LTSI is pre-reversible. As in Example 5.25, CIRE does not hold while both CL\({}_{\mathsf{ci}}\) (hence BFCIRE) and CL\({}_{\iota}\) hold. All independence is coinitial. CS\({}_{\iota}\) however does not hold: consider \(t_{0}:P\xrightarrow{c}Q\), \(r:Q\xrightarrow{ab}_{*}R\), \(S\xrightarrow{c}R\)--here we do not have \(\underline{t}_{0}\)\(\iota\)\((Q^{\prime},b,R)\).
**Proposition 5.33**.: _Let \(\mathcal{L}\) be a pre-reversible LTSI satisfying IEC. If \(\mathcal{L}\) satisfies CL\({}_{\mathsf{ci}}\) then \(\mathcal{L}\) satisfies CL\({}_{\iota}\)._
Proof.: Immediate from the definitions.
We next give an example where CC holds but not CS\({}_{\mathsf{ci}}\) (and not PCI).
_Example 5.34_.: Consider the cube with transitions \(a,b,c\) on the left in Figure 10, where the forward direction is from left to right. We add independence as given by BTI. So SP, BTI, WF hold, but not PCI. Consider the bold path from the leftmost end: we have an \(a\)-transition followed by a path \(r=bc\) followed by \(\underline{a}\). For CS\({}_{\mathsf{ci}}\) to hold, we want \(\underline{a}\) to be the reverse of the same event as the first \(a\). They are connected by a ladder with sides \(cb\). We add independence for all corners on the two faces of the ladder (\(ac\) and \(ab\)). Transitions \(\underline{b}\) and \(\underline{c}\) at \(P\) are independent (by BTI) so we obtain \(\underline{bc}\approx\underline{cb}\), where \(\underline{bc}\) is dashed and \(\underline{cb}\) is bold. Since \(\approx\) is closed under composition, we get \(bc\approx cb\). However the bold \(b\) is a different event from the event of the top \(b\)s since the bold-dashed \(bc\) face does not have independence at each corner. Therefore we do not get \([a]\ \mathsf{ci}\ [b]\) for the bold \(a\) and bold \(b\), and \(\mathrm{CS}_{\mathsf{ci}}\) fails. However, we note that we do have \([a]\ \mathsf{ci}\ [b]\) for the bold \(a\) and the dashed \(b\) since \(a\) and \(b\) at \(Q\) are independent.

Figure 10. The LTSIs in Examples 5.34 and 5.35.
We next give an example where \(\mathrm{CS}_{\mathsf{ci}}\) and \(\mathrm{CL}_{\mathsf{ci}}\) hold but not CC.
_Example 5.35_.: Consider the LTSI with \(Q_{i}\xrightarrow{b}P_{i}\), \(P_{i+1}\xrightarrow{c}P_{i}\), \(Q_{i+1}\xrightarrow{c}Q_{i}\), \(P_{i+1}\xrightarrow{a}Q_{i}\) for \(i=0,1,\ldots\). This is shown on the right in Figure 10. Clearly WF does not hold. We add coinitial independence to make BTI and PCI hold. Then also SP and CIRE hold. However, CC fails since, for example \(P_{1}\xrightarrow{a}Q_{0}\xrightarrow{b}P_{0}\) and \(P_{1}\xrightarrow{c}P_{0}\) are coinitial and cofinal but not causally equivalent. Note that there are just three events \(a,b,c\) with \(a\ \mathsf{ci}\ c\), \(b\ \mathsf{ci}\ c\) but not \(a\ \mathsf{ci}\ b\). \(\mathrm{CS}_{\mathsf{ci}}\) and \(\mathrm{CL}_{\mathsf{ci}}\) hold. Indeed, \(c\) is independent from every other action, and it can always be undone, while \(a\) and \(b\) are independent from \(c\) only and they can be undone after any path composed by \(c\) and no others. In more detail, if we have a path \(ar\underline{a}\) with \(\sharp(r,a)=0\) then \(\sharp(r,b)=0\), and if we have a path \(br\underline{b}\) with \(\sharp(r,b)=0\) then \(\sharp(r,a)=0\). \(\diamond\)
The independence result in Proposition 5.27 holds also if we replace CIRE by BFCIRE.
**Proposition 5.36**.: _The axioms SP, BTI, WF, PCI, BFCIRE, IEC are independent of each other._
Proof.: For each of the six axioms we need to give an LTSI which satisfies the other five axioms but not the axiom itself. Since CIRE implies BFCIRE, for all axioms apart from BFCIRE we can reuse the examples given in the proofs of Proposition 5.27 (and of Proposition 5.18). Example 5.23 provides an LTSI where BFCIRE (equivalent to \(\mathrm{CL}_{\mathsf{ci}}\)) fails and the remaining five axioms hold.
### CS and CL via Ordering of Forward Events
We now give definitions of causal safety and causal liveness using ordering on forward events. To this end, we exploit the causality relation \(\leq\) on such events (see Definition 4.23).
**Definition 5.37**.: Let \(\mathcal{L}=(\mathsf{Proc},\mathsf{Lab},\xrightarrow{},\iota)\) be an LTSI.
1. We say that \(\mathcal{L}\) is _ordered causally safe (CS\({}_{<}\))_ if whenever \(t_{0}:P\xrightarrow{a}Q\), \(r:Q\xrightarrow{\rho}_{*}R\), \(\sharp(r,[t_{0}])=0\) and \(t_{0}^{\dagger}:S\xrightarrow{a}R\) with \(t_{0}\sim t_{0}^{\dagger}\), then \([t_{0}]\not<e^{\prime}\) for all forward events \(e^{\prime}\) such that \(\sharp(r,e^{\prime})>0\).
2. We say that \(\mathcal{L}\) is _ordered causally live (CL\({}_{<}\))_ if whenever \(t_{0}:P\xrightarrow{a}Q\), \(r:Q\xrightarrow{\rho}_{*}R\) and \(\sharp(r,[t_{0}])=0\) and \([t_{0}]\not<e^{\prime}\) for all forward events \(e^{\prime}\) such that \(\sharp(r,e^{\prime})>0\) then we have \(t_{0}^{\dagger}:S\xrightarrow{a}R\) with \(t_{0}\sim t_{0}^{\dagger}\).
The only difference between \(\mathrm{CS}_{<}\) and \(\mathrm{CS}_{\iota}\) (Definition 5.1) is that the former ensures \([t_{0}]\not<[t]\) instead of \(\underline{t_{0}}\ \iota\ t\) for all transitions \(t\) such that \([t]\) has a positive number of occurrences in \(r\); similarly \(\mathrm{CL}_{<}\) differs from \(\mathrm{CL}_{\iota}\). Notably, we do not require \([\underline{t_{0}}]\not<[t]\) since \(<\) is defined on forward events and \(t_{0}\) is forward.
It may seem that the definition above does not take into account backward events that may occur in \(r\), but the next lemma shows that such events are necessarily independent from \([t_{0}]\). This allows us to connect ordered safety and liveness with safety and liveness based on independence of events.
**Lemma 5.38**.: _Suppose that an LTSI is pre-reversible. Suppose \(t_{0}:P\xrightarrow{a}Q\), \(e=[t_{0}]\), \(r:Q\xrightarrow{\rho}_{*}R\) and \(\sharp(r,e)=0\). Let \(e^{\prime}\) be a forward event:_
1. _if_ \(\sharp(r,e^{\prime})>0\) _then exactly one of_ \(e\ \mathfrak{ci}\ e^{\prime}\) _and_ \(e<e^{\prime}\) _holds;_
2. _if_ \(\sharp(r,e^{\prime})<0\) _then_ \(e\ \mathfrak{ci}\ e^{\prime}\)._
Proof.: We know that polychotomy holds by Proposition 4.29. Also NRE holds by Proposition 4.21. Suppose \(t_{0}:P\xrightarrow{a}Q\), \(e=[t_{0}]\), \(r:Q\xrightarrow{\rho}_{*}R\) and \(\sharp(r,e)=0\) and \(\sharp(r,e^{\prime})\neq 0\) where \(e^{\prime}\) is a forward event. We first note that \(e\neq e^{\prime}\), since \(\sharp(r,e)=0\) and \(\sharp(r,e^{\prime})\neq 0\). By WF, there is a rooted path \(s\) from some irreversible \(I\) to \(P\).
1. Suppose first that \(\sharp(r,e^{\prime})>0\). Since \(\sharp(st_{0}r,e)>0\) and \(\sharp(st_{0}r,e^{\prime})>0\) we do not have \(e\)\(\#\)\(e^{\prime}\). Furthermore, if \(e^{\prime}<e\) then we must have \(\sharp(s,e^{\prime})>0\), so that \(\sharp(st_{0}r,e^{\prime})>1\), contradicting NRE. Then the result follows by polychotomy.
2. Now suppose that \(\sharp(r,e^{\prime})<0\). By Proposition 4.13 we must have \(\sharp(s,e^{\prime})>0\). We deduce that \(e\not<e^{\prime}\). Since \(\sharp(st_{0},e)>0\) and \(\sharp(st_{0},e^{\prime})>0\) we do not have \(e\)\(\#\)\(e^{\prime}\). Furthermore \(\sharp(st_{0}r,e)>0\) and \(\sharp(st_{0}r,e^{\prime})=0\) (since \(\sharp(st_{0},e^{\prime})=1\) combining \(\sharp(st_{0},e^{\prime})>0\) shown above and NRE). Hence \(e^{\prime}\not<e\). By polychotomy, \(e\)__\(\mathfrak{ci}\)__\(e^{\prime}\).
**Proposition 5.39**.: _Suppose that an LTSI \(\mathcal{L}\) is pre-reversible. Then_
1. \(\mathcal{L}\) _satisfies_ \(\text{CS}_{<}\)_._
2. \(\mathcal{L}\) _satisfies_ \(\text{CL}_{\mathfrak{ci}}\) _iff_ \(\mathcal{L}\) _satisfies_ \(\text{CL}_{<}\)_._
Proof.:
1. We know \(\text{CS}_{\mathsf{ci}}\) holds by Theorem 5.20. Assume that \(t_{0}:P\xrightarrow{a}Q\), \(e=[t_{0}]\), \(r:Q\xrightarrow{\rho}_{*}R\), \(\sharp(r,e)=0\) and \(t_{0}^{\dagger}:S\xrightarrow{a}R\) with \(t_{0}\sim t_{0}^{\dagger}\). Take any forward \(e^{\prime}\) such that \(\sharp(r,e^{\prime})>0\). By Lemma 5.38 we know that exactly one of \(e\ \mathsf{ci}\ e^{\prime}\) or \(e<e^{\prime}\) holds. By \(\text{CS}_{\mathsf{ci}}\) we have \(e\ \mathsf{ci}\ e^{\prime}\), and therefore \(e\not<e^{\prime}\) as required.
2. Suppose that \(\text{CL}_{\mathsf{ci}}\) holds. Assume that \(P\xrightarrow{a}Q\), \(e=[t_{0}]\), \(r:Q\xrightarrow{\rho}_{*}R\) and \(\sharp(r,e)=0\) and \(e\not<e^{\prime}\) for all forward \(e^{\prime}\) such that \(\sharp(r,e^{\prime})>0\). Let event \(e^{\prime}\) be such that \(\sharp(r,e^{\prime})>0\). Suppose first that \(e^{\prime}\) is forward. By assumption \(e\not<e^{\prime}\). So by Lemma 5.38(1) we obtain \(e\ \mathsf{ci}\ e^{\prime}\). Suppose instead that \(e^{\prime}\) is reverse, so that \(\underline{e^{\prime}}\) is forward, and \(\sharp(r,\underline{e^{\prime}})<0\). By Lemma 5.38(2) we obtain \(e\ \mathsf{ci}\ \underline{e^{\prime}}\), and hence \(e\ \mathsf{ci}\ e^{\prime}\) using Lemma 4.15. We deduce that \(e\ \mathsf{ci}\ e^{\prime}\) for all \(e^{\prime}\) such that \(\sharp(r,e^{\prime})>0\). Hence by \(\text{CL}_{\mathsf{ci}}\) we have \(t_{0}^{\dagger}:S\xrightarrow{a}R\) with \(t_{0}\sim t_{0}^{\dagger}\). Conversely, suppose that \(\text{CL}_{<}\) holds. Assume that \(P\xrightarrow{a}Q\), \(e=[t_{0}]\), \(r:Q\xrightarrow{\rho}_{*}R\) and \(\sharp(r,e)=0\) and \(e\ \mathsf{ci}\ e^{\prime}\) for all \(e^{\prime}\) such that \(\sharp(r,e^{\prime})>0\). By Lemma 5.38(1) we know that \(e\not<e^{\prime}\) for all forward \(e^{\prime}\) such that \(\sharp(r,e^{\prime})>0\). Hence by \(\text{CL}_{<}\) we have \(t_{0}^{\dagger}:S\xrightarrow{a}R\) with \(t_{0}\sim t_{0}^{\dagger}\).
### Implications between the different formalisations of CS/CL
We have introduced three different formalisations of causal safety and liveness. The implications between them, assuming pre-reversibility holds, are shown in Figure 11.
As can be seen in Table 1, only two causal safety properties, namely \(\text{CS}_{\mathfrak{ci}}\) and \(\text{CS}_{<}\), hold for pre-reversible LTSIs. The causal liveness versions of these properties, namely \(\text{CL}_{\mathfrak{ci}}\) and \(\text{CL}_{<}\), additionally require BFCIRE. Actually, BFCIRE is equivalent to both \(\text{CL}_{\mathfrak{ci}}\) and \(\text{CL}_{<}\). The last two properties, \(\text{CS}_{\iota}\) and \(\text{CL}_{\iota}\), which are defined over general independence of transitions, require IRE. No other implications hold beyond those shown. Counterexamples for lack of other implications in Figure 11 are pointed to in Figure 6.
We postpone discussion of which particular version of CS or CL is most relevant in a specific setting until Section 6.3, after we have introduced some structural axioms to better relate them.
## 6. Structured notions of independence
In this section we consider two structured notions of independence, namely independence defined on coinitial transitions only and independence determined by labels only. To this end, we introduce 'structural axioms' in Definitions 6.1, 6.9 and 6.11. These have a different status from the axioms already introduced: rather than expressing fundamental properties that are desirable in LTSIs, they are properties that hold in various reversible formalisms (as we shall see in Section 7), are easy to verify, and can be used to derive other axioms in a generic fashion.
### Coinitial independence
In this section we discuss coinitial LTSIs, defined as follows, and their relationship with LTSIs in general.
**Definition 6.1**.: **Independence is coinitial (IC)**: for all transitions \(t,u\), if \(t\ \iota\ u\) then \(t\) and \(u\) are coinitial.
We say that an LTSI \(\mathcal{L}\) is coinitial if it satisfies IC. We also say that its independence relation \(\iota\) is coinitial.
Coinitial independence is of interest since in many cases it is easier to define independence only on coinitial transitions. Indeed, coinitial independence arises, e.g., from the notions of concurrency in [14, Definition 7] for RCCS and in [32, Definition 5] for Core Erlang.
The next example satisfies IC and all and only the properties in Figure 6 implied by it. In particular, it shows that IC does not imply \(\mathrm{CL}_{\iota}\), \(\mathrm{CL}_{\mathsf{ci}}\), or \(\mathrm{CS}_{\iota}\) (this last follows from Proposition 5.31).
_Example 6.2_.: Consider the LTSI in Figure 12. All independence is coinitial as generated by BTI and PCI, and the LTSI is pre-reversible. There are three events, which we denote by \(e_{a},e_{b},e_{c}\), with labels \(a,b,c\), respectively. \(\mathrm{CL}_{\iota}\) fails: let \(t:P\xrightarrow{a}Q\) and let \(r\) from \(Q\) to \(R\) be \(\underline{b}b\) (dashed transitions). We have \(\sharp(r,e_{b})=0\); however \(a\) cannot be reversed at \(R\), as \(\mathrm{CL}_{\iota}\) would yield. Also \(\mathrm{CS}_{\iota}\) fails: let \(t:P\xrightarrow{a}Q\) and let \(r^{\prime}\) be \(\underline{c}\,\underline{b}\) from \(Q\) to \(S\) (bold transitions). After \(r^{\prime}\), \(\underline{a}\) is possible. However \(\underline{t}\) is not independent with the \(\underline{b}\) transition, as \(\mathrm{CS}_{\iota}\) would yield. Also \(\mathrm{CL}_{\mathsf{ci}}\) fails: let \(t_{0}\) be the \(a\) following the leftmost \(b\), and let \(r^{\prime\prime}\) be the
\(c\) transition with target \(R\). We have \(e_{a}\ \mathsf{ci}\ e_{c}\). However \(a\) cannot be reversed at \(R\), as \(\mathrm{CL}_{\mathsf{ci}}\) would yield. \(\diamond\)

Figure 11. Implications between causal safety and causal liveness properties and some of the related axioms, all assuming pre-reversibility. Note that both \(\mathrm{CS}_{\mathsf{ci}}\) and \(\mathrm{CS}_{<}\) are implied by pre-reversibility.
Coinitial independence is inconsistent with the axiom IRE, showing that IRE is only appropriate for the setting of general, rather than coinitial independence:
**Proposition 6.3**.: _Let a pre-reversible LTSI have a non-empty independence relation, and satisfy IC. Then IRE does not hold._
Proof.: Suppose for a contradiction that IRE holds. Since the independence relation is non-empty and IC holds, we have \(t\ \iota\ u\) with \(t,u\) coinitial. By SP and PCI we can complete a diamond with \(t^{\prime}\sim t\), \(u^{\prime}\sim u\). Since \(t^{\prime}\sim t\ \iota\ u\) we deduce by IRE that \(t^{\prime}\ \iota\ u\). However \(t^{\prime}\) and \(u\) are not coinitial, contradicting IC.
We define a mapping \(c\) restricting general independence to coinitial transitions and a mapping \(g\) extending independence along events.
**Definition 6.4**.: Given an LTSI \(\mathcal{L}\), define \(t\ g(\iota)\ u\) iff \(t\sim t^{\prime}\ \iota\ u^{\prime}\sim u\) for some \(t^{\prime},u^{\prime}\). Furthermore, define \(t\ c(\iota)\ u\) iff \(t\ \iota\ u\) and \(t,u\) are coinitial.
We extend \(c\) and \(g\) to LTSIs \((\mathsf{Proc},\mathsf{Lab},\rightarrow,\iota)\): they behave as the identity of the first three components, and as expected on the fourth. Similarly, we write \(c(\sim)\) and \(g(\sim)\) for the equivalence relations in \(c(\mathcal{L})\) and \(g(\mathcal{L})\), respectively.
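As a concrete illustration (not part of the formal development), the two mappings can be computed directly on a finite LTSI. The sketch below assumes that transitions are encoded as hashable triples (source, label, target), that `indep` is the independence relation given as a set of ordered pairs of transitions, and that `equiv[t]` is the set of transitions \(\sim\)-equivalent to \(t\); these encodings are illustrative assumptions only.

```python
# Minimal sketch (not from the paper) of the mappings c and g of Definition 6.4
# on a finite LTSI.

def coinitial(t, u):
    return t[0] == u[0]                          # same source state

def c(indep):
    """t c(iota) u  iff  t iota u and t, u are coinitial."""
    return {(t, u) for (t, u) in indep if coinitial(t, u)}

def g(indep, equiv):
    """t g(iota) u  iff  t ~ t' iota u' ~ u for some t', u'."""
    return {(t, u)
            for (t1, u1) in indep
            for t in equiv[t1]                   # all transitions ~-equivalent to t1
            for u in equiv[u1]}                  # all transitions ~-equivalent to u1
```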
We now show that \(c\) and \(g\) play well with events.
**Lemma 6.5**.: _Given an LTSI \(\mathcal{L}\), \(\sim=c(\sim)\)._
Proof.: Follows by noticing that the definition of event only exploits independence on coinitial transitions.
**Lemma 6.6**.: _Given an LTSI \(\mathcal{L}\), \(t\sim u\) implies \(t\ g(\sim)\ u\)._
Proof.: By definition of \(\sim\), noticing that \(\iota\subseteq g(\iota)\).
**Lemma 6.7**.: _Given a pre-reversible LTSI \(\mathcal{L}\), \(t\ g(\sim)\ u\) implies \(t\sim u\)._
Proof.: By definition of \(\sim\), we have \(t\,g(\sim)\,u\) if there is a chain of commuting squares connecting \(t\) and \(u\). Thanks to ID (which holds in pre-reversible LTSIs) all such squares are commuting squares in \(\mathcal{L}\), hence \(t\sim u\) as desired.
We can now study the impact of \(c\) and \(g\) on the axioms satisfied by the LTSI to which they are applied.
**Proposition 6.8**.: _Let \(\mathcal{L}=(\mathsf{Proc},\mathsf{Lab},\rightarrow,\iota)\) be a pre-reversible LTSI._
1. _if_ \(\mathcal{L}\) _satisfies CIRE then_ \(c(g(\iota))=\iota\)_;_
2. _if_ \(\mathcal{L}\) _satisfies IRE and IEC then_ \(g(c(\iota))=\iota\)_;_
3. _If_ \(\mathcal{L}\) _is coinitial and satisfies CIRE then_ \(g(\mathcal{L})\) _is a pre-reversible LTSI and satisfies IRE and IEC._
4. _if_ \(\mathcal{L}\) _satisfies IRE then_ \(c(\mathcal{L})\) _is a pre-reversible coinitial LTSI and satisfies CIRE._
Proof.:
1. Clearly \(\iota\subseteq c(g(\iota))\). For the converse, suppose \(t\sim t^{\prime}\ \iota\ u^{\prime}\sim u\) and \(t^{\prime},u^{\prime}\) are coinitial and \(t,u\) are coinitial. Then \(t\ \iota\ u\) by CIRE.
2. Suppose \(t\ \iota\ u\). By IEC we have \(t\sim t^{\prime}\ \iota\ u^{\prime}\sim u\) with \(t^{\prime},u^{\prime}\) coinitial. Hence \(t\ g(c(\iota))\ u\). Conversely, suppose \(t\ g(c(\iota))\ u\). Then \(t\ \iota\ u\) by IRE.
3. Suppose \(t\ g(\iota)\ u\) and \(t,u\) are coinitial. Then by CIRE \(t\ \iota\ u\). So we can use SP for \(\iota\) to complete the diamond. Hence SP holds for \(g(\mathcal{L})\). Clearly PCI holds for \(g(\mathcal{L})\) since \(g(\iota)\) and \(\iota\) agree on coinitial transitions by CIRE. For IRE, suppose \(t^{\prime}\sim t\ g(\iota)\ u\sim u^{\prime}\). Then clearly \(t^{\prime}\ g(\iota)\ u^{\prime}\). Finally, for IEC suppose \(t\ g(\iota)\ u\). Then \(t\sim t^{\prime}\ \iota\ u^{\prime}\sim u\) with \(t^{\prime},u^{\prime}\) coinitial, which is exactly what is needed for IEC.
4. Immediate.
Thanks to Proposition 6.8, we can extend a coinitial pre-reversible LTSI satisfying CIRE in a canonical way to a pre-reversible LTSI satisfying IRE and IEC.
Note that \(g(\mathcal{L})\) satisfies IRE (and hence ECh) by construction, since \(t\ g(\iota)\ u\sim t^{\prime}\) implies \(t\ g(\iota)\ t^{\prime}\). Conditions in Proposition 6.8, item (3) are only needed for the other properties.
### Label-generated independence
In some reversible calculi (such as RCCS) independence of coinitial transitions is defined purely by reference to the labels.
**Definition 6.9**.: **Coinitial label-generated (CLG)**: if there is an irreflexive binary relation \(I\) on \(\mathsf{Lab}\) such that for any transitions \(t:P\xrightarrow{\alpha}Q\) and \(u:R\xrightarrow{\beta}S\) we have \(t\ \iota\ u\) iff \(t\) and \(u\) are coinitial and \(I(a,b)\), where \(a\) and \(b\) are the underlying labels \(a=\mathsf{und}(\alpha)\), \(b=\mathsf{und}(\beta)\).
If this is the case then the axioms IC, PCI and CIRE hold by construction.
**Proposition 6.10**.: _If an LTSI is CLG then it satisfies IC, PCI and CIRE._
Proof.: Straightforward, noting for PCI and CIRE that labels on opposite sides of a diamond of transitions must be equal.
Note that \(I\) must be irreflexive, since \(\iota\) is irreflexive by definition. Even more, we already have seen that for a pre-reversible LTSI there cannot be independent coinitial transitions \(t\), \(u\) with the same underlying label (as a consequence of Lemma 4.4 and BLD).
**Definition 6.11**.: **Label-generated (LG)**: if there is an irreflexive binary relation \(I\) on \(\mathsf{Lab}\) such that for any transitions \(t:P\xrightarrow{\alpha}Q\) and \(u:R\xrightarrow{\beta}S\) we have \(t\ \iota\ u\) iff \(I(a,b)\), where \(a\) and \(b\) are the underlying labels \(a=\mathsf{und}(\alpha)\), \(b=\mathsf{und}(\beta)\).
**Proposition 6.12**.: _If an LTSI is LG then it satisfies PCI, IRE and RPI._
Proof.: Straightforward.
Note that LG does not imply IEC, in view of the following example.
_Example 6.13_.: Consider the LTSI with two transitions \(t:P\xrightarrow{a}Q\) and \(u:R\xrightarrow{b}S\), where all states are distinct (as in Example 5.16) and \(a\neq b\). Let independence be generated by the relation \(I=\{(a,b)\}\). Then LG holds, but not IEC, since \(t\ \iota\ u\) but not \([t]\ \mathsf{ci}\ [u]\). \(\diamond\)
However, LG is compatible with IEC, in view of the following example.
_Example 6.14_.: Let \(t:P\xrightarrow{a}Q\), \(u:P\xrightarrow{b}R\), \(u^{\prime}:Q\xrightarrow{b}S\), \(t^{\prime}:R\xrightarrow{a}S\), where all states are distinct and \(a\neq b\). Let independence be generated by the relation \(I=\{(a,b)\}\). Then both LG and IEC hold. However IC fails. \(\diamond\)
All the axioms and properties we have considered previously are closed under disjoint unions of LTSIs, defined as follows.
**Definition 6.15** (Disjoint union of LTSIs).: Take two LTSIs \((\mathsf{Proc}_{1},\mathsf{Lab}_{1},\xrightarrow{}_{1},\iota_{1})\) and \((\mathsf{Proc}_{2},\mathsf{Lab}_{2},\xrightarrow{}_{2},\iota_{2})\). Their disjoint union is \((\mathsf{Proc}_{1}\cup\mathsf{Proc}_{2},\mathsf{Lab}_{1}\cup\mathsf{Lab}_{2}, \xrightarrow{}_{1}\cup\to_{2},\iota_{1}\cup\iota_{2})\) provided that \(\mathsf{Proc}_{1}\cap\mathsf{Proc}_{2}=\emptyset\), undefined otherwise.
However LG and CLG are not necessarily closed under disjoint unions of LTSIs, in view of the following examples.
_Example 6.16_.: Take the disjoint union of the LTSI of Example 6.14 together with a further transition \(T\xrightarrow{a}U\) with an empty generator relation (this component satisfies LG). Then LG fails; however IEC and IRE still hold. \(\diamond\)
_Example 6.17_.: Take the disjoint union of the LTSI of Example 5.7 (which satisfies CLG) together with further transitions \(T\xrightarrow{a}U\) and \(T\xrightarrow{b}V\) with an empty generator relation (this component satisfies CLG). Then CLG fails; however IC and CIRE still hold. \(\diamond\)
The mapping \(g\) converts an LTSI satisfying CLG into one satisfying LG+IEC. The mapping \(c\) converts an LTSI satisfying LG into one satisfying CLG. Note that there is an alternative way to convert an LTSI satisfying CLG into one satisfying LG: simply use the relation \(I\) applied to any pair of transitions. This will in general create more independent transitions than using \(g\), and so the result may not satisfy IEC.
### Relating different forms of CS/CL
We now discuss the relationships between different forms of CS/CL and consider which ones to work with in particular reversible settings. The starting point is how independence is or can be defined in such settings, and whether it is general or coinitial. We explain how structural axioms and results of this section, together with our axioms, can be used to arrive at the most appropriate causal safety and liveness properties for such reversible settings.
We can sometimes move between LTSIs satisfying \(\mathrm{CS}_{\mathsf{ci}}\) and \(\mathrm{CL}_{\mathsf{ci}}\) (or equivalently \(\mathrm{CS}_{<}\) and \(\mathrm{CL}_{<}\)), all defined in terms of coinitial independence, and LTSIs satisfying \(\mathrm{CS}_{\iota}\) and \(\mathrm{CL}_{\iota}\), which are based on general independence, using mappings \(c\) and \(g\). Thus, if we have a coinitial pre-reversible LTSI \(\mathcal{L}\) satisfying CIRE then \(\mathrm{CS}_{\mathsf{ci}}\) and \(\mathrm{CL}_{\mathsf{ci}}\) hold (using Theorems 5.20 and 5.29, respectively). The LTSI \(g(\mathcal{L})\) is pre-reversible and satisfies IRE and IEC by Proposition 6.8. This will satisfy \(\mathrm{CS}_{\iota}\) and \(\mathrm{CL}_{\iota}\) as a result of applying Theorems 5.5 and 5.6, respectively. It will also satisfy \(\mathrm{CS}_{\mathsf{ci}}\) and \(\mathrm{CL}_{\mathsf{ci}}\). Conversely, if we have a general pre-reversible LTSI \(\mathcal{L}^{\prime}\) satisfying IRE then \(\mathrm{CS}_{\iota}\) and \(\mathrm{CL}_{\iota}\) hold by Theorems 5.5 and 5.6, respectively. The LTSI \(c(\mathcal{L}^{\prime})\) is a coinitial pre-reversible LTSI satisfying CIRE. This will satisfy \(\mathrm{CS}_{\mathsf{ci}}\) and \(\mathrm{CL}_{\mathsf{ci}}\).
Intuitively, one can think of coinitial independence as a compact way of representing general independence (provided that this is well-behaved, in that it satisfies IRE and IEC), and \(c\) and \(g\) as ways of moving between the two representations (Proposition 6.8). \(\mathrm{CS}_{\iota}\) and \(\mathrm{CL}_{\iota}\) work on the general representation only, since they check independence between transitions that may be far apart. The other two forms of CS/CL can instead work with both the representations, and they are equivalent (Figure 11). Moreover, once we have LTSI with general independence we can work immediately with \(\mathrm{CS}_{\iota}\) and \(\mathrm{CL}_{\iota}\). On the other hand, when independence is coinitial, we need to instantiate the notion of event, and understand
whether events are causally dependent or coinitial independent, before we can use the other two notions of CS/CL. The choice between \(\mathrm{CS}_{<}/\mathrm{CL}_{<}\) and \(\mathrm{CS}_{\mathsf{ci}}/\mathrm{CL}_{\mathsf{ci}}\) depends on whether independence or ordering is more easily or naturally defined on events.
In some process calculi and programming languages, as can be seen in the next section, independence can be defined in terms of transition labels, which gives us structural axioms CLG and LG. So, to show CS/CL we tend to show CLG (RCCS, CCSK, \(\mathrm{HO}\pi\), Erlang) or we prove CIRE (\(\mathrm{R}\pi\), reversible occurrence nets) and then use \(g\). Alternatively, we show LG (\(\pi\mathrm{IH}\)).
Note that whether or not CLG/LG can be applied to a reversible formalism may depend on the level of abstraction adopted in the transition labels.
## 7. Case Studies
We look at whether our axioms hold in various reversible formalisms. Given the large number of formalisms we consider, we do not provide full background on them, but refer the reader to the original papers. Also, we sometimes repeat similar observations for different formalisms, so as to make it possible to browse them out of order and find information on a specific formalism of interest. Remarkably, all the works below provide proofs of the Loop Lemma.
### RCCS
We consider here the semantics of RCCS in [14], and restrict the attention to coherent processes [14, Definition 2]. In RCCS, transitions \(P\xrightarrow{\mu:\zeta}Q\) and \(P\xrightarrow{\mu^{\prime}:\zeta^{\prime}}Q^{\prime}\) are concurrent if \(\mu\cap\mu^{\prime}=\emptyset\) [14, Definition 7]. This allows us to define coinitial independence as \(t\ \iota\ u\) iff \(t\) and \(u\) are concurrent. We now argue that the resulting coinitial LTSI is pre-reversible and also satisfies CIRE. SP was shown in [14, Lemma 8]. BTI was shown in the proof of [14, Lemma 10]. WF is straightforward, noting that backward transitions decrease memory size. Hence, we obtain a very much simplified proof of CC. For PCI and CIRE we note that CLG holds and thus Proposition 6.10 applies. Therefore \(\mathrm{CS}_{\mathsf{ci}}\) and \(\mathrm{CL}_{\mathsf{ci}}\) hold. Using Proposition 6.8, we can get an LTSI with general independence satisfying IRE and IEC, and therefore \(\mathrm{CS}_{\iota}\) and \(\mathrm{CL}_{\iota}\). This is the first time these causal properties have been proved for RCCS.
### CCSK
The first notion of independence for CCSK [44] was given in [1]. It is based on the proved transition system approach where transition labels contain information about derivation of transitions. This information can be used to work out whether transitions are in conflict, causally dependent, or concurrent. Two forms of independence are defined in [1]: general independence (called composable concurrency) and coinitial independence (called coinitial concurrency). CC is then obtained using our axiomatic approach (following [33], the conference version of the present paper) by showing SP [1, Theorem 3], BTI [1, Lemma 6] and WF [1, Lemma 7].
Since coinitial independence is defined on labels, we can deduce that the LTSI is CLG. Hence, by Proposition 6.10, PCI and CIRE hold. This allows us to obtain \(\mathrm{CS}_{\mathsf{ci}}\) and \(\mathrm{CL}_{\mathsf{ci}}\). Using Proposition 6.8, we can get an LTSI with general independence which satisfies IRE and IEC, which gives us \(\mathrm{CS}_{\iota}\) and \(\mathrm{CL}_{\iota}\) as well. As for RCCS, this is the first time such causal properties have been proved for CCSK.
### Ho\(\pi\)
We consider here the uncontrolled reversible semantics for \(\mathrm{HO}\pi\)[29]. We restrict our attention to reachable processes, called there consistent. The semantics is a reduction semantics; hence there are no labels (or, equivalently, all the labels coincide). To have more informative labels we can consider the transitions defined in [29, Section 3.1], where labels contain the memory created or consumed by the transition (they also contain a flag
distinguishing backward from forward transitions, but this plays no role in the definition of the concurrency relation discussed below, hence we can safely drop it). The notion of independence would be given by the concurrency relation on coinitial transitions [29, Definition 9]. All pre-reversible LTSI axioms hold, as well as CIRE. Specifically, SP is proved in [29, Lemma 9]. BTI holds since distinct memories have disjoint sets of keys [29, Definition 3 and Lemma 3] and by the definition of concurrency [29, Definition 9]. WF holds as each backward step consumes a memory, and there are only finitely many memories to start with. Finally, PCI and CIRE hold since CLG holds for the LTSI with annotated labels and using our Proposition 6.10.
As a result we obtain a very much simplified proof of CC. Moreover, using PCI and CIRE, we get the \(\mathrm{CS}_{\mathsf{ci}}\) and \(\mathrm{CL}_{\mathsf{ci}}\) safety and liveness properties and, applying mapping \(g\) from Section 6, we get a general pre-reversible LTSI satisfying IRE and IEC, so that \(\mathrm{CS}_{\iota}\) and \(\mathrm{CL}_{\iota}\) are satisfied. This is the first time that causal properties have been shown for \(\mathrm{HO}\pi\).
### \(\mathbf{R}\pi\)
We consider the (uncontrolled) reversible semantics for \(\pi\)-calculus defined in [13]. We restrict the attention to reachable processes. The semantics is an LTS semantics. Independence is given as concurrency which is defined for consecutive transitions [13, Definition 4.1]. CC holds [13, Theorem 4.5].
Our results are not directly applicable to \(\mathrm{R}\pi\), since SP holds up to label equivalence of transitions on opposite sides of the diamond, rather than equality of labels as in our approach. We would need to extend axiom SP and the definition of causal equivalence to allow for label equivalence in order to directly handle \(\mathrm{R}\pi\) using our axiomatic method.
We can however apply our theory to an LTSI obtained by considering labels up-to the equivalence relation \(=_{\lambda}\) [13, just before Lemma 4.3], which intuitively avoids observing when a name is being extruded. Notice that the Loop Lemma holds in this new LTSI as well. However, the concurrency relation is given on consecutive transitions, and the same holds for their SP. Nevertheless, we can define independence as follows: \(t\ \iota_{\pi}\ u\) iff \(t\) and \(u\) are coinitial and \(t\) and \(\underline{u}\) are concurrent. Notice that since \(t\) and \(u\) are coinitial, \(t\) and \(\underline{u}\) are consecutive.
**Lemma 7.1**.: \(\iota_{\pi}\) _is symmetric._
Proof.: We have to show that \(t\) and \(\underline{u}\) are concurrent iff \(\underline{t}\) and \(u\) are concurrent. Since concurrency is defined as the complement of structural causality and contextual causality [13, Definition 4.1], it is enough to prove that \(t\) and \(\underline{u}\) are structural or contextual causal iff \(\underline{t}\) and \(u\) are. For structural causality, it follows from the definition [13, Definition 4.1]. For contextual causality, it follows from [13, Proposition 4.2].
With this definition of independence SP holds [13, Lemma 4.3]. WF holds as well since each backward step consumes at least a memory. BTI has been proved as part of the proof of PL in [12, Lemma 14]. As a result we obtain a proof of CC much simpler than the one in [12, Theorem 11] (note that causal equivalence in [13, Definition 4.4] is formalised up-to \(=_{\lambda}\) as well).
Independence is coinitial by construction. We have to prove PCI and CIRE. Unfortunately, we cannot exploit CLG, since it does not hold, as is clear from the definition of structural cause [13, Definition 4.1], one of the ingredients of the concurrency relation. Thus we need to go for a direct proof.
**Lemma 7.2**.: _CIRE holds in the LTSI for \(R\pi\)._
Proof.: Concurrency is defined as the complement of structural causality and contextual causality [13, Definition 4.1]. Contextual causality is defined on labels [13, Proposition 4.2]. Structural causality depends on whether the \(i\) components of the two labels occur in the same memory in a specific relation [13, Definition 2.2]. However, one can notice that \(i\) can only occur in the memory of one of the threads participating in the action (see [13, Table 1]), and these are the same for transitions in the same event. The claim follows.
**Lemma 7.3**.: _PCI holds in the LTSI for \(R\pi\)._
Proof.: Similar to the one above.
Using PCI and CIRE, we get the \(\mathrm{CS}_{\mathsf{ci}}\) and \(\mathrm{CL}_{\mathsf{ci}}\) safety and liveness properties. Applying mapping \(g\) from Section 6, we get a general pre-reversible LTSI satisfying IRE and IEC, so that \(\mathrm{CS}_{\iota}\) and \(\mathrm{CL}_{\iota}\) are satisfied. Notice that the notion of independence is not influenced by the abstraction on labels; hence the results can be reflected on the original LTSI of \(\mathrm{R}\pi\).
### Reversible internal \(\pi\)-calculus with extrusion histories
The reversible internal \(\pi\)-calculus \(\pi\)IH [20] is based on the work of Hildebrandt _et al._[21], which uses extrusion histories and locations to define a stable non-interleaving early operational semantics for the \(\pi\)-calculus. Locations and extrusion histories are used to define independence of actions. This notion of independence differs from the ones considered in the other case studies in that it allows actions with conflicting causes to be independent. Despite this major difference, it is shown in [20] that nearly all our (non-structural) axioms are satisfied (SP, BTI, WF, PCI, IRE); the only exception is that IEC fails, because a process can have independent transitions with conflicting causes without having a single state where equivalent transitions can both be performed. We use IEC to show RPI (Proposition 5.12). However RPI is shown in [20] for \(\pi\)IH without the need for IEC, using the fact that independence is defined on transition labels. In fact, LG holds for \(\pi\)IH, from which we can deduce PCI, IRE and RPI by Proposition 6.12. It follows that all the properties listed in Table 1 hold for \(\pi\)IH, with the exception of IEC, IC and CLG.
### Reversible Erlang
We consider the uncontrolled reversible (reduction) semantics for Erlang in [32]. We restrict our attention to reachable processes. In order to have more informative labels we can consider the annotations defined in [32, Section 4.1]. We can then define coinitial transitions to be independent iff they are concurrent [32, Definition 12].
We next discuss the validity of our axioms in reversible Erlang. SP is proved in [32, Lemma 13] and BTI is trivial from the definition of concurrency [32, Definition 12]. WF holds since the pair of non-negative integers (total number of elements in history, total number of messages queued) ordered under lexicographic order decreases at each backward step. Intuitively, each step but the ones derived using the rule for reverse sched (see [32, Figure 11]) consumes an item of memory, and each step derived using rule reverse sched removes a message from a process queue. Finally, PCI and CIRE hold since CLG holds for the LTSI with annotated labels, and by Proposition 6.10.
Since this setting is very similar to the one of \(\mathrm{HO}\pi\) (both calculi have a reduction semantics and a coinitial notion of independence defined on enriched labels), we get the same results as for \(\mathrm{HO}\pi\) (described in Section 7.3), including CC, and causal safety and liveness.
### Reversible occurrence nets
We consider occurrence nets which are the result of unfolding Place/Transition nets, and their reversible versions [38, 39, 37]. Reversible occurrence nets are occurrence nets (1-safe and with no backward conflicts) extended with a backward (reverse in the terminology of [39]) transition name \(\overleftarrow{\mathsf{t}}\) for each forward transition name \(\mathsf{t}\). We write \(t,u\) (note the _italic_ font) for forward or backward transition names,
and \(\overleftarrow{t},\overleftarrow{u}\) for their backward or forward duals. We use "transition name" to mean forward or backward transition name. They give rise to an LTS where states are pairs \((N,m)\) with \(N\) a net and \(m\) a marking. A computation that represents firing a (forward or backward) transition name \(t\) in \((N,m)\) and resulting in \((N,m^{\prime})\) is given by a firing relation \((N,m)\xrightarrow{t}(N,m^{\prime})\)2. Independence is the concurrency relation \(\mathsf{co}\) which is defined between arbitrary firings as follows: two firings are concurrent if their transition names are concurrent, that is when they are not in conflict and do not cause each other [38, 39, Section 3]. The last two notions are defined in terms of conditions on pre- and postset relations on transition names. Hence, we get an LTSI with general independence. Note that transition names are unique.
Footnote 2: We use “transition names” in this subsection to name the members of the set of transitions which, together with the set of places, are part of the definition of Place/Transition nets or occurrence nets. This distinguishes them from our transitions, which are called firings in Place/Transition nets and occurrence nets.
Properties SP and PL are shown as [39, Lemma 4.3] and [39, Lemma 4.4], respectively. Then CC is proved (over several pages) as [39, Theorem 4.6] using SP and PL. The causal safety and causal liveness properties are not considered in [38, 39]. However, a form of such properties is discussed in [37] in the setting of reversible prime event structures; we discuss this point in Section 8.
We can obtain causal safety and causal liveness properties, as well as PL and CC, for reversible occurrence nets using our axiomatic approach. The following lemma will be helpful.
**Lemma 7.4**.: _Let \(t\) and \(u\) be enabled and coinitial (forward or backward) transition names. Then \(t\) does not cause \(u\). If additionally \(t\) and \(u\) are backward, then they are not in conflict._
Proof.: Assume for contradiction that \(t\) causes \(u\). So there is a place, say \(a\), in the preset of \(u\) such that \(t\) causes \(a\). Since \(u\) is enabled there is a token in \(a\). Also, since \(t\) is enabled, after it fires a second token will arrive in \(a\), thus contradicting the 1-safe property of occurrence nets.
Let \(t\) and \(u\) be \(\overleftarrow{\mathsf{t}}\) and \(\overleftarrow{\mathsf{u}}\) respectively. Assume for contradiction that they are in conflict. This means that they share a place, say \(a\), in their presets. Hence, \(\mathsf{t}\) and \(\mathsf{u}\) share \(a\) in their postsets, which contradicts the no backwards conflict property of occurrence nets.
We can now combine Lemma 7.4 with the conditions in [39, Lemma 3.3] of when enabled and coinitial \(t\) and \(u\) are concurrent.
**Lemma 7.5**.: _Let \(t\) and \(u\) be enabled and coinitial (forward or backward) transition names. Then \(t\,\mathsf{co}\,u\) iff \(t\) and \(u\) are backward or they are not in an immediate conflict._
As a consequence, BTI holds.
**Lemma 7.6**.: _BTI holds in the LTSI for reversible occurrence nets._
WF holds because there are no forward cycles of firings in occurrence nets, hence no infinite reverse paths. This gives us PL and CC. Next, we prove PCI.
**Lemma 7.7**.: _PCI holds in the LTSI for reversible occurrence nets._
Proof.: Consider enabled coinitial firings \(\phi_{1},\phi_{2}\) with transition names \(t,u\) respectively, and assume \(\phi_{1}\,\mathsf{co}\,\phi_{2}\). Hence \(t\,\mathsf{co}\,u\). We get a commuting diamond by SP, where the opposite sides have the same transition names. Since \(t\,\mathsf{co}\,u\), we have \(\overleftarrow{t}\,\mathsf{co}\,u\) by [39, Lemma 3.4], so PCI holds.
This gives us a pre-reversible LTSI, and thus \(\operatorname{CS}_{\mathsf{ci}}\) and \(\operatorname{CS}_{<}\) hold.
Given a pair of enabled coinitial concurrent transition names we get a commuting diamond by SP, and the pairs of coinitial transition names in all corners of the diamond are concurrent. Events can then be defined on firings in such diamonds as in Definition 4.1, and we can show IRE.
**Lemma 7.8**.: _IRE holds in the LTSI for reversible occurrence nets._
Proof.: Let \(\phi_{1},\phi_{2}\) be firings with transition names \(t,u\) respectively, and let \(\phi_{1}\ \mathsf{co}\ \phi_{2}\). This means that \(t\ \mathsf{co}\ u\). Since any \(\phi_{1}^{\prime}\) equivalent to \(\phi_{1}\) has the same transition name \(t\), \(t\ \mathsf{co}\ u\) gives us \(\phi_{1}^{\prime}\ \mathsf{co}\ \phi_{2}\).
Since IRE implies CIRE we obtain \(\operatorname{CL}_{\mathsf{ci}}\) (or \(\operatorname{CL}_{<}\)). We also have \(\operatorname{CS}_{\iota}\) and \(\operatorname{CL}_{\iota}\) as IRE holds.
An alternative proof strategy would be to show CLG first, but we believe this approach leads to more complex technicalities, and we would still need to prove IRE, hence we have preferred the approach above.
### Reversible sequential systems
In _sequential systems_ there is no concurrency. Hence, in this section, we represent them as LTSIs where the independence relation, modelling concurrency, is empty. This is for instance the case for Janus programs [53] or CCSK processes without parallel composition. In this setting, SP, PCI, IRE and IEC hold trivially. Moreover, BTI is equivalent to backward determinism, which is the main condition required for reversibility in a sequential setting (see, e.g., Janus [53]).
**Definition 7.9** (Backward determinism).: An LTSI is backward deterministic iff \(P\xrightarrow{a}Q\) and \(P^{\prime}\xrightarrow{a^{\prime}}Q\) imply \(P=P^{\prime}\) and \(a=a^{\prime}\).
**Proposition 7.10**.: _A sequential system satisfies BTI iff it is backward deterministic._
Proof.: For the left to right implication, assume towards a contradiction that the system satisfies BTI but is not backward deterministic. Then there are \(P\xrightarrow{a}Q\) and \(P^{\prime}\xrightarrow{a^{\prime}}Q\) with \(P\neq P^{\prime}\) or \(a\neq a^{\prime}\). By the Loop Lemma we have the reverse transitions, which are coinitial and backward, hence by BTI they need to be independent, which is a contradiction since the independence relation is empty.
For the right to left implication, take two backward coinitial transitions \(t,t^{\prime}\). By applying the Loop Lemma there exist \(\underline{t},\underline{t}^{\prime}\). One can notice that \(\underline{t},\underline{t}^{\prime}\) satisfy the hypothesis of backward determinism. Hence, \(\underline{t}=\underline{t}^{\prime}\) and \(t=t^{\prime}\). Hence BTI trivially holds.
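As a concrete illustration, backward determinism of a finite sequential system can be checked directly from its forward transitions; the encoding below (transitions as triples \((P,a,Q)\)) is an illustrative assumption, not part of the formal development.

```python
# Illustrative check of Definition 7.9: backward determinism holds iff every
# state is the target of at most one forward transition.

def backward_deterministic(forward_transitions):
    incoming = {}                                # target state -> (source, label)
    for (p, a, q) in forward_transitions:
        if q in incoming and incoming[q] != (p, a):
            return False                         # two distinct transitions reach q
        incoming[q] = (p, a)
    return True
```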
WF does not hold in general and needs to be assumed.
If we assume WF then all our results hold, but they all become trivial or almost trivial. E.g., all events are singletons. Also, all the notions of causal liveness coincide, and they state that the last transition can always be undone, but this is just one direction of the Loop Lemma. Similarly, all the notions of causal safety do coincide, and they require that only the last transition can be undone.
## 8. Related Work
Causal Consistency (CC), Parabolic Lemma (PL) and informal versions of Causal Safety and Liveness (CS, CL), the main general properties of reversible computation considered in this paper, were proposed by Danos and Krivine [14]. Since then, many reversible process calculi or formalisms have been developed as we have described in the Introduction. Most of them use memories to save information lost when computing forwards, which can be easily retrieved when computing in reverse. The concurrency relation between coinitial transitions is
typically defined in terms of structural conditions on the memories of the transitions. In order to show that reversibility is well-behaved, PL and then CC is proved. In contrast, CS and CL (in any of the variants we considered), or properties close to them, have not been widely considered.
Information needed for undoing of computation in a process calculus can be saved differently. An alternative method was proposed for reversing a process calculus given by a general format of SOS rules in [42, 44]. When applied to CCS it produces CCSK, where reversible processes maintain their syntax as they compute, and executed actions are marked with _communication keys_. When computation reverses keys are removed, thus returning processes to their original form. This approach has a drawback in that it is not easy to define a concurrency relation purely on transition labels. As a result, proving CC in the traditional way is not straightforward. Hence, slightly different properties are proved to show that the resulting reversible calculi are well-behaved. The main property is Reverse Diamond (RD): if \(Q\xrightarrow{a}P\), \(R\xrightarrow{b}P\) and \(Q\neq R\), then there is \(S\) such that \(S\xrightarrow{a}R\) and \(S\xrightarrow{b}Q\). In our setting, RD can be proved from the Loop Lemma, BTI and SP. It is worth noting that PL can be shown for CCSK mainly using RD [44, Lemma 5.9]. Moreover, a form of CC for forward computation is shown [44, Proposition 5.15]: two forward computations from the same start to the same endpoint are _homotopic_[51], meaning that one computation can be transformed into the other by swapping adjacent transitions in commuting diamonds. In effect, concurrency is represented as commuting diamonds in the LTSs for reversible calculi obtained by applying the method in [42, 44].
A more abstract approach to defining desirable properties for reversibility was taken in [43]. General LTSs were considered instead of LTSs for specific reversible calculi, and two sets of axioms were proposed. The first set inherited RD and Forward Diamond (FD) from [42, 44], and also included WF, UT and Event Determinism (ED) [49, 51]: if \(P\xrightarrow{a}Q\) and \(P\xrightarrow{a}R\), and \((P,a,Q)\sim(P,a,R)\), then \(Q=R\). ED is not a consequence of our basic axioms. Consider the LTS [43, Fig. 1], and add coinitial independence using BTI and PCI. The resulting LTSI is prereversible and satisfies CLG, yet it fails ED. LTSs satisfying the five axioms above are called _prime_ LTSs and are shown to correspond to prime event structures. Several interesting properties were proved for prime LTSs, including RED (event determinism for backward transitions, which follows from BLD in our setting) and NRE which we also consider here. The second set of axioms aimed at providing local versions of FD, ED and RED.
As we have mentioned in the Introduction, a combined causal safety and liveness property has been formulated in [32, Corollary 22]. A form of causal safety and liveness properties has been defined in the setting of reversible event structures in [45, 46]. A reversible event structure is called _cause-respecting_ if an event cannot be reversed until all events it has caused have also been reversed, and it is _causal_ if it is cause-respecting and a reversible event can be reversed if all events it has caused have been reversed [46, Definition 3.34]. Causal reversible prime event structures are considered in [37] as well, where it is shown that they correspond precisely to reversible occurrence nets.
Another related work is [16], which like ours takes an abstract view, though based on category theory. However, its results concern irreversible actions, and do not provide insights in our setting, where all actions are reversible. The only other work which takes a general perspective is [7], which concentrates on how to derive a reversible extension of a given formalism. However, proofs concern a limited number of properties (essentially our CC), and hold only for extensions built using the technique proposed there. An approach similar to that in [44, 7] is taken in [26], which focuses on systems modelled using reduction
semantics. In order to prove properties of the reversible systems they build they use our theory (taken from the conference version of the present paper [33]), hence this can be taken as an additional case study for our results. Finally, [17] presents a number of properties such as, for example, backward confluence, which arise in the context of reversing of multiple transitions at the same time (called a step) in Place/Transition nets.
## 9. Conclusion and Future Work
The literature on causal-consistent reversibility (see, for example the early survey [30]) has a number of proofs of results such as PL and CC, all of which are instantiated to a specific calculus, language or formalism. We have taken here a complementary and more general approach, analysing the properties of interest in an abstract and language-independent setting. In particular, we have shown how to prove the most relevant of these properties from a small number of axioms. Among the properties, we discussed in detail the formalisation of Causal Safety and Causal Liveness, which were mostly informally discussed in the literature.
The approach proposed in this paper opens a number of new possibilities. Firstly, when devising a new reversible formalism, our results provide a rich toolbox to prove (or disprove) relevant properties in a simple way. Indeed, proving the axioms is usually much simpler than proving the properties directly. This is particularly relevant since causal-consistent reversibility is getting applied to more and more complex languages, such as Erlang [32], where direct proofs become cumbersome and error-prone. Secondly, our abstract proofs are relatively easy to formalise in a proof-assistant, which is even more relevant given that this will certify the correctness of the results for many possible instances. Another possible extension of our work concerns integrating into our framework mechanisms to control reversibility [28], such as a rollback operator [27] or irreversible actions [15]. For the latter we could take inspiration from the above-mentioned [16].
## Acknowledgements
This work has been partially supported by COST Action IC1405 on Reversible Computation - Extending Horizons of Computing. The first author has also been partially supported by French ANR project DCore ANR-18-CE25-0007 and by INdAM as a member of GNCS (Gruppo Nazionale per il Calcolo Scientifico). The third author has been partially supported by the JSPS Invitation Fellowship S21050.
|
2305.13519 | Development of Non-Linear Equations for Predicting Electrical
Conductivity in Silicates | Electrical conductivity is of fundamental importance in electric arc furnaces
(EAF) and the interaction of this phenomenon with the process slag results in
energy losses and low optimization. As mathematical modeling helps in
understanding the behavior of phenomena and it was used to predict the
electrical conductivity of EAF slags through artificial neural networks. The
best artificial neural network had 100 neurons in the hidden layer, with 6
predictor variables and the predicted variable, electrical conductivity. Mean
absolute error and standard deviation of absolute error were calculated, and
sensitivity analysis was performed to correlate the effect of each predictor
variable with the predicted variable. | Patrick dos Anjos, Lucas A. Quaresma, Marcelo L. P. Machado | 2023-05-22T22:20:57Z | http://arxiv.org/abs/2305.13519v2 | # Development of Non-Linear Equations for Predicting Electrical Conductivity in Silicates
###### Abstract
Electrical conductivity is of fundamental importance in electric arc furnaces (EAF) and the interaction of this phenomenon with the process slag results in energy losses and low optimization. As mathematical modeling helps in understanding the behavior of phenomena and it was used to predict the electrical conductivity of EAF slags through artificial neural networks. The best artificial neural network had 100 neurons in the hidden layer, with 6 predictor variables and the predicted variable, electrical conductivity. Average absolute error and standard deviation of absolute error were calculated, and sensitivity analysis was performed to correlate the effect of each predictor variable with the predicted variable.
**Keywords**: Electrical conductivity, Electric arc furnaces, Slag, Artificial Neural Network.
DOI: [https://doi.org/10.48550/arXiv.2305.13519](https://doi.org/10.48550/arXiv.2305.13519)
arXiv(c)
Footnote †: E-mail: [email protected]
## 1 Introduction
Electric arc furnace (EAF) slag belongs to the SiO\({}_{2}\)-CaO-MgO-Al\({}_{2}\)O\({}_{3}\)-FeO system [1,2] and reaches temperatures of up to 1756 K [3]. The electrical conductivity of the slag in the EAF is important for the industrial process and directly affects the quality of the final product and the energy consumption [4].
Slag foaming consists of introducing gas bubbles into the molten metal and slag, either by gas injection or by chemical reaction. The slag foam shields the EAF refractory lining from the arc, extending the service life of the lining. The foam also prevents oxidation of the molten material and allows its chemical composition to be controlled, aiding refining and homogenization; in addition, it acts as a thermal insulator between the molten material and its surroundings, reducing the energy required to maintain the operating temperature [2].
\[2\,\text{Fe}+\text{O}_{2}\Rightarrow 2\,\text{FeO}\tag{1}\]
\[\text{C}+\text{FeO}\Rightarrow\text{CO}+\text{Fe}\tag{2}\]
To form and maintain the CO gas bubbles responsible for slag foaming, produced through the reaction of Equation 1 in the steel and the reaction of Equation 2 in the slag (Figure 1), an optimized slag chemistry is required. The slag, which also contains solid phases formed mainly by MgO and CaO, must have a viscosity within a narrow range, which generally occurs when the MgO\(\cdot\)FeO phase forms through saturation of the slag with MgO [5].
The electrical conductivity of slag is generally determined by electronic conduction and ionic conduction [7], and correlates with the slag structure as described by the NBO/T parameter. In CaO-SiO\({}_{2}\)-B\({}_{2}\)O\({}_{3}\) slags, charge transport occurs through Ca\({}^{2+}\) ions, because the Si\({}^{4+}\) and B\({}^{3+}\) ions are part of the silicate network [4]. In general, ions with a high valence interact strongly with the surrounding ions, forming structural units of the slag; they therefore have low mobility and do not contribute to the electrical conductivity of the material [4] (Figure 2).
The electrical conductivity can be modeled by linear methods [4], but these can only capture linear relationships. Non-linear modeling, in contrast, is able to capture the complexity of the relationship between slag composition, temperature and electrical conductivity. Here, non-linear modeling was performed with artificial neural networks, optimized by varying their hyperparameters. The present work therefore aims at the non-linear modeling of the electrical conductivity of SiO\({}_{2}\)-CaO-MgO-Al\({}_{2}\)O\({}_{3}\)-FeO slags using artificial neural networks.
## 2 Materials and Methods
### Database
Figure 1: The action of Fe, O\({}_{2}\) and C in the formation of CO bubbles [6].
Figure 2: Electrical conductivity by electronic and ionic conduction in slags.
The Sciglass database, previously used to predict the refractive index of optical materials [8] and the viscosity of oxides [9], was chosen to provide data on electrical conductivity (Siemens/m, S/m), temperature (K) and the chemical composition (molar fraction) of SiO\({}_{2}\)-CaO-MgO-Al\({}_{2}\)O\({}_{3}\)-FeO silicates.
The electrical conductivity histogram can be seen in Figure 3.
All data were subjected to outlier removal, discarding points whose electrical conductivity lies more than 3 standard deviations above or below the mean. A common optimization step for the training of artificial neural networks is to rescale the input variables so that the algorithm runs more efficiently and converges faster. The standardization method [9] rescales each input variable towards a standard normal distribution, with the drawback of modifying the shape of the variable's distribution. Min-max normalization, in contrast, keeps the same distribution shape as the data prior to preprocessing while still improving training time.
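A minimal sketch of this preprocessing step is given below, assuming the predictors are stored in a NumPy array `X` (one column per variable) and the measured conductivities in `y`; the function and variable names are illustrative and not taken from the original work.

```python
import numpy as np

def remove_outliers(X, y):
    """Keep only rows whose conductivity lies within 3 standard deviations of the mean."""
    mu, sigma = y.mean(), y.std()
    keep = np.abs(y - mu) <= 3.0 * sigma
    return X[keep], y[keep]

def min_max_normalize(X):
    """Rescale each input column to [0, 1]; the shape of its distribution is preserved."""
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    return (X - x_min) / (x_max - x_min)
```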
Artificial neural networks are algorithms that learn from experience: they generalize from similar situations and judge new states based on where good and bad results were obtained in the past [10]. The statistical pattern-recognition approach has been the most commonly studied and used in practice for the construction of artificial neural networks, which can also be viewed as non-linear models that mimic the biological neuron [11].
Because of the many hyperparameters involved [9], such as the number of neurons and layers, activation functions, optimizers, weight and bias initialization, loss and metrics, the construction, training and testing of an artificial neural network can be time-consuming and difficult. Nevertheless, artificial neural networks have advantages over other mathematical modeling approaches: they are robust, they capture complex relationships between variables [12], and they have the properties of a universal approximator.
### Artificial Neural Networks
Artificial neural networks (ANNs) of fixed depth and arbitrary width can approximate any continuous function on a compact set to within an arbitrarily small error, provided the activation function used in their construction is continuous and non-polynomial [13]. This implies that an artificial neural network with a single hidden layer and a variable number of neurons is also a universal approximator. There is, in addition, a minimum width required for an artificial neural network to be a universal approximator, which depends on the number of input variables of the model [13].
Figure 3: Electrical conductivity histogram.
The artificial neural networks were built by varying the number of hidden neurons subject to the constraint that the minimum number of neurons in the hidden layer (W\({}_{\text{min}}\)) must be greater than or equal to the number of input variables of the network (D\({}_{\text{x}}\)) plus 1 (Equation 3), so as to guarantee the universal approximator property [14].
\[\text{W}_{\text{min}}\geq\text{D}_{\text{x}}+1 \tag{3}\]
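For the model developed here there are \(\text{D}_{\text{x}}=6\) predictor variables (the temperature and the five oxide molar fractions), so Equation 3 requires \(\text{W}_{\text{min}}\geq 6+1=7\) neurons in the hidden layer; the 100-neuron network eventually selected satisfies this bound comfortably.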
The Adam optimizer [15] was used, an algorithm for first-order gradient-based optimization of stochastic objective functions. Weights and biases were initialized with the Glorot uniform method [16], which has advantages over standard initialization schemes because it produces activation values and backpropagated gradients that remain approximately equal across the different depths of an artificial neural network.
From the preprocessed database, training and test sets were drawn in order to report performance through the chosen metrics: 80% of the data were used for training and 20% for testing, and the test data were **not** used during the training of the artificial neural networks. The test steps were carried out with a loss calculated by the root mean squared error (Equation 4) [17], and ReLU (Equation 5) was chosen as the activation function of the trained artificial neural networks.
\[\text{RMSE}=\sqrt{\frac{1}{\text{N}}\sum(\text{y}_{\text{true}}-\text{y}_{\text{predicted}})^{2}}\tag{4}\]
\[\text{f(x)}=\max(0,\text{x})\tag{5}\]
Thus, the artificial neural networks were built by varying the number of neurons in the hidden layer, with 6 input variables (temperature in Kelvin and the chemical composition of SiO\({}_{2}\), CaO, MgO, Al\({}_{2}\)O\({}_{3}\) and FeO in molar fraction) and 1 output variable (electrical conductivity in S/m).
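The paper does not state which software library was used, so the following sketch should be read only as one possible realisation of the setup described above, written with TensorFlow/Keras; the number of training epochs and all variable names are illustrative assumptions.

```python
import tensorflow as tf
from sklearn.model_selection import train_test_split

def rmse(y_true, y_pred):
    """Root mean squared error, used as the training loss (Equation 4)."""
    return tf.sqrt(tf.reduce_mean(tf.square(y_true - y_pred)))

def build_model(n_hidden):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(6,)),              # T, SiO2, CaO, MgO, Al2O3, FeO
        tf.keras.layers.Dense(n_hidden, activation="relu",
                              kernel_initializer="glorot_uniform"),
        tf.keras.layers.Dense(1, kernel_initializer="glorot_uniform"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(), loss=rmse)
    return model

# 80% of the data for training, 20% held back for testing.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = build_model(n_hidden=100)                # the best network found had 100 hidden neurons
model.fit(X_train, y_train, epochs=200, verbose=0)  # epoch count chosen arbitrarily for illustration
```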
### Statistical Evaluation
The average absolute error (Equation 6) [18] and the standard deviation of the error (Equation 7) [19] were used as metrics during the training and test phases of the artificial neural networks. The standard deviation relates to the shape of a univariate distribution, indicating the width of that variable's probability density function [20].
\[\text{AAE}=\frac{1}{\text{N}}\sum\left|\text{y}_{\text{true}}-\text{y}_{\text{predicted}}\right|\tag{6}\]
\[\text{St.Dev.}=\sqrt{\frac{1}{\text{N}}\sum(\text{deviation}-\mu_{\text{deviation}})^{2}}\tag{7}\]
where _deviation_ = y\({}_{\text{true}}\) - y\({}_{\text{predicted}}\) and \(\mu_{\text{deviation}}\) is the arithmetic mean of the deviation.
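Equations 6 and 7 translate directly into code; the following NumPy sketch (names illustrative) computes both metrics from the true and predicted conductivities.

```python
import numpy as np

def average_absolute_error(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def deviation_std(y_true, y_pred):
    deviation = y_true - y_pred
    return np.sqrt(np.mean((deviation - deviation.mean()) ** 2))  # same as np.std(deviation)
```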
Sensitivity analysis can be defined as the determination of the contribution of each input variable to the output of an artificial neural network [21]. There are different ways of performing sensitivity analysis on an artificial neural network, such as perturbation of the input variables, partial derivatives [21] and the connection weights method [22]. The connection weights method is based on the relationships between the weights of an artificial neural network and has been applied, for example, to sensitivity analyses for predicting the density of oil-based muds at high temperature [23] and the viscosity of multicomponent slags [24]. Furthermore, sensitivity analysis using the connection weights method achieves a higher similarity coefficient than other sensitivity analysis methods [21].
The best artificial neural network was subjected to sensitivity analysis with the connection weights method to quantify the relative importance of each input variable (chemical composition and temperature) with respect to the output variable, electrical conductivity.
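The exact variant of the connection weights method used in the study is not spelled out, so the sketch below follows one common formulation (summing, for each input, the products of input-to-hidden and hidden-to-output weights, then normalising the absolute values to percentages); treat the details as assumptions rather than a reproduction of the authors' procedure.

```python
import numpy as np

# W_ih: array of shape (n_inputs, n_hidden) with input-to-hidden weights
# w_ho: array of shape (n_hidden,) with hidden-to-output weights

def connection_weights_importance(W_ih, w_ho):
    contribution = W_ih @ w_ho                       # one summed product per input variable
    relative = np.abs(contribution) / np.abs(contribution).sum()
    return 100.0 * relative                          # relative importance in percent

# With the Keras sketch above, the weight matrices could be obtained roughly as
#   W_ih = model.get_weights()[0]          # shape (6, 100)
#   w_ho = model.get_weights()[2].ravel()  # shape (100,)
```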
## 3 Results and discussion
Several artificial neural networks were trained, and the best one had 100 neurons in the hidden layer. The best artificial neural network therefore has 6 input variables, 100 neurons in the hidden layer and 1 neuron in the output layer. The loss curve of the best artificial neural network over the training epochs (Figure 4) starts from a high value in the first epochs and is then driven down by the Adam optimizer, reaching a value close to zero at the end of training.
The best artificial neural network presented an average absolute error of 21.29 S/m and a standard deviation of 20.07 S/m (Table 1); the plot comparing the values in the database with the values predicted by the best artificial neural network is shown in Figure 5.
The predicted values approximate the database well, showing that the chemical composition of the SiO\({}_{2}\)-CaO-MgO-Al\({}_{2}\)O\({}_{3}\)-FeO system and the temperature can be used to estimate the electrical conductivity of the slag used and generated in the EAF. Mathematical modeling also supports decision-making in process control through sensitivity analysis. Using the weights of the hidden layer and the output layer, the relative contribution of each input variable to the output variable was calculated to establish its relative importance.
Sensitivity analysis showed that the CaO and SiO\({}_{2}\) variables have the greatest impact on the electrical conductivity of the studied slags, with importances of 30.28% and 20.29%, respectively. The high relative importance of CaO indicates that the conduction process in the SiO\({}_{2}\)-CaO-MgO-Al\({}_{2}\)O\({}_{3}\)-FeO system is ionic, as in CaO-SiO\({}_{2}\)-B\({}_{2}\)O\({}_{3}\) systems, while SiO\({}_{2}\) is deleterious to the refining process of materials within an EAF.
The variables MgO and FeO showed relative importances of 19.29% and 18.67%, respectively. The influence of MgO on the foaming index of EAF slags is well known and is of great importance for the energy efficiency of the process [2], while FeO acts directly in the formation of the CO bubbles responsible for slag foaming. Al\({}_{2}\)O\({}_{3}\) and temperature obtained importances of 9.22% and 2.25%, respectively, showing a lower influence on the electrical conductivity than the other variables (Figure 6).
\begin{table}
\begin{tabular}{c c c} \hline
**Model** & **AAE (S/m)** & **St. Dev. (S/m)** \\ \hline ANN & 21.29 & 20.07 \\ \hline \end{tabular}
\end{table}
Table 1: Average absolute error (AAE) and Standard Deviation in ANN
Figure 4: Loss (RMSE) versus Epochs in the best artificial neural network.
## 4 Conclusion
The electrical conductivity of the slag in the material manufacturing process in the electric arc furnace (EAF) has recognized importance for the quality of the final products and for the productive energy control. The use of mathematical modeling helps in understanding the interactions between some important variables, such as the chemical composition of the slag and the operating temperature.
Artificial neural networks were built for the mathematical modeling of the electrical conductivity of the EAF slag using the non-linear modeling method. The best artificial neural network showed an average absolute error of 21.29 S/m and a standard deviation of 20.07 S/m with respect to the Sciglass database used.
Sensitivity analysis using the connection weights method showed that the variable with the highest relative importance was CaO with 30.28%, indicating that the charge transport mechanism in the SiO\({}_{2}\)-CaO-MgO-Al\({}_{2}\)O\({}_{3}\)-FeO system is ionic conduction. The second highest relative importance was SiO\({}_{2}\) with 20.29%, which has a deleterious effect on refining reactions in the EAF. Then the variables MgO
Figure 5: Electrical conductivity predicted (y-axis) and in the database (x-axis) in the developed artificial neural network.
Figure 6: Relative importances in SiO\({}_{2}\)-CaO-MgO-Al\({}_{2}\)O\({}_{3}\)-FeO system and temperature
and FeO showed relative importances of 19.29% and 18.67%, indicating that both have similar values. The variables Al\({}_{2}\)O\({}_{3}\) and temperature had the lowest importance among the 6 input variables in the best artificial neural network, with 9.22% and 2.25%, respectively.
|
2308.15194 | Ensemble of Counterfactual Explainers | In eXplainable Artificial Intelligence (XAI), several counterfactual
explainers have been proposed, each focusing on some desirable properties of
counterfactual instances: minimality, actionability, stability, diversity,
plausibility, discriminative power. We propose an ensemble of counterfactual
explainers that boosts weak explainers, which provide only a subset of such
properties, to a powerful method covering all of them. The ensemble runs weak
explainers on a sample of instances and of features, and it combines their
results by exploiting a diversity-driven selection function. The method is
model-agnostic and, through a wrapping approach based on autoencoders, it is
also data-agnostic. | Riccardo Guidotti, Salvatore Ruggieri | 2023-08-29T10:21:50Z | http://arxiv.org/abs/2308.15194v1 | # Ensemble of Counterfactual Explainers
###### Abstract
In eXplainable Artificial Intelligence (XAI), several counterfactual explainers have been proposed, each focusing on some desirable properties of counterfactual instances: minimality, actionability, stability, diversity, plausibility, discriminative power. We propose an ensemble of counterfactual explainers that boosts weak explainers, which provide only a subset of such properties, to a powerful method covering all of them. The ensemble runs weak explainers on a sample of instances and of features, and it combines their results by exploiting a diversity-driven selection function. The method is model-agnostic and, through a wrapping approach based on autoencoders, it is also data-agnostic.
## 1 Introduction
In eXplainable AI (XAI), several counterfactual explainers have been proposed, each focusing on some desirable properties of counterfactual instances. Consider an instance \(x\) for which a black box decision \(b(x)\) has to be explained. It should be possible to find various counterfactual instances \(c\) (_availability_) which are _valid_ (change the decision outcome, i.e., \(b(c)\neq b(x)\)), _minimal_ (the number of features changed in \(c\) w.r.t. \(x\) should be as small as possible), _actionable_ (the feature values in \(c\) that differ from \(x\) should be controllable) and _plausible_ (the feature values in \(c\) should be coherent with the reference population). The counterfactuals found should be similar to \(x\) (_proximity_), but also different among each other (_diversity_). Also, they should exhibit a _discriminative power_ to characterize the black box decision boundary in the feature space close to \(x\). Counterfactual explanation methods should return similar counterfactuals for similar instances to explain (_stability_). Finally, they must be fast enough (_efficiency_) to allow for interactive usage.
In the literature, these desiderata for counterfactuals are typically modeled through an optimization problem [12], which, on the negative side, favors only a subset of the properties above. We propose here an _ensemble of counterfactual explainers_ (ece) that, as in the case of ensemble of classifiers, boosts weak explainers to a powerful method covering all of the above desiderata. The ensemble runs _base counterfactual explainers_ (bce) on a sample of instances and of features, and it combines their results by exploiting a diversity-driven selection function. The method is model-agnostic and, through a wrapping approach based on encoder/decoder functions, it is also data-agnostic. We will be able to reason uniformly on counterfactuals for tabular data, images, and time series. An extensive experimentation is presented to validate the approach. We compare with state-of-the-art explanation methods on several metrics from the literature.
## 2 Related Work
Research on XAI has flourished over the last few years [5]. Explanation methods can be categorized as: _(i) intrinsic_ vs _post-hoc_, depending on whether the AI model is directly interpretable, or if the explanation is computed for a given black box model; _(ii) model-specific_ vs _model-agnostic_, depending on whether the approach requires access to the internals of the black box model; _(iii) local_ or _global_, depending on whether the explanation regards a specific instance, or the overall logic of the black box. Furthermore, explanation methods can be categorized w.r.t. the type of explanation they return (factual or counterfactual) and w.r.t. the type of data they work with. We restrict to local and post-hoc methods returning counterfactual explanations, which is the focus of our proposal.
A recent survey of counterfactual explainers is [15]. Most of the systems are data-specific and generate synthetic (_exogenous_) counterfactuals. Some approaches search _endogenous_ counterfactuals in a given dataset [9] of instances belonging to the reference population. Exogenous counterfactuals may instead break known relations between features, producing unrealistic instances. Early approaches generated exogenous counterfactuals by solving an optimization problem [12]. In our proposal, we do not rely on this family of methods as they are typically computationally expensive. Another family of approaches are closer to instance-based classification, and rely on a distance function among instances [9, 10]. E.g., [10] grows a sphere around the instance to explain, stopping at the decision boundary of the black box. They are simple but effective, and the idea will be at the core of our base explainers. Some approaches deal with high dimensionality of data through autoencoders [3], which map instances into a smaller latent feature space. Search for counterfactuals is performed in the latent space, and then instances are decoded back to the original space. We rely on this idea to achieve a data-agnostic approach.
## 3 Problem Setting
A _classifier_\(b\) is a function mapping an instance \(x\) from a reference population in a feature space to a nominal value \(y\) also called class value or decision, i.e., \(b(x)=y\). The classifier \(b\) is a _black box_ when its internals are either unknown to the observer or they are known but uninterpretable by humans. Examples include neural networks, SVMs, ensemble classifiers [5].
A _counterfactual_ of \(x\) is an instance \(c\) for which the decision of the black box differs from the one of \(x\), i.e., such that \(b(c)\neq b(x)\). A counterfactual is _actionable_ if it belongs to the reference population. Since one may not have a complete specification of the reference population, a relaxed definition of actionability is to require the counterfactual to satisfy given constraints on its feature values. We restrict to simple constraints \(a_{A}(c,x)\) that hold iff \(c\) and \(x\) have the same values over for a set \(A\) of _actionable features_. Non-actionable features (such as age, gender, race) cannot be changed when searching for a counterfactual.
A _\(k\)-counterfactual explainer_ is a function \(f_{k}\) returning a set \(C=\{c_{1},\ldots,c_{h}\}\) of \(h\leq k\) actionable counterfactuals for a given instance of interest \(x\), a black
box \(b\), a set \(X\) of known instances from the reference population, and a set \(A\) of actionable features, i.e., \(f_{k}(x,b,X,A)=C\). For endogenous approaches, \(C\subseteq X\). A counterfactual explainer is model-agnostic (resp., data-agnostic) if the definition of \(f_{k}\) does not depend on the internals of \(b\) (resp., on the data type of \(x\)). We consider the following data types: tabular data, time series and images. For _tabular data_, an instance \(x=\{(a_{1},v_{1}),\ldots,(a_{m},v_{m})\}\) is a tuple of \(m\) attribute-value pairs \((a_{i},v_{i})\), where \(a_{i}\) is a feature (or attribute) and \(v_{i}\) is a value from the domain of \(a_{i}\). For example, \(x=\{(\mathit{age},22),(\mathit{sex},\mathit{male}),(\mathit{income},\mathit{ 800})\}\). The domain of a feature can be continuous (_age_, _income_), or categorical (_sex_). For (univariate) _time series_, an instance \(x=\langle v_{1},\ldots,v_{m}\rangle\) is an ordered sequence of continuous values (e.g., the body temperature registered at hourly rate). For _images_, \(x\) is a matrix in \(\mathbb{R}^{m\times m}\) representing the intensity of the image pixels.
_Problem Statement_. We consider the problem of designing a \(k\)-counterfactual explainer satisfying a broad range of properties: availability, validity, actionability, plausibility, similarity, diversity, discriminative power, stability, efficiency.
## 4 Ensemble of Explainers
Our proposal to the stated problem consists of an ensemble of base explainers named ece (ensemble of counterfactual explainers). Ensemble classifiers boost the performance of weak learner base classifiers by increasing the predictive power, or by reducing bias or variance. Similarly, we aim at improving base \(k\)-counterfactual explainers by combining them into an ensemble of explainers.
The pseudo-code1 of ece is shown in Alg. 1. It takes as input an instance to explain \(x\), the black box to explain \(b\), a set of known instances \(X\), the number of required counterfactuals \(k\), the set of actionable features \(A\), a set of base \(k\)-counterfactual explainers \(E\), and it returns (at most) \(k\) counterfactuals \(C\). Base explainers are invoked on a sample without replacement \(X^{\prime}\) of instances from \(X\) (line 3), and on a random subset \(A^{\prime}\) of the actionable features \(A\) (line 4), as in Random Forests. All counterfactuals produced by the base explainers are collected in a set \(C\) (line 5), from which \(k\) counterfactuals are selected (line 6). Actionability of counterfactuals is guaranteed by the base explainers (or by filtering
out non-actionable ones from their output). Diversity is enforced by randomization (instance and feature sampling) as well as by tailored selection strategies. Stability is a result of combining multiple base explainers, analogously to the smaller variance of ensemble classification w.r.t. the base classifiers. Moreover, if all base explainers are model-agnostic, this also holds for ece.
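A minimal Python sketch of this ensemble loop is given below, assuming the black box `b`, the base explainers and the selection function are provided as callables; the sampling rates are illustrative, since Alg. 1 does not fix them.

```python
import random

def ece(x, b, X, k, A, base_explainers, select, inst_rate=0.7, feat_rate=0.5):
    """Ensemble of counterfactual explainers (sketch of Alg. 1)."""
    C = []
    for f_k in base_explainers:
        X_sub = random.sample(X, int(inst_rate * len(X)))            # sample of known instances
        A_sub = random.sample(A, max(1, int(feat_rate * len(A))))    # sample of actionable features
        C.extend(f_k(x, b, X_sub, A_sub, k))                         # run the weak explainer
    C = [c for c in C if b(c) != b(x)]                               # keep only valid counterfactuals
    return select(x, C, k)                                           # diversity-driven selection
```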
### Base Explainers
All bce's presented are parametric to a distance function \(d()\) over the feature space. In the experiments, we adopt: for tabular data, a mixed distance weighting Euclidean distance for continuous features and the Jaccard dissimilarity for categorical ones; for images and times series, the Euclidean distance.
**Brute Force Explainer (bce-b).** A brute force approach considers all subsets \(\mathcal{A}\) of actionable features \(A\) with cardinality at most \(n\). Also, for each actionable feature, an equal-width binning into \(r\) bins is computed, and for each bin the center value will be used as representative of the bin. The binning scheme considers only the known instances \(X\) with black box decision different from \(x\). The brute force approach consists of generating all the possible variations of \(x\) with respect to any of the subset in \(\mathcal{A}\) by replacing an actionable feature value in \(x\) with any representative value of a bin of the feature. Variations are ranked according to their distance from \(x\). For each such variation \(c\), a _refine_ procedure implements a bisecting strategy of the features in \(c\) which are different from \(x\) while maintaining \(b(c)\neq b(x)\). The procedure returns either a singleton with a counterfactual or an empty set (in case \(b(c)=b(x)\)). The aim of _refine_ is to improve similarity of the counterfactual with \(x\). The procedure stops when \(k\) counterfactuals have been found or there is no further candidate. The greater are \(n\) and \(r\), the larger number of counterfactuals to choose from, but also the higher the computational complexity of the approach, which is \(O(\binom{|A|}{n}\cdot n\cdot r)\). bce-b tackles minimization of changes and similarity, but not diversity.
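The following sketch illustrates the candidate-generation part of bce-b, assuming numeric features indexed by integers and omitting the refine bisection step for brevity; `d` is a distance function and the defaults \(n=1\), \(r=10\) follow the parameter tuning reported later.

```python
from itertools import combinations, product
import numpy as np

def bce_b(x, b, X, A, d, k=3, n=1, r=10):
    """Brute-force base explainer (sketch; the refine step is omitted)."""
    X_diff = np.array([z for z in X if b(z) != b(x)])        # known instances with a different decision
    # equal-width binning of each actionable feature, each bin represented by its centre
    centres = {}
    for a in A:
        edges = np.histogram_bin_edges(X_diff[:, a], bins=r)
        centres[a] = 0.5 * (edges[:-1] + edges[1:])
    variations = []
    for size in range(1, n + 1):
        for subset in combinations(A, size):
            for values in product(*(centres[a] for a in subset)):
                c = np.array(x, dtype=float)
                c[list(subset)] = values                     # perturb only the chosen features
                variations.append(c)
    variations.sort(key=lambda c: d(c, x))                   # closest variations first
    return [c for c in variations if b(c) != b(x)][:k]       # valid counterfactuals, at most k
```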
**Tree-based Explainer (bce-t).** This proposal starts from a (surrogate/shadow [7]) decision tree \(\mathcal{T}\) trained on \(X\) to mine the black box behavior. Leaves in \(T\) leading to predictions different from \(b(x)\) can be exploited for building counterfactuals. Basically, the splits on the path from the root to one such leaf represent conditions satisfied by counterfactuals. To ensure actionability, only splits involving actionable constraints are considered. To tackle minimality, the filtered paths are sorted w.r.t. the number of conditions not already satisfied by \(x\). For each such path, we choose one instance \(c\) from \(X\) reaching the leaf and minimizing distance to \(x\). Even though the path has been checked for actionable splits, the instance \(c\) may still include changes w.r.t. \(x\) that are not actionable. For this, we overwrite non-actionable features. Since not all instances at a leaf have the same class as the one predicted at the leaf, we also have to check for validity before including \(c\) in the result set. The search over different paths of the decision tree allows for some diversity in the results, even though this cannot be explicitly controlled for. The computational complexity requires both a decision tree construction and a number of distance calculations.
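A simplified sketch of bce-t is shown below; it trains a surrogate decision tree, visits leaves whose prediction differs from \(b(x)\) and, for each, returns the closest instance reaching the leaf with non-actionable features overwritten. The sorting of paths by unmet conditions and the check of actionable splits are omitted, and `max_depth` is an illustrative choice.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bce_t(x, b, X, A, d, k=3):
    """Tree-based base explainer (simplified sketch)."""
    X, x = np.asarray(X), np.asarray(x)
    y = np.array([b(z) for z in X])
    tree = DecisionTreeClassifier(max_depth=8).fit(X, y)      # surrogate/shadow tree of the black box
    leaf_of = tree.apply(X)
    non_act = [i for i in range(len(x)) if i not in A]
    C = []
    for leaf in np.unique(leaf_of):
        leaf_class = tree.classes_[np.argmax(tree.tree_.value[leaf])]
        if leaf_class == b(x):
            continue                                          # only leaves with a different decision
        members = X[leaf_of == leaf]
        c = min(members, key=lambda z: d(z, x)).copy()        # closest instance reaching the leaf
        c[non_act] = x[non_act]                               # overwrite non-actionable features
        if b(c) != b(x):                                      # validity check after the overwrite
            C.append(c)
        if len(C) == k:
            break
    return C
```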
**Generative Sphere-based Explainer (bce-s).** The last base counterfactual explainer relies on a generative approach growing a _sphere_ of synthetic instances around \(x\)[10]. Instance are generated in all directions of the feature space until the decision boundary of the black box \(b\) is crossed and the closest counterfactual to \(x\) is retrieved. The sphere radius is initialized to a large value, and then it is decreased until the boundary is crossed. Next, a lower bound radius and an upper bound radius are determined such that the boundary of \(b\) crosses the area of the sphere between the lower bound and the upper bound radii. In its original version, the growing spheres algorithm generates instances following a uniform distribution. bce-s adopts instead a _Gaussian-Matched_ generation [1]. To ensure actionability, non-actionable features of generated instances are set as in \(x\). Finally, bce-s selects from the instances in the final ring the ones which are closest to \(x\) and are valid. The complexity of the approach depends on the distance of the decision boundary from \(x\), which in turn determines the number of iterations needed to compute the final ring.
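The sketch below condenses bce-s into a shrinking-sphere loop: synthetic neighbours are drawn around \(x\) along Gaussian directions, non-actionable features are frozen, and the closest valid instances found before the black box boundary leaves the ball are returned. The radius schedule, sample size and the simplified lower/upper bound bookkeeping are assumptions.

```python
import numpy as np

def bce_s(x, b, A, d, k=3, n_gen=500, r_init=2.0, eta=0.7, n_steps=20, seed=0):
    """Generative sphere-based explainer (condensed sketch)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    non_act = [i for i in range(len(x)) if i not in A]

    def sample_ball(radius):
        z = rng.normal(size=(n_gen, len(x)))                  # Gaussian-matched directions
        z /= np.linalg.norm(z, axis=1, keepdims=True)
        radii = radius * rng.uniform(size=(n_gen, 1)) ** (1.0 / len(x))
        cands = x + radii * z
        cands[:, non_act] = x[non_act]                        # freeze non-actionable features
        return cands

    radius, best = r_init, []
    for _ in range(n_steps):
        valid = [c for c in sample_ball(radius) if b(c) != b(x)]
        if not valid:                                         # boundary is now outside the ball
            break
        best = sorted(valid, key=lambda c: d(c, x))[:k]       # keep the closest valid candidates
        radius *= eta                                         # shrink the sphere towards x
    return best
```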
### Counterfactual Selection
The selection function \(\mathcal{S}\) at line 5 of Alg. 1 selects \(k\)-counterfactuals from those returned by the base explainers. This problem can be formulated as maximizing an objective function over \(k\)-subsets of valid counterfactuals \(C\). We adopt a _density-based_ objective function:
\[\operatorname*{arg\,max}_{S\subseteq C\wedge|S|\leq k}\ |\bigcup_{c\in S}knn_{C}(c)|- \lambda\sum_{c\in S}d(c,x)\]
It aims at maximizing the difference between the size of neighborhood instances of the counterfactuals (a measure of diversity) and the total distance from \(x\) (a measure of similarity) regularized by a parameter \(\lambda\). \(knn_{C}(c)\) returns the \(h\) most similar counterfactuals to \(c\) among those in \(C\). We adopt the Cost Scaled Greedy (csg) algorithm [4] for the above maximization problem.
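A greedy sketch of the selection step is given below; it approximates the csg algorithm of [4] by repeatedly adding the counterfactual with the best marginal gain of the density-based objective. Here `knn_idx(i)` returns the indices of the \(h\) counterfactuals most similar to `C[i]`, mirroring \(knn_{C}\), and `lam` plays the role of \(\lambda\); both are passed in by the caller.

```python
def select_counterfactuals(x, C, k, d, knn_idx, lam=0.5):
    """Greedy approximation of the diversity-driven selection function S."""
    selected, covered = [], set()
    remaining = set(range(len(C)))
    while remaining and len(selected) < k:
        def gain(i):
            # marginal neighbourhood coverage minus the scaled distance from x
            return len(covered | set(knn_idx(i))) - len(covered) - lam * d(C[i], x)
        i_best = max(remaining, key=gain)
        if gain(i_best) <= 0:
            break                                    # no candidate improves the objective
        selected.append(C[i_best])
        covered |= set(knn_idx(i_best))
        remaining.discard(i_best)
    return selected
```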
### Counterfactuals for Other Data Types
We enable ece to work on data types other than tabular data by wrapping it around two functions. An _encoder_\(\zeta\) : \(\mathbb{D}{\rightarrow}\mathbb{R}^{q}\) that maps an instance from its actual domain \(\mathbb{D}\) to a latent space of continuous features, and a _decoder_\(\eta:\mathbb{R}^{q}{\rightarrow}\mathbb{D}\) that maps an instance of the latent space back to the actual domain. Using such functions, any explainer \(f_{k}(x,b,X,A)\) can be extended to the domain \(\mathbb{D}\) by invoking \(\eta(f_{k}(\zeta(x),b^{\prime},\zeta(X),A^{\prime}))\) where the black box in the latent space is \(b^{\prime}(x)=b(\eta(x))\). The definition of the actionable features in the latent space \(A^{\prime}\) depends on the actual encoder and decoder.
Let us consider the image data type (for time series, the reasoning is analogous). A natural instantiation of the wrapping that achieves dimensionality reduction with a controlled loss of information consists in the usage of _autoencoders_ (AE) [8]. An AE is a neural network composed by an encoder and a
decoder which are trained simultaneously for learning a representation that reduces the dimensionality while minimizing the reconstruction loss. A drawback of this approach is that we cannot easily map actionable feature in the actual domain to features in the latent space (this is a challenging research topic on its own). For this, we set \(A^{\prime}\) to be the whole set of latent features and hence, we are not able to deal with actionability constraints.
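The wrapping itself reduces to composing the explainer with the encoder and decoder, as in the sketch below, which directly mirrors \(\eta(f_{k}(\zeta(x),b^{\prime},\zeta(X),A^{\prime}))\); the encoder and decoder are assumed to be given (e.g., the two halves of a trained autoencoder).

```python
def wrap_explainer(f_k, encode, decode):
    """Lift an explainer defined on the latent space to the original data type."""
    def wrapped(x, b, X, A_latent, k):
        b_latent = lambda z: b(decode(z))            # black box seen from the latent space
        Z = [encode(xi) for xi in X]
        C_latent = f_k(encode(x), b_latent, Z, A_latent, k)
        return [decode(c) for c in C_latent]         # map counterfactuals back to the domain
    return wrapped
```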
## 5 Experiments
**Experimental Settings.** We consider a few datasets widely adopted as benchmarks in the literature (see Table 1). There are three time series datasets, two image datasets, and four tabular datasets. For each tabular dataset, we have selected the set \(A\) of actionable features, as follows. adult: age, education, marital status, relationship, race, sex, native country; compas: age, sex, race; fico: external risk estimate; german: age, people under maintenance, credit history, purpose, sex, housing, foreign worker.
For each dataset, we trained and explained the following black box classifiers: Random Forest (RF) as implemented by _scikit-learn_, and Deep Neural Networks (DNN) implemented by _keras_ for tabular datasets, and Convolutional Neural Networks (CNNs) implemented with _keras_ for images and time series. We split tabular datasets into a 70% partition used for the training and 30% used for the test, while image and time series datasets are already released in partitioned files. For each black-box and for each dataset, we performed on the training set a random search with a 5-fold cross-validation for finding the best parameter setting. The classification accuracy on the test set is shown in Table 1 (right).
We compare our proposal against competitors from the state-of-the-art offering a software library that is updated and easy to use. dice[12] handles categorical features, actionability, and allows for specifying the number \(k\) of counterfactuals to return. However, it is not model-agnostic as it only deals with
\begin{table}
\begin{tabular}{|c c|c c c c c c c c|c|} \hline \multicolumn{2}{|c|}{Dataset} & \(n\) & \(m\) & \(m_{con}\) & \(m_{cat}\) & \(m_{act}\) & \(m_{1h}\) & \(l\) & RF NN \\ \hline \multirow{4}{*}{**Model**} & adult & 32,561 & 12 & 4 & 8 & 5 & 103 & 2 &.85 &.84 \\ & compas & 7,214 & 10 & 7 & 3 & 7 & 17 & 3 &.56 &.61 \\ & fico & 10,459 & 23 & 23 & 0 & 22 & - & 2 &.68 &.67 \\ & german & 1,000 & 20 & 7 & 13 & 13 & 61 & 2 &.76 &.81 \\ \hline \multirow{4}{*}{**Model**} & mhist & 60k & \(28\times 28\) & all & 0 & all & - & 10 & - &.99 \\ & fashion & 60k & \(28\times 28\) & all & 0 & all & - & 10 & - &.97 \\ \hline \multirow{4}{*}{**Model**} & gunpoint & 250 & 150 & all & 0 & all & - & 2 & - &.72 \\ & power & 1,096 & 24 & all & 0 & all & - & 2 & - &.98 \\ \cline{1-1} & ecg200 & 200 & 96 & all & 0 & all & - & 2 & - &.76 \\ \hline \end{tabular}
\end{table}
Table 1: Datasets description and black box accuracy. \(n\) is the no. of instances. \(m\) is the no. of features. \(m_{con}\) and \(m_{cat}\) are the no. of continuous and categorical features respectively. \(m_{act}\) is the no. of actionable features. \(m_{1h}\) is the total no. of features after one-hot encoding. Rightmost columns report classification accuracy: NN stands for DNN for tabular data, and for CNN for images and time series.
differentiable models such as DNNs. The _FAT_[13] library implements a brute force (bf) counterfactual approach. It handles categorical data but not the number \(k\) of desired counterfactuals nor actionability. The _ALIBI_ library implements the counterfactual explainers cem[3, 11], cegp[14] and wach[16]. All of them are designed to explain DNNs, do not handle categorical features and return a single counterfactual, but it is possible to enforce actionability by specifying the admissible feature ranges. Finally, ceml[2] is a model-agnostic toolbox for computing counterfactuals based on optimization that does not handle categorical features and returns a single counterfactual. We also re-implemented the case-based counterfactual explainer (cbc) from [9]. For each tool, we use the default settings offered by the library or suggested in the reference paper. For each dataset, we explain 100 instances \(x\) from the test set. The set \(X\) of known instances in input to the explainers is the training set of the black box. We report aggregated results as means over the 100 instances, datasets and black boxes.
**Evaluation Metrics.** We evaluate the performances of counterfactual explainers under various perspectives [12]. The measures reported in the following are stated for a single instance \(x\) to be explained, and considering the returned \(k\)-counterfactual set \(C=f_{k}(x,b,X,A)\). The metrics are obtained as the mean value of the measures over all \(x\)'s to explain.
_Size._ The number of counterfactuals \(|C|\) can be lower than \(k\). We define \(\mathit{size}=|C|/k\). The higher the better. Recall that by definition of a \(k\)-counterfactual explainer, any \(c\in C\) is valid, i.e., \(b(c)\neq b(x)\).
_Actionability._ It accounts for the counterfactuals in \(C\) that can be realized: \(\mathit{act}=|\{c\in C\mid a_{A}(c,x)\}|/k\). The higher the better.
_Implausibility._ It accounts for how close are counterfactuals to the reference population. It is the average distance of \(c\in C\) from the closest instance in the known set \(X\). The lower the better.
\[\mathit{impl}=\frac{1}{|C|}\sum_{c\in C}\min_{x\in X}d(c,x)\]
_Dissimilarity._ It measures the proximity between \(x\) and the counterfactuals in \(C\). The lower the better. We measure it in two fashions. The first one, named \(\mathit{dis}_{\mathit{dist}}\), is the average distance between \(x\) and the counterfactuals in \(C\). The second one, \(\mathit{dis}_{\mathit{count}}\), quantifies the average number of features changed between a counterfactual \(c\) and \(x\). Let \(m\) be the number of features.
\[\mathit{dis}_{\mathit{dist}}=\frac{1}{|C|}\sum_{c\in C}d(x,c)\qquad\mathit{dis }_{\mathit{count}}=\frac{1}{|C|m}\sum_{c\in C}\sum_{i=1}^{m}\mathbb{1}_{c_{i} \neq x_{i}}\]
_Diversity._ It accounts for a diverse set of counterfactuals, where different actions can be taken to recourse the decision of the black box. The higher the better. We denote by \(\mathit{div}_{\mathit{dist}}\) the average distance between the counterfactuals in \(C\), and by \(\mathit{div}_{\mathit{count}}\) the average number of different features between the counterfactuals.
\[\mathit{div}_{\mathit{dist}}=\frac{1}{|C|^{2}}\sum_{c\in C}\sum_{c^{\prime}\in C }d(c,c^{\prime})\qquad\mathit{div}_{\mathit{count}}=\frac{1}{|C|^{2}m}\sum_{c \in C}\sum_{c^{\prime}\in C}\sum_{i=1}^{m}\mathbb{1}_{c_{i}\neq c^{\prime}_{i}}\]
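For concreteness, the four dissimilarity and diversity measures could be computed as in the sketch below, with `d` any distance over the feature space; the sketch assumes instances are numeric arrays of length \(m\).

```python
import numpy as np

def dissimilarity_diversity(x, C, d):
    """Return dis_dist, dis_count, div_dist, div_count for a counterfactual set C."""
    x, C = np.asarray(x), np.asarray(C)
    m = len(x)
    dis_dist = np.mean([d(x, c) for c in C])
    dis_count = np.mean([(c != x).sum() / m for c in C])
    div_dist = np.mean([[d(c, cp) for cp in C] for c in C])
    div_count = np.mean([[(c != cp).sum() / m for cp in C] for c in C])
    return dis_dist, dis_count, div_dist, div_count
```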
_Discriminative Power._ It measures the ability to distinguish through a naive approach between two different classes only using the counterfactuals in \(C\). In line with [12], we implement it as follows. The sets \(X_{=}\subset X\) and \(X_{\neq}\subset X\) such that \(b(X_{=})=b(x)\) and \(b(X_{\neq})\neq b(x)\) are selected such that the instances in \(X_{=},X_{\neq}\) are the \(k\) closest to \(x\). Then we train a simple 1-Nearest Neighbor (1NN) classifier using \(C\cup\{x\}\) as training set, and \(d\) as distance function. The choice of 1NN is due to its simplicity and connection to human decision making starting from examples. We classify the instances in \(X_{=}\cup X_{\neq}\) and we use the accuracy of the 1NN as _discriminative power_ (_dipo_).
_Instability._ It measures to which extent the counterfactuals \(C\) are close to the ones obtained for the closest instance to \(x\) in \(X\) with the same black box decision. The rationale is that similar instances should obtain similar explanations [6]. The lower the better.
\[\mathit{inst}=\frac{1}{1+d(x,x^{\prime})}\frac{1}{|C||C^{\prime}|}\sum_{c\in C }\sum_{c^{\prime}\in C^{\prime}}d(c,c^{\prime})\]
with \(x^{\prime}=\mathit{argmin}_{x_{1}\in X\setminus\{x\},b(x_{1})=b(x)}\,d(x,x_{1})\) and \(C^{\prime}=f_{k}(x^{\prime},b,X,A)\).
_Runtime._ It measures the elapsed time required by the explainer to compute the counterfactuals. The lower the better. Experiments were performed on Ubuntu 20.04 LTS, 252 GB RAM, 3.30GHz x 36 Intel Core i9.
In line with [12, 16], in the above evaluation measures, we adopt as distance \(d\) the following mixed distance:
\[d(a,b)=\frac{1}{m_{\mathit{con}}}\sum_{i\in\mathit{con}}\frac{|a_{i}-b_{i}|}{ \mathit{MAD}_{i}}+\frac{1}{m_{\mathit{cat}}}\sum_{i\in\mathit{cat}}\mathbb{1}_ {a_{i}\neq b_{i}}\]
where _con_ (resp., _cat_) is the set of continuous (resp., categorical) feature positions. Such a distance is not necessarily the one used by the compared explainers. In particular, it substantially differs from the one used by ece.
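A direct implementation of this mixed distance is sketched below; `con` and `cat` are the index sets of continuous and categorical features, and `mad` holds the median absolute deviation of each continuous feature, assumed to be estimated on the known instances \(X\).

```python
def mixed_distance(a, b, con, cat, mad):
    """MAD-scaled L1 distance on continuous features plus mismatch rate on categorical ones."""
    d_con = sum(abs(a[i] - b[i]) / mad[i] for i in con) / max(len(con), 1)
    d_cat = sum(a[i] != b[i] for i in cat) / max(len(cat), 1)
    return d_con + d_cat
```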
**Parameter Tuning.** From an experimental analysis (not reported here) of the impact of the components of ece, we set: for bce-b, \(r=10\) and \(n=1\); and for ece, \(|E|=10\) base explainers chosen uniformly random.
**Quantitative Evaluation.** Fig. 1 shows the performance of the compared explainers on tabular data when varying \(k\). From the first plot, we notice that
Figure 1: Aggregate metrics on tabular datasets by varying \(k\).
only ece, dice, cbcce and bf are able to return at least 80% of the required counterfactuals. Most of the other methods only return a single one. From the second plot, we conclude that only ece, bf and cbcce return a notable fraction of actionable counterfactuals (_act_). From the plots on dissimilarity (_discount_ and _disdist_) and diversity (_divcount_ and _divdist_), it turns out that cbcce (and also dice) has good values of diversity, but performs poorly w.r.t. dissimilarity. bf wins over ece w.r.t. the _disdist_ measure, loses w.r.t. the _divdist_ measure, and is substantially equivalent w.r.t. the other two measures. As for discriminative power _dipo_, ece performs slightly lower than dice, cbcce, bf and ceml. Regarding plausibility (_impl_), ece is the best performer if we exclude methods that return a single counterfactual (i.e., cem, cegp and wach). Indeed, ece _impl_ is constantly smaller that dice and bf and in line with cbcce, which is the only endogenous methods compared. Intuitively, counterfactuals returned by ece resemble instances from the reference population. Concerning instability _inst_, ece is slightly worse than bf and slightly better than dice. ceml is the most stable, and cbcce the most unstable. cem, cegp and wach are not shown in the instability plot because, in many cases, they do not return counterfactuals for both of the two similar instances. Finally, all the explainers, with the exception of bf and ece, require on average a runtime of more than one minute. We summarize the performances of the approaches by the CD diagram in Fig. 3 (left), which shows the mean rank position of each method over all experimental runs (datasets \(\times\) black boxes \(\times\) metrics \(\times\)\(k\)). Overall, ece performs better than all competitors, and the difference is statistically significant.
Fig. 2 shows the performance on images (first row) and time series (second row) datasets. We consider also the ece with the identity encoder/decoder (named ece\({}_{I}\)), and with the kernel encoder/decoder (ece\({}_{K7}\) for kernel of size
Figure 3: Critical Difference (CD) diagrams for the post-hoc Nemenyi test at 95% confidence level: tabular (left), images (center), and time series (right) datasets.
Figure 2: Aggregate metrics on images (\(1^{st}\) row) and time series (\(2^{nd}\) row) by varying \(k\).
\(7\times 7\) and \(\textsc{ece}_{K4}\) for kernel of size \(4\times 4\)). For images, cem, cegp and wach return only a single counterfactual, while ece provides more alternatives and with the best diversity. wach returns the least implausible counterfactuals, the variants of ece stand in the middle, while cem returns less realistic counterfactuals. Regarding running time, cegp is the most efficient together with \(\textsc{ece}_{I}\) and \(\textsc{ece}_{K4}\). The usage of the autoencoder in ece increases the runtime. cem and wach are the slowest approaches. Similar results are observed for time series, with few differences. The CD diagrams in Fig. 3 (center, right) confirm that ece and its variants are the best performing methods.
**Acknowledgment.** Work partially supported by the European Community H2020-EU.2.1.1 programme under the G.A. 952215 _Tailor_.
|
2305.04065 | Some Cosmological Consequences of Higher Dimensional
Klein-Gordon-Rastall Theory | Using dynamical system analysis, we investigate some cosmological
consequences of Rastall gravity coupled to a scalar field (called the
Klein-Gordon-Rastall theory) with exponential scalar potential turned on in
higher dimensions. From the critical points of the autonomous equations, we can
determine the dominant components of the energy density in different cosmic
eras. We obtain a fixed point representing a scalar field-matter-dominated era
which corresponds to either a late-time or past-time attractor depending on the
parameters used. According to this point, the inflationary phase, corresponding
to past-time attractors, is given by unstable nodes, whilst the dark energy
era, corresponding to late-time attractors, is represented by stable nodes. In
the inflationary sector, power-law inflation can still occur in this
Klein-Gordon-Rastall cosmological model. On the other hand, in the late-time
sector, we find a nontrivial interplay between a scalar field with an
exponential potential and the non-conservative energy-momentum tensor of the
non-relativistic matter field (baryonic-dark matter) in curved spacetime plays
a role as the dark energy. Based on such features, the Klein-Gordon-Rastall
cosmology could be a promising candidate for describing both the early and
late-time universe. | Tegar Ari Widianto, Ahmad Khoirul Falah, Agus Suroso, Husin Alatas, Bobby Eka Gunara | 2023-05-06T15:00:44Z | http://arxiv.org/abs/2305.04065v2 | # Some Cosmological Consequences of Higher Dimensional Klein-Gordon-Rastall Theory
###### Abstract
We study some cosmological consequences of Rastall gravity coupled to a scalar field (called the Klein-Gordon-Rastall theory) with an exponential scalar potential turned on in higher dimensions using dynamical system analysis. The evolution of the universe can be investigated by determining the components dominating the energy density in each era through the critical points of the autonomous equations. We obtain a fixed point representing a scalar field-matter dominated era which can be either a late-time or a past-time attractor depending on the parameters used. According to this point, the inflationary phase, corresponding to past-time attractors, is given by unstable nodes, whilst the dark energy era, corresponding to late-time attractors, is represented by stable nodes. In the inflationary sector, we find that power-law inflation can still occur in this Klein-Gordon-Rastall cosmological model. On the other hand, in the late-time sector we find that a nontrivial interplay between the quintessence and the non-conservative energy-momentum tensor of the non-relativistic matter field (baryonic-dark matter) in curved spacetime plays the role of dark energy. Based on such features, Klein-Gordon-Rastall cosmology may become a good candidate for describing both the early and the late-time universe.
## 1 Introduction
The agreement of General Relativity's predictions with observational results has been confirmed from tests at the level of the solar system [1, 2] up to tests at the scale of galaxies and galaxy clusters
[3, 4]. However, some problems arise when we apply General Relativity (GR) on the cosmological scale to study the dynamics of the universe. To obtain an accelerating universe in agreement with observations, the cosmological constant, \(\Lambda\), must be added to the Einstein field equation, yielding the \(\Lambda\)CDM model [5, 6]. The problem lies in the interpretation of \(\Lambda\) as a vacuum energy density [7, 8]. This leads to the cosmological constant problem: the calculation using quantum field theory gives a vacuum energy density of \(\rho_{vac}\approx 10^{74}GeV^{4}\), a value that is much bigger than the one inferred from cosmological observations, \(\rho_{obs}\approx 10^{-47}GeV^{4}\)[7, 9, 10]. Thus, the interpretation of \(\Lambda\) as a vacuum energy density remains questionable. It may be either a natural property of the fabric of spacetime with an antigravity character or a purely mathematical object such as a Lagrange multiplier or an integration constant [11]. In other words, the source of cosmic acceleration is still unknown.
There have been several attempts to overcome this problem. In the late-time sector, one of the proposed models is a dynamical scalar field with a particular potential form. The model was later referred to as _quintessence_[12], a canonical scalar field coupled to gravity that plays the role of dark energy and can explain the late-time cosmic acceleration. In the early-time sector, a scalar field varying slowly along the potential \(V(\phi)\) can trigger the accelerated expansion of the universe known as the inflationary era. Such a mechanism is called _slow-roll inflation_, where the scalar field plays the role of the inflaton field. The difference between those two eras is that the current accelerating universe, driven by dark energy, also contains dark matter and baryons, unlike the inflationary phase which contains only the inflaton field. Other scalar field models such as k-essence, phantom, tachyon and dilaton [13, 14, 15, 16, 17, 18] have also been proposed to reveal the nature of the accelerating universe. Other theories constructed to provide cosmic acceleration are modified gravities such as the _Brans-Dicke theory_[19], _Massive Gravity_[20], \(f(\mathcal{R})\)_Gravity_[21] and _Nonlinear Electrodynamics_ (NLED) [22]. All of these models are based on the conservation law of the energy-momentum tensor (EMT).
Another modified gravity that has been considered is the theory proposed by Rastall, in which the conservation law of the EMT in curved spacetime, \(\nabla_{\mu}T^{\mu\nu}=0\) where \(\nabla_{\mu}\) is the covariant derivative, is replaced by the non-conservation equation \(\nabla_{\mu}T^{\mu\nu}=\nabla^{\nu}\lambda R\), where \(\lambda\) is a constant and \(R\) is the spacetime Ricci scalar [23], such that GR and the conservation law of the EMT in flat spacetime are recovered for \(\lambda=0\). However, some dissent arose in response to the Rastall modification. Some studies conclude that the Rastall gravitational theory is equivalent to GR [24, 25]. According to [26], however, the Rastall theory is not equivalent to GR; it is an open theory compared to GR and therefore has the opportunity to address problems that have not been explained by standard cosmological models. Phenomenologically, the violation of the conservation law of the EMT occurs during particle creation processes in cosmology [27, 28]. The Rastall theory can then be considered as a classical formulation of quantum phenomena occurring on the cosmological scale [29]. In addition, dark energy may arise as a result of a violation of the conservation law of the EMT [31], which is a fundamental assumption of the Rastall theory. The application of the Rastall theory and its generalizations to cosmology enables accelerating universe models [32, 33, 34]. See also, for instance, [35, 36, 30] for interesting properties of Rastall cosmology. The Rastall theory has also been used to address another problem of the \(\Lambda\)CDM model, namely the \(H_{0}\) tension. However, the modification of the Friedmann equation by the non-conservation equation, Rastall-\(\Lambda\)CDM, yields a value of \(H_{0}\) that is not much different from the one obtained by the \(\Lambda\)CDM model
[37]. Fortunately, the Rastall-\(f({\cal R})\) theory enables us to obtain an \(H_{0}\) value that is closer to the observed value by choosing a suitable form of the \(f({\cal R})\) function [32].
In this paper, we study Rastall gravity using a different approach from the previous works mentioned above. We consider higher dimensional Rastall gravity coupled to a canonical scalar field \(\phi\), called the Klein-Gordon-Rastall (KGR) theory, with an exponential scalar potential, and then study its cosmological consequences using dynamical system analysis. The main motivation of our study lies in the lack of discussion of KGR cosmology covering both the early- and late-time eras in a unified picture. We expect that KGR cosmology might provide a good description of both eras. Here, we perform the calculation of the KGR model on higher dimensional spatially flat Friedmann-Lemaitre-Robertson-Walker (FLRW) spacetimes to study its effects on the existence of critical points and their stability. We also investigate the cosmological consequences of our model in the absence and in the presence of the cosmological constant term. Our work follows [38, 39, 40, 41, 42], which also study the cosmological properties of other theories of gravity using dynamical system analysis. Our model has five critical points related to cosmological eras. We obtain stable and unstable nodes corresponding to late-time and early-time attractors, respectively, in the scalar field-matter dominated era called CP4. The description of the early-time universe (unstable nodes and accelerated expansion) obtained from this point leads to the power-law inflation mechanism. We also derive the exact solution of power-law inflation in the KGR framework. A description of the late-time era is given by CP4 for the \(\Lambda=0\) case, which allows us to have stable solutions and an accelerating universe in the KGR theory, \(\gamma\neq 0\), when non-relativistic matter exists as a subdominant component. This feature cannot be found in standard GR, \(\gamma=0\). In the present model, the accelerating universe can be explained by the nontrivial interplay between the quintessence paradigm and the non-conservative EMT of the non-relativistic matter field in curved spacetime, which together play the role of dark energy. However, this scenario cannot be found in CP4 for the \(\Lambda\neq 0\) case, due to the absence of stable nodes there.
The structure of this paper is as follows. In Section (2) we derive the Friedmann equations in the KGR theory. In Section (3) we show how to transform the Friedmann equations of the \(\Lambda=0\) case in the KGR theory into a set of autonomous equations and perform the dynamical analysis around the critical points. The same procedure is employed to analyze the \(\Lambda\neq 0\) case in Section (4). We establish the local-global existence and uniqueness of the autonomous equations for both \(\Lambda=0\) and \(\Lambda\neq 0\) in Section (5). Next, in Section (6) we discuss the cosmological implications for both the inflationary and the late-time accelerating universe, as we also attempt to track the cosmological sequence of the KGR theory. Finally, we conclude in Section (7). The detailed calculations of the stability conditions of the critical points for the \(\Lambda=0\) and \(\Lambda\neq 0\) cases are given in Appendix (A) and Appendix (B), respectively.
## 2 Spatially Flat Cosmology in KGR Theory
In this section, we consider a homogeneous and isotropic higher dimensional cosmological model in the KGR gravitational theory. In particular, throughout the rest of the paper we use the spatially flat FLRW metric ansatz
\[ds^{2}=-N^{2}(t)\,dt^{2}+a^{2}(t)\delta_{ij}\,dx^{i}dx^{j}\,, \tag{2.1}\]
where \(\delta_{ij}\) is the Kronecker delta and the indices \(i,j=1,2,3,...,(d-1)\) label the spatial components. From the metric (2.1), we obtain the Ricci tensor components
\[\begin{split} R_{00}&=-(d-1)\Big{(}\frac{\ddot{a}}{ a}-\frac{\dot{a}\dot{N}}{aN}\Big{)}\,,\qquad R_{0i}=R_{i0}=0\,,\\ R_{ij}&=\delta_{ij}\Big{[}\frac{a\ddot{a}}{N^{2}}+( d-2)\frac{\dot{a}^{2}}{N^{2}}-\frac{a\dot{a}\dot{N}}{N^{3}}\Big{]}\,,\end{split} \tag{2.2}\]
which follows that we have the Ricci scalar
\[R=\frac{(d-1)}{N}\bigg{[}\frac{2\ddot{a}}{aN}-\frac{2\dot{a}\dot{N}}{aN^{2}}+( d-2)\frac{\dot{a}^{2}}{a^{2}N}\bigg{]}\,. \tag{2.3}\]
Let us focus on the EMT in our model. The first part of the EMT is the perfect fluid EMT whose form is given by
\[T^{(\rm m)}_{\mu\nu}=(\rho_{\rm m}+p_{\rm m})u_{\mu}u_{\nu}+p_{\rm m}g_{\mu\nu }\,, \tag{2.4}\]
where the indices \(\mu,\nu=0,1,2,3,...,(d-1)\) are the spacetime indices, \(u_{\mu}=-N\delta^{0}_{\mu}\), \(p_{\rm m}=w_{\rm m}\,\rho_{\rm m}\), and \(w_{\rm m}\in{\rm I\kern-1.8ptR}\), which includes dust-like, radiation-like and vacuum-like fluids for \(d>4\).
\[T^{(\phi)}_{\mu\nu}=\partial_{\mu}\phi\,\partial_{\nu}\phi-\frac{1}{2}g_{\mu\nu }\,\partial_{\alpha}\phi\,\partial^{\alpha}\phi-g_{\mu\nu}V(\phi)\,, \tag{2.5}\]
where \(V(\phi)\) is a scalar potential.
In the Rastall theory, we take the assumption that the divergence of the perfect fluid EMT satisfies
\[\nabla^{\nu}T^{(\rm m)}_{\mu\nu}=\lambda\nabla_{\mu}R\,, \tag{2.6}\]
where \(\lambda\) is a constant Rastall parameter and \(R\) is a Ricci scalar given by (2.3), whereas the scalar field EMT still satisfies the conservation law
\[\nabla^{\nu}T^{(\phi)}_{\mu\nu}=0\,. \tag{2.7}\]
By adding the equations (2.6) and (2.7)
\[\nabla^{\mu}\big{(}T^{(\rm m)}_{\mu\nu}+T^{(\phi)}_{\mu\nu}- \lambda g_{\mu\nu}R\big{)}=0\,, \tag{2.8}\]
and using the contracted Bianchi identity with cosmological constant
\[\nabla^{\mu}\Big{(}R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R+\Lambda g_{ \mu\nu}\Big{)}=0\,, \tag{2.9}\]
we then have
\[R_{\mu\nu}+\left(\gamma-\frac{1}{2}\right)g_{\mu\nu}R=\kappa\left(T^{(\rm m)}_ {\mu\nu}+T^{(\phi)}_{\mu\nu}-\tilde{\Lambda}g_{\mu\nu}\right)\,, \tag{2.10}\]
which is commonly known as the Rastall field equation with \(\gamma\equiv\kappa\lambda\) and \(\tilde{\Lambda}\equiv\frac{\Lambda}{\kappa}\). The quantity \(R_{\mu\nu}\) is the Ricci tensor whose components given by (2.2). Some comments are
in order. First, we recover the standard higher dimensional GR by taking \(\lambda=0\). Second, the conservation law of the scalar field EMT (2.7) is ensured by the scalar field equation of motions
\[g^{\mu\nu}\nabla_{\mu}\partial_{\nu}\phi=-\frac{\partial V}{ \partial\phi}. \tag{2.11}\]
Now, let us focus on the Rastall field equation (2.10) on the metric (2.1). By substituting equation (2.2),(2.3),(2.4), and (2.5) into equation (2.10), one obtains the modified first Friedmann equation
\[\frac{(d-1)(d-2)}{2}H^{2}=\kappa(\rho_{\rm m}+\rho_{\rm KGR}+ \rho_{\Lambda})\,, \tag{2.12}\]
where
\[\rho_{\rm KGR}\equiv\bigg{[}1-\frac{4\gamma(d-1)}{(d-2)}\bigg{]} \frac{\dot{\phi}^{2}}{2N^{2}}+V(\phi)-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m}) \,\rho_{\rm m}\,+\frac{d\,(d-1)\gamma H^{2}}{\kappa}\,, \tag{2.13}\]
and
\[\rho_{\Lambda}\equiv\tilde{\Lambda}\,. \tag{2.14}\]
One also obtains the modified second Friedmann equation
\[[d-2-2\gamma(d-1)]\frac{\dot{H}}{N}+\frac{1}{2}(d-1)(d-2-2\gamma d )H^{2}=-\kappa\left(p_{\rm m}+\frac{\dot{\phi}^{2}}{2N^{2}}-V(\phi)-\tilde{ \Lambda}\right)\,. \tag{2.15}\]
Here, we have defined the Hubble parameter
\[H=\frac{\dot{a}}{aN}\,. \tag{2.16}\]
From the \(0i\) component of (2.10), we conclude that \(\phi=\phi(t)\). The time component of (2.6) implies the fluid equation of the higher dimensional Rastall theory
\[\dot{\rho}_{\rm m}+(d-1)NH(\rho_{\rm m}+p_{\rm m})=-\frac{2\lambda (d-1)}{N}\Bigg{(}\ddot{H}-\frac{\dot{N}\dot{H}}{N}+dNH\dot{H}\Bigg{)}\,, \tag{2.17}\]
in which we have
\[\frac{\dot{H}}{N}=-\frac{\kappa}{(d-2)}\Big{[}\rho_{\rm m}+p_{\rm m }+\frac{\dot{\phi}^{2}}{N^{2}}\Big{]}\,, \tag{2.18}\]
and
\[\ddot{H}=-\frac{\kappa}{(d-2)}\Bigg{[}\dot{N}(\rho_{m}+p_{m})+N( \dot{\rho}_{m}+\dot{p}_{m})-\frac{\dot{N}}{N^{2}}\dot{\phi}^{2}+\frac{2\dot{ \phi}\ddot{\phi}}{N}\Bigg{]}\,. \tag{2.19}\]
By substituting Eq. (2.18) into Eq. (2.15), we can rewrite the second Friedmann equation in the following form
\[(d-2)\frac{\dot{H}}{N}+\frac{1}{2}(d-1)(d-2)H^{2}=-\kappa\left(p_{ \rm m}+p_{\rm KGR}+p_{\Lambda}\right)\,, \tag{2.20}\]
where
\[p_{\rm KGR}\equiv\bigg{[}1+\frac{4\gamma(d+1)}{(d-2)}\bigg{]} \frac{\dot{\phi}^{2}}{2N^{2}}-V(\phi)+\frac{2\gamma(d+1)}{(d-2)}(1+w_{\rm m}) \,\rho_{\rm m}-\frac{d\,(d-1)\gamma H^{2}}{\kappa}\,, \tag{2.21}\]
and
\[p_{\Lambda}\equiv-\tilde{\Lambda}\,. \tag{2.22}\]
From Eq. (2.12) and Eq. (2.20), we can define the equation of state parameters as follows
\[w_{\rm m}\equiv \frac{p_{\rm m}}{\rho_{\rm m}}\,,\] \[w_{\rm KGR}\equiv \frac{p_{\rm KGR}}{\rho_{\rm KGR}}=\frac{\left[1+\frac{4\gamma(d+ 1)}{(d-2)}\right]\frac{\dot{\phi}^{2}}{2N^{2}}-V(\phi)+\frac{2\gamma(d+1)}{(d -2)}(1+w_{\rm m})\,\rho_{\rm m}-\frac{d\,(d-1)\gamma H^{2}}{\kappa}}{\left[1- \frac{4\gamma(d-1)}{(d-2)}\right]\frac{\dot{\phi}^{2}}{2N^{2}}+V(\phi)-\frac{ 2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\,\rho_{\rm m}+\frac{d\,(d-1)\gamma H^{2}}{ \kappa}}\,, \tag{2.23}\] \[w_{\Lambda}\equiv \frac{p_{\Lambda}}{\rho_{\Lambda}}=-1\,.\]
It is worth mentioning that eq. (2.17) is consistent with the scalar field equation of motions (2.11) whose form on the metric (2.1) is given by
\[\ddot{\phi}+\bigg{[}(d-1)NH-\frac{\dot{N}}{N}\bigg{]}\dot{\phi}+N ^{2}V_{,\phi}\!=0\,. \tag{2.24}\]
Analyzing the spatial components of (2.6), we find that \(p_{\rm m}=p_{\rm m}(t)\) and \(\rho_{\rm m}=\rho_{\rm m}(t)\). Hence, we have a model of higher dimensional Rastall cosmology with a homogeneous perfect fluid. Such a model has been studied in [43], where \(w_{\rm m}=0\) for dust, \(w_{\rm m}=\frac{1}{d-1}\) for radiation and \(w_{\rm m}=-1\) for vacuum.
## 3 Dynamical System Analysis: \(\Lambda=0\) Case
In this section, we investigate the behavior of Rastall cosmology using dynamical system analysis in the absence of the cosmological constant term, namely \(\Lambda=0\). In order to transform the cosmological equations of the previous section into _autonomous_ equations, we have to choose a particular form of the scalar field potential, which we take as
\[V=V_{0}\exp\left(-\sqrt{\kappa}\lambda_{V}\phi\right)\,. \tag{3.1}\]
The gravity coupled scalar field theories with exponential potential have been widely considered in various physical theories such as Kaluza-Klein cosmology [44], the Salam-Sezgin model (supergravity with \({\cal N}=2\) coupled to matter in six dimensions) [46], and the exponential potential also provides a solution for power-law inflation [47] where the scale factor expands according to \(a(t)\propto t^{\ell}\) with \(\ell>1\).
This type of potential has also been studied by several authors [38, 39, 40] in the context of dynamical systems, focusing on either the inflationary phase or the late-time universe. Also, there is an attempt to investigate the inflationary and the dark energy eras in the massive gravity framework [42]. This inspires us to investigate these two crucial eras in the context of Rastall gravity.
First, let us define three autonomous variables as follows
\[x_{\rm m}=\sqrt{\frac{2\kappa\rho_{m}}{(d-1)(d-2)H^{2}}}\,, \tag{3.2}\]
\[x_{\phi}=\sqrt{\frac{\kappa\dot{\phi}^{2}}{(d-1)(d-2)N^{2}H^{2}}}\,, \tag{3.3}\]
\[x_{V}=\sqrt{\frac{2\kappa V(\phi)}{(d-1)(d-2)H^{2}}}\,. \tag{3.4}\]
We obtain the equations of motion corresponding to the autonomous variables \(x_{\phi}\) and \(x_{V}\) by differentiating them with respect to \(\ln a\) and using (2.15) and (2.24)
\[\frac{2}{(d-1)N}x_{\phi}^{{}^{\prime}}= \left(\frac{\left[1-\frac{2\gamma d}{(d-2)}\right](1+w_{\rm m})} {\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}-2\right)x_{\phi}+2 \lambda_{V}\sqrt{\frac{(d-2)}{(d-1)}}x_{V}^{2}\] \[+\frac{(1-w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{ \rm m})\right]}x_{\phi}^{3}-\frac{(1+w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{ d-2}(1+w_{\rm m})\right]}x_{\phi}x_{V}^{2}\,, \tag{3.5}\]
\[\frac{2}{(d-1)N}x_{V}^{{}^{\prime}}= \frac{\left[1-\frac{2\gamma d}{(d-2)}\right](1+w_{\rm m})}{\left[ 1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}x_{V}-\lambda_{V}\sqrt{\frac {(d-2)}{(d-1)}}x_{\phi}x_{V}\] \[+\frac{(1-w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m })\right]}x_{V}x_{\phi}^{2}-\frac{(1+w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{d -2}(1+w_{\rm m})\right]}x_{V}^{3}\,. \tag{3.6}\]
while, by using (3.2),(3.3) and (3.4), the Friedmann equation (2.12) can be rewritten as
\[\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{m})\right]x_{m}^{2}+\left[1-\frac{4 \gamma(d-1)}{(d-2)}\right]x_{\phi}^{2}+x_{V}^{2}=1-\frac{2\gamma d}{(d-2)}\,. \tag{3.7}\]
which can be viewed as a constraint equation. Following [50], let us introduce the density parameters
\[\Omega_{\rm m}\equiv\frac{2\kappa\rho_{\rm m}}{(d-1)(d-2)H^{2}}=x_{\rm m}^{2}\,, \tag{3.8}\]
\[\Omega_{\rm KGR}\equiv\frac{2\kappa\rho_{\rm KGR}}{(d-1)(d-2)H^{2}}= \frac{1}{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]} \Biggl{(}\Biggl{[}1-\frac{4\gamma(d-1)}{(d-2)}\Biggr{]}x_{\phi}^{2}+x_{V}^{2} \tag{3.9}\] \[+\frac{2\gamma d}{(d-2)}-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m}) \Biggr{)}\,.\]
Then, we can define the equation of state parameter of KGR term as
\[w_{\rm KGR}=\frac{\left[1+\frac{4\gamma(d+1)}{(d-2)}\right]x_{\phi}^{2}-x_{V}^ {2}+\frac{2\gamma(d+1)}{(d-2)}(1+w_{\rm m})\,x_{\rm m}^{2}-\frac{2\gamma d}{(d -2)}}{\left[1-\frac{4\gamma(d-1)}{(d-2)}\right]x_{\phi}^{2}+x_{V}^{2}-\frac{2 \gamma(d-1)}{(d-2)}(1+w_{\rm m})\,x_{\rm m}^{2}+\frac{2\gamma d}{(d-2)}}\,. \tag{3.10}\]
It should be noted that if \(\gamma=0\), then \(\Omega_{\rm KGR}=x_{\phi}^{2}+x_{V}^{2}\) and \(w_{\rm KGR}=(x_{\phi}^{2}-x_{V}^{2})/(x_{\phi}^{2}+x_{V}^{2})\). These equations are equivalent to the ones used in the standard scalar field model within the general relativity framework. The equation (3.7) can be cast into the form
\[\Omega_{\rm m}+\Omega_{\rm KGR}=1\,. \tag{3.11}\]
It is clear from (3.8) that \(\Omega_{\rm m}\geq 0\), implying the restriction \(\Omega_{\rm KGR}\leq 1\). For example, if we consider the \(\Lambda\)CDM model in which we have \(\Omega_{\rm m}\approx 0.3\), then \(\Omega_{\rm KGR}\approx 0.7\)[51]. In this sense the energy density of the cosmological constant is replaced by \(\Omega_{\rm KGR}\), and it might play the role of dark energy as long as \(w_{\rm KGR}<0\). Thus, in our model, dark energy can be seen as a consequence of the existence of a scalar field and of the ability of geometry to couple to the matter field, which drives our universe to expand at an accelerating rate. In order to see this, we first define the deceleration parameter
\[q\equiv-1-\frac{\dot{H}}{NH^{2}}=-1+\frac{(d-1)}{2\left[1-\frac{2\gamma(d-1)} {(d-2)}\right]}\Bigl{[}1-\frac{2\gamma d}{(d-2)}+w_{m}x_{m}^{2}+x_{\phi}^{2}- x_{V}^{2}\Bigr{]}\,, \tag{3.12}\]
where \(x_{m}^{2}\) can be obtained from the constraint equation (3.7). An indication of an accelerating universe is given by \(q<0\). Note that, unlike in the GR case, here \(\Omega_{\rm KGR}\) is not necessarily greater than zero, since it depends on the density \(\rho_{\rm m}\). For example, if \(\Omega_{\rm m}>1\) we have \(\Omega_{\rm KGR}<0\), in contrast with the case \(\Omega_{\rm m}<1\) for which \(\Omega_{\rm KGR}>0\).
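As an illustration of how this phase space can be explored numerically, the autonomous equations (3.5)-(3.6) can be integrated directly; the sketch below assumes the gauge \(N=1\) and purely illustrative values of \(d\), \(w_{\rm m}\), \(\gamma\) and \(\lambda_{V}\), which are not taken from the analysis of this paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

d, w_m, gamma, lam_V = 4, 0.0, 0.05, 1.0          # illustrative choice of parameters
beta = 1.0 - 2.0 * gamma * (d - 1) * (1.0 + w_m) / (d - 2)
alpha = (1.0 - 2.0 * gamma * d / (d - 2)) * (1.0 + w_m) / beta
root = np.sqrt((d - 2) / (d - 1))

def rhs(lna, y):
    """Right-hand sides of (3.5)-(3.6) with prime = d/d(ln a) and N = 1."""
    x_phi, x_V = y
    dx_phi = 0.5 * (d - 1) * ((alpha - 2.0) * x_phi + 2.0 * lam_V * root * x_V**2
                              + (1.0 - w_m) / beta * x_phi**3
                              - (1.0 + w_m) / beta * x_phi * x_V**2)
    dx_V = 0.5 * (d - 1) * (alpha * x_V - lam_V * root * x_phi * x_V
                            + (1.0 - w_m) / beta * x_V * x_phi**2
                            - (1.0 + w_m) / beta * x_V**3)
    return [dx_phi, dx_V]

sol = solve_ivp(rhs, (0.0, 20.0), [0.05, 0.05], dense_output=True)
# the trajectory (sol.y[0], sol.y[1]) approaches the attractor of the (x_phi, x_V) plane
```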
In our model, we have five critical points related to the cosmological eras as follows.
* CP1 \((0,0)\): CP1 always exists for any \(w_{m}\), \(\lambda_{V}\). The properties of CP1 can be described by the following parameters \[\Omega_{\rm KGR} =\frac{\frac{2\gamma d}{(d-2)}-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}\,,\] (3.13) \[q =-1+\frac{(d-1)\left[1-\frac{2\gamma d}{(d-2)}\right](1+w_{\rm m})}{2\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}\,.\] The value of the equation of state parameter \(w_{\rm KGR}\) for the scalar field is undefined when \(\gamma=0\). Otherwise, \(w_{\rm KGR}=-1\) such that it mimics dark energy which behaves
like the cosmological constant. Since the energy density of the scalar field can be neglected, the role of dark energy is played by the non-conservative EMT of the perfect fluid. The existence and the stability properties of the CP1 point, which describes the universe during the matter-dominated era with coupling between the matter field and geometry via the Rastall parameter, are illustrated in Fig. 1. In contrast to the case in general relativity, the stability of this point is determined by the parameters \(w_{\rm m}\) and \(\gamma\). There are only two possible outcomes from this point: a past-time attractor (unstable node) and a saddle node. The instability of this point implies that, even though it leads to an accelerating universe, it cannot represent the late-time universe.
* CP2 \(\left(\sqrt{1-\frac{2\gamma(1+w_{m})}{(1-w_{m})}},0\right)\) and CP3 \(\left(\;-\;\sqrt{1-\frac{2\gamma(1+w_{m})}{(1-w_{m})}},0\right)\): CP2 and CP3 exist if \(w_{m}\neq 1\) and \(1-\frac{2\gamma(1+w_{m})}{(1-w_{m})}>0\). The properties of CP2 and CP3 exist in the same way as in the case of \(w_{m}\neq 1\) and \(1-\frac{2\gamma(1+w_{m})}{(1-w_{m})}>0\).
Figure 1: The bifurcation diagrams showing the existence and the stability conditions of CP1 by plotting \(\gamma\) as a function of \(w_{\rm m}\) for constant value of (a) \(d=4\), (b) \(d=10\) and (c) \(d=20\) for both \(\Lambda=0\) and \(\Lambda\neq 0\) cases. These figures show that in radiation-like dominated universe (red line), we always obtain saddle nodes and non-accelerating universe for arbitrary \(d\geq 4\) both in GR and Rastall framework. In dust-like matter universe (blue line), we have saddle nodes and accelerating universe, saddle nodes and non-accelerating universe or unstable nodes and non-accelerating universe in Rastall sector \(\gamma\neq 0\) whilst at GR limit we have only saddle nodes and non-accelerating universe. From the figures, in case of \(w_{\rm m}=0\) it can be seen that the cosmic acceleration is barely obtained for large \(d\) such that it tends to vanish at \(d>>\). Note that we have forbidden zone giving \(\Omega_{\rm m}<0\).
CP3 can be described by the following parameters
\[\begin{split}\Omega_{\text{KGR}}=&\frac{1}{\left[1- \frac{2\gamma(d-1)}{(d-2)}(1+w_{\text{m}})\right]}\Bigg{(}\left[1-\frac{4\gamma (d-1)}{(d-2)}\right]\left[1-\frac{2\gamma(1+w_{\text{m}})}{(1-w_{\text{m}})} \right]\right.\\ &+\frac{2\gamma d}{(d-2)}-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\text{m }})\Bigg{)}\,,\\ q=&\,d-2\,.\end{split} \tag{3.14}\]
The equation of state parameter is
\[w_{\text{KGR}}=\frac{\left[1+\frac{4\gamma}{(d-2)}-\frac{4\gamma dw_{\text{m}}}{(d-2)}\right]\left[1-\frac{2\gamma(1+w_{\text{m}})}{(1-w_{\text{m}})}\right]+\frac{2\gamma\left[1+(d+1)w_{\text{m}}-\frac{4\gamma d(1+w_{\text{m}})}{(d-2)}\right]}{(d-2)}}{\left[1-\frac{4\gamma(d-1)}{(d-2)}\right]\left[1-\frac{2\gamma(1+w_{\text{m}})}{(1-w_{\text{m}})}\right]+\frac{2\gamma[1-(d-1)w_{\text{m}}]}{(d-2)}}\,. \tag{3.15}\]
Fig. 2 and Fig. 3 display the existence and the stability characteristics of CP2 and CP3, respectively. These points correspond to the kinetic-matter dominated era, which becomes a kinetic-dominated era when \(\gamma=0\) or \(\gamma=\frac{d(1-w_{\text{m}})-4}{(d-1)(1+w_{\text{m}})}\). Three possible stabilities, namely stable nodes, unstable nodes, and saddle nodes, can be obtained at CP2. Although stable nodes can occur, based on the deceleration parameter of CP2 (\(q>0\)) this point cannot describe the late-time universe since it is disfavoured by observations. In terms of its existence and the values of \(\Omega_{\text{KGR}},w_{\text{KGR}}\), and \(q\), CP3 exhibits similar properties to CP2. Only saddle points and unstable nodes exist at this critical point, leading to a non-accelerating expansion of the universe, such that it also cannot describe the late-time universe.
* CP4 \(\left(\frac{1}{\lambda_{V}}\sqrt{\frac{d-1}{d-2}}\mathcal{A}_{\pm},\frac{1}{ \lambda_{V}}\sqrt{\frac{(d-1)(2-\mathcal{A}_{\pm})\mathcal{A}_{\pm}}{2(d-2)}}\right)\): CP4 exists if \(0<\mathcal{A}_{\pm}<2\). The properties of CP4 can be described by \[\begin{split}\Omega_{\text{KGR}}=&\frac{1}{\left[1- \frac{2\gamma(d-1)}{(d-2)}(1+w_{\text{m}})\right]}\Bigg{(}\frac{(d-1)\mathcal{ A}_{\pm}}{2(d-2)\lambda_{V}^{2}}\left[2+\mathcal{A}_{\pm}\left[1-\frac{8 \gamma(d-1)}{(d-2)}\right]\right]\\ &+\frac{2\gamma d}{(d-2)}-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\text{m }})\Bigg{)}\,,\\ q=&-1+\frac{(d-1)}{2\left[1-\frac{2\gamma(d-1)}{(d -2)}(1+w_{\text{m}})\right]}\Bigg{(}\left[1-\frac{2\gamma d}{(d-2)}\right](1+w _{\text{m}})\\ &+\frac{(d-1)\mathcal{A}_{\pm}}{2(d-2)\lambda_{V}^{2}}\left[(3-w_ {\text{m}})\mathcal{A}_{\pm}-2(1+w_{\text{m}})\right]\Bigg{)}\,.\end{split}\] (3.16)
The equation of state parameter is
\[w_{\text{KGR}}=\frac{\frac{(d-1)\mathcal{A}_{\pm}}{2(d-2)\lambda_{V}^{2}}\left[\left[3+\frac{8\gamma(1-dw_{\text{m}})}{(d-2)}\right]\mathcal{A}_{\pm}-2\right]+\frac{2\gamma\left[1+(d+1)w_{\text{m}}-\frac{4\gamma d(1+w_{\text{m}})}{(d-2)}\right]}{(d-2)}}{\frac{(d-1)\mathcal{A}_{\pm}}{2(d-2)\lambda_{V}^{2}}\left[2+\left[1-\frac{8\gamma(d-1)}{(d-2)}\right]\mathcal{A}_{\pm}\right]+\frac{2\gamma[1-(d-1)w_{\text{m}}]}{(d-2)}}\,. \tag{3.17}\]
The parameter \(\mathcal{A}_{\pm}\) is defined as
\[\mathcal{A}_{\pm}(d,w_{\rm m},\gamma,\lambda_{V})\equiv\left\{\frac{ 1+w_{\rm m}}{3-w_{\rm m}}+\frac{\lambda_{V}^{2}(d-2)}{(d-1)(3-w_{\rm m})}\Big{[} 1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\Big{]}\right\}\] \[\qquad\qquad\times\left\{1\pm\sqrt{1-\frac{\frac{2\lambda_{V}^{2 }(d-2)(1+w_{\rm m})}{(d-1)(3-w_{\rm m})}\Big{[}1-\frac{2\gamma d}{(d-2)}\Big{]} }{\Big{[}\frac{1+w_{\rm m}}{3-w_{\rm m}}+\frac{\lambda_{V}^{2}(d-2)}{(d-1)(3- w_{\rm m})}\Big{[}1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\Big{]}\Big{]}^{2}} \right\}\;, \tag{3.18}\]
such that
\[w_{\rm m}=\frac{3\mathcal{A}_{\pm}^{2}-2\mathcal{A}_{\pm}\Big{[}1+\frac{ \lambda_{V}^{2}(d-2)}{(d-1)}\Big{]}+4\gamma\lambda_{V}^{2}\mathcal{A}_{\pm}+ \frac{2\lambda_{V}^{2}(d-2)}{(d-1)}\Big{[}1-\frac{2\gamma d}{(d-2)}\Big{]}}{ \mathcal{A}_{\pm}^{2}+2\mathcal{A}_{\pm}-\frac{2\lambda_{V}^{2}(d-2)}{(d-1)} \Big{[}1-\frac{2\gamma d}{(d-2)}\Big{]}}\;, \tag{3.19}\]
The existence and the stability properties of CP4 can be observed in Fig. 4-Fig. 7. CP4 represents a scalar field-matter-dominated era, in which the scalar field and the matter energy density each contribute a finite fraction. The critical point exists in
Figure 2: The bifurcation diagrams showing the existence and the stability conditions of CP2, obtained by plotting \(\gamma\) as a function of \(w_{\rm m}\) for constant values of (a) \(d=4\), (b) \(d=10\) and (c) \(d=20\). The stability characteristics of this point are given for both the \(\Lambda=0\) and \(\Lambda\neq 0\) cases, as indicated in the figure legend. In the Rastall sector, these figures show that in a kinetic-radiation-like universe (red line), we obtain either unstable or saddle nodes depending on \(\beta\) for both the \(\Lambda=0\) and \(\Lambda\neq 0\) cases. In a kinetic-dust-like matter universe (blue line), we have either saddle or stable nodes depending on the parameter \(\beta\) for the \(\Lambda=0\) case, whilst only saddle nodes occur for the \(\Lambda\neq 0\) case. In the GR limit (black-dashed) we have unstable or saddle nodes depending on \(\beta\) for arbitrary \(\Lambda\). From the figures, in the case of \(w_{\rm m}=0\) it can be seen that the stable-node region nearly vanishes at large \(d\), such that it tends to disappear completely as \(d\) becomes very large, in the range \(\gamma<(d-2)/2d\). Note that there is a forbidden zone giving either \(\Omega_{\rm m}<0\) or \(\sqrt{1-\frac{2\gamma(1+w_{\rm m})}{(1-w_{\rm m})}}<0\).
the range \(0<\mathcal{A}_{\pm}<2\). At this point, we find late-time attractors (stable nodes), past-time attractors (unstable nodes), and saddle points, depending on the parameters \(\mathcal{A}_{\pm}(d,w_{\rm m},\lambda_{V},\gamma)\). The density parameter for CP4 is given by \(\Omega_{\rm m}+\Omega_{\rm KGR}=1\), such that \(\Omega_{\rm KGR}\leq 1\). However, there is a forbidden zone where this constraint is not satisfied, \(\Omega_{\rm KGR}>1\) or \(\Omega_{\rm m}<0\). The constraint can be expressed as \(\lambda_{V}<\sqrt{\frac{(d-1)}{2(d-2-2\gamma d)}\Big{[}2+\mathcal{A}_{\pm}\Big{[}1-\frac{8\gamma(d-1)}{(d-2)}\Big{]}\Big{]}}\). We find that there are unstable nodes, which provide a good description of the early-time universe associated with the power-law inflation era. In the KGR theory, \(\gamma\equiv\kappa\lambda\neq 0\), Eq. (2.6) prohibits \(\rho_{\rm m}\) from being zero. This means that even during inflation, the perfect fluid must be present. Such models have been widely considered in [52, 53]. Fortunately, we still obtain power-law inflation since the scale factor evolves according to \(a(t)\propto t^{\ell}\) with \(\ell>1\). On the other hand, the late-time attractor or stable nodes can also be obtained, which enables the universe to undergo accelerated expansion for \(\gamma\neq 0\) when dust is the subdominant constituent. These features will be discussed further in Sec. (6). The colors specifying the existence and the stability conditions of CP4 can be seen in Fig. 5. In Fig. 6 and Fig. 7 we use black-dashed curves to show the \(\Omega_{\rm KGR}\approx 0.9\) and \(\Omega_{\rm KGR}\approx 0.7\) (\(\Lambda\)CDM-like model) curves, respectively. We also use a black-thin curve to show the scalar field-dust-like matter universe (\(w_{\rm m}=0\)) curve. Note that uncolored regions denote the forbidden zone leading to \(\Omega_{\rm m}<0\) or \(w_{\rm KGR}>0\), since we have claimed that \(\Omega_{\rm KGR}\) acts as the energy density of dark energy.
Figure 3: The bifurcation diagrams showing the existence and the stability conditions of CP3, obtained by plotting \(\gamma\) as a function of \(w_{\rm m}\) for constant values of (a) \(d=4\), (b) \(d=10\) and (c) \(d=20\), for both the \(\Lambda=0\) and \(\Lambda\neq 0\) cases. In the Rastall sector (\(\gamma\neq 0\)), these figures show that in a kinetic-radiation-like universe (red line), we obtain unstable nodes. In a kinetic-dust-like matter universe (blue line), we have either unstable or saddle nodes. In the GR limit (black dashed) we have only unstable nodes. Note that there is a forbidden zone giving either \(\Omega_{\rm m}<0\) or \(\sqrt{1-\frac{2\gamma(1+w_{\rm m})}{(1-w_{\rm m})}}<0\).
* CP5 \(\left(0,\sqrt{1-\frac{2\gamma d}{(d-2)}}\right)\): CP5 exists if \(\lambda_{V}=0\) and \(\gamma<(d-2)/2d\). The properties of CP5 can be described by \[\Omega_{\rm KGR}=1,\quad q=-1\,.\] (3.20) The equation of state parameter for CP5 is \(w_{\rm KGR}=-1\). The stability properties of CP5 can be seen in Fig. 8. CP5 represents a universe dominated by the scalar field potential. The possible stabilities are stable nodes and saddle nodes. At this point, the scalar potential term is constant, since \(\lambda_{V}=0\), and behaves like a cosmological constant that dominates the acceleration of the expansion of the universe. A numerical cross-check of the critical-point expressions above is sketched below.
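As a quick numerical cross-check of the critical-point expressions listed above, the following minimal Python sketch (an illustration with assumed sample parameter values, not code from the paper) evaluates the CP1 quantities of Eq. (3.13), including the GR limit \(\gamma\to 0\).

```python
# Minimal sketch: evaluate the CP1 expressions of Eq. (3.13).
# The parameter values below are illustrative assumptions.
def cp1(d, w_m, gamma):
    """Density parameter Omega_KGR and deceleration parameter q at CP1, Eq. (3.13)."""
    D = 1.0 - 2.0 * gamma * (d - 1) / (d - 2) * (1.0 + w_m)
    omega = (2.0 * gamma * d / (d - 2)
             - 2.0 * gamma * (d - 1) / (d - 2) * (1.0 + w_m)) / D
    q = -1.0 + (d - 1) * (1.0 - 2.0 * gamma * d / (d - 2)) * (1.0 + w_m) / (2.0 * D)
    return omega, q

print(cp1(4, 0.0, 0.0))    # GR limit, dust:      (0.0, 0.5)
print(cp1(4, 1/3, 0.0))    # GR limit, radiation: (0.0, 1.0)
print(cp1(4, 0.0, 0.16))   # Rastall sector: Omega_KGR > 0 and q slightly positive
```

In the GR limit the familiar matter- and radiation-era values of \(q\) are recovered, while a nonzero \(\gamma\) shifts both \(\Omega_{\rm KGR}\) and \(q\).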
## 4 Dynamical System Analysis: \(\Lambda\neq 0\) Case
In this section, we focus on the model with \(\Lambda\neq 0\), which implies that we have to add a new dynamical variable related to \(\Lambda\) whose characteristics affect the history of the cosmic evolution. This distinguishes it from the case of \(\Lambda=0\). However, in the limit of
Figure 4: The bifurcation diagrams showing the existence and the stability conditions of CP4, obtained by plotting \(\mathcal{A}_{\pm}\) as a function of \(\lambda_{V}\) for constant values of (a) \(\gamma=0,\Lambda=0\) and (b) \(\gamma=0,\Lambda\neq 0\). This is known as the scaling solution, in which the scalar field mimics the dominant matter in the GR limit. Such a case has been considered in [40] and does not give late-time cosmic acceleration when dust matter is a subdominant constituent. Note that the black-dashed and the black-thin curves illustrate \(\Omega_{\rm m}\approx 0.3\) and \(w_{\rm m}=0\), respectively. An accelerating universe is obtained for \(\mathcal{A}_{\pm}<\frac{2}{(d-1)}\), corresponding to \(w_{\rm KGR}<0\). Otherwise, the universe is non-accelerating.
\(\lambda_{V}\to 0\), so that the scalar potential becomes a constant \(V_{0}\), we may have a similar property as in the \(\Lambda\neq 0\) case. Hence, the analysis discussed in this section can also be applied to that case.
Let us first define a new autonomous variable, in addition to (3.2), (3.3) and (3.4), which is associated with the \(\tilde{\Lambda}\) term,
\[x_{\Lambda(\pm)}=\sqrt{\pm\frac{2\kappa\tilde{\Lambda}}{(d-1)(d-2)H^{2}}}\,. \tag{4.1}\]
It is important to mention that we consider two cases, namely \(\tilde{\Lambda}>0\) and \(\tilde{\Lambda}<0\). The positive and negative signs correspond to the former and latter cases, respectively. Hence, the
Figure 5: The colors specifying the stability conditions of CP4 for Fig. 6, Fig. 7 and Fig. 11.
Figure 6: The bifurcation diagrams showing the existence and the stability conditions of CP4, obtained by plotting \(\mathcal{A}_{\pm}\) defined in Eq. (3.18) as a function of \(\lambda_{V}\) for constant values of (a) \(d=4,\gamma=0.155\), (b) \(d=4,\gamma=0.16\) and (c) \(d=4,\gamma=0.165\). Power-law inflation is indicated by either unstable or saddle nodes. The intersection between the black-thin and black-dashed curves in the stable-node region shows the late-time acceleration consisting of quintessence along with baryonic and dark matter, where \(\Omega_{\rm m}\approx 0.1\).
equations of motion become,
\[\frac{2}{(d-1)N}x^{{}^{\prime}}_{\phi}= \left(\frac{\left[1-\frac{2\gamma d}{(d-2)}\right](1+w_{\rm m})}{ \left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}-2\right)x_{\phi}+2 \lambda_{V}\sqrt{\frac{(d-2)}{(d-1)}}x_{V}^{2}\] \[+\frac{(1-w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{ \rm m})\right]}x_{\phi}^{3}-\frac{(1+w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{d -2}(1+w_{\rm m})\right]}x_{\phi}x_{V\Lambda(\pm)}^{2}\,, \tag{4.2}\]
\[\frac{2}{(d-1)N}x^{{}^{\prime}}_{V}= \frac{\left[1-\frac{2\gamma d}{(d-2)}\right](1+w_{\rm m})}{\left[ 1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}x_{V}+\frac{(1-w_{\rm m})}{ \left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}x_{V}x_{\phi}^{2}\] \[-\lambda_{V}\sqrt{\frac{(d-2)}{(d-1)}}x_{\phi}x_{V}-\frac{(1+w_{ \rm m})}{\left[1-\frac{2\gamma(d-1)}{d-2}(1+w_{\rm m})\right]}x_{V}x_{V\Lambda (\pm)}^{2}\,, \tag{4.3}\]
\[\frac{2}{(d-1)N}x^{{}^{\prime}}_{\Lambda(\pm)}= \frac{\left[1-\frac{2\gamma d}{(d-2)}\right](1+w_{\rm m})}{\left[ 1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}x_{\Lambda(\pm)}+\frac{(1-w _{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{m})\right]}x_{\Lambda(\pm) }x_{\phi}^{2}\] \[-\frac{(1+w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{d-2}(1+w_{\rm m })\right]}x_{\Lambda(\pm)}x_{V\Lambda(\pm)}^{2}\,. \tag{4.4}\]
Note that we have introduced the variable \(x_{V\Lambda(\pm)}^{2}\), which satisfies the following conditions,
\[x_{V\Lambda(\pm)}^{2}=\begin{cases}x_{V}^{2}+x_{\Lambda(+)}^{2},&\text{if } \tilde{\Lambda}>0\\ x_{V}^{2}-x_{\Lambda(-)}^{2},&\text{if }\tilde{\Lambda}<0\end{cases}\,. \tag{4.5}\]
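Before turning to the constraint equation, it may help to see how the autonomous system can be explored numerically. The sketch below (our own illustration with assumed parameter values and an assumed initial condition, not code from the paper) integrates Eqs. (4.2)-(4.4) with \(N=1\) for \(\tilde{\Lambda}>0\) and prints the late-time state approached by one trajectory; \(x_{m}\) then follows from the constraint equation given below.

```python
# Minimal sketch: integrate the autonomous system (4.2)-(4.4) with N = 1
# for Lambda-tilde > 0, so x_{V Lambda}^2 = x_V^2 + x_Lambda^2 (eq. 4.5).
# Parameter values and the initial condition are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

d, w_m, gamma, lam_V = 4, 0.0, 0.16, 1.37
D = 1.0 - 2.0 * gamma * (d - 1) / (d - 2) * (1.0 + w_m)
A = (1.0 - 2.0 * gamma * d / (d - 2)) * (1.0 + w_m) / D   # recurring coefficient
B = (1.0 - w_m) / D
C = (1.0 + w_m) / D
root = np.sqrt((d - 2) / (d - 1))

def rhs(s, x):
    """d/ds of (x_phi, x_V, x_Lambda), with s = ln a."""
    x_phi, x_V, x_L = x
    xVL2 = x_V**2 + x_L**2
    pref = 0.5 * (d - 1)
    return [
        pref * ((A - 2.0) * x_phi + 2.0 * lam_V * root * x_V**2
                + B * x_phi**3 - C * x_phi * xVL2),
        pref * (A * x_V + B * x_V * x_phi**2
                - lam_V * root * x_phi * x_V - C * x_V * xVL2),
        pref * (A * x_L + B * x_L * x_phi**2 - C * x_L * xVL2),
    ]

sol = solve_ivp(rhs, (0.0, 30.0), [0.8, 1e-3, 1e-3], rtol=1e-8)
print(sol.y[:, -1])   # late-time state approached by this trajectory
```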
Figure 7: The bifurcation diagrams showing the existence and the stability conditions of CP4, obtained by plotting \(\mathcal{A}_{\pm}\) defined in Eq. (3.18) as a function of \(\lambda_{V}\) for constant values of (a) \(d=4,\gamma=0.21\), (b) \(d=5,\gamma=0.27\) and (c) \(d=6,\gamma=0.3\). Power-law inflation is indicated by either unstable or saddle nodes. The intersection between the black-thin and black-dashed curves in the stable-spiral region shows the late-time acceleration consisting of quintessence along with baryonic and dark matter, where \(\Omega_{\rm m}\approx 0.3\).
Then, the constraint equation becomes,
\[\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{m})\right]x_{m}^{2}+\left[1-\frac{4\gamma (d-1)}{(d-2)}\right]x_{\phi}^{2}\pm x_{V\Lambda(\pm)}^{2}=1-\frac{2\gamma d}{(d -2)}\,, \tag{4.6}\]
which can be written in terms of density parameters as,
\[\Omega_{m}+\Omega_{\rm KGR}\pm\Omega_{\Lambda\,(\pm)}=1\,, \tag{4.7}\]
where we have defined the density parameter of the scalar field \(\Omega_{\rm KGR}\) as in equation (3.9). Here, we have a new density parameter \(\Omega_{\Lambda\,(\pm)}\), which is defined as
\[\Omega_{\Lambda\,(\pm)}\equiv\pm\,\frac{2\kappa\tilde{\Lambda}}{(d-1)(d-2)H^{ 2}}=x_{\Lambda(\pm)}^{2}\,. \tag{4.8}\]
The deceleration parameter becomes
\[q=-1+\frac{(d-1)}{2\Big{[}1-\frac{2\gamma(d-1)}{(d-2)}\Big{]}}\left[1-\frac{2 \gamma d}{(d-2)}+w_{\rm m}x_{m}^{2}+x_{\phi}^{2}-x_{V\Lambda(\pm)}^{2}\right]\,, \tag{4.9}\]
where \(x_{m}^{2}\) can be obtained from constraint equation (4.6). Here, we have five critical points:
* CP1 \((0,0,0)\): The equation of state parameter for the scalar field, \(w_{\rm KGR}\), is undefined if \(\gamma=0\). Otherwise, \(w_{\rm KGR}=-1\). The existence and the stability properties of CP1 can be seen in Figure 1. In this case, \(\Lambda\neq 0\), it describes the matter-dominated era and has the same properties and behavior as CP1 in the absence of \(\Lambda\). CP1 has the density parameter \(\Omega_{\rm KGR}\) and the deceleration parameter \(q\) given by (3.13).
* CP2 \(\Big{(}\sqrt{1-\frac{2\gamma(1+w_{\rm m})}{(1-w_{\rm m})}},0,0\Big{)}\) and CP3 \(\Big{(}-\sqrt{1-\frac{2\gamma(1+w_{\rm m})}{(1-w_{\rm m})}},0,0\Big{)}\): CP2 and CP3 exist if \(w_{\rm m}\neq 1\) and \(1-\frac{2\gamma(1+w_{\rm m})}{(1-w_{\rm m})}>0\). The equation of state parameter is given by equation (3.15). The stability properties of CP2 and CP3 can be seen in Fig. 2 and Fig. 3, respectively. For the \(\Lambda\neq 0\) case, CP2 and CP3 describe the kinetic-matter dominated era, which has no stable nodes or late-time attractors, in contrast to what is obtained in the \(\Lambda=0\) case. Two possible stabilities, namely unstable nodes and saddle nodes, can be obtained at these points depending on the parameter values. CP2 and CP3 have the density parameter \(\Omega_{\rm KGR}\) and the deceleration parameter \(q\) given by (3.14).
* CP4 \(\Big{(}\frac{1}{\lambda_{V}}\sqrt{\frac{d-1}{d-2}}\mathcal{A}_{\pm},\frac{1}{\lambda_{V}}\sqrt{\frac{(d-1)(2-\mathcal{A}_{\pm})\mathcal{A}_{\pm}}{2(d-2)}},0\Big{)}\): CP4 exists if \(0<\mathcal{A}_{\pm}<2\). The equation of state parameter is given by equation (3.17). The parameter \(\mathcal{A}_{\pm}\) is defined in equation (3.18). This point has the
parameters \(\Omega_{\rm KGR}\) and \(q\) given by equation (3.16). The stability properties of CP4 are shown in Fig. 4-Fig. 7. The stability properties and behavior of CP4 found for \(\Lambda\neq 0\) are distinct from the \(\Lambda=0\) case in that we do not obtain late-time attractors. This implies that CP4 cannot serve as a late-time model of the universe, due to the absence of stable nodes. However, this point still describes the early universe through power-law inflation.
* CP5 \(\left(0,\sqrt{1-\frac{2\gamma d}{(d-2)}\mp x_{\Lambda,c(\pm)}^{2}},x_{\Lambda,c(\pm)}\right)\): CP5 exists for \(\lambda_{V}=0\), with \(0\leq x_{\Lambda,c\,(+)}\leq\sqrt{1-\frac{2\gamma d}{(d-2)}}\) for \(\tilde{\Lambda}>0\) and \(x_{\Lambda,\,c(-)}\geq 0\) for \(\tilde{\Lambda}<0\). The properties of CP5 can be described by the following parameters, \[\Omega_{\rm KGR}\pm\Omega_{\Lambda\,(\pm)}=1,\quad q=-1\,.\] (4.10) The stability properties of CP5 can be seen in Fig. 8 and Fig. 9. At CP5, the universe is dominated by a \(\Lambda\)-like component causing an acceleration of the expansion, to which both the parameter \(\Lambda\) defined at the beginning and the constant scalar potential (\(\lambda_{V}=0\)) contribute. The stability is either incompletely stable or unstable. The incompletely stable case indicates the late-time universe, where the interplay between the massless scalar field and the cosmological constant acts as dark energy. The unstable case may indicate an early-time attractor known as conventional inflation. The equation of state parameter at this critical point is \[w_{\rm KGR,\Lambda}\equiv\frac{p_{\rm KGR}+p_{\Lambda}}{\rho_{\rm KGR}+\rho_{\Lambda}}=-1\,.\] (4.11)
## 5 Local-Global Existence of Solutions
In this Section we establish the proof of the local-global existence and the uniqueness of solutions of the evolution equations in the preceding sections, using Picard's iteration and the contraction mapping properties. We begin the construction by considering the \(\Lambda=0\) case, and then the \(\Lambda\neq 0\) case.
First of all, let us consider the \(\Lambda=0\) case in which we define the dynamical variables
\[\mathbf{u}=\begin{pmatrix}x_{\phi}\\ x_{V}\end{pmatrix}, \tag{5.1}\]
on an interval \(I\equiv[s,s+\varepsilon]\), where \(s\equiv\ln a\in\mathbb{R}\) and \(\varepsilon\) is a small positive constant. If all coefficients in (3.7) are positive, that is, if
\[\gamma\leq\frac{d-2}{4(d-1)}\,\quad\gamma(1+w_{\rm m})\leq\frac{d-2}{2(d-1)}\, \tag{5.2}\]
then, we could have
\[\begin{array}{c}0\leq\left|1-\frac{4\gamma(d-1)}{(d-2)}\right|^{\frac{1}{2}}|x_{\phi}|\leq\left|1-\frac{2\gamma d}{(d-2)}\right|^{\frac{1}{2}}\,,\\ 0\leq|x_{V}|\leq\left|1-\frac{2\gamma d}{(d-2)}\right|^{\frac{1}{2}}\,,\end{array} \tag{5.3}\]
implying that all of the quantities \(\left(\left|1-\frac{4\gamma(d-1)}{(d-2)}\right|^{\frac{1}{2}}x_{\phi},x_{V}\right)\) are defined on an open set \(U\subset S^{2}\) where \(S^{2}\) is the 2-sphere with radius \(\left|1-\frac{2\gamma d}{(d-2)}\right|^{\frac{1}{2}}\).
Without loss of generality, we can set the lapse function \(N=1\) so that the evolution equations (3.5) and (3.6) can be rewritten simply as
\[\frac{d\mathbf{u}}{ds}=\mathcal{J}(\mathbf{u})\, \tag{5.4}\]
with
\[\mathcal{J}(\mathbf{u})\equiv\frac{1}{2}(d-1)\left(\begin{array}{c} \left(\frac{\left[1-\frac{2\gamma d}{(d-2)}\right]\left(1+w_{\text{m}}\right)} {\left[1-\frac{2\gamma(d-1)}{(d-2)}\left(1+w_{\text{m}}\right)\right]}-2 \right)x_{\phi}+2\lambda_{V}\sqrt{\frac{(d-2)}{(d-1)}}x_{V}^{2}\\ +\frac{\left(1-w_{\text{m}}\right)}{\left[1-\frac{2\gamma(d-1)}{(d-2)}\left( 1+w_{\text{m}}\right)\right]}x_{\phi}^{3}-\frac{\left(1+w_{\text{m}}\right)} {\left[1-\frac{2\gamma(d-1)}{d-2}\left(1+w_{\text{m}}\right)\right]}x_{\phi} x_{V}^{2}\\ +\frac{\left[1-\frac{2\gamma d}{(d-2)}\right]\left(1+w_{\text{m}}\right)}{ \left[1-\frac{2\gamma(d-1)}{(d-2)}\left(1+w_{\text{m}}\right)\right]}x_{V}- \lambda_{V}\sqrt{\frac{(d-2)}{(d-1)}}x_{\phi}x_{V}\\ +\frac{\left(1-w_{\text{m}}\right)}{\left[1-\frac{2\gamma(d-1)}{(d-2)}\left( 1+w_{\text{m}}\right)\right]}x_{V}x_{\phi}^{2}-\frac{\left(1+w_{\text{m}} \right)}{\left[1-\frac{2\gamma(d-1)}{d-2}\left(1+w_{\text{m}}\right)\right]}x _{V}^{3}\\ \end{array}\right). \tag{5.5}\]
Figure 8: The bifurcation diagrams showing the existence and the stability conditions of CP5. The stability properties of CP5 are shown by plotting \(\gamma\) as a function of \(w_{\rm m}\) for constant values of (a) \(d=4\), (b) \(d=10\) and (c) \(d=20\). The stability characteristics of this point are given for both the \(\Lambda=0\) and \(\Lambda>0\) cases, as indicated in the figure legends. The blue line and the red line represent \(w_{\rm m}=0\) and \(w_{\rm m}=1/(d-1)\), respectively. Note that there is a forbidden zone giving \(\sqrt{1-\frac{2\gamma d}{(d-2)}}<0\) for the \(\Lambda=0\) case or \(\sqrt{1-\frac{2\gamma d}{(d-2)}-x_{\Lambda,c\left(+\right)}^{2}}<0\) for the \(\Lambda>0\) case.
**Lemma 1**.: _The operator \(\mathcal{J}(\mathbf{u})\) in Eq. (5.4) is locally Lipschitz with respect to \(\mathbf{u}\)._
Proof.: We have the following estimate
\[|\mathcal{J}|_{U} \leq\frac{1}{2}(d-1)\left[\left|\frac{\left[1-\frac{2\gamma d}{(d-2 )}\right](1+w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]} -2\right||x_{\phi}|+2|\lambda_{V}|\sqrt{\frac{(d-2)}{(d-1)}}|x_{V}|^{2}\right.\] \[+\left.\left|\frac{(1-w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d- 2)}(1+w_{\rm m})\right]}\right|\left|x_{\phi}\right|^{3}+\left|\frac{(1+w_{\rm m })}{\left[1-\frac{2\gamma(d-1)}{d-2}(1+w_{\rm m})\right]}\right|\left|x_{\phi }\right|\left|x_{V}\right|^{2}\] \[+\left.\left|\frac{\left[1-\frac{2\gamma d}{(d-2)}\right](1+w_{ \rm m})}{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}\right|\left| x_{V}\right|+|\lambda_{V}|\sqrt{\frac{(d-2)}{(d-1)}}|x_{\phi}||x_{V}|\] \[+\left.\left|\frac{(1-w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d -2)}(1+w_{\rm m})\right]}\right|\left|x_{V}\right|\left|x_{\phi}\right|^{2}+ \left|\frac{(1+w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{d-2}(1+w_{\rm m}) \right]}\right|\left|x_{V}\right|^{3}\right]. \tag{5.6}\]
Then, using Eq. (5.3), we can show that \(|\mathcal{J}(\mathbf{u})|_{U}\) is indeed bounded on \(U\).
Figure 9: The bifurcation diagrams showing the existence and the stability conditions of CP5. The stability properties of CP5 are shown by plotting \(\gamma\) as a function of \(w_{\rm m}\) for constant values of (a) \(d=4\), (b) \(d=10\) and (c) \(d=20\). The stability characteristics of this point are given for both the \(\Lambda=0\) and \(\Lambda>0\) cases, as indicated in the figure legends. The blue line and the red line represent \(w_{\rm m}=0\) and \(w_{\rm m}=1/(d-1)\), respectively. Note that there is a forbidden zone giving \(\sqrt{1-\frac{2\gamma d}{(d-2)}}<0\) for the \(\Lambda=0\) case or \(\sqrt{1-\frac{2\gamma d}{(d-2)}-x_{\Lambda,c\,(+)}^{2}}<0\) for the \(\Lambda>0\) case.
Moreover, for \(\mathbf{u},\hat{\mathbf{u}}\in U\) we have
\[|\mathcal{J}(\mathbf{u})-\mathcal{J}(\hat{\mathbf{u}})|_{U} \leq \frac{1}{2}(d-1)\left[\left|\frac{\left[1-\frac{2\gamma d}{(d-2)} \right](1+w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}-2 \right|\left|x_{\phi}-\hat{x}_{\phi}\right|+2|\lambda_{V}|\sqrt{\frac{(d-2)}{( d-1)}}|x_{V}^{2}-\hat{x}_{V}^{2}|\right.\] \[+\left.\left|\frac{(1-w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d- 2)}(1+w_{\rm m})\right]}\right|x_{\phi}^{3}-\hat{x}_{\phi}^{3}|+\left|\frac{(1 +w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{d-2}(1+w_{\rm m})\right]}\right|\left| x_{\phi}x_{V}^{2}-\hat{x}_{\phi}\hat{x}_{V}^{2}|\right.\] \[+\left.\left|\frac{\left[1-\frac{2\gamma d}{(d-2)}\right](1+w_{ \rm m})}{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}\right|x_{V}- \hat{x}_{V}|+|\lambda_{V}|\sqrt{\frac{(d-2)}{(d-1)}}|x_{\phi}x_{V}-\hat{x}_{ \phi}\hat{x}_{V}|\] \[+\left.\left|\frac{(1-w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d- 2)}(1+w_{\rm m})\right]}\right|\left|x_{V}x_{\phi}^{2}-\hat{x}_{V}\hat{x}_{ \phi}^{2}\right|+\left|\frac{(1+w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{d-2}( 1+w_{\rm m})\right]}\right|\left|x_{V}^{3}-\hat{x}_{V}^{3}\right|\right]\,.\]
After some computations, we can show that \(\mathcal{J}\) is locally Lipschitz with respect to \(\mathbf{u}\), namely,
\[|\mathcal{J}(\mathbf{u})-\mathcal{J}(\hat{\mathbf{u}})|_{U}\leq C_{\mathcal{J}}(|\mathbf{u }|,|\hat{\mathbf{u}}|)|\mathbf{u}-\hat{\mathbf{u}}|. \tag{5.8}\]
Next, the integral form of Eq. (5.4) is given by
\[\mathbf{u}(s)=\mathbf{u}(s_{0})+\int_{s_{0}}^{s}\mathcal{J}\left(\mathbf{u}(\hat{s}) \right)d\hat{s}. \tag{5.9}\]
By defining a Banach space
\[X\equiv\{\mathbf{u}\in C(I,\mathbb{R}^{2}):\,\mathbf{u}(x_{0})=\mathbf{u}_{0},\,\sup_{x\in I }|\mathbf{u}(x)|\leq L_{0}\}\, \tag{5.10}\]
endowed with the norm
\[|\mathbf{u}|_{X}=\sup_{x\in I}|\mathbf{u}(x)|\, \tag{5.11}\]
where \(L_{0}>0\), we introduce an operator \(\mathcal{K}\)
\[\mathcal{K}(\mathbf{u}(x))=\mathbf{u}_{0}+\int_{x_{0}}^{x}\mathcal{J}\left(\mathbf{u}(s), s\right)ds\, \tag{5.12}\]
and using Lemma 1, we have the following result [56]:
**Lemma 2**.: _Let \(\mathcal{K}\) be the operator defined in Eq. (5.12), and suppose there exists a constant \(\varepsilon>0\) defining the interval \(I=[x,x+\varepsilon]\) with_
\[\varepsilon\leq\min\left(\frac{1}{C_{L_{0}}},\frac{1}{C_{L_{0}}L_{0}+\| \mathcal{J}(x)\|}\right). \tag{5.13}\]
_Then, the operator \(\mathcal{K}\) is a mapping from \(X\) to itself and a contraction mapping on \(X\)._
This shows the existence of a unique fixed point of Eq. (5.12), ensuring a unique local solution of the differential equation (5.4). One can further establish a maximal solution by repeating the above arguments of local existence with the initial condition \(\boldsymbol{u}(x-x_{n})\) for some \(x_{0}<x_{n}<x\) and using the uniqueness condition to glue the solutions.
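To make the contraction-mapping argument concrete, the following minimal sketch (an illustration under assumed parameter values, not code from the paper) performs a few Picard iterates \(\boldsymbol{u}_{k+1}=\mathcal{K}(\boldsymbol{u}_{k})\) of Eq. (5.12) on a short interval, with \(\mathcal{J}\) taken from Eq. (5.5); the sup-norm distance between successive iterates is printed and is expected to shrink.

```python
# Minimal sketch of the Picard iteration behind Eq. (5.12) for the
# Lambda = 0 system, with J(u) from Eq. (5.5).  Parameter values, the
# interval length and the initial guess are illustrative assumptions.
import numpy as np

d, w_m, gamma, lam_V = 4, 0.0, 0.16, 1.0
D = 1.0 - 2.0 * gamma * (d - 1) / (d - 2) * (1.0 + w_m)
A = (1.0 - 2.0 * gamma * d / (d - 2)) * (1.0 + w_m) / D
B = (1.0 - w_m) / D
C = (1.0 + w_m) / D
root = np.sqrt((d - 2) / (d - 1))

def J(u):
    """Vector field of Eq. (5.5) for u = (x_phi, x_V)."""
    x_phi, x_V = u
    pref = 0.5 * (d - 1)
    return pref * np.array([
        (A - 2.0) * x_phi + 2.0 * lam_V * root * x_V**2
        + B * x_phi**3 - C * x_phi * x_V**2,
        A * x_V + B * x_V * x_phi**2
        - lam_V * root * x_phi * x_V - C * x_V**3,
    ])

s = np.linspace(0.0, 0.1, 101)           # short interval I = [s0, s0 + eps]
u0 = np.array([0.3, 0.3])
u = np.tile(u0, (s.size, 1))             # zeroth iterate: the constant function u0
for k in range(6):
    Ju = np.array([J(ui) for ui in u])
    steps = 0.5 * (Ju[1:] + Ju[:-1]) * np.diff(s)[:, None]   # trapezoid rule
    u_new = u0 + np.vstack([np.zeros(2), np.cumsum(steps, axis=0)])
    print(k, np.max(np.abs(u_new - u)))  # sup-norm change between iterates
    u = u_new
```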
To show the global existence of solutions of Eq. (5.4), let us first consider an inequality coming from (5.9)
\[|\boldsymbol{u}(s)|\leq|\boldsymbol{u}(s_{0})|+\int_{s_{0}}^{s}|\mathcal{J} \left(\boldsymbol{u}(\hat{s})\right)|d\hat{s}. \tag{5.14}\]
Using Eqs. (5.3) and (5.6), we get
\[|\boldsymbol{u}(t)| \leq |\boldsymbol{u}(t_{0})|+\frac{1}{2}(d-1)\left[\left|\frac{\left[ 1-\frac{2\gamma d}{(d-2)}\right](1+w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d -2)}(1+w_{\rm m})\right]}-2\right|\frac{\left|1-\frac{2\gamma d}{(d-2)}\right| ^{\frac{1}{2}}}{\left|1-\frac{4\gamma(d-1)}{(d-2)}\right|^{\frac{1}{2}}}+2| \lambda_{V}|\sqrt{\frac{(d-2)}{(d-1)}}\left|1-\frac{2\gamma d}{(d-2)}\right|\] \[+\left|\frac{(1-w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d-2)}( 1+w_{\rm m})\right]}\right|\frac{\left|1-\frac{2\gamma d}{(d-2)}\right|^{\frac {3}{2}}}{\left|1-\frac{4\gamma(d-1)}{(d-2)}\right|^{\frac{3}{2}}}+\left|\frac {(1+w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{d-2}(1+w_{\rm m})\right]}\right| \frac{\left|1-\frac{2\gamma d}{(d-2)}\right|^{\frac{3}{2}}}{\left|1-\frac{4 \gamma(d-1)}{(d-2)}\right|^{\frac{1}{2}}}\] \[+\left|\frac{\left[1-\frac{2\gamma d}{(d-2)}\right](1+w_{\rm m}) }{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}\right|\left|1-\frac{ 2\gamma d}{(d-2)}\right|^{\frac{1}{2}}+|\lambda_{V}|\sqrt{\frac{(d-2)}{(d-1)}} \frac{\left|1-\frac{2\gamma d}{(d-2)}\right|}{\left|1-\frac{4\gamma(d-1)}{(d-2 )}\right|^{\frac{1}{2}}}\] \[+\left|\frac{(1-w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d-2)}( 1+w_{\rm m})\right]}\right|\frac{\left|1-\frac{2\gamma d}{(d-2)}\right|^{\frac {3}{2}}}{\left|1-\frac{4\gamma(d-1)}{(d-2)}\right|^{\frac{3}{2}}}+\left|\frac {(1+w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{d-2}(1+w_{\rm m})\right]}\right| \left|1-\frac{2\gamma d}{(d-2)}\right|^{\frac{3}{2}}\right]\ln\left(\frac{a(t) }{a(t_{0})}\right)\,.\]
The second part is to consider a case where
\[\gamma>\frac{d-2}{4(d-1)}\,\quad\gamma(1+w_{\rm m})\leq\frac{d-2}{2(d-1)}\, \tag{5.16}\]
in which the coefficient of the second term on the left-hand side of (3.7) is negative.
In this case, we may have
\[\left|1-\frac{4\gamma(d-1)}{(d-2)}\right|^{\frac{1}{2}}x_{\phi} = \left|1-\frac{2\gamma d}{(d-2)}\right|^{\frac{1}{2}}\cos\alpha\sinh\beta\] \[x_{V} = \left|1-\frac{2\gamma d}{(d-2)}\right|^{\frac{1}{2}}\cos\alpha\cosh\beta \tag{5.17}\] \[\left|1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{m})\right|^{\frac{1}{2}}x_ {m} = \left|1-\frac{2\gamma d}{(d-2)}\right|^{\frac{1}{2}}\sin\alpha\]
where \(\alpha\equiv\alpha(s)\) and \(\beta\equiv\beta(s)\). In the case at hand, it is easy to show that Lemma 1 and Lemma 2 still hold, but the estimate of (5.14) has to be modified. Thus, the estimate
(5.14) has the form
\[|\boldsymbol{u}(t)| \leq |\boldsymbol{u}(t_{0})|+\frac{1}{2}(d-1)\left[\left|\frac{\left[1- \frac{2\gamma d}{(d-2)}\right](1+w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d-2)} (1+w_{\rm m})\right]}-2\right|\frac{\left|1-\frac{2\gamma d}{(d-2)}\right|^{ \frac{1}{2}}}{\left|1-\frac{4\gamma(d-1)}{(d-2)}\right|^{\frac{1}{2}}}\int_{s _{0}}^{s}|\sinh\beta|d\hat{s}\right. \tag{5.18}\] \[\left.+2|\lambda_{V}|\sqrt{\frac{(d-2)}{(d-1)}}\left|1-\frac{4 \gamma(d-1)}{(d-2)}\right|\int_{s_{0}}^{s}\cosh^{2}\beta\ d\hat{s}\right.\] \[\left.+\left|\frac{(1-w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d -2)}(1+w_{\rm m})\right]}\right|\frac{\left|1-\frac{2\gamma d}{(d-2)}\right|^ {\frac{3}{2}}}{\left|1-\frac{4\gamma(d-1)}{(d-2)}\right|^{\frac{3}{2}}}\int_{s _{0}}^{s}|\sinh\beta|^{3}d\hat{s}\right.\] \[\left.+\left|\frac{(1+w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{d- 2}(1+w_{\rm m})\right]}\right|\frac{\left|1-\frac{2\gamma d}{(d-2)}\right|^{ \frac{3}{2}}}{\left|1-\frac{4\gamma(d-1)}{(d-2)}\right|^{\frac{1}{2}}}\int_{s _{0}}^{s}|\sinh\beta|\cosh^{2}\beta d\hat{s}\right.\] \[\left.+\left|\frac{\left[1-\frac{2\gamma d}{(d-2)}\right](1+w_{ \rm m})}{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}\right|\left| 1-\frac{4\gamma(d-1)}{(d-2)}\right|^{\frac{1}{2}}\int_{s_{0}}^{s}|\cosh\beta| \ d\hat{s}\right.\] \[\left.+|\lambda_{V}|\sqrt{\frac{(d-2)}{(d-1)}}\frac{\left|1-\frac{ 2\gamma d}{(d-2)}\right|}{\left|1-\frac{4\gamma(d-1)}{(d-2)}\right|^{\frac{1} {2}}}\int_{s_{0}}^{s}|\sinh\beta\cosh\beta|d\hat{s}\right.\] \[\left.+\left|\frac{(1-w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d -2)}(1+w_{\rm m})\right]}\right|\left|1-\frac{4\gamma(d-1)}{(d-2)}\right|^{ \frac{3}{2}}\int_{s_{0}}^{s}|\cosh^{3}\beta|\ d\hat{s}\right].\]
For the other cases, we proceed in a similar way as above.
Finally, we can discuss the \(\Lambda\neq 0\) case in which we introduce the extended dynamical variables
\[\boldsymbol{u}_{\Lambda}=\begin{pmatrix}x_{\phi}\\ x_{V}\\ x_{\Lambda}\end{pmatrix}, \tag{5.19}\]
on an interval \(I\equiv[s,s+\epsilon]\) such that we have the equation
\[\frac{d\boldsymbol{u}_{\Lambda}}{ds}=\mathcal{J}_{\Lambda}(\boldsymbol{u}_{ \Lambda})\, \tag{5.20}\]
coming from (4.2), (4.3), and (4.4) where
\[\mathcal{J}_{\Lambda}(\mathbf{u}_{\Lambda})\equiv\frac{1}{2}(d-1)\left( \begin{array}{c}\left(\frac{\left[1-\frac{2\gamma d}{(d-2)}\right](1+w_{\rm m}) }{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}-2\right)x_{\phi}+2 \lambda_{V}\sqrt{\frac{(d-2)}{(d-1)}}x_{V}^{2}\\ +\frac{(1-w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}x _{\phi}^{3}-\frac{(1+w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{d-2}(1+w_{\rm m}) \right]}x_{\phi}x_{V\Lambda(\pm)}^{2}\\ \\ \frac{\left[1-\frac{2\gamma d}{(d-2)}\right](1+w_{\rm m})}{\left[1-\frac{2 \gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}x_{V}+\frac{(1-w_{\rm m})}{\left[1- \frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}x_{V}x_{\phi}^{2}\\ -\lambda_{V}\sqrt{\frac{(d-2)}{(d-1)}}x_{\phi}x_{V}-\frac{(1+w_{\rm m})}{ \left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}x_{V}x_{V\Lambda(\pm)} ^{2}\\ \\ \frac{\left[1-\frac{2\gamma d}{(d-2)}\right](1+w_{\rm m})}{\left[1-\frac{2 \gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}x_{\Lambda(\pm)}+\frac{(1-w_{\rm m})} {\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}x_{\Lambda(\pm)}x_{ \phi}^{2}\\ -\frac{(1+w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}x _{\Lambda(\pm)}x_{V\Lambda(\pm)}^{2}\end{array}\right).\]
Then, we employ a similar procedure as in the preceding \(\Lambda=0\) case to show that a global solution of (4.2), (4.3), and (4.4) with the constraint (4.6) does exist.
Thus, we could state
**Theorem 1**.: _There exists a global classical solution of spatially flat FLRW spacetimes in higher dimensional Klein-Gordon-Rastall theory with scalar potential (3.1) and real cosmological constant._
## 6 Cosmological Model
In Sec. (3) and (4), we have described the cosmological behavior of each critical point in our model. In this section, we focus on discussing the cosmological models based on them, that is, on how our model might provide a picture of the early-time universe corresponding to its unstable nodes and of the late-time universe corresponding to its stable nodes. The former is associated with the inflationary phase, for which we shall show that even in the KGR framework we still obtain power-law inflation due to the exponential potential of the scalar field. The latter, the late-time universe model, should be able to explain the accelerated expansion of the universe, driven by a dark energy-like component along with baryonic and dark matter as the constituents of the universe. For the \(\Lambda=0\) case, these interesting features are contained in CP4, since it gives accelerated expansion both in the early and in the late-time eras. At this point, CP4, the cosmological model of the \(\Lambda\neq 0\) case cannot describe the late-time universe because there are only unstable and saddle nodes. The stable nodes, in this case, are only obtained from CP5, even if we have to argue by further analysis that the non-hyperbolic regions will become late-time attractors; since CP5 is reached only for \(\lambda_{V}=0\), this point does not describe a universe filled by a scalar field with an exponential potential, and is therefore inconsistent with power-law inflation at early times.
### The Early Time Universe: Power-Law Inflation in KGR Cosmology
In order to convert the Friedmann equation (2.12) into autonomous equations, we have introduced an exponential potential \(V(\phi)\) as the specific form in our analysis. Since we use the exponential form of the potential, we may have an inflationary era in the early epoch of the universe that can be described by power-law inflation, provided the scalar \(\phi\) plays the role of the inflaton field that dominates the energy density of the early universe. In this case, the scale factor is given by \(a(t)\propto t^{\ell}\)[47], where the parameter \(\ell>1\). Thus, CP4 with saddle nodes and \(\Omega_{\rm KGR}\approx 1\) is a good candidate to describe such an era. On the other hand, there are models describing an inflationary era in which the inflaton not only predominates in the early universe, \(\Omega_{\rm KGR}<1\), but a perfect fluid with a general value of \(w_{\rm m}\) is also present [52, 53]. In CP4, the latter possibility gives unstable solutions and an accelerating universe, so this condition can also be interpreted as a power-law inflation model in Rastall cosmology as long as the scale factor evolves according to \(a(t)\propto t^{\ell}\) with \(\ell>1\).
Now, we will discuss the inflationary model obtained from CP4 for both the \(\Lambda=0\) and \(\Lambda\neq 0\) cases. We have to emphasize that Rastall theory, \(\gamma\neq 0\), is only consistent with an inflationary phase driven by the inflaton together with a perfect fluid. Let us begin with the Friedmann equation (2.12), taking \(N=1\), in the situation where not only does the scalar \(\phi\) energy density dominate at the early epoch, but \(\rho_{\rm m}\) also exists in some fraction as a perfect fluid.
\[H^{2}= \frac{2\kappa}{(d-1)(d-2-2\gamma d)}\Biggl{(}\biggl{[}1-\frac{4 \gamma(d-1)}{(d-2)}\biggr{]}\frac{\dot{\phi}^{2}}{2}+V(\phi)+\biggl{[}1-\frac{ 2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\biggr{]}\,\rho_{\rm m}\Biggr{)}\,. \tag{6.1}\]
In such a situation the perfect fluid may take the form of dust-like (\(w_{\rm m}=0\)), radiation-like (\(w_{\rm m}=1/(d-1)\)), vacuum-like (\(w_{\rm m}=-1\)), or fermionic matter represented as a non-linear spinor field, as discussed in [54, 55]. The last one might describe a perfect fluid ranging from phantom to ekpyrotic matter due to its equation of state parameter \(w_{\rm m}\). The equation of motion of the scalar field \(\phi\) gives,
\[\ddot{\phi}+(d-1)H\dot{\phi}+V_{,\phi}=0\,. \tag{6.2}\]
We also have
\[\dot{H}=-\frac{\kappa}{(d-2)}\biggl{[}\rho_{\rm m}+p_{\rm m}+\dot{\phi}^{2} \biggr{]}\,, \tag{6.3}\]
and
\[\ddot{H}=-\frac{\kappa}{(d-2)}\Biggl{[}\dot{\rho}_{\rm m}+\dot{p}_{\rm m}+2 \dot{\phi}\ddot{\phi}\Biggr{]}\,. \tag{6.4}\]
In addition, we have the solution of Eq. (6.2) as follows,
\[\phi(t)=\frac{2}{\sqrt{\kappa}\lambda_{V}}\ln\left(\kappa^{-\frac{1}{2}}\,t \right), \tag{6.5}\]
such that we have,
\[V_{0}=\frac{2}{\kappa^{2}\lambda_{V}^{2}}[(d-1)\ell-1]\,, \tag{6.6}\]
From the Rastall modification, we have
\[\dot{\rho}_{\rm m}+(d-1)H(\rho_{\rm m}+p_{\rm m})=-2\lambda(d-1)( \ddot{H}+dH\dot{H})\,. \tag{6.7}\]
We take the form of
\[\rho_{\rm m}(t)=\frac{2(d-1)\ell}{\kappa\lambda_{V}^{2}\left[1- \frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]t^{2}}\left[\frac{1}{1-\frac{2 \gamma d}{d-2}}-1\right]\,, \tag{6.8}\]
for non-vacuum fluid (\(w_{\rm m}\neq-1\)) and
\[\rho_{\rm m}(t)=\frac{4\gamma(d-1)(2-d\,\ell)}{(d-2)\kappa\lambda_ {V}^{2}t^{2}}\,, \tag{6.9}\]
for the vacuum fluid (\(w_{\rm m}=-1\)) in our calculation, and we use the form \(H(t)=\ell/t\) in order to obtain an exact power-law inflation solution. The forms of \(\phi(t)\), \(H(t)\) and \(\rho_{\rm m}(t)\) consistently solve Eq. (6.1) and Eq. (6.3), with the consequence that
\[\ell=\frac{2(d-2)}{(d-2-2\gamma d)^{2}\lambda_{V}^{2}}\Bigg{[}1+ \sqrt{1-\frac{4\gamma(d-2-2\gamma d)^{3}}{(d-2)^{3}}\lambda_{V}^{2}}\Bigg{]}>1\,, \tag{6.10}\]
for \(w_{\rm m}\neq-1\) case and
\[\ell=\frac{4}{(d-2)\lambda_{V}^{2}}>1\,, \tag{6.11}\]
for \(w_{\rm m}=-1\) case. It can be seen that solution in Eq. (6.10) is only exists for \(0<\lambda_{V}\leq\sqrt{\frac{(d-2)^{3}}{4\gamma(d-2-2\gamma d)^{3}}}\) and it is possible to obtain \(\ell>>1\) by suitable parameters \(\lambda_{V}\) and \(\gamma\). For instance, it can be seen in Fig. 10. It is worth to mention that power-law inflation discussed in GR framework has been ruled out by PLANCK 2013 data [57] such that, in this sense, it is viewed as a failed model of inflation. Fortunately, there is an attempt to recover this problem in term of cuscuton-gravity [58] in which it can satisfy PLANCK constraint such that it may still remain as an accepted inflation model. We roughly suspect that power-law inflation may be recovered in beyond GR theories, one of which is in the KGR cosmology, although further investigation is needed to ensure this statement.
### The Late-Time Universe: Dark Energy Paradigm in KGR Cosmology
At the same fixed point, CP4, it is possible to obtain stable nodes and an accelerating universe when the scalar field and non-relativistic matter (\(w_{\rm m}=0\)) exist in KGR cosmology for the \(\Lambda=0\) case. The scalar field \(\phi\) plays the role of _quintessence_ in the presence of a coupling between
the perfect fluid and geometry due to the Rastall parameter \(\gamma\). In this sense, the accelerating universe can be explained by the nontrivial interplay between the quintessence paradigm and the non-conservation of the EMT of the non-relativistic matter field (baryonic and dark matter) in curved spacetime, which plays the role of dark energy since \(w_{\rm KGR}<0\). This result agrees with the cosmological observation that the subdominant component of our current universe is baryonic and dark matter, \(\Omega_{\rm m}\approx 0.3\) (see Fig. 7). We may argue that our model can alleviate the cosmic coincidence problem, since our observed universe is a late-time attractor. However, these features cannot be found for the \(\Lambda\neq 0\) case, due to the absence of stable nodes in CP4. The late-time behavior for this case is instead given by the interplay between the massless scalar field and the cosmological constant acting as dark energy, which takes responsibility for the accelerated expansion of the universe.
### Cosmological Sequence of KGR Cosmology for Constant Parameters of \(d,w_{\rm m},\gamma\) and \(\lambda_{V}\)
In the following discussion, we attempt to track the cosmic history of KGR cosmology for the \(\Lambda=0\) case. Taking the set of parameters \((d,w_{\rm m},\gamma,\lambda_{V})=(4,0,0.16,1.37)\), we have a sequence of cosmological eras which includes both power-law inflation and the late-time acceleration in a unified chronology. Due to its unstable solutions, either CP2 or CP3 provides the initial condition, followed by power-law inflation which occurs at CP4 with saddle nodes and the parameter \(\mathcal{A}_{+}\). Then, the universe reaches the matter-dominated era given by CP1. Finally, it attains the dark energy-baryonic-dark matter era described by the stable nodes of CP4, which is actually a different point from the inflationary phase due to the parameter \(\mathcal{A}_{-}\). In the case of \((d,w_{\rm m},\gamma,\lambda_{V})=(4,0,0.16,1.37)\) there are two solutions for the parameter \(\mathcal{A}_{\pm}\), corresponding to \(\mathcal{A}_{+}\approx 0.6\) and \(\mathcal{A}_{-}\approx 0.5\). According to their stability, the former and the latter are solutions for the early and the late-time universe, respectively. For details, see Fig. 11. Hence, the cosmological sequence in terms of the critical points \((x_{\phi,c},x_{V,c})\) is
\[{\rm CP2/CP3:}\left(\pm 0.82,0\right)\rightarrow{\rm CP4}({\cal A}_{+}): \left(0.53,0.57\right)\rightarrow{\rm CP1:}\left(0,0\right)\rightarrow{\rm CP 4}({\cal A}_{-}):\left(0.44,0.54\right),\]
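The values quoted above can be checked with a few lines of code. The sketch below (an illustration, not code from the paper) evaluates \(\mathcal{A}_{\pm}\) of Eq. (3.18) and the corresponding CP4 coordinates for \((d,w_{\rm m},\gamma,\lambda_{V})=(4,0,0.16,1.37)\):

```python
# Minimal sketch: A_{+/-} from Eq. (3.18) and the CP4 coordinates for the
# parameter set (d, w_m, gamma, lambda_V) = (4, 0, 0.16, 1.37) used above.
import numpy as np

d, w_m, gamma, lam_V = 4, 0.0, 0.16, 1.37
Dm = 1.0 - 2.0 * gamma * (d - 1) / (d - 2) * (1.0 + w_m)
base = (1.0 + w_m) / (3.0 - w_m) + lam_V**2 * (d - 2) / ((d - 1) * (3.0 - w_m)) * Dm
num = (2.0 * lam_V**2 * (d - 2) * (1.0 + w_m) / ((d - 1) * (3.0 - w_m))
       * (1.0 - 2.0 * gamma * d / (d - 2)))
A_plus = base * (1.0 + np.sqrt(1.0 - num / base**2))
A_minus = base * (1.0 - np.sqrt(1.0 - num / base**2))

def cp4(A):
    """CP4 coordinates (x_phi, x_V) for a given value of A."""
    x_phi = np.sqrt((d - 1) / (d - 2)) * A / lam_V
    x_V = np.sqrt((d - 1) * (2.0 - A) * A / (2.0 * (d - 2))) / lam_V
    return x_phi, x_V

print(A_plus, cp4(A_plus))    # A_+ ~ 0.6, CP4 close to (0.53, 0.57) quoted above
print(A_minus, cp4(A_minus))  # A_- ~ 0.5, CP4 close to (0.44, 0.54) quoted above
```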
Lastly, we can also track the cosmic history of KGR cosmology for the \(\Lambda\neq 0\) case, which is compatible only if \(\lambda_{V}=0\). This is equivalent to the context of a massless scalar field and
a cosmological constant. If we take the set of parameters \((d,w_{\rm m},\gamma,\lambda_{V})=(4,0,0.16,0)\), we have the following cosmic sequence in terms of the critical points \((x_{\phi,c},x_{V,c},x_{\Lambda,c}(\pm))\)
\[{\rm CP2/CP3:}\left(\pm 0.82,0,0\right)\to{\rm CP1:}\left(0,0,0\right)\to{\rm CP 5:}\left(0,\sqrt{0.36\mp x_{\Lambda,c}^{2}(\pm)},x_{\Lambda,c}(\pm)\right)\,.\]
## 7 Conclusion
We have investigated the cosmological consequences of spatially flat FLRW spacetimes in higher dimensional KGR theory using dynamical system analysis. A scalar field with an exponential potential is chosen to transform the Friedmann equations into autonomous equations. We find that there exist five critical points, namely \({\rm CP}_{1-5}\), which can be related
Figure 11: The critical points (a) CP1, (b) CP2/CP3 and (c) CP4 mentioned in Subsec. (6.3), along with (d) the phase plane describing the cosmological sequence of KGR cosmology for the \(\Lambda=0\) case, where the set of parameters \((d,w_{\rm m},\gamma,\lambda_{V})=(4,0,0.16,1.37)\) is chosen. The red dot on CP1, CP2 and CP3 denotes the point where \(w_{\rm m}=0\) and \(\gamma=0.16\), whilst it denotes the \(w_{\rm m}=0\), \(\gamma=0.16\) and \(\lambda_{V}=1.37\) points on CP4. The black-dashed curve on CP1, CP2 and CP3 illustrates the GR limit \(\gamma=0\). The black-thin and black-dashed curves on CP4 represent the dust matter and \(\Omega_{\rm m}=0.1\) curves, respectively.
to cosmological eras based on their stability, equation of state parameter, density parameter and deceleration parameter.
Then, we have established the local-global existence and the uniqueness of solutions of the evolution equations (3.5) and (3.6) with the constraint (3.7) for the \(\Lambda=0\) case, as well as of the equations (4.2), (4.3) and (4.4) with the constraint (4.6) for the \(\Lambda\neq 0\) case, using Picard's iteration and the contraction mapping properties. In each case we consider all ranges of the parameters \(w_{\rm m}\) and \(\gamma\) corresponding to the sign of each coefficient in Eqs. (3.7) and (4.6). Note that our results apply to both cases, namely \(\Lambda=0\) and \(\Lambda\neq 0\).
We have also discussed in particular several possible cosmological models of higher dimensional KGR theory for the \(\Lambda=0\) case that are suitable to explain both inflation and the accelerated universe in the late-time epoch. In the inflationary sector, we have derived an exact power-law inflation solution without assuming the slow-roll mechanism, representing a scalar field-perfect fluid universe given by CP4. It is necessary to investigate further whether power-law inflation in the KGR model can satisfy the PLANCK constraints. In the late-time sector, CP4 of the KGR model also offers an interesting feature: cosmic acceleration occurs due to the non-trivial interplay between quintessence and the non-conservation of the EMT of baryonic and dark matter, which plays the role of dark energy since \(w_{\rm KGR}<0\).
Finally, we can track the cosmic history of the KGR model in both the \(\Lambda=0\) and \(\Lambda\neq 0\) cases. The former case might provide a chronology including power-law inflation with dust-like matter (\(w_{\rm m}=0\)) as the accompanying perfect fluid and late-time acceleration reached through a matter-dominated era. For instance, in the case of a set of parameters \((d,w_{\rm m},\gamma,\lambda_{V})=(4,0,0.16,1.4)\) the cosmological sequence in terms of the critical points \((x_{\phi,c},x_{V,c})\) is \((\pm 0.8246,0)\)\(\rightarrow(0.5144,0.5636)\rightarrow(0,0)\rightarrow(0.4024,0.5206)\). On the other hand, the cosmic history of KGR cosmology for the \(\Lambda\neq 0\) case is compatible only if \(\lambda_{V}=0\), such that for the set of parameters \((d,w_{\rm m},\gamma,\lambda_{V})=(4,0,0.16,0)\) the cosmological sequence is \((\pm 0.8246,0,0)\rightarrow(0,0,0)\rightarrow\Big{(}0,\sqrt{0.36\mp x_{\Lambda,c\pm}^{2}},x_{\Lambda,c\pm}\Big{)}\).
## Acknowledgments
TAW acknowledges LPDP for financial support. The work of BEG and AS is supported by Hibah Riset ITB. BEG also acknowledges Hibah Riset Fundamental Klemendikbudristekdikti for financial support. HA is partially supported by the World Class Research (WCR) Grant from Klemendikbudristek-IPB of fiscal year 2022.
## Appendix A: Linear Stability for \(\Lambda=0\) Case
In this Appendix we analyze the linear perturbation of the equations (3.5)-(3.6) around the critical points \((x_{\phi,c},x_{V,c})\) by expanding the autonomous variables as follows
\[x_{\phi}=x_{\phi,c}+u_{\phi}\,,\] (A.1)
\[x_{V}=x_{V,c}+u_{V}\,.\] (A.2)
The equations of motion at first order are as follows
\[\frac{2}{(d-1)N}u^{{}^{\prime}}_{\phi} =\left[\frac{(1+w_{\rm m})\left(1-\frac{2\gamma d}{(d-2)}-x_{V,c}^{2 }\right)+3(1-w_{\rm m})x_{\phi,c}^{2}}{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{ \rm m})\right]}-2\right]u_{\phi}\] \[\quad-\left[\frac{2(1+w_{m})}{\left[1-\frac{2\gamma(d-1)}{(d-2)}( 1+w_{m})\right]}x_{\phi,c}-4\lambda_{V}\sqrt{\frac{d-2}{d-1}}\,\right]x_{V,c}\, u_{V}\,,\] (A.3) \[\frac{2}{(d-1)N}u^{{}^{\prime}}_{V}= \left[\frac{(1+w_{\rm m})\left(1-\frac{2\gamma d}{(d-2)}-3x_{V,c} ^{2}\right)+(1-w_{\rm m})x_{V,c}^{2}}{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_ {m})\right]}-\lambda_{V}\sqrt{\frac{d-2}{d-1}}\,\right]u_{V}\] \[+\left[\frac{2(1-w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d-2)}( 1+w_{\rm m})\right]}x_{\phi,c}-\lambda_{V}\sqrt{\frac{d-2}{d-1}}\,\right]x_{V, c}\,u_{\phi}\,.\] (A.4)
Therefore, it is possible for us to express the equations mentioned above in a matrix form
\[\begin{pmatrix}u^{{}^{\prime}}_{\phi}\\ u^{{}^{\prime}}_{V}\end{pmatrix}=\boldsymbol{J}\begin{pmatrix}u_{\phi}\\ u_{V}\end{pmatrix}\,,\] (A.5)
where \(\boldsymbol{J}\) is the Jacobian matrix. Its eigenvalues, \(\mu_{1}\) and \(\mu_{2}\), will be identified and recorded for each critical point. These values will be used to evaluate the stability characteristics of the critical points. The Jacobian matrix, along with its eigenvalues, for each critical point of the autonomous system in this case can thus be written as follows
* Jacobian matrix for CP1 \((0,0)\): \[\boldsymbol{J}=\begin{pmatrix}\frac{\left[1-\frac{2\gamma d}{(d-2)}\right](1+ w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}-2&0\\ 0&\frac{\left[1-\frac{2\gamma d}{(d-2)}\right](1+w_{\rm m})}{\left[1-\frac{2 \gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}\end{pmatrix}\,,\] (A.6) with eigenvalues \[\mu_{1}=\frac{\left[1-\frac{2\gamma d}{(d-2)}\right](1+w_{\rm m})}{\left[1- \frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}-2\,,\quad\mu_{2}=\frac{\left[1 -\frac{2\gamma d}{(d-2)}\right](1+w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d- 2)}(1+w_{\rm m})\right]}\,.\] (A.7)
* Jacobian matrix for CP2 \(\left(\sqrt{1-\frac{2\gamma(1+w_{\rm m})}{(1-w_{\rm m})}},0\right)\) : \[\boldsymbol{J}=\begin{pmatrix}\frac{2\left[(1-w_{\rm m})-2\gamma(1+w_{\rm m}) \right]}{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}&0\\ 0&2-\lambda_{V}\sqrt{\frac{d-2}{d-1}}\sqrt{1-\frac{2\gamma(1+w_{\rm m})}{(1- w_{\rm m})}}\end{pmatrix}\,,\] (A.8)
with eigenvalues \[\mu_{1}=\frac{2[(1-w_{\rm m})-2\gamma(1+w_{\rm m})]}{\left[1-\frac{2\gamma(d-1)}{( d-2)}(1+w_{\rm m})\right]}\,,\quad\mu_{2}=2-\lambda_{V}\sqrt{\frac{d-2}{d-1}} \sqrt{1-\frac{2\gamma(1+w_{\rm m})}{(1-w_{\rm m})}}\,.\] (A.9)
* Jacobian matrix for CP3 \(\Bigg{(}-\sqrt{1-\frac{2\gamma(1+w_{\rm m})}{(1-w_{\rm m})}},0\Bigg{)}\) : \[\mathbf{J}=\begin{pmatrix}\frac{2[(1-w_{\rm m})-2\gamma(1+w_{\rm m})]}{\left[1- \frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}&0\\ 0&2+\lambda_{V}\sqrt{\frac{d-2}{d-1}}\sqrt{1-\frac{2\gamma(1+w_{\rm m})}{(1- w_{\rm m})}}\end{pmatrix}\,,\] (A.10) with eigenvalues \[\mu_{1}=\frac{2[(1-w_{\rm m})-2\gamma(1+w_{\rm m})]}{\left[1-\frac{2\gamma(d- 1)}{(d-2)}(1+w_{\rm m})\right]}\,,\quad\mu_{2}=2+\lambda_{V}\sqrt{\frac{d-2}{ d-1}}\sqrt{1-\frac{2\gamma(1+w_{\rm m})}{(1-w_{\rm m})}}\,.\] (A.11)
* Jacobian matrix for CP4 \(\Bigg{(}\frac{1}{\lambda_{V}}\sqrt{\frac{d-1}{d-2}}\mathcal{A}_{\pm},\frac{1} {\lambda_{V}}\sqrt{\frac{(d-1)(2-\mathcal{A}_{\pm})\mathcal{A}_{\pm}}{2(d-2)} }\Bigg{)}\): \[\mathbf{J}=\begin{pmatrix}\mathcal{A}_{\pm}+\frac{2(d-1)(1-w_{\rm m})A_{\pm}^{2}}{ \lambda_{V}^{2}(d-2)\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}- 2&\sqrt{\frac{(2-\mathcal{A}_{\pm})\mathcal{A}_{\pm}}{2}}\left[4-\frac{2(d-1) (1+w_{\rm m})\mathcal{A}_{\pm}}{\lambda_{V}^{2}(d-2)\left[1-\frac{2\gamma(d-1) }{(d-2)}(1+w_{\rm m})\right]}\right]\\ \sqrt{\frac{(2-\mathcal{A}_{\pm})\mathcal{A}_{\pm}}{2}}\left[-1+\frac{2(d-1)(1 -w_{\rm m})\mathcal{A}_{\pm}}{\lambda_{V}^{2}(d-2)\left[1-\frac{2\gamma(d-1)} {(d-2)}(1+w_{\rm m})\right]}\right]&-\frac{2(d-1)(1+w_{\rm m})}{\lambda_{V}^{ 2}(d-2)\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}\frac{(2- \mathcal{A}_{\pm})\mathcal{A}_{\pm}}{2}\end{pmatrix}\,,\] (A.12)
* Jacobian matrix for CP5 \(\Bigg{(}0,\sqrt{1-\frac{2\gamma d}{(d-2)}}\Bigg{)}\): \[\mathbf{J}=\begin{pmatrix}-2&0\\ 0&-\frac{2(1+w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m}) \right]}\left[1-\frac{2\gamma d}{(d-2)}\right]\end{pmatrix}\,,\] (A.13) with eigenvalues \[\mu_{1}=-2\,,\quad\mu_{2}=-\frac{2(1+w_{\rm m})\left[1-\frac{2\gamma d}{(d-2)} \right]}{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}\,.\] (A.14)
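The eigenvalue expressions above can be evaluated numerically as a quick consistency check. The following sketch (an illustration with assumed sample parameters, not code from the paper) prints the eigenvalues of Eqs. (A.7), (A.9) and (A.14):

```python
# Minimal sketch: evaluate the eigenvalues of Eqs. (A.7), (A.9) and (A.14)
# for illustrative sample parameters (d, w_m, gamma, lambda_V).
import numpy as np

d, w_m, gamma, lam_V = 4, 0.0, 0.16, 1.37
D = 1.0 - 2.0 * gamma * (d - 1) / (d - 2) * (1.0 + w_m)
A = (1.0 - 2.0 * gamma * d / (d - 2)) * (1.0 + w_m) / D
root = np.sqrt((d - 2) / (d - 1))

# CP1, Eq. (A.7): for these values one eigenvalue is negative and one positive (saddle).
print("CP1:", A - 2.0, A)

# CP2, Eq. (A.9).
x2 = np.sqrt(1.0 - 2.0 * gamma * (1.0 + w_m) / (1.0 - w_m))
print("CP2:", 2.0 * ((1.0 - w_m) - 2.0 * gamma * (1.0 + w_m)) / D,
      2.0 - lam_V * root * x2)

# CP5, Eq. (A.14): both eigenvalues are negative for these values
# (CP5 itself requires lambda_V = 0).
print("CP5:", -2.0,
      -2.0 * (1.0 + w_m) * (1.0 - 2.0 * gamma * d / (d - 2)) / D)
```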
## Appendix B: Linear Stability for \(\Lambda\neq 0\) Case
In this Appendix we analyze the linear perturbation of the equations (4.2), (4.3) and (4.4) around the critical points \((x_{\phi,c},x_{V,c},x_{\Lambda,c\,(\pm)})\) by expanding the autonomous variables as follows
\[x_{\phi} =x_{\phi,c}+u_{\phi}\,,\] (B.1) \[x_{V} =x_{V,c}+u_{V}\,,\] (B.2) \[x_{\Lambda\,(\pm)} =x_{\Lambda,c\,(\pm)}+u_{\Lambda\,(\pm)}\,.\] (B.3)
The equations of motion at first order are as follows
\[\frac{2}{(d-1)N}u^{{}^{\prime}}_{\phi} =\left[\frac{(1+w_{\rm m})\left(1-\frac{2\gamma d}{(d-2)}\mp x^ {2}_{V\Lambda,c\,(\pm)}\right)+3(1-w_{\rm m})x^{2}_{\phi,c}}{\left[1-\frac{2 \gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}-2\right]u_{\phi}\] \[\quad-\left[\frac{2(1+w_{m})}{\left[1-\frac{2\gamma(d-1)}{d-2}(1 +w_{m})\right]}x_{\phi,c}-4\lambda_{V}\sqrt{\frac{d-2}{d-1}}\,\right]x_{V,c}\, u_{V}\] \[\quad-\frac{2(1+w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d-2)}( 1+w_{m})\right]}x_{V,c}x_{\Lambda,c\,(\pm)}u_{\Lambda\,(\pm)}\,,\] (B.4)
\[\frac{2}{(d-1)N}u^{{}^{\prime}}_{V}= \Bigg{[}\frac{(1+w_{\rm m})\left(1-\frac{2\gamma d}{d-2}-3x^{2}_{V,c}\mp x^{2}_{\Lambda,c\,(\pm)}\right)+(1-w_{\rm m})x^{2}_{\phi,c}}{\left[1- \frac{2\gamma(d-1)}{(d-2)}(1+w_{m})\right]}-\lambda_{V}\sqrt{\frac{d-2}{d-1}} \,\Bigg{]}u_{V}\] \[\quad+\left[\frac{2(1-w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d -2)}(1+w_{\rm m})\right]}x_{\phi,c}-\lambda_{V}\sqrt{\frac{d-2}{d-1}}\,\right]x _{V,c}\,u_{\phi}\] \[\quad-\frac{2(1+w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d-2)}( 1+w_{\rm m})\right]}x_{V,c}\,x_{\Lambda,c\,(\pm)}\,u_{\Lambda\,(\pm)}\,,\] (B.5)
\[\frac{2}{(d-1)N}u^{{}^{\prime}}_{\Lambda\,(\pm)}= \frac{1}{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]} \Bigg{[}2\Big{[}(1-w_{\rm m})x_{\phi,c}\,\,u_{\phi}-2(1+w_{\rm m})\,x_{V,c}\,u_ {V}\Big{]}x_{\Lambda,c\,(\pm)}\] \[\quad+\Bigg{[}(1+w_{\rm m})\left(1-\frac{2\gamma d}{(d-2)}-x^{2} _{V,c}\right)+(1-w_{\rm m})x^{2}_{\phi,c}\] \[\quad\mp 3(1+w_{\rm m})\,x^{2}_{\Lambda,c\,(\pm)}\Bigg{]}u_{ \Lambda}\Bigg{]}\,.\] (B.6)
Therefore, it is possible for us to express the equations mentioned above in a matrix form
\[\begin{pmatrix}u^{{}^{\prime}}_{\phi}\\ u^{{}^{\prime}}_{V}\\ u^{{}^{\prime}}_{\Lambda\,(\pm)}\end{pmatrix}=\mathbf{J}\begin{pmatrix}u_{\phi}\\ u_{V}\\ u_{\Lambda\,(\pm)}\end{pmatrix}\,,\] (B.7)
where \(\mathbf{J}\) is the Jacobian matrix. Its eigenvalues, \(\mu_{1}\), \(\mu_{2}\) and \(\mu_{3}\), will be identified and recorded for each critical point. These values will be used to evaluate the stability characteristics of the critical points. The Jacobian matrix, along with its eigenvalues, for each critical point of the autonomous system in this case can thus be written as follows
* Jacobian matrix for CP1 \((0,0,0)\): \[\boldsymbol{J}=\begin{pmatrix}\frac{\left[1-\frac{2\gamma d}{(d-2)}\right](1+ w_{\text{m}})}{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\text{m}})\right]}-2&0&0\\ \\ 0&\frac{\left[1-\frac{2\gamma d}{(d-2)}\right](1+w_{\text{m}})}{\left[1-\frac {2\gamma(d-1)}{(d-2)}(1+w_{\text{m}})\right]}&0\\ \\ 0&0&\frac{\left[1-\frac{2\gamma d}{(d-2)}\right](1+w_{\text{m}})}{\left[1- \frac{2\gamma(d-1)}{(d-2)}(1+w_{\text{m}})\right]}\end{pmatrix}\,,\] (B.8) with eigenvalues \[\mu_{1}=\frac{\left[1-\frac{2\gamma d}{(d-2)}\right](1+w_{\text{m}})}{\left[1 -\frac{2\gamma(d-1)}{(d-2)}(1+w_{\text{m}})\right]}-2\,,\quad\mu_{2}=\mu_{3} =\frac{\left[1-\frac{2\gamma d}{(d-2)}\right](1+w_{\text{m}})}{\left[1-\frac{ 2\gamma(d-1)}{(d-2)}(1+w_{\text{m}})\right]}\,.\] (B.9)
* Jacobian matrix for CP2 \(\left(\sqrt{1-\frac{2\gamma(1+w_{\text{m}})}{(1-w_{\text{m}})}},0,0\right)\) : \[\boldsymbol{J}=\begin{pmatrix}\frac{2[(1-w_{\text{m}})-2\gamma(1+w_{\text{m}})] }{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\text{m}})\right]}&0&0\\ \\ 0&2-\lambda_{V}\sqrt{\frac{d-2}{d-1}}\sqrt{1-\frac{2\gamma(1+w_{\text{m}})}{ (1-w_{\text{m}})}}&0\\ \\ 0&0&2\end{pmatrix}\,,\] (B.10) with eigenvalues \[\mu_{1}= \frac{2[(1-w_{\text{m}})-2\gamma(1+w_{\text{m}})]}{\left[1-\frac {2\gamma(d-1)}{(d-2)}(1+w_{\text{m}})\right]}\,,\] \[\mu_{2}= \,2-\lambda_{V}\sqrt{\frac{d-2}{d-1}}\sqrt{1-\frac{2\gamma(1+w_{ \text{m}})}{(1-w_{\text{m}})}}\,,\] (B.11) \[\mu_{3}= \,2\,.\]
* Jacobian matrix for CP3 \(\left(\begin{array}{cc}-\sqrt{1-\frac{2\gamma(1+w_{\text{m}})}{(1-w_{\text{ m}})}},0,0\end{array}\right)\) : \[\boldsymbol{J}=\begin{pmatrix}\frac{2[(1-w_{\text{m}})-2\gamma(1+w_{\text{m}})] }{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\text{m}})\right]}&0&0\\ \\ 0&2+\lambda_{V}\sqrt{\frac{d-2}{d-1}}\sqrt{1-\frac{2\gamma(1+w_{\text{m}})}{ (1-w_{\text{m}})}}&0\\ \\ 0&0&2\end{pmatrix}\,,\] (B.12)
with eigenvalues \[\mu_{1} = \frac{2[(1-w_{\rm m})-2\gamma(1+w_{\rm m})]}{\left[1-\frac{2\gamma(d -1)}{(d-2)}(1+w_{\rm m})\right]}\,,\] \[\mu_{2} = 2+\lambda_{V}\sqrt{\frac{d-2}{d-1}}\sqrt{1-\frac{2\gamma(1+w_{ \rm m})}{(1-w_{\rm m})}}\,,\] (B.13) \[\mu_{3} = 2\,.\]
* Jacobian matrix for CP4 \(\left(\frac{1}{\lambda_{V}}\sqrt{\frac{d-1}{d-2}}\mathcal{A}_{\pm},\frac{1}{ \lambda_{V}}\sqrt{\frac{(d-1)(2-\mathcal{A}_{\pm})\mathcal{A}_{\pm}}{2(d-2)}},0\right)\): \[\boldsymbol{J}=\left(\begin{array}{cc}\mathcal{A}_{\pm}+\frac{2(d-1)(1-w_{\rm m })A_{\pm}^{2}}{\lambda_{V}^{2}(d-2)\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{ \rm m})\right]}-2&\sqrt{\frac{(2-\mathcal{A}_{\pm})\mathcal{A}_{+}}{2}}\left[ 4-\frac{2(d-1)(1+w_{\rm m})\mathcal{A}_{+}}{\lambda_{V}^{2}(d-2)\left[1- \frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}\right]&0\\ \sqrt{\frac{(2-\mathcal{A}_{\pm})\mathcal{A}_{\pm}}{2}}\left[-1+\frac{2(d-1)( 1-w_{\rm m})\mathcal{A}_{\pm}}{\lambda_{V}^{2}(d-2)\left[1-\frac{2\gamma(d-1) }{(d-2)}(1+w_{\rm m})\right]}\right]&-\frac{2(d-1)(1+w_{\rm m})}{\lambda_{V}^{ 2}(d-2)\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}\frac{(2- \mathcal{A}_{\pm})\mathcal{A}_{\pm}}{2}&0\\ 0&0&\mathcal{A}_{\pm}\end{array}\right)\,,\] (B.14)
* Jacobian matrix for CP5 \(\left(0,\sqrt{1-\frac{2\gamma d}{(d-2)}\mp x_{\Lambda,c(\pm)}^{2}},x_{\Lambda,c(\pm)}\right)\): \[\boldsymbol{J}=\begin{pmatrix}-2&0&0\\ 0&-\frac{2(1+w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}\left[1-\frac{2\gamma d}{d-2}\mp x_{\Lambda,c(\pm)}^{2}\right]&-\frac{2(1+w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}\sqrt{1-\frac{2\gamma d}{d-2}\mp x_{\Lambda,c(\pm)}^{2}}\,x_{\Lambda,c(\pm)}\\ 0&-\frac{2(1+w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}\sqrt{1-\frac{2\gamma d}{d-2}\mp x_{\Lambda,c(\pm)}^{2}}\,x_{\Lambda,c(\pm)}&\mp\frac{2(1+w_{\rm m})}{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}x_{\Lambda,c(\pm)}^{2}\end{pmatrix}\,,\] (B.15) with eigenvalues \[\mu_{1}=-2\,,\quad\mu_{2}=-\frac{2(1+w_{\rm m})\left[1-\frac{2\gamma d}{(d-2)}\right]}{\left[1-\frac{2\gamma(d-1)}{(d-2)}(1+w_{\rm m})\right]}\,,\quad\mu_{3}=0\,.\] (B.16)
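As a small illustration of how the stability of each critical point follows from the signs of these eigenvalues, the sketch below evaluates the expressions for CP1 and CP2 numerically. It is written in Python, and the parameter values chosen for \(d\), \(\gamma\), \(w_{\rm m}\) and \(\lambda_{V}\) are arbitrary demonstration values, not values used in the text.

```python
import numpy as np

# Illustrative parameter values only (not taken from the text).
d, gamma, w_m, lambda_V = 4.0, 0.05, 0.0, 1.0

denom = 1.0 - 2.0 * gamma * (d - 1.0) / (d - 2.0) * (1.0 + w_m)
common = (1.0 - 2.0 * gamma * d / (d - 2.0)) * (1.0 + w_m) / denom

# CP1 eigenvalues, Eq. (B.9)
mu_cp1 = np.array([common - 2.0, common, common])

# CP2 eigenvalues, Eq. (B.11)
root = np.sqrt(1.0 - 2.0 * gamma * (1.0 + w_m) / (1.0 - w_m))
mu_cp2 = np.array([
    2.0 * ((1.0 - w_m) - 2.0 * gamma * (1.0 + w_m)) / denom,
    2.0 - lambda_V * np.sqrt((d - 2.0) / (d - 1.0)) * root,
    2.0,
])

def classify(mu, tol=1e-12):
    """Stable node if all eigenvalues are negative, unstable node if all are
    positive, saddle if the signs are mixed; a zero eigenvalue means the point
    is non-hyperbolic and requires further analysis."""
    if np.all(mu < -tol):
        return "stable"
    if np.all(mu > tol):
        return "unstable"
    if np.any(np.abs(mu) <= tol):
        return "non-hyperbolic"
    return "saddle"

print("CP1:", mu_cp1, classify(mu_cp1))
print("CP2:", mu_cp2, classify(mu_cp2))
```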
|
2306.15438 | Testing for asymmetric dependency structures in financial markets:
regime-switching and local Gaussian correlation | This paper examines asymmetric and time-varying dependency structures between
financial returns, using a novel approach consisting of a combination of
regime-switching models and the local Gaussian correlation (LGC). We propose an
LGC-based bootstrap test for whether the dependence structure in financial
returns across different regimes is equal. We examine this test in a Monte
Carlo study, where it shows good level and power properties. We argue that this
approach is more intuitive than competing approaches, typically combining
regime-switching models with copula theory. Furthermore, the LGC is a
semi-parametric approach, hence avoids any parametric specification of the
dependence structure. We illustrate our approach using returns from the US-UK
stock markets and the US stock and government bond markets. Using a two-regime
model for the US-UK stock returns, the test rejects equality of the dependence
structure in the two regimes. Furthermore, we find evidence of lower tail
dependence in the regime associated with financial downturns in the LGC
structure. For a three-regime model fitted to US stock and bond returns, the
test rejects equality of the dependence structures between all regime pairs.
Furthermore, we find that the LGC has a primarily positive relationship in the
time period 1980-2000, mostly a negative relationship from 2000 and onwards. In
addition, the regime associated with bear markets indicates less, but
asymmetric dependence, clearly documenting the loss of diversification benefits
in times of crisis. | Kristian Gundersen, Timothée Bacri, Jan Bulla, Sondre Hølleland, Bård Støve | 2023-06-27T12:53:17Z | http://arxiv.org/abs/2306.15438v1 | Testing for Asymmetric dependency structures in financial markets: regime-switching and local Gaussian correlation
###### Abstract
This paper examines asymmetric and time-varying dependency structures between financial returns, using a novel approach consisting of a combination of regime-switching models and the local Gaussian correlation (LGC). We propose an LGC-based bootstrap test for whether the dependence structure in financial returns across different regimes is equal. We examine this test in a Monte Carlo study, where it shows good level and power properties. We argue that this approach is more intuitive than competing approaches, typically combining regime-switching models with copula theory. Furthermore, the LGC is a semi-parametric approach, hence avoids any parametric specification of the dependence structure. We illustrate our approach using returns from the US-UK stock markets and the US stock and government bond markets. Using a two-regime model for the US-UK stock returns, the test rejects equality of the dependence structure in the two regimes. Furthermore, we find evidence of lower tail dependence in the regime associated with financial downturns in the LGC structure. For a three-regime model fitted to US stock and bond returns, the test rejects equality of the dependence structures between all regime pairs. Furthermore, we find that the LGC has a primarily positive relationship in the time period 1980-2000, mostly a negative relationship from 2000 and onwards. In addition, the regime associated with bear markets indicates less, but asymmetric dependence, clearly documenting the loss of diversification benefits in times of crisis.
Regime switching, Hidden Markov Models, Local Gaussian Correlation, Financial Time Series
## 1 Introduction
Dependence between asset returns is important in many aspects of finance, in particular for portfolio theory, where the aim is to allocate assets by maximizing the expected return of the portfolio while minimizing its risk, for instance measured by the standard deviation. The rule is simple: weakly correlated assets are good for diversification, but highly
correlated assets should be avoided. The crucial assumption in this classical mean-variance approach is that asset returns follow a joint Gaussian distribution, see Markowitz (1952). The advantage of the Gaussian approach for modelling asset returns is that it is straightforward: based solely on means and covariances, it leads to a complete theoretical framework in the multivariate setting.
However, the restrictive nature of the Gaussian distribution approach is well-documented, as asymmetries are often found in the distribution of financial returns (see, for example, Silvapulle and Granger, 2001; Longin and Solnik, 2001; Ang and Chen, 2002; Hong et al., 2007; Okimoto, 2008; Chollete et al., 2009; Aas et al., 2009; Stove and Tjostheim, 2014; Bernardi et al., 2017; BenSaida et al., 2018). One main finding opposing the Gaussian assumption is the often stronger dependence between returns of financial assets during periods of market downturn or crashes (often called "bear markets"), and less dependence in stable or increasing markets (often called "bull markets"), hence time-varying dependency structures are observed. Another well-known asymmetry is the skewness in the distribution of individual asset returns. This has led to the conclusion that the Gaussian distribution is not well-founded empirically (see, e.g., Rydberg and Shephard, 2000).
There are several methods for studying asymmetry of financial returns. Silvapulle and Granger (2001) looked at various quantile estimation methods, and Longin and Solnik (2001) employed extreme value theory to show that there is a bear market effect, but no bull effect, for monthly data. Okimoto (2008), Rodriguez (2007) and BenSaida et al. (2018) have employed regime-switching copulas to study asymmetric dependence for various international stock indices. Moreover, for instance Aas et al. (2009) and Nikoulopoulos et al. (2012) have used vine copulas (also called the pair-copula construction) to model multivariate financial return data. Related works are Ang and Bekaert (2002) and Ang and Chen (2002), who have based themselves on Markov regime structures with ARCH/GARCH modeling. Selected further references to the modelling of financial returns using regime-switching models are Hardy (2001), Bulla and Bulla (2006), and Maruotti et al. (2019).
Recently, factor copulas have been introduced for modeling dependence in high dimensions, see e.g. Oh and Patton (2017). Also, Christoffersen et al. (2012) model the correlation among a large set of countries with a dynamic asymmetric copula (or DAC), concluding that correlations have increased markedly in both developed markets and emerging markets over the past decades. Another way of modeling time-varying correlation is the use of the very popular dynamic conditional correlation (DCC) estimators, which possess the flexibility of univariate GARCH models without the complexity of conventional multivariate GARCH, see Engle (2002).
A common feature for many of the alternative approaches mentioned above is that one ends up with one or more parameters that have a rather indirect interpretation as a measure of dependence. In this respect, correlation has a more natural basis. Local Gaussian correlation (LGC, see Tjostheim and Hufthammer, 2013) is a local dependence measure capable of revealing asymmetric dependence, and interpretable as the standard correlation. It has been successfully applied to analyze dependence structures between asset returns (see, e.g., Stove and Tjostheim, 2014; Stove et al., 2014; Bampinas and Panagiotidis, 2017; Nguyen et al., 2020). However, none of these studies have examined the time-varying local Gaussian correlation in a structured way, and the aim of this paper is to close this gap. Hence, in this paper we combine the use of local Gaussian correlation with regime-switching models, and propose a formal test for equality of dependence structures in financial markets across regimes, taking into account the existence of any asymmetric dependence structures. We will not limit ourselves to testing across only two regimes, even though many studies (e.g. Ang and Bekaert, 2002; Okimoto, 2008) document that there are typically two distinct regimes observed in financial return series. The test procedure is related to the test for financial contagion presented in Stove et al. (2014). However, the test developed in this paper is a more general test for examining whether dependency structures between financial returns are different across regimes, rather than only comparing a "stable" time period with a "crisis" time period. Furthermore, the proposed test in this paper is based on the whole LGC map, and is not limited to testing on the diagonal elements. The advantages of this extension will become clear in the empirical analysis of this paper.
The organisation of the paper is as follows. Section 2 briefly reviews the LGC and regime-switching models. Section 3 presents our methodological set-up, including a nonparametric bootstrap test for asymmetric dependence across regimes. Section 4 examines the level and power properties of this test in a Monte Carlo study. In Section 5, we illustrate the approach through several empirical analyses of different financial return data sets, before the final section offers some conclusions.
## 2 Methodology
In this section we briefly review the main theory of the local Gaussian correlation (LGC) and regime-switching models. Book length treatments are found in Tjostheim et al. (2022) and Zucchini et al. (2016), respectively.
### Local Gaussian correlation
This paper relies on the relatively recently developed dependence measure LGC, introduced by Tjostheim and Hufthammer (2013). This is a local characterization of dependence, and the underlying idea has also been extended to several different situations. These include a test of independence (Berentsen and Tjostheim, 2014; Lacal and Tjostheim, 2017, 2019), density and conditional density estimation (Otneim and Tjostheim, 2017, 2018), a local Gaussian partial correlation (Otneim and Tjostheim, 2021), and local Gaussian spectral (Jordanger and Tjostheim, 2022) and cross-spectrum estimation (Jordanger and Tjostheim, 2023). Finally, the relationship between the local Gaussian correlation and different copulas has been studied in Berentsen et al. (2014). A thorough overview of the local Gaussian approximation approach can be found in Tjostheim et al. (2022a). For completeness, we present the local Gaussian correlation in a standard way, and we note that this section closely follows the presentation of the LGC in Tjostheim et al. (2022b).
Finally, as already mentioned in the introduction, the local Gaussian correlation has been used in several studies examining the dependence structure between asset returns, testing for financial contagion, and in portfolio allocation, see e.g. Stove and Tjostheim (2014), Stove et al. (2014), Bampinas and Panagiotidis (2017), Nguyen et al. (2020), Sleire et al. (2021) and Ming et al. (2022), but not in conjunction with regime-switching models, which is the focus of this paper.
#### 2.1.1 Definition
Let \(\mathbf{R}=(R_{1},R_{2})\in\mathbb{R}^{2}\) represent the stochastic return variable of two risky assets with bivariate density \(f\) and let \(\mathbf{r}=(r_{1},r_{2})\in\mathbb{R}^{2}\) denote a realisation of said variable. For simplicity we drop the time index here. We approximate \(f\) locally in each point \(\mathbf{x}=(x,y)\in\mathbb{R}^{2}\) by a Gaussian bivariate density, \(\psi_{\mathbf{x}}(\mathbf{v})\), where \(\mathbf{v}=(v_{1},v_{2})\) are running variables. Let \(\mathbf{\mu}(\mathbf{x})=(\mu_{1}(\mathbf{x}),\mu_{2}(\mathbf{x}))\) be the mean vector in the normal distribution having density \(\psi_{\mathbf{x}}\), \(\mathbf{\sigma}(\mathbf{x})=(\sigma_{1}(\mathbf{x}),\sigma_{2}(\mathbf{x}))\) is the vector of standard deviations, and \(\rho(\mathbf{x})\) is the correlation coefficient in the normal distribution \(\psi_{\mathbf{x}}\). The approximating density is then given as
\[\psi_{\mathbf{x}}=\psi(\mathbf{v},\mu_{1}(\mathbf{x}),\mu_{2}(\mathbf{x}),\sigma_ {1}^{2}(\mathbf{x}),\sigma_{2}^{2}(\mathbf{x}),\rho(\mathbf{x}))=\frac{1}{2\pi\sigma_{1}( \mathbf{x})\sigma_{2}(\mathbf{x})\sqrt{1-\rho^{2}(\mathbf{x})}}\] \[\times\exp\Big{[}-\frac{1}{2}\frac{1}{1-\rho^{2}(\mathbf{x})}\Big{(} \frac{(v_{1}-\mu_{1}(\mathbf{x}))^{2}}{\sigma_{1}^{2}(\mathbf{x})}-2\rho(\mathbf{x})\frac {(v_{1}-\mu_{1}(\mathbf{x}))(v_{2}-\mu_{2}(\mathbf{x}))}{\sigma_{1}(\mathbf{x})\sigma_{2} (\mathbf{x})}\] \[\qquad\qquad+\frac{(v_{2}-\mu_{2}(\mathbf{x}))^{2}}{\sigma_{2}^{2}( \mathbf{x})}\Big{)}\Big{]}. \tag{1}\]
Moving to another point \(\mathbf{x}^{\prime}\) results in another approximating normal distribution \(\psi_{\mathbf{x}^{\prime}}\) which depends on a new set of parameters \((\mu_{1}(\mathbf{x}^{\prime}),\mu_{2}(\mathbf{x}^{\prime}),\sigma_{1}(\mathbf{x}^{\prime} ),\sigma_{2}(\mathbf{x}^{\prime}),\rho(\mathbf{x}^{\prime}))\). One exception to this is the case where \(f\) itself is Gaussian with parameters \((\mu_{1},\mu_{2},\sigma_{1},\sigma_{2},\rho)\), in which case \((\mu_{1}(\mathbf{x}),\mu_{2}(\mathbf{x}),\sigma_{1}(\mathbf{x}),\sigma_{2}(\mathbf{x}),\rho( \mathbf{x}))\equiv(\mu_{1},\mu_{2},\sigma_{1},\sigma_{2},\rho)\).
The population parameter vector \(\mathbf{\theta}(\mathbf{x})\stackrel{{\text{def}}}{{=}}(\mu_{1}(\mathbf{x}),\mu_{2}(\mathbf{x}),\sigma_{1}(\mathbf{x}),\sigma_{2}(\mathbf{x}),\rho(\mathbf{x}))\) is obtained by minimizing the local penalty function measuring the difference between \(f\) and \(\psi_{\mathbf{x}}\). It is defined by
\[q=\int K_{\mathbf{b}}(\mathbf{v}-\mathbf{x})[\psi(\mathbf{v},\mathbf{\theta}(\mathbf{x}))-\ln\{\psi( \mathbf{v},\mathbf{\theta}(\mathbf{x}))\}f(\mathbf{v})]\mathrm{d}\mathbf{v} \tag{2}\]
where \(K_{\mathbf{b}}(\mathbf{v}-\mathbf{x})=(b_{1}b_{2})^{-1}K_{1}(b_{1}^{-1}(v_{1}-x))K_{2}(b_{ 2}^{-1}(v_{2}-y))\) is a product kernel with bandwidths \(\mathbf{b}=(b_{1},b_{2})\). As is seen in Hjort and Jones (1996, pp 1623-1624), the expression in (2) can be interpreted as a locally weighted Kullback-Leibler distance from \(f\) to \(\psi(\cdot,\mathbf{\theta}(\mathbf{x}))\). Hence, the minimizer \(\mathbf{\theta}_{\mathbf{b}}(\mathbf{x})\) (which also depends on \(K\)) should be a solution of
\[\int K_{\mathbf{b}}(\mathbf{v}-\mathbf{x})\frac{\partial}{\partial\theta_{j}}[\ln\{\psi( \mathbf{v},\mathbf{\theta}(\mathbf{x}))\}f(\mathbf{v})-\psi(\mathbf{v},\mathbf{\theta}(\mathbf{x}))] \mathrm{d}\mathbf{v}=0,\ \ j=1,\ldots,5. \tag{3}\]
In the first step, we define the population value \(\mathbf{\theta}_{\mathbf{b}}(\mathbf{x})\) as the minimizer of (2), assuming that there is a unique solution to (3). The definition of \(\mathbf{\theta}_{\mathbf{b}}(\mathbf{x})\) and the assumption of uniqueness are essentially identical to those used in Hjort and Jones (1996) for more general parametric families of densities.
In the next step, we let \(\mathbf{b}\to\mathbf{0}\) and consider the limiting value \(\mathbf{\theta}(\mathbf{x})=\lim_{\mathbf{b}\to\mathbf{0}}\mathbf{\theta}_{\mathbf{b}}(\mathbf{x})\). This is in fact considered indirectly by Hjort and Jones (1996) and more directly in Tjostheim and Hufthammer (2013), both using Taylor expansion arguments. In the following we assume that there exists a limiting value \(\mathbf{\theta}(\mathbf{x})\) independent of \(\mathbf{b}\) and \(K\).
#### 2.1.2 Estimation and likelihood function
When estimating \(\mathbf{\theta}(\mathbf{x})\) and \(\mathbf{\theta}_{\mathbf{b}}(\mathbf{x})\) we have to use a neighborhood with a finite bandwidth, which is in analogy to nonparametric density estimation. The estimate \(\widehat{\mathbf{\theta}}(\mathbf{x})=\widehat{\mathbf{\theta}}_{\mathbf{b}}(\mathbf{x})\) is then obtained from maximizing a local likelihood.
Given observations \(\mathbf{R}_{1},\ldots,\mathbf{R}_{T}\), the local log likelihood is determined by
\[L(\mathbf{R}_{1},\ldots,\mathbf{R}_{T},\mathbf{\theta}(\mathbf{x}))=T^{-1}\sum_{i}K _{\mathbf{b}}(\mathbf{R}_{i}-\mathbf{x})\log\psi(\mathbf{R}_{i},\mathbf{\theta}(\mathbf{x}))\] \[-\int K_{b}(\mathbf{v}-\mathbf{x})\psi(\mathbf{v},\mathbf{\theta}(\mathbf{x}))\mathrm{ d}\mathbf{v}. \tag{4}\]
When \(\mathbf{b}\to\infty\), the last term has 1 as its limiting value, and the likelihood reduces to the ordinary global likelihood. This last term is essential, as it implies that \(\psi(\mathbf{x},\mathbf{\theta}_{\mathbf{b}}(\mathbf{x}))\) is not allowed to stray far away from \(f(\mathbf{x})\) as \(\mathbf{b}\to\mathbf{0}\). Indeed, with the notation
\[u_{j}(\cdot,\mathbf{\theta})\stackrel{{\mathrm{def}}}{{=}}\frac{ \partial}{\partial\theta_{j}}\log\psi(\cdot,\mathbf{\theta}), \tag{5}\]
and assuming \(\mathrm{E}(K_{\mathbf{b}}(\mathbf{R}_{i}-\mathbf{x})\log\psi(\mathbf{R}_{i},\mathbf{\theta}_{\mathbf{ b}}(\mathbf{x})))<\infty\), we have almost surely
\[\frac{\partial L}{\partial\theta_{j}}=T^{-1}\sum_{i}K_{\mathbf{b}}( \mathbf{R}_{i}-\mathbf{x})u_{j}(\mathbf{R}_{i},\mathbf{\theta}_{\mathbf{b}}(\mathbf{x}))\] \[-\int K_{\mathbf{b}}(\mathbf{v}-\mathbf{x})u_{j}(\mathbf{v},\mathbf{\theta}_{\mathbf{b}}( \mathbf{x}))\psi(\mathbf{v},\mathbf{\theta}_{\mathbf{b}}(\mathbf{x}))\mathrm{d}\mathbf{v}\] \[\to\int K_{\mathbf{b}}(\mathbf{v}-\mathbf{x})u_{j}(\mathbf{v},\mathbf{\theta}_{\mathbf{b} }(\mathbf{x}))[f(\mathbf{v})-\psi(\mathbf{v},\mathbf{\theta}_{\mathbf{b}}(\mathbf{x}))]\mathrm{d}\mathbf{v}. \tag{6}\]
by the law of large numbers, or by the ergodic theorem in the time series case. Setting the expression in the first line of (6) equal to zero yields the local maximum likelihood estimate \(\widehat{\mathbf{\theta}}_{\mathbf{b}}(\mathbf{x})\) (\(=\widehat{\mathbf{\theta}}(\mathbf{x})\)) of the population value \(\mathbf{\theta}_{\mathbf{b}}(\mathbf{x})\) (and \(\mathbf{\theta}(\mathbf{x})\), which satisfies (3)). Hence, for each point \(\mathbf{x}\), also referred to as a gridpoint in the sequel, we obtain an estimate of the correlation in that point, \(\hat{\rho}(\mathbf{x})\), which we call the local Gaussian correlation. Maximizing the likelihood in several gridpoints thus results in several estimates of the local Gaussian correlation, which constitute what we call an LGC map in the sequel. We are thus able to describe any potential asymmetric dependence patterns by this map of locally estimated correlations.
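To make this estimation step concrete, the following is a minimal Python sketch (with function names of our own choosing) of maximising the local likelihood (4) at a single gridpoint. It assumes a Gaussian product kernel, in which case the penalty integral \(\int K_{\mathbf{b}}(\mathbf{v}-\mathbf{x})\psi(\mathbf{v},\mathbf{\theta}(\mathbf{x}))\mathrm{d}\mathbf{v}\) has the closed form of a bivariate Gaussian density with covariance \(\mathbf{\Sigma}(\mathbf{x})+\operatorname{diag}(b_{1}^{2},b_{2}^{2})\) evaluated at \(\mathbf{x}\); looping the function over a grid of points produces the LGC map described above.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal, norm

def local_gaussian_corr(R, x, b):
    """Estimate the local Gaussian correlation rho(x) at one gridpoint
    x = (x1, x2) from a bivariate sample R of shape (T, 2), using a
    Gaussian product kernel with bandwidths b = (b1, b2)."""
    # kernel weights K_b(R_i - x)
    w = norm.pdf(R[:, 0], loc=x[0], scale=b[0]) * norm.pdf(R[:, 1], loc=x[1], scale=b[1])

    def neg_local_loglik(theta):
        mu1, mu2, s1, s2, rho = theta
        cov = np.array([[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]])
        fit_term = np.mean(w * multivariate_normal([mu1, mu2], cov).logpdf(R))
        # Gaussian kernel: int K_b(v - x) psi(v, theta) dv = N(x; mu, cov + diag(b^2))
        penalty = multivariate_normal([mu1, mu2], cov + np.diag(np.square(b))).pdf(x)
        return -(fit_term - penalty)

    theta0 = [np.average(R[:, 0], weights=w), np.average(R[:, 1], weights=w),
              np.std(R[:, 0]), np.std(R[:, 1]), 0.0]
    bounds = [(None, None), (None, None), (1e-3, None), (1e-3, None), (-0.99, 0.99)]
    res = minimize(neg_local_loglik, theta0, method="L-BFGS-B", bounds=bounds)
    return res.x[-1]   # the local Gaussian correlation rho_hat(x)
```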
An asymptotic theory has been developed in Tjostheim and Hufthammer (2013) for \(\widehat{\mathbf{\theta}}_{\mathbf{b}}(\mathbf{x})\) for the case that \(\mathbf{b}\) is fixed and for \(\widehat{\mathbf{\theta}}(\mathbf{x})\) in the case that \(\mathbf{b}\to\mathbf{0}\). The first case is much easier to treat than the second one. In fact, for the first case the theory of Hjort and Jones (1996) can be used almost directly, although it is extended to the ergodic time series case in Tjostheim and Hufthammer (2013). In the case that \(\mathbf{b}\to\mathbf{0}\), this leads to a slow convergence rate of \((n(b_{1}b_{2})^{3})^{-1/2}\), which is the same convergence rate as for the estimated dependence function treated in Jones (1996).
As already mentioned, the local estimates depend on the smoothing devices - the bandwidth vector \(\mathbf{b}\) and a specific choice of the kernel function \(K\). There are various ways of selecting the bandwidth parameter \(\mathbf{b}\) (see, e.g. Otneim and Tjostheim, 2018; Berentsen and Tjostheim, 2014; Stove et al., 2014).
#### 2.1.3 Multivariate case
We have thus far concentrated on the bivariate case, in which we estimate a single local Gaussian correlation map based on a bivariate sample, and in the present paper we restrict ourselves to this situation. However, it is in principle straightforward to extend to the case of more than two variables. Assume that we observe a multivariate sample \(\mathbf{R}_{t}=\{R_{1t},\ldots,R_{pt}\}\), \(t=1,\ldots,T\) with dimension \(p>2\). We can then estimate the \(p\times p\) local correlation matrix \(\mathbf{\rho}(\mathbf{x})=\{\rho_{k\ell}(\mathbf{x})\}\), \(1\leq k<\ell\leq p\), as well as the local means and local variances \(\mathbf{\mu}(\mathbf{x})=\{\mu_{1}(\mathbf{x}),\ldots,\mu_{p}(\mathbf{x})\}\) and \(\mathbf{\sigma}(\mathbf{x})=\{\sigma_{1}(\mathbf{x}),\ldots,\sigma_{p}(\mathbf{x})\}\) by maximizing the local likelihood function (4). The precision of such estimates, however, deteriorates quickly as the dimension \(p\) grows, due to the curse of dimensionality.
But, a simplifying technique that reduces the complexity of this estimation problem, introduced by Otneim and Tjostheim (2017), is to estimate each local correlation \(\rho_{k\ell}(\mathbf{z})\) as a bivariate problem by only considering the corresponding pair of observation vectors \(\{R_{kt},R_{\ell t}\}\), \(t=1,\ldots,T\). Thus, we reduce the \(p\)-variate problem of estimating the local parameters depending on all coordinates to a series of bivariate problems of estimating pairwise local correlations depending on their respective pairs of coordinates. In this way, we obtain a simplification that is analogous to an additive approximation in nonparametric regression. For more details regarding this pairwise modeling approach, see Otneim and Tjostheim (2017).
### Regime-switching models - hidden Markov models
In this paper, we employ a regime-switching model - also known by the name hidden Markov model (HMM) - to allow for switching between different regimes (or states, used interchangeably). First used in speech recognition (see, e.g., Baum and Petrie, 1966; Fredkin and Rice, 1992; Gales and Young, 2008), these models are now employed in ecology (McClintock et al., 2020), biology and bioinformatics (Schadt et al., 1998; Durbin, 1998; Eddy, 1998), finance (Hamilton, 1989; Quandt, 1958; Ang and Timmermann, 2012), and many other fields.
The two commonly used estimation procedures for HMMs are Direct Numerical Maximization (DNM) of the likelihood, as introduced by Turner (2008) and detailed by MacDonald and Zucchini (1997), and Expectation Maximization (EM)-type algorithms, as introduced by Baum et al. (1970) and Dempster et al. (1977). Each procedure possesses advantages and downsides; for example, a main difference is the robustness of the EM algorithm towards poor initial values. More details and a comparison of both approaches are discussed in Bulla and Berzel (2008), who also describe a hybrid approach combining both algorithms. For simplicity, we choose to adopt the DNM approach, as it is easier to adapt to different situations. In addition, we employ the Template Model Builder (TMB, Kristensen et al., 2015) package in R to accelerate the estimation process. We refer to Bacri et al. (2022, 2023) for a tutorial on TMB with HMMs using DNM, along with detailed Poisson, Gaussian, and multivariate Gaussian examples and an overview of suitable optimization algorithms.
#### 2.2.1 Definition
Roughly speaking, HMMs are characterized by switching between \(C\) so-called conditional distributions (or regimes) in time, where the switching process is governed by a latent Markov chain. Similarly to the notation from Section 2.1, we let \(\{\mathbf{R_{t}}:t=1,\ldots,T\}\) and \(\{S_{t}:t=1,\ldots,T\}\) denote respectively an observed multivariate time series and the states of a hidden (unobserved) Markov chain, where \(t\) denotes the (time) index ranging from one to \(T\). For the purposes of this paper, the hidden Markov chain is assumed homogeneous, irreducible and aperiodic.
We define our \(C\)-state Gaussian HMM through bivariate Gaussian conditional distributions, i.e., the probability density function equals
\[p_{i}(\mathbf{r})=\text{P}(\mathbf{R_{t}}=\mathbf{r}|S_{t}=i)=\frac{1}{2\pi\sqrt{\det(\bm{\Sigma}_{i})}}\exp\left(-\frac{1}{2}(\mathbf{r}-\mathbf{\mu}_{i})^{\prime}\mathbf{\Sigma}_{i}^{-1}(\mathbf{r}-\mathbf{\mu}_{i})\right),\]
with parameters \((\mathbf{\mu}_{i},\mathbf{\Sigma}_{i})\), where \(i=1,\ldots,C\). Any conditional distribution could be used, but as we are mainly interested in the difference in dependence structures across regimes, we use the Gaussian distribution for convenience. The latent Markov chain of the HMM is characterized by a transition probability matrix (TPM) that we denote \(\mathbf{\Gamma}=\{\gamma_{ij}\}\). We assume ergodicity of the chain, which implies existence and uniqueness of the stationary distribution as the limiting distribution, which we denote \(\mathbf{\delta}\). For more details on these results, we refer to Grimmett and Stirzaker (2001, Lemma 6.3.5 on p. 225 and Theorem 6.4.3 on p. 227) and Feller (1968, p. 394).
#### 2.2.2 Likelihood function
Estimation of the HMM via DNM requires computation of the likelihood. Let \(\mathbf{R}^{(t)}=\{\mathbf{R}_{1},\ldots,\mathbf{R}_{t}\}\) and \(\mathbf{r}^{(t)}=\{\mathbf{r}_{1},\ldots,\mathbf{r}_{t}\}\) denote the 'history' of the observed process \(\mathbf{R}_{t}\) and of the observations \(\mathbf{r}_{t}\), respectively, with \(t\) denoting the time ranging from one to \(T\). Moreover, \(\mathbf{\zeta}\) denotes the vector of model parameters. With this notation, the likelihood of the observations can be written as
\[L(\mathbf{\zeta})=\text{P}(\mathbf{R}^{(T)}=\mathbf{r}^{(T)})=\mathbf{\delta}\text{P}(\mathbf{r}_{ 1})\mathbf{\Gamma}\text{P}(\mathbf{r}_{2})\mathbf{\Gamma}\text{P}(\mathbf{r}_{3})\ldots\mathbf{ \Gamma}\text{P}(\mathbf{r}_{T})\mathbf{1}^{\prime}, \tag{7}\]
where the \(C\) conditional probability density functions evaluated at \(\mathbf{r}\) can be represented as the diagonal matrix
\[\text{P}(\mathbf{r})=\begin{pmatrix}p_{1}(\mathbf{r})&&0\\ &p_{2}(\mathbf{r})&&\\ &&\ddots&\\ 0&&p_{C}(\mathbf{r})\end{pmatrix},\]
and \(\mathbf{1}\) denotes a vector of ones and \(\mathbf{\delta}\) denotes the stationary distribution. When \(\mathbf{r}\) is a missing observation, one can set \(p_{i}(\mathbf{r})=1\) \(\forall i\), hence \(\text{P}(\mathbf{r})\) becomes the identity matrix, as explained by Zucchini et al. (2016, p. 40). We choose to set the first term of the likelihood - the so-called initial distribution - to \(\mathbf{\delta}\). Note, however, that it is also possible to freely estimate the initial distribution (Zucchini et al., 2016, Section 2.3.2, Proposition 1, p. 37).
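As an illustration of how (7) can be evaluated, the sketch below computes the log-likelihood of a bivariate Gaussian HMM with a scaled forward recursion to avoid numerical underflow, and obtains the stationary distribution from \(\mathbf{\delta}\mathbf{\Gamma}=\mathbf{\delta}\) and \(\mathbf{\delta}\mathbf{1}^{\prime}=1\). This is a hypothetical Python stand-in for the TMB/R implementation used in the paper, with function names of our own choosing.

```python
import numpy as np
from scipy.stats import multivariate_normal

def stationary_dist(Gamma):
    """Stationary distribution delta solving delta @ Gamma = delta, sum(delta) = 1."""
    C = Gamma.shape[0]
    A = np.vstack([np.eye(C) - Gamma.T, np.ones(C)])
    rhs = np.append(np.zeros(C), 1.0)
    return np.linalg.lstsq(A, rhs, rcond=None)[0]

def hmm_loglik(r, Gamma, means, covs):
    """Log-likelihood of Eq. (7) for observations r of shape (T, 2), a C x C
    transition probability matrix Gamma, and state-wise means/covariances."""
    delta = stationary_dist(Gamma)
    C = Gamma.shape[0]
    # conditional densities p_i(r_t) for every state i and time point t
    P = np.column_stack([multivariate_normal(means[i], covs[i]).pdf(r) for i in range(C)])
    phi = delta * P[0]
    loglik = np.log(phi.sum())
    phi = phi / phi.sum()
    for t in range(1, len(r)):
        phi = (phi @ Gamma) * P[t]      # one step of the forward recursion
        loglik += np.log(phi.sum())     # accumulate the scaling factors
        phi = phi / phi.sum()
    return loglik
```

In a DNM setting, this is the function a numerical optimizer would maximize over the transition probabilities and the state-wise means and covariances, after a suitable reparameterization to respect the parameter constraints.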
#### 2.2.3 State inference
Once an HMM has been estimated, it is possible to determine the sequence of most likely states of the data set. These states can be inferred by a method known as local decoding through so-called smoothing probabilities, as detailed in Zucchini et al. (2016, Chapter 5). Let us define the so-called forward
\[\alpha_{t}(i)=\text{P}(\mathbf{R}^{(t)}=\mathbf{r}^{(t)},S_{t}=i)\]
and backward probabilities
\[\beta_{t}(i)=\text{P}(R_{t+1}=r_{t+1},R_{t+2}=r_{t+2},\ldots,R_{T}=r_{T}|S_{t}=i).\]
Then, the smoothing probabilities \(\text{P}(S_{t}=i|\mathbf{R}^{(T)}=\mathbf{r}^{(T)})\) equal
\[\text{P}(S_{t}=i|\mathbf{R}^{(T)}=\mathbf{r}^{(T)})=\frac{\alpha_{t}(i)\beta_{t}(i)}{L(\mathbf{\zeta})}\]
for \(i=1,\ldots,C\) and \(t=1,\ldots,T\), and correspond to the conditional probability of being in state \(i\) at time \(t\) given all observations. The most probable state \(i^{*}_{t}\) at time \(t\) then directly follows from the maximal smoothing probability over all possible states through
\[i^{*}_{t}=\operatorname*{arg\,max}_{i\in\{1,\ldots,C\}}\text{P}(S_{t}=i|\mathbf{R}^{(T)}=\mathbf{r}^{(T)}).\]
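The smoothing probabilities and the local decoding step can be sketched as follows, again as an illustrative Python implementation rather than the one used in the paper; scaled forward and backward recursions are used so that the products \(\alpha_{t}(i)\beta_{t}(i)\) remain numerically stable.

```python
import numpy as np
from scipy.stats import multivariate_normal

def local_decoding(r, delta, Gamma, means, covs):
    """Smoothing probabilities P(S_t = i | R^(T) = r^(T)) and the most
    probable state at every time point (local decoding)."""
    T, C = len(r), len(delta)
    P = np.column_stack([multivariate_normal(means[i], covs[i]).pdf(r) for i in range(C)])
    alpha, scale = np.zeros((T, C)), np.zeros(T)
    alpha[0] = delta * P[0]
    scale[0] = alpha[0].sum()
    alpha[0] /= scale[0]
    for t in range(1, T):                        # scaled forward recursion
        alpha[t] = (alpha[t - 1] @ Gamma) * P[t]
        scale[t] = alpha[t].sum()
        alpha[t] /= scale[t]
    beta = np.ones((T, C))
    for t in range(T - 2, -1, -1):               # scaled backward recursion
        beta[t] = (Gamma @ (P[t + 1] * beta[t + 1])) / scale[t + 1]
    smooth = alpha * beta
    smooth /= smooth.sum(axis=1, keepdims=True)  # P(S_t = i | all observations)
    return smooth, smooth.argmax(axis=1)
```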
## 3 Comparing dependence across regimes
The main purpose of the suggested approach is to combine a regime-switching model with the local Gaussian correlation to describe regimes in financial returns, and furthermore to test whether the dependence structure between the returns differs across the regimes. The method is a step-wise procedure, which we explain in the following.
We consider a bivariate sample \(\mathbf{R}_{t}=\{R_{1t},R_{2t}\}\), where \(t=1,\ldots,T\). Each bivariate observation at time \(t\) is classified into a regime \(c_{t}\in\{1,\ldots,C\}\) with an HMM fitted to the whole sample. The classified bivariate observation at time \(t\) is thus denoted \(\mathbf{R}^{c_{t}}_{t}=\{R^{c_{t}}_{1t},R^{c_{t}}_{2t}\}\), where \(c_{t}\) denotes the regime of the specific observation at time \(t\). Observations with equal regimes constitute a subset of \(\mathbf{R}\), i.e. \(\mathbf{R}^{c_{t}}\subset\mathbf{R}\). Our goal is to examine the dependency structure over the different regimes \(c_{t}\) in \(\mathbf{R}^{c_{t}}\) with the LGC measure, as described in Section 2.1. Note that one usually needs to perform a filtration of the data to remove dependence over time and to remove volatility effects; this is further elaborated in Section 3.2.
For all observations within a specific regime \(c\), we can estimate the \(2\times 2\) local correlation matrices \(\mathbf{\rho}_{c}(\mathbf{x})\) in the grid point \((x,y)\) by maximizing the local likelihood function (4).
Furthermore, using several gridpoints, we estimate the LGC map for each of the \(C\) different regimes. We can therefore proceed to examine whether the dependency structure for the different regimes of the time series are equal or not. Hence, we propose a bootstrap test procedure, and we note that similar test procedures are often used in a nonparametric setting, e.g. for testing difference between quantities in nonparametric regressions, see e.g. Hall and Hart (1990) or Vilar-Fernandez et al. (2007).
### Bootstrap test
To accommodate any asymmetric dependence structures, we test on the entire LGC map. We use \(i,j\) as notation for specifying the gridpoints such that \(\mathbf{x}_{ij}=(x_{i},y_{j})\), where \(i=1,\ldots,n\) and \(j=1,\ldots,n\). The test we propose here is similar to Stove et al. (2014), who developed a bootstrap test for contagion between financial time series. Where Stove et al. (2014) considered the diagonal elements, \((x_{i},y_{i})\), this bootstrap procedure considers the entire grid \((x_{i},y_{j})\). Performing the test on the entire grid instead of only on the diagonal elements makes it possible to detect differences between the LGC maps of the different regimes that lie away from the diagonal. The test on the entire grid \(\mathbf{x}_{ij}\) and arbitrarily many regimes \(C\) can be formulated with the following null and alternative hypotheses
\[H_{0}:\ \mathbf{\rho}_{1}(x_{i},y_{j})=\mathbf{\rho}_{2}(x_{i},y_{j})=\ldots=\mathbf{\rho}_{C}(x_{i},y_{j})\quad\text{for}\quad i,j=1,\ldots,n\quad(\text{no difference in dependence across regimes})\] \[H_{1}:\ \mathbf{\rho}_{k}(x_{i},y_{j})\neq\mathbf{\rho}_{l}(x_{i},y_{j})\quad\text{for some pair }k\neq l\text{ and some }i,j\quad(\text{difference in dependence across regimes})\]
The bootstrap method works as follows. From the classified observations \(\{\mathbf{R}^{c_{1}}_{1},\mathbf{R}^{c_{2}}_{2}\ldots\mathbf{R}^{c_{T}}_{T}\}\), we draw randomly and with replacement a re-sample \(\{\mathbf{R}^{c_{1}*}_{1},\mathbf{R}^{c_{2}*}_{2}\ldots\mathbf{R}^{c_{T}*}_{T}\}\) and by gathering observations classified to the same regime \(c\in\{1,\ldots,C\}\), compute \(\hat{\mathbf{\rho}}^{*}_{1}(x_{i},y_{j}),\hat{\mathbf{\rho}}^{*}_{2}(x_{i},y_{j}),\ldots,\hat{\mathbf{\rho}}^{*}_{C}(x_{i},y_{j})\) on the grid \(\mathbf{x}_{ij}\) for \(i,j=1,\ldots,n\). With \(C\) regimes we can test pairwise between the different regimes. Excluding tests of a regime against itself and permutations of the pairs, there
are \(\binom{C}{2}\) relevant pairwise combinations. The test statistic we apply is based on the squared differences between the local correlation estimates over the grid \(\mathbf{x}_{ij}\). The test variable can thus be defined as follows,
\[D_{1}^{*}(k,l)=\begin{cases}\frac{1}{n^{2}}\sum\limits_{i=1}^{n}\sum\limits_{j=1 }^{n}\left[\hat{\mathbf{\rho}}_{k}^{*}(x_{i},y_{j})-\hat{\mathbf{\rho}}_{l}^{*}(x_{i},y _{j})\right]^{2}w(x_{i},y_{j})&\text{for}\quad k>l\\ 0&\text{otherwise}\end{cases}\]
where \(k,l=1,\ldots,C\) and \(w\) is a weight function to screen off parts of the local correlation or to concentrate on a certain region. Note that this does not imply disregarding any of the observations, but we choose the weight function such that the distance between the gridpoints and the observations is not too large, i.e. we avoid using an estimated local correlation in a gridpoint far away from any observations. By repeated resampling, \(D_{1}^{*}(k,l)\) is computed for these resamples and its distribution constructed (i.e. the distribution under \(H_{0}\)). From the observations, \(\{\mathbf{R}_{1}^{c},\mathbf{R}_{2}^{c}\ldots\mathbf{R}_{T}^{c}\}\), calculate \(\hat{\mathbf{\rho}}_{1}(x_{i},y_{j}),\hat{\mathbf{\rho}}_{2}(x_{i},y_{j}),\ldots,\hat{\mathbf{\rho}}_{C}(x_{i},y_{j})\) and the test statistic,
\[D_{1}(k,l)=\frac{1}{n^{2}}\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n}\left[\hat{\mathbf{\rho}}_{k}(x_{i},y_{j})-\hat{\mathbf{\rho}}_{l}(x_{i},y_{j})\right]^{2}w(x_{i},y_{j}).\]
The p-value of \(D_{1}\) with respect to this bootstrap distribution is then found, and \(H_{0}\) is rejected if it is below a chosen significance level \(\alpha\). Section 4.1 describes an example of this bootstrap test in more detail. If \(C>2\) the statistical analysis involves multiple simultaneous statistical tests, i.e. we face a multiple comparison problem with \(\binom{C}{2}\) pairwise combinations. With large \(C\), the number of pairwise tests that we have to perform to confirm/reject the hypothesis increases, and with that also the probability of observing rare events, or type I errors. As a consequence, the likelihood of incorrectly rejecting the null hypothesis increases, which we have to adjust for. A classical, but conservative, way of dealing with the multiple comparison problem has been to apply the Bonferroni correction: the original significance level \(\alpha\) is divided by the number of tests, \(\binom{C}{2}\). Thus, with the Bonferroni correction, we reject the null hypothesis for each pairwise test if the p-value is smaller than \(\alpha_{k,l}=\alpha/\binom{C}{2}\). There are other, less conservative corrections that can be applied, e.g. Duncan's multiple range test (Duncan, 1955), the Benjamini-Hochberg procedure (Benjamini and Hochberg, 1995) or the Holm-Bonferroni method (Holm, 1979). Note that with \(C=2\), there is only one combination to test, and no adjustments are required. In the empirical analysis in Section 5, we have situations where \(C>2\), and thus the p-value needs to be adjusted.
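As an illustration, a minimal sketch of the pairwise bootstrap test between regimes \(k\) and \(l\) is given below. It reuses local_gaussian_corr from the earlier sketch, and follows one reading of the resampling scheme described above, in which observation pairs are redrawn with replacement from the pooled classified sample while the original regime labels are kept at their time positions, so that the resampled statistics approximate the distribution of \(D_{1}(k,l)\) under \(H_{0}\).

```python
import numpy as np

def lgc_map(R, grid, b):
    """LGC estimates over the grid points (grid has shape (n_points, 2)),
    using local_gaussian_corr from the earlier sketch."""
    return np.array([local_gaussian_corr(R, x, b) for x in grid])

def regime_dependence_test(R, regimes, k, l, grid, w, b, B=1000, seed=0):
    """Bootstrap test of H0: equal LGC maps in regimes k and l.
    R: (T, 2) filtered returns; regimes: length-T array of HMM labels;
    w: weights for the grid points. Returns D1(k, l) and its p-value."""
    def D1(sample):
        rho_k = lgc_map(sample[regimes == k], grid, b)
        rho_l = lgc_map(sample[regimes == l], grid, b)
        return np.mean(w * (rho_k - rho_l) ** 2)

    d_obs = D1(R)
    rng = np.random.default_rng(seed)
    T = len(R)
    # resample pairs with replacement from the pooled sample, keep labels fixed
    d_star = np.array([D1(R[rng.integers(0, T, size=T)]) for _ in range(B)])
    return d_obs, np.mean(d_star >= d_obs)
```

With \(C>2\) regimes, the function would be called for each of the \(\binom{C}{2}\) regime pairs, and the resulting p-values compared with the Bonferroni-adjusted level \(\alpha/\binom{C}{2}\).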
### GARCH filtering and bandwidth selection
LGC estimation requires that the pairs \([R_{1t},R_{2t}],t=1,...,T\) are independent and identically distributed (see Tjostheim and Hufthammer (2013); Berentsen and Tjostheim (2014)). This is not always realistic, and especially the volatility may exhibit dependence in time. In this paper, we thus apply a GARCH(1,1) filtration to come closer to this assumption (see Bollerslev et al. (1992)). In the analysis presented in Section 5, we filtrate the returns with a GARCH(1,1) model with a Student t-distribution. This is also consistent with the approach of e.g. Forbes and Rigobon (2002) and Stove and Tjostheim (2014) in their studies of contagion. This alleviates, to a sufficient degree, the time dependence in the data, and makes it more suitable for the proposed dependency test.
In Section 4 we perform simulation studies to check the finite sample performance of the bootstrap test; however, for computational reasons we restrict ourselves to two regimes. We will look at both the error in significance level and the power of the test. As described in Section 2, the local Gaussian correlation estimator \(\hat{\rho}(\mathbf{r})\) depends on two smoothing devices, the bandwidths \(\mathbf{b}=(b_{1},b_{2})\), and to a lesser degree the kernel used. In the simulations and the empirical analysis we use the Gaussian kernel, and choose the bandwidths using a simple rule of thumb -- the global standard deviation times a constant equal to 1.1. This approach gives reasonable results and was for instance used in Stove et al. (2014); see also Tjostheim and Hufthammer (2013) and Berentsen and Tjostheim (2014) for further discussions regarding bandwidth selection. Algorithm 1 summarizes the necessary steps to properly perform the test for asymmetric dependence across regimes, see Appendix C.
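For concreteness, the filtration and the rule-of-thumb bandwidth choice could look as follows; this is a sketch using the Python arch package as a stand-in for whatever GARCH implementation is used in the paper's R-based analysis, and the function names are our own.

```python
import numpy as np
from arch import arch_model

def garch_filter(returns):
    """GARCH(1,1) filtration with Student-t innovations, returning the
    standardized residuals passed on to the LGC estimation. Returns are
    assumed to be in percent (e.g. 100 * log-returns) for numerical stability."""
    am = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1, dist="t")
    res = am.fit(disp="off")
    return np.asarray(res.resid / res.conditional_volatility)

def rule_of_thumb_bandwidths(z1, z2, c=1.1):
    """Bandwidths chosen as 1.1 times the global standard deviation of each series."""
    return c * np.std(z1, ddof=1), c * np.std(z2, ddof=1)
```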
## 4 Simulation studies
For the first two simulation set-ups, that is, the study of the significance level and the power study, we simulate observations with known underlying distributions and dependencies. In the last study, we still simulate from two different distributions; however, we now also classify the observations into two regimes with an HMM, and investigate the power of the test both with the true and the predicted regime classifications. An HMM will misclassify some observations, and the purpose of this study is to evaluate how this misclassification impacts the power of the test.
### Study of error in significance level
The simulation study for examining the significance level of the proposed test is as follows. The same data generating process (DGP) is used for both regimes 1 and 2, hence \(H_{0}\) is true. We use six different DGPs; for every DGP we use two Gaussian marginal distributions, each with a mean equal to zero and a standard deviation equal to four, but with a different copula. The six different copulas are: a Clayton copula with parameter \(\theta=1\), a Clayton copula with parameter \(\theta=2\), a Gaussian copula with parameter \(\rho=-0.5\), a Gaussian copula with parameter \(\rho=0.3\), a Gumbel copula with parameter \(\theta=2\) and, finally, a Gumbel copula with parameter \(\theta=3\) (models 1-6 in Table 1). Hence this set-up closely resembles the set-up in Stove et al. (2014), except for the Gaussian copula case with a negative parameter.
The first two models and model 4 are typical models for bivariate equity returns, see e.g. Okimoto (2008). The negative dependence model 3 is intended to mimic the often negative relationship observed between bond and equity returns, while models 5 and 6 are included in order to examine how the proposed test behaves in a right-tail dependent environment.
We generate \(M=1000\) independent sets of data from the six models, where each data set is of the form \(\{d_{1},...,d_{T}\}\). We set \(T=400\) and let regime 1 consist of 300 observations, while regime 2 consists of 100 observations, since in practice one of the regimes will typically represent a more volatile/bear market period that is shorter than the other regime, which represents normal market conditions. Note that for simplicity, we do not perform step 1 in the step-wise procedure above, i.e. we do not fit the regime-switching model to the observations, as we prefer examining the level property of the test under no uncertainty regarding the classification of the observations into two regimes.
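A sketch of how such data sets can be generated is given below: the Clayton copula is sampled via the standard gamma-frailty (Marshall-Olkin) construction and the Gaussian copula directly from a bivariate normal, after which the uniforms are mapped to the Gaussian marginals; sampling from the Gumbel copula, which requires positive stable frailties, is omitted for brevity. Function names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def clayton_sample(theta, n, rng):
    """n pairs from a Clayton copula via the gamma-frailty construction."""
    v = rng.gamma(shape=1.0 / theta, scale=1.0, size=n)
    e = rng.exponential(size=(n, 2))
    return (1.0 + e / v[:, None]) ** (-1.0 / theta)

def gaussian_copula_sample(rho, n, rng):
    """n pairs from a Gaussian copula with correlation rho."""
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    return norm.cdf(z)

def with_gaussian_marginals(u, mean=0.0, sd=4.0):
    """Map copula uniforms u in (0, 1) to N(mean, sd^2) marginals."""
    return mean + sd * norm.ppf(u)

# One simulated data set for the level study (H0 true: same DGP in both regimes)
rng = np.random.default_rng(1)
regime1 = with_gaussian_marginals(clayton_sample(2.0, 300, rng))  # 300 obs, regime 1
regime2 = with_gaussian_marginals(clayton_sample(2.0, 100, rng))  # 100 obs, regime 2
```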
For each model, and for a given set of data, the test statistic \(D_{1}\) is calculated. Bootstrap tests of nominal level 0.01, 0.05 and 0.10 are conducted based on \(B=1000\) bootstrap samples from each of the given data sets, as described above. The null hypothesis is rejected if the proportion of bootstrap statistics exceeding \(D_{1}\) is less than or equal to the appropriate nominal level. Note that the same simulations were used to check all three nominal levels.
The weight function is in this case chosen such that it corresponds to the range defined by the \(5\%\) lower and \(95\%\) upper percentiles, i.e. the weight equals 1 in this interval and zero outside. Hence, it can vary across the 1000 simulated data sets.
The empirical significance level of the test is reported in Table 1, and the results show that the empirical level of the bootstrap test is consistently close to the nominal level for all models.
### Study of power
The same setup as for the study of the level is used for the study of power, that is, the number of gridpoints and the weight function are the same. The main difference is the data generating process. For regime 1 we use a Gaussian copula with \(\rho=0.5\) and two Gaussian marginals, both with mean equal to one and standard deviation equal to four. For the second regime, we use six different models. That is, the DGP is still two Gaussian marginal distributions with mean zero and standard deviation equal to four; however, we apply different copulas, as listed in Table 2. For all the scenarios the \(H_{1}\) hypothesis is true.
In practice, we generate 1000 independent sets of data for each model, each with \(T=400\) observations. For the first regime we use 300 observations, while for the second regime we use 100 observations. This is a similar setup as for the level study. The empirical power is calculated for the three nominal levels \(0.01,0.05\) and \(0.1\). The reported
\begin{table}
\begin{tabular}{l l l l} \hline \hline \multirow{2}{*}{Model} & \multicolumn{3}{c}{Nominal level (\(\alpha\))} \\ \cline{2-4} & 0.01 & 0.05 & 0.1 \\ \hline
1. Clayton copula, \(\theta\) = 1 & 0.017 & 0.056 & 0.102 \\
2. Clayton copula, \(\theta\) = 2 & 0.012 & 0.059 & 0.116 \\
3. Gaussian copula, \(\rho\) = -0.5 & 0.011 & 0.045 & 0.094 \\
4. Gaussian copula, \(\rho\) = 0.3 & 0.007 & 0.058 & 0.116 \\
5. Gumbel copula, \(\theta\) = 2 & 0.019 & 0.063 & 0.110 \\
6. Gumbel copula, \(\theta\) = 3 & 0.011 & 0.054 & 0.102 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Empirical level of the bootstrap test in the Monte Carlo study. The models correspond to the DGP in both regimes. Each table entry is based on 1000 replications, each with 400 observations.
power for the different models and nominal levels is shown in Table 2. In most of the cases examined, the power is acceptable. In particular, for cases 3 and 6 the power is excellent.
However, from Table 2, we observe that case 1, the Clayton copula with \(\theta=2\), and case 5, the Gumbel copula with \(\theta=2\), have much lower power than the other models. The global correlations of these models are approximately \(\rho=0.68\) and \(\rho=0.70\), respectively. Both models have a correlation that is similar to that of the model one is comparing against under \(H_{0}\), hence the power naturally decreases. However, there are ways to improve the power in these cases. For instance, a refinement of the grid for calculation of the LGC could be a possible approach to achieve better power. Stove et al. (2014) also experienced decreased power for the same models in their test for contagion. The choice of grid size and the fact that we test on the entire grid, and not only on the diagonal, are possible causes that can explain the differences observed between our results and the results in Stove et al. (2014). Performing the test on a subset, e.g. only focusing on the lower tail, would certainly improve the power.
Overall, based on the results from both the level and power study, we conclude that the proposed bootstrap test performs as expected for these experiments. The test shows good level and power properties, under the assumption of no misclassification of the observations into the two regimes. The results furthermore indicate that the test is valid.
### Study of power with HMM classification
We study how the proposed test performs when we use a Gaussian multivariate HMM to classify the observations into two regimes. In principle, any conditional distribution could be used, but as we are mainly interested in the difference in dependence structures across regimes, we use the Gaussian distribution for convenience. By using an HMM, we introduce errors due to misclassification, and we want to assess how the power of the test is affected by potentially wrong regime classifications. To examine this, we design a simulation study where the DGP for the first regime is a Gaussian copula with \(\rho=0.5\) and Gaussian marginals with \(\mu=0\) and \(\sigma=3\), while the second regime consists of a Clayton copula with \(\theta=3\) and Gaussian marginals with \(\mu=0\) and \(\sigma=5\).
We generate 500 data sets, each with 500 observations. For each of the 500 data sets we fit a bivariate Gaussian HMM with TMB in line with Bacri et al. (2022). Furthermore, we use the forward-backward algorithm (Zucchini et al., 2016) to classify each observation into one of the two regimes for each of the data sets. Figure 1 shows one of the simulated data sets, where the left plot presents the observations with the correct regimes and the right plot shows how the observations are classified by the HMM into the two regimes. The overall classification accuracy across all 500 data sets is approximately 79 \(\%\), see the confusion matrix in (8). From Figure 1 we observe that the model is quite good at identifying the correct regime in both tails. In the center of the distribution, however, the means, variances and dependency of the two regimes are quite similar, and the HMM struggles somewhat more to classify the observations correctly.
The confusion matrix is given by
\[\begin{array}{l}\text{True regime 1}\\ \text{True regime 2}\end{array}\left[\begin{array}{cc}68.5\%&6.8\%\\ 14.2\%&10.5\%\end{array}\right], \tag{8}\]
showing that the HMM models across all 500 simulated data sets have relatively good prediction capabilities when it comes to correctly classifying the observations.
\begin{table}
\begin{tabular}{l l l l} \hline \hline \multirow{2}{*}{Model under \(H_{1}\)} & \multicolumn{3}{c}{Nominal level (\(\alpha\))} \\ \cline{2-4} & 0.01 & 0.05 & 0.1 \\ \hline
1. Clayton copula, \(\theta\) = 2 & 31.9 & 68 & 82.2 \\
2. Clayton copula, \(\theta\) = 3 & 82.1 & 97.8 & 99.7 \\
3. Gaussian copula, \(\rho\) = -0.5 & 100 & 100 & 100 \\
4. Gaussian copula, \(\rho\) = 0.8 & 73.6 & 93.5 & 96.8 \\
5. Gumbel copula, \(\theta\) = 2 & 24.1 & 53.7 & 68.9 \\
6. Gumbel copula, \(\theta\) = 3 & 96.3 & 99.4 & 99.9 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Empirical power (times 100) of the bootstrap test in the Monte Carlo study. The models under \(H_{1}\) correspond to the DGP in two different regimes. The DGP in the first regime is a Gaussian copula with \(\rho=0.5\) and with Gaussian marginals. Each table entry is based on 1000 replications, each with 400 observations.
After the classification of the observations into two regimes by the fitted HMM for each of the 500 simulated data sets, we calculate the LGC and apply the asymmetric dependency test similarly as in Section 4.2 for each realization. For comparison, we perform the test twice for each data set, that is, on the observations arising from the predicted regimes and on the observations from the true regimes. Figure 2 shows the LGC-map of one of the data sets generated, using the observations from the true regimes on the left plot, while using the observations from the predicted regimes on the right plot. We observe a significant difference in the estimated LGC map when comparing regime 1 and regime 2 for both models, as expected from the DGP. However, we observe only minor differences in the LGC maps when comparing the map based on the observations using the true regimes versus the predicted regimes. This is indeed a positive finding, as it implies that the fitting of the HMM and the corresponding classification of the observations into the two regimes, are only marginally impacting the estimated LGC.
The power of the test from the two scenarios, both based on the true and on the predicted observations are shown in Table 3.
The 500 HMMs misclassify approximately \(21\%\) of the observations. Due to this misclassification we observe a reduction of power in the cases with predicted regimes compared with the cases using the true regimes. Depending on the nominal level, the relative reduction in power is \(32.8\%\), \(17.7\%\), and \(12.6\%\) for nominal levels \(0.01\), \(0.05\) and \(0.1\), respectively. However, the power is still acceptable, and we conclude that the test performs reasonably well also in the case where regime classification is performed.
Figure 1: True vs. predicted regimes by a Gaussian HMM for one of the simulated data sets.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Model under \(H_{1}\)} & \multirow{2}{*}{regime} & \multicolumn{3}{c}{Nominal level (\(\alpha\))} \\ \cline{3-5} & & 0.01 & 0.05 & 0.1 \\ \hline
1. Clayton copula, \(\theta\) = 3 & True & 96.2 & 99.4 & 99.8 \\
2. Clayton copula, \(\theta\) = 3 & Predicted & 64.6 & 81.8 & 87.2 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Empirical power (times 100) of the dependency bootstrap test in the Monte Carlo study.
## 5 Empirical analysis
In this section we illustrate our approach on two real data sets. We want to assess and find differences in the LGC between two financial time series. For stock indices, e.g. S&P500 vs. FTSE100, it is observed that in a falling market there is a different dependence structure than in more stable periods (see e.g. Okimoto (2008)). To identify stable versus more unstable regimes we can use an HMM, as outlined in Section 2.2. Our objective is, given a set of regimes obtained from a classification with an HMM, to identify whether or not the LGCs in the regimes are significantly different using the proposed bootstrap test. In the first empirical analysis we use the well-known financial time series of the US stock index, the S&P500, and the UK index, the FTSE100. Secondly, we investigate weekly data from the S&P500 and US bonds with 10-year maturity (BMUS10Y).
### S&P500 and FTSE100
The combination of the US index S&P500 and the UK index FTSE100 has been used in numerous analyses, e.g. (Longin and Solnik, 2001; Okimoto, 2008; Stove et al., 2014). The time series consist of 9006 daily observations from \(1987/07/09\) to \(2022/11/01\). In Table 4 an overview of descriptive statistics for both the S&P500 and FTSE100 returns is given. The data exhibit non-normality, as is seen from the skewness and kurtosis coefficients. Also, the Jarque-Bera test rejects normality for all series. See Appendix A for details on the parameters of the HMM fitting. Regime 2 is more volatile than regime 1, as we can see from the statistics in Table 4; in particular, the variance and kurtosis are much larger in regime 2 than in regime 1.
Figure 2: LGC-map of true vs. predicted regimes.
The two upper plots in Figure 3 show the log-returns of the S&P500 and FTSE100. Furthermore, the observations have been classified into two regimes (coloured red and green). This classification is the output of the bivariate Gaussian HMM under the assumption that there exist two regimes. Comparing the classification with historical events, it seems that the fitted HMM has the ability to identify crisis periods/bear markets. The observations around the financial crisis of 2007-2009 and the COVID-19 pandemic in 2020 both belong to regime 2. Furthermore, observations during the financial crash of October 1987, and during the dot-com bubble in the early 2000s, are also classified as regime 2. The majority of the observations are, however, classified as regime 1, or stable/bull market periods. As mentioned, the returns are next filtered by a separate GARCH(1,1) model with a Student-t distribution for each of the time series, and the estimated parameters are given in Table 10 and Table 11. The GARCH filtrated time series, which will be used in the statistical test, are presented in the two lower plots of Figure 3. The volatility clustering and time dependence in the observations are clearly reduced by this filtering. The observed LGC for the two regimes is presented in Figure 4.
Visual inspection of the LGC maps for the two regimes shows that the variance and local correlation are larger for regime 2 than for regime 1. The correlations for regime 1 (stable/bull market) are relatively uniform. In the second regime (bear market) we observe an asymmetric dependency structure with a larger correlation in the lower-left tail of the distribution/LGC-map. That is, in a bear market we have identified a stronger tail dependency than in the bull market. This is in line with other studies using regime-switching copulas (Rodriguez, 2007; Okimoto, 2008; BenSaida et al., 2018). A benefit of our approach is that it is much easier to interpret and understand the dependence structures. There is no need to have any assumption on the (bear market) dependency structure beforehand, i.e. by specifying a certain
\begin{table}
\begin{tabular}{l l r r r r r r r r r r}
**Variable** & **Levels** & **n** & \(\mathbf{\bar{x}}\) & \(\mathbf{\bar{x}}\) & **Min** & **Max** & **IQR** & **Variance** & **Skewness** & **Kurtosis** & **Jarque-Bera** \\ \hline S\&P500 & Regime 1 & 7038 & 0.1 & 0.0 & -2.6 & 2.8 & 0.8 & 0.5 & 0.0 & 3.7 & 163.0 \\ & Regime 2 & 2109 & -0.1 & 0.0 & -22.9 & 11.0 & 2.5 & 4.3 & -0.8 & 12.6 & 8232.7 \\ \hline & all & 9147 & 0.0 & 0.0 & -22.9 & 11.0 & 1.0 & 1.4 & -1.2 & 29.3 & 266689.3 \\ \hline S\&P500 GF & Regime 1 & 7038 & 0.0 & 0.0 & -4.4 & 3.7 & 1.0 & 0.8 & -0.2 & 4.2 & 504.5 \\ & Regime 2 & 2109 & -0.2 & -0.1 & -10.4 & 4.1 & 1.6 & 1.7 & -1.0 & 7.4 & 2050.7 \\ \hline & all & 9147 & -0.1 & 0.0 & -10.4 & 4.1 & 1.1 & 1.0 & -0.7 & 7.5 & 8442.7 \\ \hline FTSE100 & Regime 1 & 7038 & 0.1 & 0.0 & -2.6 & 2.8 & 0.9 & 0.5 & -0.1 & 3.4 & 59.8 \\ & Regime 2 & 2109 & -0.1 & -0.1 & -13.0 & 9.4 & 2.2 & 3.5 & -0.3 & 6.7 & 1250.6 \\ \hline & all & 9147 & 0.0 & 0.0 & -13.0 & 9.4 & 1.1 & 1.2 & -0.6 & 13.7 & 43760.0 \\ \hline FTSE100 GF & Regime 1 & 7038 & 0.1 & 0.0 & -2.6 & 2.8 & 0.9 & 0.5 & -0.1 & 3.4 & 59.8 \\ & Regime 2 & 2109 & -0.1 & -0.1 & -13.0 & 9.4 & 2.2 & 3.5 & -0.3 & 6.7 & 1250.6 \\ \hline & all & 9147 & 0.0 & 0.0 & -13.0 & 9.4 & 1.1 & 1.2 & -0.6 & 13.7 & 43760.0 \\ \hline \end{tabular}
\end{table}
Table 4: Descriptive statistics for a 2 regime classification of daily returns from S&P500 and FTSE100.
Figure 3: Log-returns and GARCH filtrated log-returns of S&P500 and FTSE100 with classification into two regimes using a Gaussian hidden Markov model.
copula. In both the bull and bear market, the dependency structure is directly revealed through the LGC maps.
Finally, to examine whether the two regimes have different dependency structures, we apply the test outlined in Section 3. The test is performed with 1000 bootstrap replicates, which gives a p-value of 0.001. This means that the null hypothesis is clearly rejected for all reasonable significance levels, and we conclude that the dependency structures of the two regimes are statistically significantly different from each other.
### S&P500 and US Bonds
The dependence relationships between different asset classes, such as stocks, bonds and commodities, have been widely studied, see e.g. Dajcman (2012), Aslanidis and Christiansen (2012) and Jammazi et al. (2015). The main reason for studying these relationships is that different asset classes typically represent the building blocks of most investment portfolios because of their different risk-return characteristics, and in particular the stock-bond linkage is important in this respect. In the next empirical analysis, we thus study the stock-bond relationship with our proposed procedure, and perform equality tests across different regimes. We further align our findings with the current knowledge of the stock-bond relationship throughout the analysis. It is well known that there is substantial time variation in the co-movement. Until the mid-1990s the US stock-bond correlation was strongly positive; it then changed to a negative correlation from the early 2000s and onwards. Furthermore, some authors have also used a copula approach, for instance Jammazi et al. (2015). They document a lack of tail dependence in the stock-bond relation, which suggests that stock and bond markets do not tend to boom or crash together. Further, the dependence does not seem especially strong during extreme market conditions, but rather appears to be present most of the time.
We use weekly log-returns of the S&P500 and US bonds with 10-year maturity (BMUS10Y) from \(1980/01/02\) to \(2022/08/31\). In the first part of this section we assume that there exist two regimes, i.e. a bull and a bear market, for these observations. The results of the HMM classification are visually presented in Figure 5 along with descriptive statistics in Table 5. The optimized HMM model parameters are presented in Appendix A. We perform the classification on raw returns; however, for estimation of the LGC map, we use the GARCH(1,1)-filtrated data for the same reasons mentioned for the empirical analysis of stock indices, see Table 12 for the estimated parameters. Table 5 shows that there are in total 2220 pairs of observations, where 1664 are classified as bull market, or regime 1, and 556 are classified as bear market, or regime 2. The bear market regime has fewer observations, a higher variance and IQR, and more extreme minimum and maximum values. We observe higher Jarque-Bera values for the bear compared to the bull market, i.e. the bear markets are less Gaussian than the bull markets. We also observe that the GARCH filtrated data have a mean closer to zero and a lower IQR.
Figure 4: LGC regime 1 and 2 for S&P500 and FTSE100 daily data.
Stock indices usually exhibit positive dependence both in crisis and non-crisis periods, however with a stronger degree of dependence in the tails during crisis periods. The S&P500 versus US10Y series have a different dependency structure. Both in the bull and the bear market we observe an asymmetric dependency behaviour when two regimes are assumed. The variability of the bull market LGC map is somewhat lower than that of the bear market LGC map, but the same underlying dependency structure is observed: on the diagonal the local correlations are negative, and conversely on the cross-diagonal. As will be examined in Section 5.2.2, where we fit models with 3 and more regimes, this asymmetry is due to a shift in the bull market behaviour in the early 2000s, where the dependency shifted from positive to negative. In essence, this empirical analysis shows that a two-regime Gaussian HMM has too few parameters to identify this shift in the bull market dependency structure. This change in bull market dependency was also highlighted by Jammazi et al. (2015) in their paper on time-varying dependence between stock and government bond returns.
We have performed the dependency test on the classified GARCH-filtrated data and the two corresponding LGC maps. Unsurprisingly, the test shows that the dependence structures of the two regimes are significantly different, i.e. the null hypothesis is rejected. The p-value calculated with the asymmetric dependence bootstrap test was in fact 0 in this case.
Figure 5: Log-returns of S&P500 and BMUS10Y with classification by an HMM with two regimes.
| Variable | Levels | n | Mean | Median | Min | Max | IQR | Variance | Skewness | Kurtosis | Jarque-Bera |
|---|---|---|---|---|---|---|---|---|---|---|---|
| S&P500 | Regime 1 | 1664 | 0.4 | 0.4 | -5.0 | 5.3 | 1.8 | 2.1 | -0.1 | 3.2 | 9.5 |
| S&P500 | Regime 2 | 556 | -0.4 | -0.4 | -16.7 | 12.4 | 5.3 | 14.4 | -0.4 | 4.2 | 46.6 |
| S&P500 | all | 2220 | 0.2 | 0.3 | -16.7 | 12.4 | 2.4 | 5.3 | -0.9 | 8.8 | 3374.1 |
| S&P500 GF | Regime 1 | 1664 | 0.0 | 0.1 | -3.2 | 2.5 | 1.0 | 0.6 | -0.3 | 3.4 | 41.9 |
| S&P500 GF | Regime 2 | 556 | -0.4 | -0.2 | -6.6 | 3.6 | 2.0 | 2.0 | -0.7 | 4.3 | 86.4 |
| S&P500 GF | all | 2220 | -0.1 | 0.0 | -6.6 | 3.6 | 1.1 | 1.0 | -0.9 | 6.3 | 1317.5 |
| BMUS10Y | Regime 1 | 1664 | 0.0 | 0.0 | -3.0 | 3.0 | 1.2 | 0.7 | 0.0 | 3.2 | 2.8 |
| BMUS10Y | Regime 2 | 556 | 0.1 | 0.1 | -5.6 | 6.7 | 2.1 | 2.9 | 0.2 | 3.8 | 16.2 |
| BMUS10Y | all | 2220 | 0.0 | 0.0 | -5.6 | 6.7 | 1.3 | 1.3 | 0.2 | 5.7 | 710.4 |
| BMUS10Y GF | Regime 1 | 1664 | 0.0 | 0.0 | -3.0 | 3.5 | 1.1 | 0.7 | 0.0 | 3.2 | 2.3 |
| BMUS10Y GF | Regime 2 | 556 | 0.1 | 0.1 | -4.2 | 4.1 | 1.8 | 1.8 | -0.1 | 3.2 | 2.5 |
| BMUS10Y GF | all | 2220 | 0.0 | 0.0 | -4.2 | 4.1 | 1.3 | 1.0 | 0.0 | 3.8 | 61.4 |

Table 5: Descriptive statistics for 2 regime classification of weekly returns from S&P500 and BMUS10Y.
#### 5.2.1 Model selection
So far we have performed the empirical analysis with two regimes, in line with previous work, e.g. Okimoto (2008). From Jammazi et al. (2015), we know there may exist more than one dependency structure within the bull market. For the S&P500 and BMUS10Y data we examine this further by fitting HMMs with up to 6 regimes and evaluating the AIC and BIC for each model. Both AIC and BIC are relative estimators of the prediction error and can thus be used for model selection. The main difference between the two criteria is that the BIC penalizes the number of parameters harder. Since the number of parameters increases with the number of regimes specified for the HMM, the BIC favours models with fewer regimes than the AIC. Table 6 shows the AIC and BIC calculated for HMMs with different numbers of regimes: the BIC favours a model with 3 regimes whereas the AIC selects a model with 5 regimes.
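The model-selection step can be sketched as below, again assuming the `hmmlearn` package and using placeholder data; the parameter count for a full-covariance Gaussian HMM is computed by hand, and the likelihood values of Table 6 are of course not reproduced.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def hmm_aic_bic(X, n_states):
    """Fit a full-covariance Gaussian HMM and return its (AIC, BIC)."""
    model = GaussianHMM(n_components=n_states, covariance_type="full",
                        n_iter=500, random_state=0).fit(X)
    log_l = model.score(X)                      # total log-likelihood of the sequence
    d = X.shape[1]
    # free parameters: initial probabilities, transition matrix, means, covariances
    n_par = (n_states - 1) + n_states * (n_states - 1) \
            + n_states * d + n_states * d * (d + 1) // 2
    n = X.shape[0]
    return 2 * n_par - 2 * log_l, n_par * np.log(n) - 2 * log_l

# X: (n_weeks, 2) array of raw log-returns for S&P500 and BMUS10Y (placeholder data here)
rng = np.random.default_rng(1)
X = rng.normal(size=(2220, 2))
for k in range(1, 7):
    aic, bic = hmm_aic_bic(X, k)
    print(f"{k} regimes: AIC = {aic:.1f}, BIC = {bic:.1f}")
```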
#### 5.2.2 Three-regime model
Table 7 shows descriptive statistics for the three-regime model of the S&P500 and US10Y log-return series. Regime 2 contains the most observations with 1115, while regime 1 consists of 743 observations and regime 3 of 362 observations. Regimes 1 and 2 are less volatile and have a lower IQR and less extreme maximum and minimum values than regime 3. The GARCH-filtrated time series (note that we utilize the same GARCH model as in the two-regime case) have a lower mean, IQR and less extreme values than the raw log-returns.
Figure 7 shows the log-returns and the GARCH-filtrated returns. The colours red, green and blue show the classification into regimes 1, 2 and 3, respectively. Figure 8 shows the LGC maps for the observations in the three different regimes. The first regime has a clear negative correlation, the second regime a clear positive correlation, while the third regime has
| # of regimes | AIC | BIC |
|---|---|---|
| 1 | 9852.45 | 9880.98 |
| 2 | 9506.66 | 9575.13 |
| 3 | 9376.73 | **9496.54** |
| 4 | 9340.52 | 9523.08 |
| 5 | **9316.37** | 9573.10 |
| 6 | 9325.96 | 9668.28 |

Table 6: Calculated AIC and BIC depending on the number of regimes (bold marks the lowest value of each criterion).
Figure 6: LGC map of S&P500 and BMUS10Y two regimes.
an asymmetric dependency structure. We observe that regime two is mainly present from the 80s to the beginning of the 00s. When examining the corresponding estimated LGC map, we observe a clear positive dependence. From the 00s, and until 2022, the observations in the calmer period (regime 1) have a negative dependence. This change in dependency is well known (see e.g. Jammazi et al. (2015)) between US stock indices and US bonds.
The more volatile period, that is regime 3, is observable throughout the time period under study and seems to mainly correspond to historical recessions or crises. Whereas both regimes 1 and 2 have a relatively uniform LGC structure, regime 3 has an asymmetric dependency structure. Compared to regime 1, the negative correlation is reduced, and we also observe positive local correlations in some segments. Hence, in crisis periods, the anticipated diversification benefits are clearly reduced compared to regime 1.
Comparing the two-regime model with the three-regime model, we observe that the most volatile, turbulent periods identified by the two models are roughly the same. In regime 1 of the two-regime model, we see a clear asymmetric dependency structure which is not present in regimes 1 and 2 of the three-regime model. Assessing the observations in both models, regimes 1 and 2 of the three-regime model together attract more or less the same observations as regime 1 of the two-regime model. Due to the underparametrization of the two-regime model, the change in dependency structure is not identified, which in turn is reflected in the LGC map.
| Variable | Levels | n | Mean | Median | Min | Max | IQR | Variance | Skewness | Kurtosis | Jarque-Bera |
|---|---|---|---|---|---|---|---|---|---|---|---|
| S&P500 | Regime 1 | 743 | 0.2 | 0.3 | -5.6 | 5.6 | 1.8 | 2.9 | -0.5 | 3.9 | 48.3 |
| S&P500 | Regime 2 | 1115 | 0.3 | 0.4 | -6.4 | 5.3 | 2.2 | 3.0 | -0.3 | 3.5 | 27.5 |
| S&P500 | Regime 3 | 362 | -0.4 | -0.2 | -16.7 | 12.4 | 5.4 | 17.2 | -0.5 | 4.3 | 37.3 |
| S&P500 | all | 2220 | 0.2 | 0.3 | -16.7 | 12.4 | 2.4 | 5.3 | -0.9 | 8.8 | 3374.1 |
| S&P500 GF | Regime 1 | 743 | -0.1 | 0.0 | -3.6 | 2.5 | 0.9 | 0.8 | -0.9 | 4.7 | 176.3 |
| S&P500 GF | Regime 2 | 1115 | 0.0 | 0.0 | -3.3 | 2.2 | 1.1 | 0.8 | -0.4 | 3.3 | 35.0 |
| S&P500 GF | Regime 3 | 362 | -0.3 | -0.1 | -6.6 | 3.6 | 1.8 | 2.1 | -1.0 | 5.6 | 157.7 |
| S&P500 GF | all | 2220 | -0.1 | 0.0 | -6.6 | 3.6 | 1.1 | 1.0 | -0.9 | 6.3 | 1317.5 |
| BMUS10Y | Regime 1 | 743 | 0.1 | 0.1 | -2.5 | 3.1 | 1.0 | 0.7 | 0.0 | 3.5 | 7.3 |
| BMUS10Y | Regime 2 | 1115 | -0.1 | -0.1 | -3.7 | 3.3 | 1.4 | 1.0 | -0.1 | 3.3 | 4.5 |
| BMUS10Y | Regime 3 | 362 | 0.2 | 0.1 | -5.6 | 6.7 | 2.1 | 3.4 | 0.2 | 3.8 | 14.1 |
| BMUS10Y | all | 2220 | 0.0 | 0.0 | -5.6 | 6.7 | 1.3 | 1.3 | 0.2 | 5.7 | 710.4 |
| BMUS10Y GF | Regime 1 | 743 | 0.1 | 0.1 | -3.0 | 3.9 | 1.1 | 0.7 | 0.0 | 3.7 | 14.7 |
| BMUS10Y GF | Regime 2 | 1115 | -0.1 | -0.1 | -4.2 | 3.5 | 1.3 | 0.9 | -0.1 | 3.5 | 13.9 |
| BMUS10Y GF | Regime 3 | 362 | 0.1 | 0.0 | -3.7 | 4.1 | 1.7 | 1.7 | 0.0 | 3.3 | 1.1 |
| BMUS10Y GF | all | 2220 | 0.0 | 0.0 | -4.2 | 4.1 | 1.3 | 1.0 | 0.0 | 3.8 | 61.4 |

Table 7: Descriptive statistics for 3 regime classification of weekly returns from S&P500 and BMUS10Y.
Figure 7: Log-returns and GARCH filtrated log-returns of S&P500 and BMUS10Y with classification into three regimes.
With more than two regimes we need to adjust the p-values because we face the multi-comparison problem. However, as expected by examining the LGC maps (Figure 8), the differences in the LGCs are large, and the asymmetric dependency test rejects the null hypothesis in all pairwise tests. In other words, the p-value is 0 for all of the three relevant pairwise tests.
#### 5.2.3 Five-regime model
Table 8 shows descriptive statistics for the 5 different regimes in the HMM classification. Regime 5 has relatively few observations, and the estimated LGCs should thus be viewed with care.
Figure 9 shows the classification from the HMM model. Figure 10 shows the LGC maps for the 5 different regimes. The LGC is relatively uniform for regimes 1 to 4 and coincides with the correlation/covariance structure that
| Variable | Levels | n | Mean | Median | Min | Max | IQR | Variance | Skewness | Kurtosis | Jarque-Bera |
|---|---|---|---|---|---|---|---|---|---|---|---|
| S&P500 | Regime 1 | 480 | 0.4 | 0.5 | -2.7 | 3.6 | 1.4 | 1.2 | -0.2 | 3.3 | 5.3 |
| S&P500 | Regime 2 | 952 | 0.3 | 0.4 | -6.4 | 7.5 | 2.2 | 3.1 | -0.2 | 3.7 | 24.0 |
| S&P500 | Regime 3 | 301 | 0.2 | 0.3 | -6.4 | 7.1 | 2.7 | 4.5 | -0.2 | 3.1 | 2.1 |
| S&P500 | Regime 4 | 398 | -0.2 | -0.2 | -9.0 | 8.3 | 4.3 | 8.7 | -0.1 | 2.6 | 3.5 |
| S&P500 | Regime 5 | 89 | -1.1 | -0.3 | -16.7 | 12.4 | 7.9 | 37.2 | -0.3 | 2.9 | 1.2 |
| S&P500 | all | 2220 | 0.2 | 0.3 | -16.7 | 12.4 | 2.4 | 5.3 | -0.9 | 8.8 | 3374.1 |
| S&P500 GF | Regime 1 | 480 | 0.1 | 0.1 | -2.2 | 1.7 | 0.8 | 0.4 | -0.4 | 3.6 | 19.4 |
| S&P500 GF | Regime 2 | 952 | 0.0 | 0.0 | -3.3 | 3.6 | 1.1 | 0.8 | -0.3 | 3.5 | 29.4 |
| S&P500 GF | Regime 3 | 301 | -0.1 | 0.0 | -2.8 | 3.5 | 1.2 | 1.0 | -0.2 | 3.2 | 1.8 |
| S&P500 GF | Regime 4 | 398 | -0.3 | -0.2 | -4.8 | 3.0 | 1.5 | 1.4 | -0.6 | 3.5 | 27.5 |
| S&P500 GF | Regime 5 | 89 | -0.6 | -0.1 | -6.6 | 2.7 | 2.3 | 3.7 | -1.1 | 4.5 | 26.1 |
| S&P500 GF | all | 2220 | -0.1 | 0.0 | -6.6 | 3.6 | 1.1 | 1.0 | -0.9 | 6.3 | 1317.5 |
| BMUS10Y | Regime 1 | 480 | 0.0 | 0.1 | -2.5 | 2.0 | 1.0 | 0.6 | -0.3 | 3.2 | 5.7 |
| BMUS10Y | Regime 2 | 952 | 0.0 | 0.0 | -3.7 | 3.0 | 1.2 | 0.8 | 0.0 | 3.3 | 3.2 |
| BMUS10Y | Regime 3 | 301 | -0.3 | -0.3 | -4.4 | 4.8 | 2.3 | 2.7 | 0.2 | 2.9 | 2.4 |
| BMUS10Y | Regime 4 | 398 | 0.2 | 0.2 | -3.1 | 3.1 | 1.4 | 1.0 | -0.2 | 3.2 | 3.6 |
| BMUS10Y | Regime 5 | 89 | 0.7 | 0.6 | -5.6 | 6.7 | 3.5 | 6.2 | 0.1 | 2.8 | 0.4 |
| BMUS10Y | all | 2220 | 0.0 | 0.0 | -5.6 | 6.7 | 1.3 | 1.3 | 0.2 | 5.7 | 710.4 |
| BMUS10Y GF | Regime 1 | 480 | 0.0 | 0.1 | -3.0 | 2.1 | 1.1 | 0.7 | -0.2 | 3.2 | 5.4 |
| BMUS10Y GF | Regime 2 | 952 | 0.0 | 0.0 | -4.2 | 3.5 | 1.2 | 0.8 | -0.1 | 3.7 | 22.5 |
| BMUS10Y GF | Regime 3 | 301 | -0.2 | -0.2 | -3.7 | 3.0 | 1.6 | 1.5 | 0.0 | 2.9 | 0.1 |
| BMUS10Y GF | Regime 4 | 398 | 0.2 | 0.1 | -3.0 | 3.9 | 1.3 | 0.9 | 0.0 | 3.5 | 4.6 |
| BMUS10Y GF | Regime 5 | 89 | 0.4 | 0.4 | -3.4 | 4.1 | 2.2 | 2.6 | 0.0 | 2.7 | 0.4 |
| BMUS10Y GF | all | 2220 | 0.0 | 0.0 | -4.2 | 4.1 | 1.3 | 1.0 | 0.0 | 3.8 | 61.4 |

Table 8: Descriptive statistics for 5 regime classification of weekly returns from S&P500 and BMUS10Y.
Figure 8: LGC map of S&P500 and BMUS10Y three regimes.
is identified in the HMM (see Appendix A).
From Table 9 we observe that for the test between regimes 1 and 4 the null hypothesis is accepted, i.e. the dependency structures are equal. Similarly, we observe that regimes 3 and 5 do not have significantly different LGC structures. In this particular test setup, the number of pairwise tests is 10
Figure 10: LGC map of S&P500 and BMUS10Y five regimes.
Figure 9: TS Classification of S&P500 and BMUS10Y five regimes.
and if we consider a nominal level of \(\alpha=0.05\), the Bonferroni correction would test each individual hypothesis at \(\alpha^{*}=0.05/10=0.005\). At a nominal level of \(\alpha=0.01\), the null hypothesis between regimes 2 and 3 would also be accepted. As can be seen, the p-value of the dependency test between regimes 2 and 3 (0.004) lies barely within the \(\alpha=0.05\) threshold.
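The correction itself is a one-line computation; the snippet below applies it to the pairwise p-values reported in Table 9 (the pair-to-value mapping follows our reading of that table).

```python
from itertools import combinations

# upper-triangular p-values from Table 9: (regime i, regime j) -> bootstrap p-value
p_values = {(1, 2): 0.0, (1, 3): 0.0, (1, 4): 0.681, (1, 5): 0.0,
            (2, 3): 0.004, (2, 4): 0.0, (2, 5): 0.0,
            (3, 4): 0.0, (3, 5): 0.141, (4, 5): 0.0}

n_tests = len(list(combinations(range(1, 6), 2)))        # 10 pairwise tests for 5 regimes
for alpha in (0.05, 0.01):
    threshold = alpha / n_tests                           # Bonferroni-corrected level
    kept = [pair for pair, p in p_values.items() if p >= threshold]
    print(f"alpha = {alpha}: reject H0 below {threshold:.4f}; equal dependence kept for {kept}")
```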
With regard to the LGC, this experiment has shown that, in this particular case, the five-regime HMM seems to be an overparametrization with respect to the dependency structures. Although we observe more uniform LGC maps with more regimes, the dependency structures of several of the regimes are not significantly different from each other.
## 6 Concluding remarks
This paper presents a new procedure using the LGC for testing whether the dependency structures of financial returns differ across regimes classified by an HMM. The test is a bootstrap procedure whose test statistic uses the squared difference of the estimated LGC between regimes. With more than two regimes we have to perform a set of pairwise tests, which in turn requires a correction as the testing problem becomes a multiple-comparison problem.
The proposed test is verified by a study of significance level and power on simulated data. Both the level and power study are examined on 6 different models, all showing acceptable results. In addition, we conduct a simulation study where we classify observations with a HMM and compare how the misclassification affects the power of the test. By including the misclassified observations in the different regimes, the power decreases for all nominal levels. The power of the test is still on an acceptable level. The decrease in power is, however, an important aspect to have in mind during empirical analysis.
We have illustrated the approach by performing empirical analyses on two different real data sets. First, we examine the daily returns of two stock indices, the S&P500 and FTSE100, and second, stock market returns (S&P500) against bond returns, the US 10 year government bond (US10Y). For the stock market indices, S&P500 and FTSE100, we confirm that there are two regimes in the return series: a bull market regime with a close to uniform dependency structure, and a bear market regime with overall higher dependence and an asymmetric structure, in particular much higher dependence in the tails. Our approach thus confirms well-known facts about the dependency structure between the S&P500 and FTSE100 (see e.g. Okimoto (2008)). At the same time, we argue that our approach is more intuitive and easier to understand and interpret than competing approaches; in particular, the LGC measure can be interpreted like the ordinary correlation. Another main advantage is that our framework does not need a parametric assumption on the dependency structure within the different regimes, and the proposed test can actually determine whether there are statistically significant differences between them.
For the stock-bond relationship, we considered weekly data and used the AIC and BIC to determine the optimal number of regimes in the HMM: the BIC and AIC preferred 3 and 5 regimes, respectively. We also assessed the classical two-regime model. The empirical analysis of the S&P500 and US10Y returns showed that with an increasing number of regimes, the LGC maps of the different regimes became more uniform and distinct. On the other hand, we identified that the difference in dependency between several regimes was insignificant. Through our test, we concluded that with respect to dependency the five-regime HMM was over-parameterized. In the three-regime model, the LGC documents a primarily positive relationship in the period 1980-2000. From 2000 onwards the relationship is mostly negative, whereas the regime associated with bear markets indicates weaker but asymmetric dependence, hence documenting the loss of diversification benefits in times of crisis.
Although we have used Gaussian HMMs for simplicity, other more complex approaches for the classification of observations are possible. In fact, because the dependency analysis and the classification are separate procedures, we could use more complex non-linear approaches such as neural networks, support vector machines or other state-of-the-art machine learning techniques for the classification, see e.g. Constantinou et al. (2006), Hassan et al. (2007), Liu et al. (2019) and Mustafa et al. (2022). We leave this for future research.
| P-value | Regime 2 | Regime 3 | Regime 4 | Regime 5 |
|---|---|---|---|---|
| Regime 1 | 0 | 0 | **0.681** | 0 |
| Regime 2 | | **0.004** | 0 | 0 |
| Regime 3 | | | 0 | **0.141** |
| Regime 4 | | | | 0 |

Table 9: P-values for the pairwise dependence tests between the five regimes.
Data availability statement: The BMUS10Y is downloaded from Refinitiv Eikon under the ticker _TRXVUSGOV10U_. This is the clean daily price index, which has been aggregated to weekly observations for the purpose of this study. The other indices are publicly available and may be downloaded from e.g. Yahoo! Finance.
Acknowledgements: This work was supported by the Financial Market Fund (Norwegian Research Council project no. 309218). We thank Dag Tjostheim for valuable discussions and comments.
|
2310.18375 | CMOS-based Single-Cycle In-Memory XOR/XNOR | Big data applications are on the rise, and so is the number of data centers.
The ever-increasing massive data pool needs to be periodically backed up in a
secure environment. Moreover, a massive amount of securely backed-up data is
required for training binary convolutional neural networks for image
classification. XOR and XNOR operations are essential for large-scale data copy
verification, encryption, and classification algorithms. The disproportionate
speed of existing compute and memory units makes the von Neumann architecture
inefficient to perform these Boolean operations. Compute-in-memory (CiM) has
proved to be an optimum approach for such bulk computations. The existing
CiM-based XOR/XNOR techniques either require multiple cycles for computing or
add to the complexity of the fabrication process. Here, we propose a CMOS-based
hardware topology for single-cycle in-memory XOR/XNOR operations. Our design
provides at least 2 times improvement in the latency compared with other
existing CMOS-compatible solutions. We verify the proposed system through
circuit/system-level simulations and evaluate its robustness using a 5000-point
Monte Carlo variation analysis. This all-CMOS design paves the way for
practical implementation of CiM XOR/XNOR at scaled technology nodes. | Shamiul Alam, Jack Hutchins, Nikhil Shukla, Kazi Asifuzzaman, Ahmedullah Aziz | 2023-10-26T21:43:01Z | http://arxiv.org/abs/2310.18375v1 | # CMOS-based Single-Cycle In-Memory XOR/XNOR
###### Abstract
Big data applications are on the rise, and so is the number of data centers. The ever-increasing massive data pool needs to be periodically backed up in a secure environment. Moreover, a massive amount of securely backed-up data is required for training binary convolutional neural networks for image classification. XOR and XNOR operations are essential for large-scale data copy verification, encryption, and classification algorithms. The disproportionate speed of existing compute and memory units makes the von Neumann architecture inefficient to perform these Boolean operations. Compute-in-memory (CiM) has proved to be an optimum approach for such bulk computations. The existing CiM-based XOR/XNOR techniques either require multiple cycles for computing or add to the complexity of the fabrication process. Here, we propose a CMOS-based hardware topology for single-cycle in-memory XOR/XNOR operations. Our design provides at least 2\(\times\) improvement in the latency compared with other existing CMOS-compatible solutions. We verify the proposed system through circuit/system-level simulations and evaluate its robustness using a 5000-point Monte Carlo variation analysis. This all-CMOS design paves the way for practical implementation of CiM XOR/XNOR at scaled technology nodes.
Keywords: Artificial Intelligence, Compute-in-Memory, Encryption, Verification, XOR, XNOR.
## 1 Introduction
Academia and industry are pushing their last strides in keeping Moore's law alive, demonstrated by IBM's 2 nm process technology [1]. However, as the available bandwidth between the processor and main memory is not growing commensurately with the advancements in compute units, the well-known'memory wall' [2] is becoming one of the toughest challenges for engineers in this exascale (big data) computing era. The issue with handling this massive data load is getting more acute with unprecedented progress in machine learning and artificial intelligence (AI) applications. Surprisingly, these data-intensive applications are often not inherently complicated. Rather, they rely on simple logic operations at a massive scale. Therefore, data movement ends up being the bottleneck, causing latency issues and consuming more energy than the computation process itself [3]. Recent reports by _Google_ have shown that a significant portion of their data center workload is performing bulk data movement and about 20-42% of the energy is required to drive the data bus connecting the compute and memory units [4, 5]. This specific data movement problem is causing the traditional _von Neumann_ architecture to lose its glory, where back-and-forth data movement is necessary between the memory and compute units. As an alternative, compute-in-memory (CiM) has garnered attention in the research community [6, 7, 8, 9, 10, 2]. CiM not only dramatically reduces the data movements, but also takes advantage of large internal memory bandwidth and enables massive parallelism to improve latency. In addition to the endeavor to improve the architecture, device engineers are exploring next-generation memory technologies as the mainstream CMOS memories are approaching the scaling limit [11, 12, 13, 14, 15]. The emerging memories are expected to provide a faster yet more energy-efficient solution in a compact footprint. Combining the best of both worlds, several CiM architectures have been proposed in recent years with emerging memory devices [16, 17, 18, 19, 20, 21, 22]. However, with exponentially increasing data volume, customized solutions are needed for optimized performance in application-specific scenarios.
A denser integration of memory chips with CiM capability will radically change the data center field. With the advent
of cloud computing, consumer computer applications are gradually finding their way into virtual machines rather than physical devices, thereby leading to more data in data centers. Keeping this ever-increasing data in a secured backup is a challenging task in terms of performance, energy, and memory. While intelligent and efficient algorithms were proposed for bulk data movement in data centers using row-level cloning [23], integrity verification of the copy procedure is also extremely important. Moreover, in the age of cybersecurity and identity theft, data encryption is equally crucial. Having such securely backed-up data is essential for big data applications like image classification. This data can be used to train the classifier algorithms to get acceptable inference accuracy. Fig. 1 illustrates the usage of XOR/XNOR operations for copy verification, encryption, and image classification algorithms in binary convolutional neural networks (CNN).
Here, we propose a ubiquitous system to achieve single-cycle in-memory bitwise XOR/XNOR operation using modified peripheral sensing circuitry. We evaluate the functionality and robustness of our design using transient simulations in HSPICE, and _Monte Carlo_ variation analysis. While our proposed design is aimed at fast and secure data movement at the data center level, it can also be used in the most prominent big-data application for AI- a 'binary CNN hardware accelerator' with no additional operation cycle. We discuss the motivation and principle of in-memory XOR/XNOR in section II. We then present our design methodology and the simulation framework (section III). Sections IV and V present the timing simulations and variation analysis, respectively. Section VI presents a comparison with existing literature.
## 2 Motivation for Single-Cycle In-Memory XOR/XNOR
Bulk data copy is such an expensive process (in terms of memory usage and energy demands), that there has been a separate hardware-level instruction set for it since the introduction of Intel IA-32 architecture [24]. Extensive studies have shown that the optimized way to transfer data between the conventional memory cells is at the row granularity [23, 25]. In cutting-edge memory chips, an entire row of data is copied from the memory array to the corresponding row buffer and then to the destination row [26]. This multi-cycle copy procedure is already a major concern for low-latency memories. On top of that, the need for additional cycles to validate a successful copy operation aggravates the issue.
For the validation process, parity checking is the most commonly used algorithm in present-day digital electronics. An odd parity checker performs XOR operations between the bits copied from and to the memory cells. A logical '0' XOR output indicates a successful copy operation (Fig. 1(a)). In addition to having back-ups, it is also important to ensure its security. Fortunately, the in-memory XOR operation is perfectly suited for data encryption (Fig. 1(b)). Among the known techniques for ciphers, XOR is the most trustworthy and unbreakable if the key used is a true random number.
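The row-level operations can be illustrated functionally as below; this is only a software sketch of what the in-memory hardware computes, with the row length and the random key being arbitrary placeholders.

```python
import numpy as np

def verify_copy(src_row: np.ndarray, dst_row: np.ndarray) -> bool:
    """Bitwise XOR of source and destination rows: an all-zero result means a faithful copy."""
    return not np.any(np.bitwise_xor(src_row, dst_row))

def xor_cipher(row: np.ndarray, key: np.ndarray) -> np.ndarray:
    """One-time-pad style XOR cipher; applying it twice with the same key recovers the data."""
    return np.bitwise_xor(row, key)

rng = np.random.default_rng(0)
row = rng.integers(0, 2, size=512, dtype=np.uint8)      # one memory row of bits
key = rng.integers(0, 2, size=512, dtype=np.uint8)      # random key held in the row buffer

assert verify_copy(row, row.copy())                      # successful copy -> XOR output is all zeros
cipher = xor_cipher(row, key)
assert np.array_equal(xor_cipher(cipher, key), row)      # decryption recovers the plaintext
```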
The significance of performing such XOR/XNOR operations within the memory block (CiM implementation) can be well understood with a system-level view. Two subsequent row activation cycles are needed to copy a single unique row of data into the memory block. At least another cycle is required for verification of each 'copy' incidence using XOR operation. For a memory bank of 512 rows, duplication of 256 unique data rows requires 256\(\times\)2 row activation cycles and 256 XOR operation streams for verification (assuming single-cycle XOR).
Similarly, for encryption, every single row goes through a single XOR operation stream with a key stored in the row buffer. If each of these XOR operations is itself a multi-cycle process, the latency takes a serious hit. All of the previously demonstrated in-memory XOR operations take more than one cycle, except for the one proposed in [17], which is a memristor-only, CMOS-incompatible design whose design space is too complicated. To the best of our knowledge, ours is the first CMOS-compatible in-memory XOR that operates in a single cycle. We propose a simple all-CMOS-based peripheral circuit design, slightly modifying the sensing circuitry to employ CiM XOR for superior performance in bulk data operations. On top of that, this modification of the peripheral circuitry can also be used for binary neural networks in tasks like image classification, where the core computation is essentially an XNOR operation (shown in Fig. 1(c)). Thus, the proposed system can be used to gain excellent capacity and speed in an in-memory system.
## 3 Design Methodology and Simulation Framework
For a conventional memory array comprised of access transistors and memory cells, the sense line (SL) currents are collected and sensed via a current-based sense amplifier at the periphery (Fig. 2(a)). In our work, we utilize the current-based sense amplifier (CSA) reported in [27] as the building block of the modified peripheral circuitry that realizes the in-memory XOR/XNOR. Here, we use a ReRAM as the NVM cell, but the peripheral circuit modification (all CMOS) that realizes the in-memory XOR/XNOR operation is memory-agnostic. Irrespective of the memory used, in computation mode two word lines (WL) are asserted on a single sensing line to select the memory cells that will undergo the XOR/XNOR operation. The current contribution of the two selected cells, along with that of the unselected cells of the same column, is fed into the modified SA. The modified SA consists of a current mirror to copy the SL current, two current-based SAs (CSAs), one inverter, and one AND gate, as shown in Fig. 2(c). Fig. 2(d) shows the circuit schematic of each CSA used in the SA. The SL current fed into the two CSAs sets a gate voltage through the current mirror circuit; this voltage is then compared against the reference voltages, producing binary outputs. For the XOR/XNOR operations, two different reference current levels are used, so the two CSAs produce two different logic outputs. These two logic outputs, one negated through an inverter and the other intact, are fed into the AND gate, which gives out the XOR/XNOR logic. It is noteworthy that complementary reference current levels are set for the two CSAs to give out the XOR/XNOR logic output. A truth table for the sense line current levels and the corresponding logic levels, in terms of the chosen reference levels, is shown in Fig. 2(b). As the illustration shows, the reference current levels are set between the \(I_{00}\) and \(I_{11}\) current levels. The reference currents are chosen such that an AND operation on the outputs of the two CSAs gives out the desired XOR/XNOR result. Since the two sense amplifiers are identical in construction in a CMOS process, the two extreme cases of both selected cells storing '0' or both storing '1' are separated using the two reference currents (\(I_{00}<I_{REF1}<I_{01}\) and \(I_{01}<I_{REF2}<I_{11}\)). This slight modification of the peripheral sensing circuitry allows normal memory-mode operation as well as single-cycle XOR/XNOR operation, which can be crucial in specific application scenarios. Moreover, this design can also be used to implement other logic operations such as AND/NAND, OR/NOR, etc. by carefully choosing the two reference current levels.
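The decoding performed by the modified SA can be mimicked in a few lines, using the SL current levels and reference currents quoted in Section 4; the comparator polarity is assumed to be "output high when the SL current exceeds the reference", and only the XOR setting is shown since, per the description above, the XNOR mode simply interchanges the two reference currents in the same hardware.

```python
# Sense-line current levels for the accessed cell pair (Section 4, unaccessed cells in HRS)
I_00, I_0110, I_11 = 100e-12, 7.87e-6, 15.7e-6   # amperes

def sense_xor(i_sl, i_ref1=4e-6, i_ref2=12e-6):
    """AND of CSA1 with the inverted CSA2 output reproduces the XOR truth table,
    assuming each CSA outputs '1' when the SL current exceeds its reference."""
    csa1 = i_sl > i_ref1        # high for the 01/10 and 11 combinations
    csa2 = i_sl > i_ref2        # high only for the 11 combination
    return int(csa1 and not csa2)

for label, i_sl in [("00", I_00), ("01/10", I_0110), ("11", I_11)]:
    print(label, sense_xor(i_sl))   # prints 0, 1, 0
```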
In this work, a rigorous SPICE simulation is performed for the CiM provision in the memory array. For the simulations, a phenomenological compact model of resistive RAM (ReRAM) is used as the non-volatile memory (NVM). The model is calibrated against the experimental data for the Cu/HfO2/Pt stack published in [28]. The low resistance state (LRS) and the high resistance state (HRS) are set at 10 k\(\Omega\) and 3 G\(\Omega\), respectively. 14 nm PTM (Predictive Technology Model) [29] transistors are utilized to simulate the CMOS transistors (FinFETs) used in the memory array and peripheral circuitry. A detailed Monte-Carlo variation analysis is also presented to determine how process variation limits the number of allowed rows in the memory array and the sense margins required for successful operation.
## 4 Functional Verification
Upon setting up the simulation framework, functional verification was performed for the in-memory XOR/XNOR operation in HSPICE. The memory array functions as expected in the memory mode, allowing successful write operations shown in Fig. 3. In the memory mode of operation, the bit lines (BL) are kept
precharged and the access transistors of the selected cell are turned on by applying suitable biases to the WLs and SLs. Although all the bit lines are kept high, the access transistors are not turned on for the unaccessed and half-accessed cells. Depending on the bias voltages applied to the WLs, BLs, and SLs, the corresponding memory state is stored in the memory cell. 0.4 V (-0.15 V) is applied to the corresponding BL for writing '1' ('0') into the memory cell, as per the non-volatile memory material we adopt from [28]. When the WLs are then asserted, the accessed cell receives the write voltage applied to the BL. The biasing scheme for write operations is designed such that the half-accessed and unaccessed cells are not accidentally disturbed. For reading from the memory cell, we propose using the same SA designed for the in-memory XOR/XNOR operation, making the peripheral circuitry universal for both memory and compute modes. The only differences between a memory read and a compute operation are that, for reading, only one cell is accessed at a time and the reference current levels are different.
The proposed design allows the memory operations (write and read) as well as the in-memory logic operations. In the computation mode, the memory states stored in the accessed cells are first read and then the logic operation is performed using the peripheral circuits. To demonstrate the successful operation of our design, we simulated the 3x3 array shown in Fig. 4(a). Here, all the bit lines (BL) are pre-charged with a 100 mV supply. After the WLs corresponding to the two computing rows are asserted, current starts to flow through the memory cells. Fig. 4(b) shows the biasing scheme for the in-memory operations. Based on the assumed memory states of the accessed cells (shown in Fig. 4(a)), different current levels are obtained in the SLs. The SL current levels for different combinations of memory states in the columns are well distinguishable, as shown in Fig. 4(d). With the unaccessed cells in HRS, the SL currents are obtained as 100 pA, 7.87 \(\mu\)A, and 15.7 \(\mu\)A for the '00', '01'/'10', and '11' logic combinations in the accessed cells, respectively. The reference current levels of the sense amplifiers need to be carefully set based on these numbers.
For verifying the XOR operation, we set the reference currents as \(I_{REF1}\) = 4 \(\mu\)A and \(I_{REF2}\) = 12 \(\mu\)A. When SEN (Sense Enable) is asserted, the CSAs sense the current levels and output logic '1' or '0' based on the difference between the SL currents and the reference currents. With the AND operation shown in Fig. 2(b), the output of the XOR operation is obtained as shown in Fig. 4(d). As seen, XOR-OUT becomes logic '1' only for the '01'/'10' logic combination. Note that the SL currents are readily available in the sense amplifiers once the WLs and BLs are asserted. Therefore, the XOR operation requires only a single cycle, in which the AND operation is completed. For the XNOR operation, the reference currents are set in the exact opposite fashion (\(I_{REF1}\) = 12 \(\mu\)A and \(I_{REF2}\) = 4 \(\mu\)A); the XNOR operation also requires a single cycle.
## 5 Variation Analysis
It is seen in Fig. 4(d) that the SL currents are well distinguishable for the different memory combinations of the cells in a single column. Nevertheless, a quantitative analysis was performed to verify the robustness of the design. Even when a cell is not accessed (WL not asserted), a small leakage current flows through it: 28 pA for HRS and 774 pA for LRS. The leakage currents through the unaccessed cells contribute to the SL current of the column, which creates a risk of identifying the SL current of one logic combination as another. Therefore, the leakage current (which depends on the LRS and HRS values) puts a restriction on the maximum number of rows allowed in an array. Average power consumption and area are two further important parameters that directly affect the scaling of the memory system. Fig. 5(a) and 5(b) show the effect of the number of fins on the power consumption and area of the CSA, and the effect of the HRS and LRS values on the maximum number of rows in the array, respectively. In Fig. 5(b), we show the effects of variation in HRS and LRS separately, which reveals that the variation in LRS has a more significant effect
compared to that in HRS. With a fixed HRS, when we vary the LRS by changing the HRS/LRS ratio (black line in Fig. 5(b)), we observe that a larger HRS/LRS ratio results in higher scalability. This analysis not only lets a designer be aware of the size limitation of the memory array but also opens up a new window of research from the perspective of the material choice.
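A back-of-the-envelope version of this scalability limit is sketched below: it assumes the worst case in which every unaccessed cell leaks at the LRS level and simply requires that the accumulated leakage cannot push the '00' current above \(I_{REF1}\) or the '01'/'10' current above \(I_{REF2}\). The criterion and the optional sense margin are our own simplifications, not the exact condition behind Fig. 5(b).

```python
# Illustrative worst-case bound on the number of rows per column, using the current
# levels quoted earlier and assuming every unaccessed cell leaks at the LRS level.
I_LEAK_LRS = 774e-12                 # leakage through an unaccessed LRS cell (A)
I_00, I_01 = 100e-12, 7.87e-6        # SL currents for the '00' and '01'/'10' cases (A)
I_REF1, I_REF2 = 4e-6, 12e-6         # reference currents of the two CSAs (A)

def max_rows(margin=0.0):
    """Largest N such that leakage from N-2 unaccessed rows cannot push the '00'
    current above I_REF1 or the '01'/'10' current above I_REF2."""
    n_00 = (I_REF1 * (1 - margin) - I_00) / I_LEAK_LRS
    n_01 = (I_REF2 * (1 - margin) - I_01) / I_LEAK_LRS
    return int(min(n_00, n_01)) + 2

print(max_rows())          # upper bound with no sense margin
print(max_rows(0.2))       # more conservative bound with a 20% sense margin
```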
Furthermore, a rigorous 5000-point Monte-Carlo simulation is performed to ensure that different current levels are well-distinguishable even with the process variations. In our variation analysis, we consider a Gaussian distribution for LRS and HRS with a mean value of 10 k\(\Omega\) and 3 G\(\Omega\) (respectively) and a 3\(\sigma\) variation of 10% of the mean value. We also consider a variation in the threshold voltages of the transistors with a standard deviation of 25 mV. The results are shown in Fig. 5(c) and 5(d). Fig. 5(e) shows the schematic of a conventional current sense amplifier with different important nodes marked. The distribution in SL currents shown in Fig. 5(b) leads to a distribution of voltages at the \(n_{CELL}\) node of the sense amplifier. Finally, the digital output at the OUT node is obtained based on the difference between the voltages set at \(n_{CELL}\) and \(n_{REF}\) nodes.
## 6 Comparative Study
The surge in compute-in-memory research driven by the 'memory wall' problem has led to many recent publications. Studies have shown that logic operations can be implemented within a ReRAM crossbar array [30]. However, some of these works are not strictly aligned with the CiM concept, as they use memory technology to implement processing units: they still pay for expensive data fetching from the memory and are limited by the memory bus bandwidth. Those that do implement in-memory computation are tailored to basic logic operations like AND, OR, etc., and some to ADD operations. Our work is distinguished from these works by targeting bulk data applications in an all-CMOS process.
Based on the required operation steps and overhead circuitry, a comparison with the existing relevant works [31, 32, 30, 33, 17] is presented in Table 1. Our work promises the most efficient solution in terms of latency. Also, an all-CMOS design makes it easy to implement.
We also extend the comparison to the application level using XNOR-Net which uses XNOR operation to replace the computationally complex convolution operations in convolutional neural networks (CNN). XNOR-Net is a CNN that uses binary filters and XNOR operations to decrease memory cost and decrease computational cost by around 58\(\times\)[34]. Fig. 6(a) shows a single convolutional block of XNOR-Net. In the beginning, XNOR-Net performs batch normalization and then performs binary activation that binarizes the inputs and generates the scaling factors \(K\) and \(a\). From there, the XNOR convolution is performed. We propose using our XNOR processor to accelerate this part of the network. After calculating the XNOR convolution, we then perform element-wise multiplication with the scaling factors (\(K\) and \(a\)) that we calculated before the XNOR operations. While these operations must be done outside of our accelerator, there are far fewer of these operations than XNOR operations, making our approach still viable despite this limitation. The theoretical speedup due to the use of XNOR convolution is given by [34]-
\[S=\frac{cN_{W}N_{I}}{\frac{1}{N_{O}}cN_{W}N_{I}+N_{I}}\]
Here, \(c\) is the number of channels, \(N_{W}\) is the width times the height of the filter, \(N_{I}\) is the width times the height of the input of the layer, and \(N_{O}\) is the number of XNOR operations that can be done in a single clock cycle. In [34], \(c=256\), \(N_{W}=3^{2}\), and \(N_{I}=14^{2}\) were used since layers with these parameters are common in ResNet [35]. On a CPU, \(N_{O}\) is 64, which is our baseline. Fig. 6(b) shows the speedup of our approach compared to XNOR-Net executed on a CPU; the speedup is significantly higher for our XNOR implementation. We also compare
our design with the existing works that require two or three cycles for the XNOR operation. Additionally, our design scales better for larger array sizes than the existing designs. Beyond XNOR-Net, our design could also be used for XOR-Net [36], a version of XNOR-Net that uses XOR and significantly reduces the required number of full-precision operations. Using this algorithm, we should see similar speedups and scaling as with XNOR-Net, though slightly closer to the ideal \(S=\frac{N_{O}}{64}\) speedup, since XOR-Net reduces the full-precision operations in a layer with our given parameters by 39.84% [36].
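As a quick numerical check of the speedup expression above, the snippet below evaluates \(S\) for the quoted ResNet-style layer; the value of \(N_{O}\) for an in-memory implementation depends on the array width, so the second call uses an arbitrary illustrative number.

```python
def xnor_speedup(c=256, n_w=3**2, n_i=14**2, n_o=64):
    """Theoretical speedup S of binary (XNOR) convolution over full precision."""
    return (c * n_w * n_i) / (c * n_w * n_i / n_o + n_i)

print(round(xnor_speedup(), 1))          # CPU baseline with 64 XNORs per cycle
print(round(xnor_speedup(n_o=512), 1))   # hypothetical wider in-memory XNOR word
```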
## 7 Conclusion
In this paper, an all-CMOS single-cycle in-memory XOR/XNOR operation is proposed for bulk data copy, verification, and encryption, requiring only a slight modification of the peripheral circuitry. Our design reduces the number of cycles and delivers a leap in latency performance; for bulk data operations, even an incremental improvement can be tremendously advantageous. This circuit topology can also be used in modern and upcoming data-heavy applications like binary convolutional neural networks. The combination of in-memory computing and single-cycle operation justifies the area implications and the extra circuit overhead needed for the design.
## Acknowledgment
S. A. was supported with funds provided by the Science Alliance, a Tennessee Higher Education Commission center of excellence administered by The University of Tennessee-Oak Ridge Innovation Institute on behalf of The University of Tennessee, Knoxville.
N. S. was supported in part by the National Science Foundation under Grant No. 2132918.
This manuscript has also been authored in part by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The publisher, by accepting the article for publication, acknowledges that the U.S. Government retains a non-exclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of the manuscript or allow others to do so, for U.S. Government purposes. The DOE will provide public access to these results in accordance with the DOE Public Access Plan ([http://energy.gov/downloads/doe-public-access-plan](http://energy.gov/downloads/doe-public-access-plan)).
## Data Availability
The data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request.
## Competing Interests
The authors declare no competing interests.
Figure 1: **A system level view in commercial memory products, where the memory cells are banked, will help understand the latency minimization for the proposed CiM XOR in (a) verification of copied data and (b) data encryption/decryption. (c) CiM configuration can also be used to deploy binary CNN to image classification problem which is essentially an XNOR operation.**
Figure 2: **(a) Non-volatile memory array with modified sense amplifiers. (b) Mechanism of choosing reference currents. Schematic of (c) the modified SA for in-memory XOR/XNOR and (d) a current sense amplifier.**
Figure 4: **(a) The application of required voltages to WLs, BLs, and SEN. (b) Reference current levels chosen for XOR and XNOR operations. (d) SL currents and logic outputs of XOR and XNOR operations.**
Figure 5: **(a) Effect of number of fins of the transistors on the CSA circuit and (b) memristor on/off ratio on the array size. Histogram plots of (c) the current distributions and (d) voltages of \(n_{\mathit{CELL}}\) and \(n_{\mathit{REF}}\) nodes set by the distributions in input and reference current levels, respectively.**
Figure 6: **Comparison of our design with the existing works based on the implementation of a XNOR-based CNN.**
| Design | Tech. | Additional Transistors | Latency (Cycles) |
|---|---|---|---|
| Pinatubo [17] | CMOS | 7 | 3 |
| FELIX [31] | Crossbar | - | 3 |
| CMOS Memristive [30] | CMOS | 16 | 2 |
| XORiM [32] | CMOS | 12 | 3 |
| SiXOR [33] | Memristor | - | 1 |
| This Work | CMOS | 13 | 1 |

Table 1: Comparison of our design with the existing works. |
2301.09539 | A critical comparison of general-purpose collective variables for
crystal nucleation | The nucleation of crystals is a prominent phenomenon in science and
technology that still lacks a full atomic-scale understanding. Much work has
been devoted to identifying order parameters able to track the process, from
the inception of early nuclei to their maturing to critical size until growth
of an extended crystal. We critically assess and compare two powerful
distance-based collective variables, an effective entropy derived from liquid
state theory and the path variable based on permutation invariant vectors using
the Kob-Andersen binary mixture and a combination of enhanced-sampling
techniques. Our findings reveal a comparable ability to drive nucleation when a
bias potential is applied, and comparable free-energy barriers and structural
features. Yet, we also found an imperfect correlation with the committor
probability on the barrier top which was bypassed by changing the order
parameter definition. | Julien Lam, Fabio Pietrucci | 2023-01-23T16:48:30Z | http://arxiv.org/abs/2301.09539v1 | # A critical comparison of general-purpose collective variables for crystal nucleation
###### Abstract
The nucleation of crystals is a prominent phenomenon in science and technology that still lacks a full atomic-scale understanding. Much work has been devoted to identifying order parameters able to track the process, from the inception of early nuclei to their maturing to critical size until growth of an extended crystal. We critically assess and compare two powerful distance-based collective variables, an effective entropy derived from liquid state theory and the path variable based on permutation invariant vectors using the Kob-Andersen binary mixture and a combination of enhanced-sampling techniques. Our findings reveal a comparable ability to drive nucleation when a bias potential is applied, and comparable free-energy barriers and structural features. Yet, we also found an imperfect correlation with the committor probability on the barrier top which was bypassed by changing the order parameter definition.
Numerous important phenomena in nature can be characterized as rare events, where a transition between metastable states involves the crossing of free-energy barriers.[1; 2; 3] Atomistic computer simulations of such mechanisms typically require exceedingly-long trajectories so that rare spontaneous fluctuations allow for the emergence of the critical event. Tempering, biasing and path sampling techniques have been developed to accelerate the simulations by many orders of magnitude, thus overcoming the timescale problem.[4; 5; 6; 7; 8] In many cases, the success of those techniques is bound to the correct definition of a collective variable (CV) able to precisely track the transition from one state to the other.[9; 10]
Traditionally, each CV is designed for a specific type of transition. In the case of crystallization, a paradigmatic phenomenon at the focus of large theoretical and computational efforts, the formation of a solid within a liquid is associated with the breaking of translational and orientational symmetries[11; 12], which can be measured, e.g., via the local density[13] or the spherical harmonics analysis proposed by Steinhardt et al.[14; 15] However, such order parameters are by construction related to geometrical properties of the final crystal. Using them as CVs implicitly assumes that the nucleation pathway goes through a monotonic increase of particular geometric quantities. This assumption turns out to be well adapted to simple systems including monodisperse Lennard-Jones[16] and hard spheres[17]. Yet, materials of technological interest can exhibit more complex nucleation pathways[18; 19; 20; 21] which may not be captured by traditional CVs. Therefore, recent efforts have been dedicated to defining novel CVs that are structurally agnostic and do not constrain the nucleation pathway, constructed also by means of machine-learning techniques [22; 23; 24; 25; 26; 27].
Two recent simple and physically-transparent CV formulations tackle the problem of tracking order-disorder transitions based on the set of all interatomic distances: (1) the permutation invariant vector (PIV),[28], combined with the path-CV scheme,[29; 30] and (2) the approximate two-body entropy, combined with enthalpy. [31; 32; 33; 34] While both CV formulations have been successful at exploring phase transitions and sampling free-energy landscapes in a range of different systems, [35; 36; 37; 38; 20; 39] a critical comparison between them is, to our knowledge, still lacking: this is the aim of this work, exploiting binary Lennard-Jones (LJ) crystallization as a non-trivial test-case.
While numerous works have focused on the exploration of the free energy landscape for crystallization in monodisperse LJ systems [40; 41; 42; 43], the case of binary LJ remains only scarcely explored despite being of great interest for fundamental purposes. One of the most studied binary LJ mixtures was first introduced by Kob and Andersen more than twenty years ago[44]. In particular, the glass-forming ability of this binary LJ fluid has been employed to tackle fundamentals of the glass transition itself[45; 46; 47; 48; 49; 50]. Regarding its crystallization counterpart, more than twenty different crystal phases were found when using the Kob-Andersen (KA) interactions[51; 52], and it was observed that a CsCl-like crystal could rapidly form when the system is at the equimolar ratio[52]. To the best of our knowledge, the nucleation mechanism leading to such a crystal at the equimolar ratio remains unexplored.
In this work, we examined the free energy landscape of an equimolar mixture of binary KA particles by using a combination of metadynamics simulations[53; 6] and umbrella sampling[54; 55]. We found that both CVs efficiently trigger crystallization and lead to similar nucleation free energy barriers. However, when analyzing detailed commitment probabilities[56], we show that such CVs are insufficient to discriminate the transition state with high precision. We finally demonstrate that the size of the crystal cluster provides sufficient additional information to complete the set of CVs.
All simulations involve 4394 atoms with the same number of A and B particles interacting through a LJ model. For AA interactions, we define \(\epsilon\) and \(\sigma\) as respectively the energy and distance LJ parameters while for the other interactions, the KA model is the following [44]: \(\epsilon_{AB}/\epsilon=1.5\), \(\epsilon_{BB}/\epsilon=0.5\), \(\sigma_{AB}/\sigma=0.8\), and \(\sigma_{BB}/\sigma=0.88\). The NPT ensemble is employed at \(k_{B}T=0.75\epsilon\) and \(P=0\) so that we have \(T/T_{melt}=0.95\)[46]. LAMMPS (version 4 Jan 2019)[57] patched with PLUMED (version 2.5.1) [58; 59] is used for the molecular dynamics (MD) simulations and Ovito[60] and Pyscal[61] are employed for the structure analysis.
In the case of the PIV-based CV, we constructed a liquid configuration and a CsCl-type crystal, which are then relaxed at the investigated thermodynamic conditions. The obtained configurations are then used as references to construct a path CV named \(PIV.s\) tracking the progression from liquid to crystal:
\[s=\frac{e^{-\lambda D(X,X_{\rm liq})}+2e^{-\lambda D(X,X_{\rm cry})}}{e^{-\lambda D(X,X_{\rm liq})}+e^{-\lambda D(X,X_{\rm cry})}} \tag{1}\]
where \(X\) is the atomic configuration, \(\lambda=2.3/D(X_{\rm liq},X_{\rm cry})\), and the metric \(D\) is the squared Euclidean distance in the space of sorted vectors of distances, filtered via a rational coordination function of formula \((1-(r_{ij}/r_{0})^{6})/(1-(r_{ij}/r_{0})^{12})\), with \(r_{ij}\) the distance between atoms and \(r_{0}=1.4\sigma\). With this definition, the average of \(PIV.s\) is equal to 1.08 and 1.89 for the liquid and crystal structures, respectively.
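A minimal numerical sketch of these ingredients is given below; in practice the PIV and the path variable are evaluated on the fly by PLUMED, so the functions here only illustrate the functional forms (the switching function, the sorted-vector distance and Eq. (1)), and the toy "PIVs" built from random distances are placeholders.

```python
import numpy as np

def rational_switch(r, r0=1.4, n=6, m=12):
    """Rational coordination function applied to each interatomic distance;
    the removable singularity at r = r0 takes the limiting value n/m."""
    x = np.asarray(r, dtype=float) / r0
    with np.errstate(divide="ignore", invalid="ignore"):
        out = (1.0 - x**n) / (1.0 - x**m)
    return np.where(np.isclose(x, 1.0), n / m, out)

def piv_distance(piv_a, piv_b):
    """Squared Euclidean distance between two sorted permutation-invariant vectors."""
    return float(np.sum((np.sort(piv_a) - np.sort(piv_b)) ** 2))

def path_cv(d_liq, d_cry, lam):
    """Progress variable s of Eq. (1): ~1 near the liquid reference, ~2 near the crystal."""
    w_liq, w_cry = np.exp(-lam * d_liq), np.exp(-lam * d_cry)
    return (w_liq + 2.0 * w_cry) / (w_liq + w_cry)

# toy usage: filtered distance vectors standing in for the reference and current PIVs
rng = np.random.default_rng(0)
piv_liq, piv_cry, piv_x = (rational_switch(rng.uniform(0.8, 3.0, 500)) for _ in range(3))
lam = 2.3 / piv_distance(piv_liq, piv_cry)
print(path_cv(piv_distance(piv_x, piv_liq), piv_distance(piv_x, piv_cry), lam))
```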
For the second CV, we employed the effective entropy \(S\), which is approximated from liquid state theory:
\[S=-2\pi\rho k_{B}\int_{0}^{\infty}[g(r)\ln g(r)-g(r)+1]r^{2}dr \tag{2}\]
where \(g(r)\) is the pair-distribution function computed with a cut-off at \(2.5\,\sigma\) and a broadening parameter equal to \(0.05\,\sigma\), \(k_{B}\) is the Boltzmann constant and \(\rho\) the density of the system. We note that the employed implementation of the effective entropy does not distinguish between different types of atoms. Under this formulation, the average of \(S\) is equal to \(-1.85\) and \(-9.49\) for the liquid and crystal structures, respectively. More details on both methods can be found in the original papers[29; 31], while Plumed input files can be downloaded from Plumed Nest (link available upon acceptance of the article). In all simulations, the system volume is constrained not to exceed the equilibrium liquid volume by more than \(5\%\), to avoid sampling structures with voids. This is achieved by imposing a semi-parabolic wall on the volume with an elastic constant equal to \(10^{3}\epsilon/\sigma^{3}\).
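For completeness, a sketch of the numerical evaluation of Eq. (2) from a tabulated \(g(r)\) is shown below; computing \(g(r)\) itself (with the Gaussian broadening of \(0.05\,\sigma\)) is assumed to be done upstream, the trapezoidal integration is written out explicitly, and the toy \(g(r)\) is only a placeholder.

```python
import numpy as np

def pair_entropy(r, g, rho, r_cut=2.5):
    """Approximate two-body excess entropy of Eq. (2), in units of k_B per particle."""
    mask = (r > 0) & (r <= r_cut)
    r, g = r[mask], g[mask]
    # the bracket g*ln(g) - g + 1 tends to 1 as g -> 0, so guard the logarithm
    bracket = np.where(g > 1e-12, g * np.log(np.clip(g, 1e-12, None)) - g + 1.0, 1.0)
    integrand = bracket * r**2
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))  # trapezoid rule
    return -2.0 * np.pi * rho * integral

# toy usage with a crude liquid-like pair-distribution function
r = np.linspace(0.01, 2.5, 500)
g = 1.0 + np.exp(-(r - 1.1) ** 2 / 0.02) - np.exp(-(r - 0.8) ** 2 / 0.01)
print(pair_entropy(r, g, rho=1.2))
```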
In the first comparison, for each of the two CVs we performed three independent metadynamics simulations with purposely short duration thus allowing for only one barrier crossing event. The objective here was not to reach an accurate measurement of the free energy landscape but only to rapidly find a first reactive trajectory and critical nucleus. The height of the Gaussian kernels is equal to \(0.05\,\epsilon=0.667k_{B}T\) in both cases. The widths are chosen as twice the standard deviation of the CVs distribution in the liquid regime. From Fig. 1, both sampling methods lead to the nucleation event with roughly the same time scales and maximum bias height. In addition, Fig. 1(g) shows that both methods do not lead to the emergence of several crystalline clusters at the same time but to a single, roughly spherical cluster following an isotropic growth. This is a remarkable result for PIV and entropy CVs: they lead to localized nucleation events despite being global order parameters. At this stage, it remains difficult to observe any difference between the two approaches.
Commitment probability analysis (CPA) consists of determining the probability of forming the crystal before the liquid when starting from a specific configuration, by generating a set of unbiased MD trajectories with different initial velocities drawn from the Maxwell-Boltzmann distribution.[3] We employed this technique in two stages. In the first stage, atomic configurations on the transition pathway obtained with metadynamics are used to initialize MD trajectories of relatively long duration (\(>5\times 10^{4}t_{0}\)). Such simulations can lead to crystal growth or melting, but can also display a cluster whose size persists for a sizable time. In the second stage, we therefore use the latter configurations to identify a critical nucleus, defined as a configuration leading to the same number of crystallization and melting trajectories from 10 independent sets of velocities.
The CPA trajectories collected from this second stage are finally used to perform umbrella sampling calculations, which allow for a relatively simple control of the convergence of the free energy landscape. By initializing with unbiased reactive trajectories, we sample a realistic crystallization pathway and reduce the chances of observing hysteresis.
Although the metadynamics simulations sample a large region of \(PIV.s\) and \(S\), the nucleation barrier is located in a much narrower region of phase space, which we investigate using umbrella sampling calculations. We used 50 windows with one-dimensional biases applied on \(PIV.s\) and \(S\), respectively. To validate the convergence of the free energy, we tested two different values of the harmonic restraint for each CV, \(k=[5\times 10^{5};10^{6}]\) and \(k=[2\times 10^{4};5\times 10^{4}]\) for \(PIV.s\) and \(S\), respectively. We applied the weighted histogram analysis method [55] to the last half and the last quarter of the total simulation time of each window (\(4\times 10^{5}t_{0}\)) in order to estimate the error bar on the free energy. We therefore obtain in Fig. 2(a,b) four free-energy curves for each CV, which appear similar, showing that the free energy calculations are well converged, with a standard deviation of the barrier value equal to \(0.37\) and \(0.55\,k_{B}T\), respectively. At this stage, the two CVs exhibit the same free energy barrier, equal to \(30\,k_{B}T\).
After having compared both methods employing metadynamics and umbrella sampling, we confronted \(PIV.s\) and \(S\) in terms of commitment probability \(P_{crys}\). For that purpose, configurations obtained with umbrella sampling are used to initialize CPA. Based on results from SI. B, we used 100 independent sets of velocities to ensure convergence of the commitment probability. Fig. 2(c,d) shows \(P_{crys}\) as a function of the CVs. The black lines correspond to a hyperbolic tangent fit from which we extracted a critical value indicated as a dotted line in Fig. 2(a,b). In both cases, the obtained critical value only slightly differs from the maximum of the free energy curve. Furthermore, in Fig. 2(e,f), we restricted CPA to configurations that are located near the barrier top. In both cases, it appears that instead of a peaked distribution around \(P_{crys}=0.5\), an indication of an optimal reaction coordinate[3], we obtain distributions that have significant values in the whole range from zero to one. This demonstrates that both \(PIV.s\) and \(S\) are sub-optimal CVs that cannot precisely discriminate transition states from structures committed to the crystal or to the liquid.
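The critical CV value mentioned above is extracted from a hyperbolic tangent fit of \(P_{crys}\) versus the CV; a minimal sketch of such a fit is given below, with illustrative numbers, and with a specific functional form that is our assumption rather than a prescription.

```python
import numpy as np
from scipy.optimize import curve_fit

def tanh_profile(s, s_c, w):
    """P_crys(s) = 0.5*(1 + tanh((s - s_c)/w)); s_c plays the role of the critical CV value."""
    return 0.5 * (1.0 + np.tanh((s - s_c) / w))

# cv_vals, p_crys: CV value and committor estimate of each sampled configuration
cv_vals = np.array([-11.0, -10.5, -10.0, -9.5, -9.0])   # illustrative numbers only
p_crys = np.array([0.05, 0.20, 0.50, 0.80, 0.95])
popt, _ = curve_fit(tanh_profile, cv_vals, p_crys, p0=[cv_vals.mean(), 1.0])
print("critical CV value:", popt[0])
```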
We further investigated the issue of the quantitative comparison of free-energy barriers estimated from different CVs. In SI. C we report calculations using a second definition of \(PIV.s\) based on a shorter-range switching function (i.e., including poorer information about atomic environments compared to the original one). The free energy barrier estimated from US with the latter lower-quality CV differs by a significant amount (\(7\,k_{B}T\), representing 25%) compared to what was obtained with both the original \(PIV.s\) and \(S\), with the commitment distribution still exhibiting a sub-optimal behavior. This result points to the relevance of developing algorithms combining CV-optimization and sampling acceleration in order to obtain accurate barriers.[25; 62; 26]
Figure 1: (a-d) Temporal evolution of the collective variables during metadynamics simulations. In Fig. (a,c) and in Fig. (b,d), the biasing is made using the PIV-based and the entropy-based collective variables, respectively. (e,f) Corresponding temporal evolution of the metadynamics instantaneous bias that results from successive Gaussian depositions. Each color corresponds to an independent simulation. (g) Typical images of the observed nucleation event along metadynamics trajectories using the two variables. Color coding is based on the value of the averaged Steinhardt parameters[14; 15] taken at sixth order \(\overline{q_{6}}\), and particles with \(\overline{q_{6}}\) smaller than 0.25 are shown with a smaller size [See SI. A for more information].
Figure 2: (a,b) Free-energy barriers obtained with umbrella sampling using (a) \(PIV.s\) and (b) \(S\) as CV. The dotted lines indicate the transition-state CV values as obtained from CPA. (c,d) Commitment probability for the two CVs (c) PIV.s, (d) \(S\): the dotted lines indicate the critical value deduced from a fit of the data set. (e,f) Commitment distribution extracted from all of the obtained transition-state configurations with (e) PIV.\(s\) (300 samples) and (f) \(S\) (300 samples).
To shed light on the issue related to the non-peaked distribution of CPA, we inspected the size of the largest crystalline cluster, \(N_{crys}\), by computing the Steinhardt bond-orientational order parameter averaged over the first neighbor shell and defining ordered atoms as those having \(q_{6}\) larger than 0.25[14; 15]. In order to identify the shortcomings in the employed CVs, we focused on structures that were selected in Fig. 2(e,f) and plotted their commitment probability \(P_{crys}\) as a function of \(N_{crys}\) [See Fig. 3(a,b)]. When filtered at critical values of \(PIV.s\) or \(S\), \(P_{crys}\) exhibits a clear correlation with \(N_{crys}\), indicating that the combination of \(N_{crys}\) along with \(PIV.s\) or \(S\) might constitute an improved CV for the crystallization pathway. Finally, we computed the critical values of \(N_{crys}\) using the hyperbolic tangent fit, obtaining 316 and 321 atoms respectively for the \(S\)-based and \(PIV.s\)-based datasets. As shown in Fig. 3(c,d), the \(P_{crys}\) distributions corresponding to the critical values of simultaneously \(N_{crys}\) and either \(PIV.s\) or \(S\), albeit obtained with fewer points than in Fig. 2(e,f) (74 for \(S\) and 79 for \(PIV.s\)), are clearly peaked around 0.5 in both cases. This latter result confirms that the ability of both \(PIV.s\) and \(S\) to resolve transition-state structures is improved by combining them with \(N_{crys}\).
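A minimal sketch of how \(N_{crys}\) can be obtained from per-atom averaged \(q_{6}\) values is given below; the neighbour cutoff `r_cut` and the clustering strategy are illustrative assumptions, whereas the analysis above relies on the first-neighbour shell.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def largest_crystal_cluster(positions, q6bar, box, threshold=0.25, r_cut=1.4):
    """Size of the largest cluster of ordered atoms (q6bar > threshold).

    positions : (N, 3) coordinates, box : periodic box lengths,
    r_cut : neighbour cutoff used to connect ordered atoms (assumed value)."""
    ordered = np.flatnonzero(q6bar > threshold)
    if ordered.size == 0:
        return 0
    tree = cKDTree(positions[ordered], boxsize=box)
    pairs = tree.query_pairs(r_cut, output_type='ndarray')
    n = ordered.size
    adj = csr_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
    _, labels = connected_components(adj, directed=False)
    return int(np.bincount(labels).max())
```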
We note that, based on these results, it is natural to ask whether \(N_{crys}\) alone provides a good committor distribution. Results shown in SI. D demonstrate that, when taken alone, \(N_{crys}\) behaves similarly to both \(S\) and \(PIV.s\). Indeed, although \(N_{crys}\) positively correlates with the committor probability, the distribution at the critical value of \(N_{crys}\) does not lead to a narrow distribution peaked around 0.5. Further analysis of potential correlations between \(N_{crys}\) and the investigated CVs can be found in SI. E. As such, we confirm the need to combine \(S\) or \(PIV.s\) with \(N_{crys}\).
Finally, this study comparing the use of \(PIV\) and \(S\) as order parameters also gives insight into the crystallization mechanisms in the Kob-Andersen equimolar binary Lennard-Jones system. Indeed, all of the configurations with a commitment probability between 0.4 and 0.6 are collected and characterized in terms of atomic structure [see Table 1 and Fig. 3(e,f)]. First, both methods lead to similar results. In particular, the size of the nucleus is around 335 atoms, which corresponds to a radius of around 3 A. We note that although the critical nucleus does not extend across the periodic boundaries, our results may still suffer from finite-size effects since we have 4394 particles and about 340 in the critical nucleus. Regarding the binary ratio, the critical nucleus almost respects that of the equimolar mixture, which suggests that chemical ordering is directly reached during the nucleation event. The small value of the asphericity demonstrates that the nucleus is mostly spherical [See Fig. 3(e,f)]. One final structural measurement for the obtained critical clusters concerns the chemical ordering, since the Kob-Andersen mixture is supposed to crystallize with the CsCl chemical ordering. For that purpose, we measured \(N_{SC}^{A}\) (resp. \(N_{SC}^{B}\)), the number of single cubic atoms when isolating atoms of type A (resp. B), using the Polyhedral template matching algorithm as implemented in Ovito. Results in Tab. 1 show that there is almost the same number of A and B single cubic atoms and that most of the crystalline structure within the critical cluster is made of A and B single cubic atoms, thus confirming that the obtained critical clusters follow the CsCl chemical ordering.
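For completeness, a small sketch of the asphericity measurement is given below; the normalisation by the trace of the gyration tensor is one common convention and an assumption on our side, since several definitions circulate in the literature.

```python
import numpy as np

def asphericity(coords):
    """Normalised asphericity from the gyration-tensor eigenvalues:
    b = (l1 - (l2 + l3)/2) / (l1 + l2 + l3) with l1 >= l2 >= l3 (0 for a sphere)."""
    r = coords - coords.mean(axis=0)
    gyr = r.T @ r / len(coords)
    l3, l2, l1 = np.sort(np.linalg.eigvalsh(gyr))
    return (l1 - 0.5 * (l2 + l3)) / (l1 + l2 + l3)

# toy usage: points drawn uniformly in a ball give a value close to 0
rng = np.random.default_rng(0)
pts = rng.normal(size=(2000, 3))
pts = pts / np.linalg.norm(pts, axis=1, keepdims=True) * rng.random((2000, 1)) ** (1 / 3)
print(round(asphericity(pts), 3))
```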
A large body of literature indicates that crystal nucleation is a complex process, with several features that are system-independent (captured to some extent by classical nucleation theory) and others that are specific to the materials and conditions. Our results carry new insight into this old problem and allow us to draw several conclusions.
First, the two CVs under examination (the PIV-based path coordinate and the entropy-based coordinate), albeit different in formulation, have a comparable performance on the binary Kob-Andersen system.
Figure 3: (a,b) Commitment probability as a function of \(N_{crys}\) at fixed critical values of (a) \(PIV.s\) and (b) \(S\). The black lines indicate a hyperbolic tangent fit of the whole data set. (c,d) Commitment probability distribution obtained with (c) \(PIV.s\) and (d) \(S\) when also constraining the value of \(N_{crys}\in[305:345]\). (e,f) Typical aspect of the critical nucleation cluster, defined as having \(P_{crys}=0.5\). Color coding is based on the value of \(\overline{q_{6}}\) and particles with \(\overline{q_{6}}\) smaller than 0.25 are shown with a smaller size.
In particular, both CVs lead to statistically converging free-energy landscapes via umbrella sampling. Yet, because the commitment distribution is not centered around 0.5 at the critical barrier, the obtained value for the barrier is likely misestimated when compared to a more optimal reaction coordinate, so that an accurate nucleation rate cannot be deduced. Meanwhile, they allow one to accelerate via metadynamics the formation and growth of crystal nuclei from the liquid. This result is non-trivial to achieve in generic systems, as testified by the difficult cases of ice (tackled with the PIV-based coordinates in Ref. [29], and combining the entropy-based coordinate with an ad-hoc structural fingerprint in Ref. [63]) or CO\({}_{2}\) and methane hydrates nucleation[64; 65].
Detailed inspection of the kinetic fate of atomic configurations found at the barrier top (the committor probability histogram) indicates however that the two coordinates are sub-optimal, and can be improved by including additional degrees of freedom such as those encoded in Steinhardt-based nucleus-size indicators. This result is, again, non-trivial since the latter class of order parameters, although well-adapted in the simple case of the single-component Lennard-Jones system[41; 42; 43; 66], can be sub-optimal for systems undergoing a complex non-classical nucleation pathway traversing polymorphic and/or disordered structures.
The results of this study represent a manifestation of the well-known "chicken and egg" paradox in the field of rare-events sampling and free-energy calculations: optimal CVs are necessary to accelerate the sampling of a transition in order to explore the most relevant mechanisms, while, at the same time, a detailed knowledge of the most relevant mechanisms is necessary to design beforehand optimal CVs.
A broad consensus identifies the optimal CV for a transition between two metastable states with the committor function: unfortunately, information about committor values can be obtained in practical cases only in a very small subset of configuration space, for instance in the vicinity of a barrier top explored with metadynamics, transition path sampling, or other techniques. A CV optimized to represent the committor in such small configurational subset [10], when used in combination with biased sampling techniques like metadynamics or umbrella sampling is likely to drive the system towards sub-optimal transition mechanisms and hysteresis effects, because such CV ignores the behavior of the committor in the entirety of configurational space.
For the same reason, computing the committor histogram for CVs in a small subset of configurational space, as done in this work and, customarily, in many recent works, is a useful test that, unfortunately, even when passed offers no guarantees about the optimality of the same CVs in other regions of configuration space. Only estimating the committor for all possible configurations, an impossible task, would yield an optimal CV that guarantees optimal biased dynamics. This is the main reason why biased dynamics, albeit powerful, always needs to be used and interpreted with care.
Considering the many challenges posed by the investigation of rare events, we propose the approach in the present work as a good compromise to bridge the communities exploiting transition path sampling and CV-biasing techniques, providing at the same time important information in the context of the development of machine-learning CV optimization algorithms.
## Supplementary Information
Supplementary information is split into five sections: Crystal structure analysis, Convergence analysis of CPA, Alternative expression of the PIV-based CV, CPA analysis for \(N_{crys}\) alone, and Correlation between \(N_{crys}\) and the other CVs.
## Acknowledgement
JL acknowledges financial support of the Fonds de la Recherche Scientifique - FNRS. Computational resources have been provided by the Consortium des Equipements de Calcul Intensif (CECI) and by the Federation Lyonnaise de Modelisation et Sciences Numeriques (FLMSN). JL thanks James F. Lutsko and Pablo P. Piaggi for fruitful discussions. JL is also grateful to Sarath Menon for his help in the use of Pyscal and to Daniel Forster for helping with the computation of asphericity.
|
2302.10538 | Lasserre Hierarchy for Graph Isomorphism and Homomorphism
Indistinguishability | We show that feasibility of the $t^\text{th}$ level of the Lasserre
semidefinite programming hierarchy for graph isomorphism can be expressed as a
homomorphism indistinguishability relation. In other words, we define a class
$\mathcal{L}_t$ of graphs such that graphs $G$ and $H$ are not distinguished by
the $t^\text{th}$ level of the Lasserre hierarchy if and only if they admit the
same number of homomorphisms from any graph in $\mathcal{L}_t$. By analysing
the treewidth of graphs in $\mathcal{L}_t$, we prove that the $3t^\text{th}$
level of Sherali--Adams linear programming hierarchy is as strong as the
$t^\text{th}$ level of Lasserre. Moreover, we show that this is best possible
in the sense that $3t$ cannot be lowered to $3t-1$ for any $t$. The same result
holds for the Lasserre hierarchy with non-negativity constraints, which we
similarly characterise in terms of homomorphism indistinguishability over a
family $\mathcal{L}_t^+$ of graphs. Additionally, we give characterisations of
level-$t$ Lasserre with non-negativity constraints in terms of logical
equivalence and via a graph colouring algorithm akin to the Weisfeiler--Leman
algorithm. This provides a polynomial time algorithm for determining if two
given graphs are distinguished by the $t^\text{th}$ level of the Lasserre
hierarchy with non-negativity constraints. | David E. Roberson, Tim Seppelt | 2023-02-21T09:12:19Z | http://arxiv.org/abs/2302.10538v3 | # Lasserre Hierarchy for Graph Isomorphism and Homomorphism Indistinguishability
###### Abstract
We show that feasibility of the \(t^{\text{th}}\) level of the Lasserre semidefinite programming hierarchy for graph isomorphism can be expressed as a homomorphism indistinguishability relation. In other words, we define a class \(\mathcal{L}_{t}\) of graphs such that graphs \(G\) and \(H\) are not distinguished by the \(t^{\text{th}}\) level of the Lasserre hierarchy if and only if they admit the same number of homomorphisms from any graph in \(\mathcal{L}_{t}\). By analysing the treewidth of graphs in \(\mathcal{L}_{t}\) we prove that the \(3t^{\text{th}}\) level of Sherali-Adams linear programming hierarchy is as strong as the \(t^{\text{th}}\) level of Lasserre. Moreover, we show that this is best possible in the sense that \(3t\) cannot be lowered to \(3t-1\) for any \(t\). The same result holds for the Lasserre hierarchy with non-negativity constraints, which we similarly characterise in terms of homomorphism indistinguishability over a family \(\mathcal{L}_{t}^{+}\) of graphs. Additionally, we give characterisations of level-\(t\) Lasserre with non-negativity constraints in terms of logical equivalence and via a graph colouring algorithm akin to the Weisfeiler-Leman algorithm. This provides a polynomial time algorithm for determining if two given graphs are distinguished by the \(t^{\text{th}}\) level of the Lasserre hierarchy with non-negativity constraints.
Lasserre hierarchy, homomorphism indistinguishability, Sherali-Adams hierarchy, treewidth, semidefinite programming, linear programming, graph isomorphism

## 1 Introduction
the Sherali-Adams1 linear programming hierarchy [34], which is closely related to the Weisfeiler-Leman algorithm [38, 3, 18], the arguably most relevant combinatorial method for distinguishing graphs. It was shown in [4] that there exists a constant \(c\) such that, for all graphs \(G\) and \(H\), if the level-\(ct\) Sherali-Adams relaxation of \(\mathrm{ISO}(G,H)\) is feasible then so is the level-\(t\) Lasserre relaxation, which in turn implies that the level-\(t\) Sherali-Adams relaxation is feasible, cf. [23].
Footnote 1: Following [4], when referring to the Sherali–Adams relaxation of \(\mathrm{ISO}(G,H)\) in this article, we do not refer to the original relaxation [34] but to its variant introduced by [3, 18], which corresponds more directly to other graph properties, cf. Theorem 8 and [20].
Another set of expressive equivalence relations comparing graphs is given by homomorphism indistinguishability, a notion originating from the study of graph substructure counts. Two graphs \(G\) and \(H\) are _homomorphism indistinguishable_ over a family of graphs \(\mathcal{F}\), in symbols \(G\equiv_{\mathcal{F}}H\), if the number of homomorphisms from \(F\) to \(G\) is equal to the number of homomorphisms from \(F\) to \(H\) for every graph \(F\in\mathcal{F}\). The study of this notion began in 1967, when Lovasz [24] showed that two graphs \(G\) and \(H\) are isomorphic if and only if they are homomorphism indistinguishable over all graphs. In recent years, many prominent equivalence relations comparing graphs were characterised as homomorphism indistinguishability relations over restricted graph classes [13, 14, 15, 10, 25, 20, 2, 28, 1, 31, 9, 30]. For example, a folklore result asserts that two graphs have cospectral adjacency matrices iff they are homomorphism indistinguishable over all cycle graphs, cf. [20]. Two graphs are quantum isomorphic iff they are homomorphism indistinguishable over all planar graphs [25]. Furthermore, feasibility of the level-\(t\) Sherali-Adams relaxation of \(\mathrm{ISO}(G,H)\) has been characterised as homomorphism indistinguishability over all graphs of treewidth at most \(t-1\)[3, 18, 14]. In this way, notions from logic [14, 15, 30], category theory [10, 28, 1], algebraic graph theory [13, 20], and quantum groups [25] have been related to homomorphism indistinguishability.
### Contributions
Although feasibility of the level-\(t\) Lasserre relaxation of \(\mathrm{ISO}(G,H)\) was sandwiched between feasibility of the level-\(ct\) and level-\(t\) Sherali-Adams relaxation in [4], the constant \(c\) remained unknown. In fact, this \(c\) is not explicit and depends on the implementation details of an algorithm developed in that paper. Our main result asserts that \(c\) can be taken to be three and that this constant is best possible.
**Theorem 1**.: For two graphs \(G\) and \(H\) and every \(t\geq 1\), the following implications hold:
\[G\simeq_{3t}^{\mathrm{SA}}H\implies G\simeq_{t}^{\mathrm{L}}H\implies G \simeq_{t}^{\mathrm{SA}}H\]
Furthermore, for every \(t\geq 1\), there exist graphs \(G\) and \(H\) such that \(G\simeq_{3t-1}^{\mathrm{SA}}H\) and \(G\not\simeq_{t}^{\mathrm{L}}H\). Here, \(G\simeq_{t}^{\mathrm{L}}H\) and \(G\simeq_{t}^{\mathrm{SA}}H\) denote that the level-\(t\) Lasserre relaxation and respectively the level-\(t\) Sherali-Adams relaxation of \(\mathrm{ISO}(G,H)\) are feasible.
Theorem 1 is proven using the framework of homomorphism indistinguishability. In previous works [13, 27, 20, 30], the feasibility of various systems of equations associated to graphs like the Sherali-Adams relaxation of \(\mathrm{ISO}(G,H)\) was characterised in terms of homomorphism indistinguishability over certain graph classes. We continue this line of research by characterising the feasibility of the level-\(t\) Lasserre relaxation of \(\mathrm{ISO}(G,H)\) by homomorphism indistinguishability of \(G\) and \(H\) over the novel class of graphs \(\mathcal{L}_{t}\) introduced in Definition 22.
**Theorem 2**.: _For every integer \(t\geq 1\), there is a minor-closed graph class \(\mathcal{L}_{t}\) of graphs of treewidth at most \(3t-1\) such that for all graphs \(G\) and \(H\) it holds that \(G\simeq_{t}^{L}H\) if and only if \(G\equiv_{\mathcal{L}_{t}}H\)._
The bound on the treewidth of graphs in \(\mathcal{L}_{t}\) in Theorem 2 yields the upper bound in Theorem 1 given the result of [3, 18, 4, 14] that two graphs \(G\) and \(H\) satisfy \(G\simeq_{t}^{\mathrm{SA}}H\) if and only if they are homomorphism indistinguishable over the class \(\mathcal{TW}_{t-1}\) of graphs of treewidth at most \(t-1\). To our knowledge, Theorem 1 is the first result which tightly relates equivalence relations on graphs by comparing the graph classes which characterise them in terms of homomorphism indistinguishability.
Our techniques extend to a stronger version of the Lasserre hierarchy which imposes non-negativity constraints on all variables. Denoting feasibility of the level-\(t\) Lasserre relaxation of \(\mathrm{ISO}(G,H)\) with non-negativity constraints by \(G\simeq_{t}^{\mathrm{L}^{+}}H\), we characterise \(\simeq_{t}^{\mathrm{L}^{+}}\) in terms of homomorphism indistinguishability over the graph class \(\mathcal{L}_{t}^{+}\), defined in Definition 22 as a superclass of \(\mathcal{L}_{t}\). This is in line with previous work in [13, 20], where the feasibility of the level-\(t\) Sherali-Adams relaxation of \(\mathrm{ISO}(G,H)\) without non-negativity constraints was characterised as homomorphism indistinguishability over the class \(\mathcal{PW}_{t-1}\) of graphs of pathwidth at most \(t-1\).
**Theorem 3**.: For every integer \(t\geq 1\), there is a minor-closed graph class \(\mathcal{L}_{t}^{+}\) of graphs of treewidth at most \(3t-1\) such that for all graphs \(G\) and \(H\) it holds that \(G\simeq_{t}^{\mathrm{L}^{+}}H\) if and only if \(G\equiv_{\mathcal{L}_{t}^{+}}H\).
Given the aforementioned correspondence between the Sherali-Adams relaxation with and without non-negativity constraints and homomorphism indistinguishability over graphs of bounded treewidth and pathwidth, we conduct a detailed study of the relationship between the classes of graphs of bounded treewidth and pathwidth and the classes \(\mathcal{L}_{t}\) and \(\mathcal{L}_{t}^{+}\). The results, depicted in Figure 1, yield independent proofs of the known relations between feasibility of the Lasserre relaxation with and without non-negativity constraints and the Sherali-Adams relaxation with and without non-negativity constraints [5, 4, 20] using the framework of homomorphism indistinguishability.
In the course of proving Theorems 2 and 3, we derive further equivalent characterisations of \(\simeq_{t}^{L}\) and \(\simeq_{t}^{\mathrm{L}^{+}}\). These characterisations, which are mostly of a linear algebraic nature, ultimately yield a characterisation of \(\simeq_{t}^{\mathrm{L}^{+}}\) in terms of a fragment of first-order logic with counting quantifiers and indistinguishability under a polynomial time algorithm akin to the Weisfeiler-Leman algorithm. In this way, we obtain the following algorithmic result. It implies that _exact_ feasibility of the Lasserre semidefinite program with non-negativity constraints can be tested in polynomial time. In general, only the _approximate_ feasibility of semidefinite programs can be decided efficiently, e.g. using the ellipsoid method [21, 4].
Figure 1: Relationship between \(\mathcal{L}_{t}\), \(\mathcal{L}_{t}^{+}\), the classes of graphs of bounded treewidth, bounded pathwidth, and the class of outerplanar graphs. An arrow \(\mathcal{A}\to\mathcal{B}\) indicates that \(\mathcal{A}\subseteq\mathcal{B}\) and thus that \(G\equiv_{\mathcal{B}}H\) implies \(G\equiv_{\mathcal{A}}H\) for all graphs \(G\) and \(H\). For formal statements, see Sections 4.1 and 4.2.
Let \(t\geq 1\). Given graphs \(G\) and \(H\), it can be decided in polynomial time whether \(G\simeq_{t}^{\mathrm{L}^{+}}H\).
Finally, for \(t=1\), we show that \(\mathcal{L}_{1}\) and \(\mathcal{L}_{1}^{+}\) are respectively equal to the class \(\mathcal{OP}\) of outerplanar graphs and to the class of graphs of treewidth at most \(2\). The following theorem parallels a result of [25] asserting that two graphs \(G\) and \(H\) are indistinguishable under the \(2\)-WL algorithm iff \(G\simeq_{1}^{\mathrm{L}^{+}}H\).
Two graphs \(G\) and \(H\) satisfy \(G\simeq_{1}^{\mathrm{L}}H\) iff \(G\equiv_{\mathcal{OP}}H\).
### Techniques
In the first part of the paper (Section 3), linear algebraic tools developed in [26, 25] are generalised to yield reformulations of the entire Lasserre hierarchy with and without non-negativity constraints. Section 4 is concerned with the graph theoretic properties of the graph classes \(\mathcal{L}_{t}\) and \(\mathcal{L}_{t}^{+}\). For understanding the homomorphism indistinguishability relations over these graph classes, the framework of bilabelled graphs and their homomorphism tensors developed in [27, 20] is used. Despite this, our approach is different from [20, 30] in the sense that here the graph classes \(\mathcal{L}_{t}\) and \(\mathcal{L}_{t}^{+}\) are inferred from given systems of equations, namely the Lasserre relaxation, rather than a system of equations being built for a given graph class.
## 2 Preliminaries
### Linear Algebra
Let \(\mathcal{S}_{+}\) denote the family of real _positive semidefinite matrices_, i.e. of matrices \(M\) of the form \(M_{ij}=v_{i}^{T}v_{j}\) for vectors \(v_{1},\ldots,v_{n}\), the _Gram vectors_ of \(M\). Write \(M\succeq 0\) iff \(M\in\mathcal{S}_{+}\). Let \(\mathcal{DNN}\) denote the family of _doubly non-negative matrices_, i.e. of entry-wise non-negative positive semidefinite matrices.
A linear map \(\Phi\colon\mathbb{C}^{m\times m}\to\mathbb{C}^{n\times n}\) is _trace-preserving_ if \(\operatorname{tr}\Phi(X)=\operatorname{tr}X\) for all \(X\in\mathbb{C}^{m\times m}\), _unital_ if \(\Phi(\operatorname{id}_{m})=\operatorname{id}_{n}\), \(\mathcal{K}\)_-preserving_ for a family of matrices \(\mathcal{K}\) if \(\Phi(K)\in\mathcal{K}\) for all \(K\in\mathcal{K}\), _positive_ if it is \(\mathcal{S}_{+}\)-preserving, i.e. if \(\Phi(X)\) is positive semidefinite for all positive semidefinite \(X\), _completely positive_ if \(\operatorname{id}_{r}\otimes\Phi\) is positive for all \(r\in\mathbb{N}\). The _Choi matrix_ of \(\Phi\) is \(C_{\Phi}=\sum_{i,j=1}^{m}E_{ij}\otimes\Phi(E_{ij})\in\mathbb{C}^{mn\times mn}\).
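As a small numerical illustration of these definitions (not part of the proofs), the sketch below assembles the Choi matrix of a linear map on \(2\times 2\) matrices and tests complete positivity via Choi's criterion, i.e. positive semidefiniteness of \(C_{\Phi}\); the toy map is an arbitrary example chosen by us.

```python
import numpy as np

def choi_matrix(phi, m):
    """Choi matrix C_Phi = sum_{ij} E_ij (tensor) Phi(E_ij) of a map phi on m x m matrices."""
    blocks = []
    for i in range(m):
        row = []
        for j in range(m):
            E = np.zeros((m, m))
            E[i, j] = 1.0
            row.append(np.asarray(phi(E), dtype=float))
        blocks.append(row)
    return np.block(blocks)

# Choi's criterion: phi is completely positive iff C_Phi is positive semidefinite.
phi = lambda X: X.T + np.trace(X) * np.eye(2)     # toy map: transpose plus trace * identity
C = choi_matrix(phi, 2)
print(np.all(np.linalg.eigvalsh((C + C.T) / 2) >= -1e-9))   # True for this map
```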
A _tensor_ is an element \(A\in\mathbb{C}^{n^{t}\times n^{t}}\) for some \(n,t\in\mathbb{N}\). The symmetric group \(\mathfrak{S}_{2t}\) acts on \(\mathbb{C}^{n^{t}\times n^{t}}\) by permuting the coordinates, i.e. for all \(\boldsymbol{u},\boldsymbol{v}\in[n]^{t}\), \(A^{\sigma}(\boldsymbol{u},\boldsymbol{v})\coloneqq A(\boldsymbol{x},\boldsymbol {y})\) where \(\boldsymbol{x}_{i}\coloneqq(\boldsymbol{u}\boldsymbol{v})_{\sigma^{-1}(i)}\) and \(\boldsymbol{y}_{j-t}\coloneqq(\boldsymbol{u}\boldsymbol{v})_{\sigma^{-1}(j)}\) for all \(1\leq i\leq t<j\leq 2t\).
For two vectors \(v,w\in\mathbb{C}^{n}\), write \(v\odot w\) for their _Schur product_, i.e. \((v\odot w)(i)\coloneqq v(i)w(i)\) for all \(i\in[n]\).
### Bilabelled Graphs and Homomorphism Tensors
All graphs in this article are undirected, finite, and without multiple edges. A graph is _simple_ if it does not contain any loops. A _homomorphism_\(h\colon F\to G\) from a graph \(F\) to a graph \(G\) is a map \(V(F)\to V(G)\) such that for all \(uv\in E(F)\) it holds that \(h(u)h(v)\in E(G)\). Note that this implies that any vertex in \(F\) carrying a loop must be mapped to a vertex carrying a loop in \(G\). Write \(\hom(F,G)\) for the number of homomorphisms from \(F\) to \(G\). For a family of graphs \(\mathcal{F}\) and graphs \(G\) and \(H\) write \(G\equiv_{\mathcal{F}}H\) if \(G\) and \(H\) are _homomorphism indistinguishable over \(\mathcal{F}\)_, i.e. \(\hom(F,G)=\hom(F,H)\) for all \(F\in\mathcal{F}\). Since the graphs \(G\) and
\(H\) into which homomorphisms are counted, are throughout assumed to be simple, looped graphs in \(\mathcal{F}\) can generally be disregarded as they do not admit any homomorphisms into simple graphs.
We recall the following definitions from [25, 20]. Let \(\ell\geq 1\). An \((\ell,\ell)\)_-bilabelled graph_ is a tuple \(\mathbf{F}=(F,\mathbf{u},\mathbf{v})\) where \(F\) is a graph and \(\mathbf{u},\mathbf{v}\in V(F)^{\ell}\). The \(\mathbf{u}\) are the _in-labelled vertices_ of \(\mathbf{F}\) while the \(\mathbf{v}\) are the _out-labelled vertices_ of \(\mathbf{F}\). Given a graph \(G\), the _homomorphism tensor_ of \(\mathbf{F}\) for \(G\) is \(\mathbf{F}_{G}\in\mathbb{C}^{V(G)^{\ell}\times V(G)^{\ell}}\) whose \((\mathbf{x},\mathbf{y})\)-th entry is the number of homomorphisms \(h\colon F\to G\) such that \(h(\mathbf{u}_{i})=\mathbf{x}_{i}\) and \(h(\mathbf{v}_{i})=\mathbf{y}_{i}\) for all \(i\in[\ell]\).
For an \((\ell,\ell)\)-bilabelled graph \(\mathbf{F}=(F,\mathbf{u},\mathbf{v})\), write \(\operatorname{so}\mathbf{F}\coloneqq F\) for the underlying unlabelled graph of \(\mathbf{F}\). Write \(\operatorname{tr}\mathbf{F}\) for the unlabelled graph underlying the graph obtained from \(\mathbf{F}\) by identifying \(\mathbf{u}_{i}\) with \(\mathbf{v}_{i}\) for all \(i\in[\ell]\). For \(\sigma\in\mathfrak{S}_{2t}\), write \(\mathbf{F}^{\sigma}\coloneqq(F,\mathbf{x},\mathbf{y})\) where \(\mathbf{x}_{i}\coloneqq(\mathbf{u}\mathbf{v})_{\sigma(i)}\) and \(\mathbf{y}_{j-t}\coloneqq(\mathbf{u}\mathbf{v})_{\sigma(j)}\) for all \(1\leq i\leq t<j\leq 2t\), i.e. \(\mathbf{F}^{\sigma}\) is obtained from \(\mathbf{F}\) by permuting the labels according to \(\sigma\). As a special case, define \(\mathbf{F}^{*}\coloneqq(F,\mathbf{v},\mathbf{u})\) the graph obtained by swapping in- and out-labels.
For two \((\ell,\ell)\)-bilabelled graphs \(\mathbf{F}=(F,\mathbf{u},\mathbf{v})\) and \(\mathbf{F}^{\prime}=(F^{\prime},\mathbf{u}^{\prime},\mathbf{v}^{\prime})\), write \(\mathbf{F}\cdot\mathbf{F}^{\prime}\) for the graph obtained from them by _series composition_. That is, the underlying unlabelled graph of \(\mathbf{F}\cdot\mathbf{F}^{\prime}\) is the graph obtained from the disjoint union of \(F\) and \(F^{\prime}\) by identifying \(\mathbf{v}_{i}\) and \(\mathbf{u}_{i}^{\prime}\) for all \(i\in[\ell]\). Multiple edges arising in this process are removed. The in-labels of \(\mathbf{F}\cdot\mathbf{F}^{\prime}\) lie on \(\mathbf{u}\), the out-labels on \(\mathbf{v}^{\prime}\). Moreover, write \(\mathbf{F}\odot\mathbf{F}^{\prime}\) for the _parallel composition_ of \(\mathbf{F}\) and \(\mathbf{F}^{\prime}\). That is, the underlying unlabelled graph of \(\mathbf{F}\odot\mathbf{F}^{\prime}\) is the graph obtained from the disjoint union of \(F\) and \(F^{\prime}\) by identifying \(\mathbf{u}_{i}\) with \(\mathbf{u}_{i}^{\prime}\) and \(\mathbf{v}_{i}\) with \(\mathbf{v}_{i}^{\prime}\) for all \(i\in[\ell]\). Again, multiple edges are dropped. The in-labels of \(\mathbf{F}\odot\mathbf{F}^{\prime}\) lie on \(\mathbf{u}\), the out-labels on \(\mathbf{v}\).
As observed in [25, 20], the benefit of these combinatorial operations is that they have an algebraic counterpart. Formally, for all graphs \(G\) and all \((\ell,\ell)\)-bilabelled graphs \(\mathbf{F},\mathbf{F}^{\prime}\), it holds that \(\operatorname{so}\mathbf{F}_{G}=\hom(\operatorname{so}\mathbf{F},G)\), \(\operatorname{tr}\mathbf{F}_{G}=\hom(\operatorname{tr}\mathbf{F},G)\), \((\mathbf{F}_{G})^{\sigma}=(\mathbf{F}^{\sigma})_{G}\), \((\mathbf{F}\cdot\mathbf{F}^{\prime})_{G}=\mathbf{F}_{G}\cdot\mathbf{F}_{G}^{\prime}\), and \((\mathbf{F}\odot\mathbf{F}^{\prime})_{G}=\mathbf{F}_{G}\odot\mathbf{F}_{G}^{\prime}\).
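These correspondences are easy to check numerically on small examples. In the sketch below (for \(t=1\)), the homomorphism tensor of the series composition of the single-edge atomic graph with itself is the square of the adjacency matrix, and summing its entries recovers the number of homomorphisms from the length-two path; the helper `hom_count` is a brute-force routine of our own and not part of the cited framework.

```python
import numpy as np
from itertools import product

def hom_count(F_edges, nF, A):
    """Brute-force number of homomorphisms from F (vertices 0..nF-1, edge list
    F_edges) into the graph with adjacency matrix A."""
    n = A.shape[0]
    return sum(all(A[h[u], h[v]] for u, v in F_edges)
               for h in product(range(n), repeat=nF))

A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])   # 4-cycle

# series composition of the single-edge atomic graph with itself has tensor A @ A,
# whose sum of entries equals hom(P_3, G) for the path P_3 on three vertices.
print((A @ A).sum(), hom_count([(0, 1), (1, 2)], 3, A))    # both equal 16
```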
Slightly abusing notation, we say that two graphs \(G\) and \(H\) are homomorphism indistinguishable over a family of bilabelled graphs \(\mathcal{S}\), in symbols \(G\equiv_{\mathcal{S}}H\), if \(G\) and \(H\) are homomorphism indistinguishable over the family \(\{\operatorname{so}\mathbf{S}\mid\mathbf{S}\in\mathcal{S}\}\) of the underlying unlabelled graphs of the \(\mathbf{S}\in\mathcal{S}\).
### Pathwidth and Treewidth
Let \(F\) and \(T\) be graphs. A \(T\)-decomposition of \(F\) is a map \(\beta\colon V(T)\to 2^{V(F)}\) such that
1. \(\bigcup_{t\in V(T)}\beta(t)=V(F)\),
2. for every \(e\in E(F)\), there is \(t\in V(T)\) such that \(e\subseteq\beta(t)\),
3. for every \(v\in V(F)\), the set of \(t\in V(T)\) such that \(v\in\beta(t)\) induces a connected subgraph of \(T\).
The _width_ of a \(T\)-decomposition \(\beta\) is \(\max_{t\in V(T)}|\beta(t)|-1\). For a graph class \(\mathcal{T}\), the \(\mathcal{T}\)-width of \(F\) is the minimal width of a \(T\)-decomposition of \(F\) for \(T\in\mathcal{T}\).
The _treewidth_\(\operatorname{tw}F\) of a graph \(F\) is the minimal width of a \(T\)-decomposition of \(F\) where \(T\) is a tree. Similarly, the _pathwidth_\(\operatorname{pw}F\) is the minimal width of a \(P\)-decomposition of \(F\) where \(P\) is a path.
### Systems of Equations for Graph Isomorphism
Two simple graphs \(G\) and \(H\) are isomorphic if and only if there exists a \(\{0,1\}\)-solution to the system of equations \(\operatorname{ISO}(G,H)\) which comprises variables \(X_{gh}\) for \(gh\in V(G)\times V(H)\)
and equations
\[\sum_{h\in V(H)}X_{gh}-1 =0 \text{for all }g\in V(G), \tag{1}\] \[\sum_{g\in V(G)}X_{gh}-1 =0 \text{for all }h\in V(H),\] (2) \[X_{gh}X_{g^{\prime}h^{\prime}} =0 \text{for all }gh,g^{\prime}h^{\prime}\in V(G)\times V(H)\] (3) \[\text{s.t. rel}_{G}(g,g^{\prime})\neq\text{rel}_{H}(h,h^{\prime}).\]
Here, \(\text{rel}_{G}(g,g^{\prime})=\text{rel}_{H}(h,h^{\prime})\) if and only if both pairs of vertices are adjacent, non-adjacent, or identical.
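For small instances the \(\{0,1\}\)-system can be checked by brute force, using the standard fact that for simple graphs a \(\{0,1\}\)-solution is a permutation matrix \(X\) with \(XA_{G}X^{T}=A_{H}\); the sketch below is such a naive search and is only meant to make the system concrete.

```python
import numpy as np
from itertools import permutations

def isomorphic(AG, AH):
    """Brute-force search for a {0,1}-solution of ISO(G, H): a permutation
    matrix X with X A_G X^T = A_H (simple graphs)."""
    n = AG.shape[0]
    if AH.shape[0] != n:
        return False
    for perm in permutations(range(n)):
        X = np.eye(n, dtype=int)[list(perm)]
        if np.array_equal(X @ AG @ X.T, AH):
            return True
    return False

C4 = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
P4 = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
print(isomorphic(C4, C4), isomorphic(C4, P4))   # True False
```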
The Lasserre relaxation of \(\text{ISO}(G,H)\) is defined as follows. An element \(\{g_{1}h_{1},\ldots g_{\ell}h_{\ell}\}\in\binom{V(G)\times V(H)}{\ell}\) is a _partial isomorphism_ if \(g_{i}=g_{j}\Leftrightarrow h_{i}=h_{j}\) and \(g_{i}g_{j}\in E(G)\Leftrightarrow h_{i}h_{j}\in E(H)\) for all \(i,j\in[\ell]\). See also Appendix A for a comparison to the version used in [4].
Let \(t\geq 1\). The _level-\(t\) Lasserre relaxation for graph isomorphism_ has variables \(y_{I}\) ranging over \(\mathbb{R}\) for \(I\in\binom{V(G)\times V(H)}{\leq 2t}\). The constraints are
\[M_{t}(y)\coloneqq(y_{I\cup J})_{I,J\in\binom{V(G)\times V(H)}{ \leq t}} \succeq 0, \tag{4}\] \[\sum_{h\in V(H)}y_{I\cup\{gh\}} =y_{I}\text{ for all }I\text{ s.t. }\left|I\right|\leq 2t-2\text{ and all }g\in V(G),\] (5) \[\sum_{g\in V(G)}y_{I\cup\{gh\}} =y_{I}\text{ for all }I\text{ s.t. }\left|I\right|\leq 2t-2\text{ and all }h\in V(H),\] (6) \[y_{I} =0\text{ if }I\text{ s.t. }\left|I\right|\leq 2t\text{ is not partial isomorphism}\] (7) \[y_{\emptyset} =1. \tag{8}\]
If the system is feasible for two graphs \(G\) and \(H\), write \(G\simeq_{t}^{\text{L}}H\). If the system together with the constraint \(y_{I}\geq 0\) for all \(I\in\binom{V(G)\times V(H)}{\leq 2t}\) is feasible, write \(G\simeq_{t}^{\text{L}^{+}}H\).
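For \(t=1\), the relaxation can be written down directly with an off-the-shelf SDP modelling tool. The sketch below (assuming cvxpy with the SCS solver is available) builds the moment matrix indexed by the empty set and the singletons and imposes Equations (4)-(8); it only tests numerical feasibility, which is not a substitute for the exact feasibility discussed later, and the index bookkeeping is our own.

```python
import numpy as np
import cvxpy as cp

def lasserre_level1_feasible(AG, AH):
    """Numerical feasibility of the level-1 Lasserre relaxation of ISO(G, H).

    The moment matrix M_1(y) is indexed by the empty set (index 0) and the
    singletons {gh} (index 1 + g*nH + h); equalities among entries with the
    same union I u J are imposed explicitly."""
    nG, nH = AG.shape[0], AH.shape[0]
    pairs = [(g, h) for g in range(nG) for h in range(nH)]
    m = 1 + len(pairs)
    M = cp.Variable((m, m), symmetric=True)
    cons = [M >> 0, M[0, 0] == 1]                       # Eqs. (4) and (8)
    for i, _ in enumerate(pairs):
        cons.append(M[i + 1, i + 1] == M[0, i + 1])     # y_{gh} appears twice in M_1
    for i, (g, h) in enumerate(pairs):                  # Eq. (7)
        for j, (g2, h2) in enumerate(pairs):
            same_rel = ((g == g2) == (h == h2)) and ((AG[g, g2] == 1) == (AH[h, h2] == 1))
            if not same_rel:
                cons.append(M[i + 1, j + 1] == 0)
    for g in range(nG):                                 # Eqs. (5) and (6) with I = emptyset
        cons.append(sum(M[0, 1 + g * nH + h] for h in range(nH)) == 1)
    for h in range(nH):
        cons.append(sum(M[0, 1 + g * nH + h] for g in range(nG)) == 1)
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)
```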
For a definition of the Sherali-Adams relaxation of \(\text{ISO}(G,H)\) in the version used here following [4], the reader is referred to [19, Appendix D.1]. Instead of feasibility of the level-\(t\) Sherali-Adams relaxation, one may think of the following equivalent notions:
**Theorem 8** ([4, 14, 7]).: Let \(t\geq 1\). For graphs \(G\) and \(H\), the following are equivalent:
1. the level-\(t\) Sherali-Adams relaxation of \(\text{ISO}(G,H)\) is feasible, i.e. \(G\simeq_{t}^{\text{SA}}H\),
2. \(G\) and \(H\) satisfy the same sentences of \(t\)-variable first order logic with counting quantifiers,
3. \(G\) and \(H\) are homomorphism indistinguishable over the class of treewidth at most \(t-1\),
4. \(G\) and \(H\) are not distinguished by the \((t-1)\)-dimensional Weisfeiler-Leman algorithm.
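For illustration, item 4 with \(t=2\) corresponds to classical colour refinement; the following sketch runs colour refinement on the disjoint union of two graphs and compares the stable colour multisets (the function name and the termination strategy are our own simplifications).

```python
def cr_indistinguishable(adj1, adj2):
    """Colour refinement (1-dimensional WL) on the disjoint union of two graphs;
    returns True iff both graphs receive the same multiset of stable colours."""
    n1, n2 = len(adj1), len(adj2)
    n = n1 + n2
    adj = [[0] * n for _ in range(n)]
    for i in range(n1):
        for j in range(n1):
            adj[i][j] = adj1[i][j]
    for i in range(n2):
        for j in range(n2):
            adj[n1 + i][n1 + j] = adj2[i][j]
    colours = [0] * n
    while True:
        sig = [(colours[v], tuple(sorted(colours[u] for u in range(n) if adj[v][u])))
               for v in range(n)]
        relabel = {s: i for i, s in enumerate(sorted(set(sig)))}
        new = [relabel[s] for s in sig]
        if new == colours:
            break
        colours = new
    return sorted(colours[:n1]) == sorted(colours[n1:])

C6 = [[1 if abs(i - j) in (1, 5) else 0 for j in range(6)] for i in range(6)]
twoK3 = [[1 if i != j and i // 3 == j // 3 else 0 for j in range(6)] for i in range(6)]
print(cr_indistinguishable(C6, twoK3))   # True: colour refinement cannot tell them apart
```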
## 3 From Lasserre to Homomorphism Tensors
In this section, the tools are developed which will be used to translate a solution to the level-\(t\) Lasserre relaxation into a statement on homomorphism indistinguishability. For this purpose, three equivalent characterisations of \(\simeq_{t}^{\text{L}}\) and \(\simeq_{t}^{\text{L}^{+}}\) are introduced. Theorems 9 and 10 summarise our results. The notions in items 2-4 and the graph classes \(\mathcal{L}_{t}\) and \(\mathcal{L}_{t}^{+}\) are defined in Sections 3.1, 3.2, 3.4, and 4, respectively. Most of the proofs are of a linear algebraic nature. Graph theoretical repercussions are discussed in Section 4.
**Theorem 9**.: Let \(t\geq 1\). For graphs \(G\) and \(H\), the following are equivalent:
1. the level-\(t\) Lasserre relaxation of \(\text{ISO}(G,H)\) is feasible,
2. \(G\) and \(H\) are level-\(t\) \(\mathcal{S}_{+}\)-isomorphic,
3. there is a level-\(t\) \(\mathcal{S}_{+}\)-isomorphism map from \(G\) to \(H\),
4. \(G\) and \(H\) are partially \(t\)-equivalent,
5. \(G\) and \(H\) are homomorphism indistinguishable over \(\mathcal{L}_{t}\).
**Theorem 10**.: Let \(t\geq 1\). For graphs \(G\) and \(H\), the following are equivalent:
1. the level-\(t\) Lasserre relaxation of \(\operatorname{ISO}(G,H)\) with non-negativity constraints is feasible,
2. \(G\) and \(H\) are level-\(t\)\(\operatorname{\mathcal{DNN}}\)-isomorphic,
3. there is a level-\(t\)\(\operatorname{\mathcal{DNN}}\)-isomorphism map from \(G\) to \(H\),
4. \(G\) and \(H\) are \(t\)-equivalent,
5. \(G\) and \(H\) are homomorphism indistinguishable over \(\mathcal{L}_{t}^{+}\).
Variants of the notions in items 2-4 have already been defined for the case \(t=1\) in [27]. Our contribution amounts to extending these definitions to the entire Lasserre hierarchy. A recurring theme in this context is accounting for additional symmetries. The variables \(y_{I}\) of the Lasserre system of equations, cf. Definition 3.1, are indexed by sets of vertex pairs rather than by tuples of such. Hence, when passing from such variables to tuple-indexed matrices, one must impose the additional symmetries arising this way. This is formalised at various points using an action of the symmetric group on the axes of the matrices. In the case \(t=1\), such a set up is not necessary since indices \(I\) are of size at most \(2\) and all occurring matrices can be taken to be invariant under transposition.
In the subsequent sections, Theorems 9 and 10 will be proven in parallel. The equivalences of items 1 and 2, 2 and 3, and 3 and 4 are established in Section 3.3, Section 3.2, and Section 3.4, respectively. The statements on homomorphism indistinguishability are proven in Section 4.
### Isomorphism Relaxations via Matrix Families
In this section, as a first step towards proving Theorems 9 and 10, the notion of level-\(t\) \(\mathcal{K}\)-isomorphic graphs for arbitrary families of matrices \(\mathcal{K}\) is introduced. In [27], level-\(1\) \(\mathcal{K}\)-isomorphic graphs were studied for various families of matrices \(\mathcal{K}\). In this work, the main interest lies on the family of positive semidefinite matrices \(\mathcal{S}_{+}\) and the family of entry-wise non-negative positive semidefinite matrices \(\operatorname{\mathcal{DNN}}\). Level-\(t\) isomorphism for these families is proven to correspond to \(\simeq_{t}^{\mathrm{L}}\) and \(\simeq_{t}^{\mathrm{L}^{+}}\) respectively, cf. Section 3.3.
**Definition 11**.: Let \(\mathcal{K}\) be a family of matrices. Graphs \(G\) and \(H\) are said to be _level-\(t\) \(\mathcal{K}\)-isomorphic_, in symbols \(G\cong_{\mathcal{K}}^{t}H\), if there is a matrix \(M\in\mathcal{K}\) with rows and columns indexed by \((V(G)\times V(H))^{t}\) such that for every \(g_{1}h_{1}\dots g_{t}h_{t},g_{t+1}h_{t+1}\dots g_{2t}h_{2t}\in(V(G)\times V(H))^{t}\) the following equations hold:
For every \(i\in[2t]\),
\[\sum_{g_{i}\in V(G)}M_{g_{1}h_{1}\dots g_{t}h_{t},g_{t+1}h_{t+1} \dots g_{2t}h_{2t}} =\sum_{h_{i}\in V(H)}M_{g_{1}h_{1}\dots g_{t}h_{t},g_{t+1}h_{t+1} \dots g_{2t}h_{2t}}, \tag{9}\] \[\sum_{h_{1}^{\prime},\dots,h_{2t}^{\prime}\in V(H)}M_{g_{1}h_{1}^ {\prime}\dots g_{t}h_{t}^{\prime},g_{t+1}h_{t+1}^{\prime}\dots g_{2t}h_{2t}^{ \prime}} =1=\sum_{g_{1}^{\prime},\dots,g_{2t}^{\prime}\in V(G)}M_{g_{1}^{ \prime}h_{1}\dots g_{t}^{\prime}h_{t},g_{t+1}^{\prime}h_{t+1}\dots g_{2t}^{ \prime}h_{2t}}. \tag{10}\]
If \(\operatorname{rel}_{G}(g_{1},\dots,g_{2t})\neq\operatorname{rel}_{H}(h_{1}, \dots,h_{2t})\) then
\[M_{g_{1}h_{1}\dots g_{t}h_{t},g_{t+1}h_{t+1}\dots g_{2t}h_{2t}}=0. \tag{11}\]
_For all \(\sigma\in\mathfrak{S}_{2t}\),_
\[M_{g_{1}h_{1}\ldots g_{t}h_{t},g_{t+1}h_{t+1}\ldots g_{2t}h_{2t}}=M_{g_{\sigma(1 )}h_{\sigma(1)}\ldots g_{\sigma(t)}h_{\sigma(t)},g_{\sigma(t+1)}h_{\sigma(t+1) }\ldots g_{\sigma(2t)}h_{\sigma(2t)}}. \tag{12}\]
Note that for \(t=1\) and any family of matrices \(\mathcal{K}\) closed under taking transposes Equation (12) is vacuous.
Systems of equations comparing graphs akin to Equations (9)-(12) were also studied by [20]. Feasibility of such equations is typically invariant under taking the complements of the graphs as remarked below. This semantic property of the relation \(\cong_{\mathcal{K}}^{t}\) is relevant in the context of homomorphism indistinguishability as shown by [33].
For a simple graph \(G\), write \(\overline{G}\) for its complement, i.e. \(V(\overline{G})\coloneqq V(G)\) and \(E(\overline{G})\coloneqq\binom{V(G)}{2}\setminus E(G)\). For all graphs \(G\) and \(H\) and \(g_{1},\ldots,g_{2t}\in V(G)\), \(h_{1},\ldots,h_{2t}\in V(H)\), it holds that
\[\operatorname{rel}_{G}(g_{1},\ldots,g_{2t})=\operatorname{rel}_{H}(h_{1}, \ldots,h_{2t})\iff\operatorname{rel}_{\overline{G}}(g_{1},\ldots,g_{2t})= \operatorname{rel}_{\overline{H}}(h_{1},\ldots,h_{2t}).\]
Thus, \(G\cong_{\mathcal{K}}^{t}H\) if and only if \(\overline{G}\cong_{\mathcal{K}}^{t}\overline{H}\) for all families of matrices \(\mathcal{K}\) and \(t\in\mathbb{N}\).
### Choi Matrices and Isomorphism Maps
In this section, an alternative characterisation for level-\(t\)\(\mathcal{K}\)-isomorphism is given. Intuitively, the indices of the matrix \(M\in\mathbb{C}^{(V(G)\times V(H))^{t}\times(V(G)\times V(H))^{t}}\) from Definition 11 are regrouped yielding a linear map \(\Phi\colon\mathbb{C}^{V(G)^{t}\times V(G)^{t}}\to\mathbb{C}^{V(H)^{t}\times V (H)^{t}}\). In linear algebraic terms, \(M\) is the Choi matrix of \(\Phi\). The map \(\Phi\) will later be interpreted as a function sending homomorphism tensors of \((t,t)\)-bilabelled graphs \(\boldsymbol{F}_{G}\in\mathbb{C}^{V(G)^{t}\times V(G)^{t}}\) with respect to \(G\) to their counterparts \(\boldsymbol{F}_{H}\) for \(H\).
The most basic bilabelled graphs, so called _atomic_ graphs, make their first appearance in Theorem 14. These graphs are used to reformulate Equations (7) and (11). The atomic graphs are also the graphs which the sets \(\mathcal{L}_{t}\) and \(\mathcal{L}_{t}^{+}\) of Theorems 2 and 3 are generated by, cf. Definition 22. Examples are depicted in Figures 2 and 3.
**Definition 13**.: Let \(t\geq 1\). A \((t,t)\)-bilabelled graph \(\boldsymbol{F}=(F,\boldsymbol{u},\boldsymbol{v})\) is _atomic_ if all its vertices are labelled. Write \(\mathcal{A}_{t}\) for the set of \((t,t)\)-bilabelled atomic graphs. Note that the set of atomic graphs \(\mathcal{A}_{t}\) is generated under parallel composition by the graphs
* \(\boldsymbol{J}\coloneqq(J,(1,\ldots,t),(t+1,\ldots,2t))\) with \(V(J)=[2t]\), \(E(J)=\emptyset\)
Figure 2: Examples of the atomic graphs from Definition 13. The gray lines (the _wires_[25]) indicate the in-labels (left) and out-labels (right).
* \(\mathbf{A}^{ij}\coloneqq(A^{ij},(1,\ldots,t),(t+1,\ldots,2t))\) _with_ \(V(A^{ij})=[2t]\)_,_ \(E(A^{ij})=\{ij\}\) _for_ \(1\leq i<j\leq 2t\)_,_
* \(\mathbf{I}^{ij}\) _for_ \(1\leq i<j\leq 2t\) _which is obtained from_ \(\mathbf{A}^{ij}\) _by contracting the edge_ \(ij\)_._
The following Theorem 14 relates the properties of \(\Phi\) and \(M\). In Equation (15), \(J\) denotes the all-ones matrix of appropriate dimension. Its proof is deferred to Appendix C.1.
Let \(t\geq 1\). Let \(G\) and \(H\) be graphs and \(\mathcal{K}\in\{\mathcal{DNN},\mathcal{S}_{+}\}\) be a family of matrices. Let \(\Phi\colon\mathbb{C}^{V(G)^{t}\times V(G)^{t}}\to\mathbb{C}^{V(H)^{t}\times V(H)^{t}}\) be a linear map. Then the following are equivalent.
1. The Choi matrix \(C_{\Phi}\) of \(\Phi\) satisfies Equations (9)-(12) and \(C_{\Phi}\in\mathcal{K}\),
2. \(\Phi\) is a _level-\(t\) \(\mathcal{K}\)-isomorphism map from \(G\) to \(H\)_, i.e. it satisfies \[\Phi\text{ is completely $\mathcal{K}$-preserving},\] (13) \[\Phi(\mathbf{A}_{G}\odot X)=\mathbf{A}_{H}\odot\Phi(X)\text{ for all atomic $\mathbf{A}\in\mathcal{A}_{t}$ and all $X\in\mathbb{C}^{V(G)^{t}\times V(G)^{t}}$},\] (14) \[\Phi(J)=J=\Phi^{*}(J),\] (15) \[\Phi(X^{\sigma})=\Phi(X)^{\sigma}\text{ for all $\sigma\in\mathfrak{S}_{2t}$ and all $X\in\mathbb{C}^{V(G)^{t}\times V(G)^{t}}$}.\] (16)
3. \(\Phi^{\ast}\) is a level-\(t\) \(\mathcal{K}\)-isomorphism map from \(H\) to \(G\).
We remark that Theorem 14, and in particular its Equation (15), has brought us closer to interpreting the Lasserre system of equations from the perspective of homomorphism indistinguishability. As argued in the remark below, the map \(\Phi\), which will be understood as mapping homomorphism tensors \(\mathbf{F}_{G}\) to \(\mathbf{F}_{H}\), is sum-preserving. Since the sum of the entries of these tensors equals the number of homomorphisms from their underlying unlabelled graphs to \(G\) and \(H\), respectively, this is the key observation for establishing a connection between \(\mathcal{K}\)-isomorphism maps and homomorphism indistinguishability.
If a linear map \(\Phi\colon\mathbb{C}^{n\times n}\to\mathbb{C}^{m\times m}\) is such that \(J=\Phi^{\ast}(J)\) then it is _sum-preserving_, i.e. \(\operatorname{soe}X=\operatorname{soe}\Phi(X)\) for all \(X\in\mathbb{C}^{n\times n}\). Indeed, \(\operatorname{soe}X=\langle X,J\rangle=\langle X,\Phi^{\ast}(J)\rangle= \langle\Phi(X),J\rangle=\operatorname{soe}\Phi(X)\) where \(\langle A,B\rangle\eqqcolon\operatorname{tr}(AB^{\ast})\). In particular, if there is \(\Phi\) satisfying Equations (14) and (15) for graphs \(G\) and \(H\) then \(|G|=|H|\).
### Connection to Lasserre
By the following two theorems, the notions introduced in Definition 11 and Theorem 14 are equivalent to the object of our main interest, namely feasibility of the level-\(t\) Lasserre relaxation with and without non-negativity constraints. Our results extend those of [27, Lemma 9.1] to the entire Lasserre hierarchy. The proofs are deferred to Appendix C.2.
Let \(t\geq 1\). Two graphs \(G\) and \(H\) are level-\(t\)\(\mathcal{S}_{+}\)-isomorphic if and only if the level-\(t\) system of the Lasserre hierarchy for graph isomorphism, i.e. Equations (4)-(8), is feasible.
Let \(t\geq 1\). Two graphs \(G\) and \(H\) are level-\(t\)\(\mathcal{DNN}\)-isomorphic if and only if the level-\(t\) system of the Lasserre hierarchy for graph isomorphism Equations (4)-(8) with the additional constraint \(y_{I}\geq 0\) for all \(I\in\binom{V(G)\times V(H)}{\leq 2t}\) is feasible.
### Isomorphisms between Matrix Algebras
To the two reformulations of \(\simeq_{t}^{\mathrm{L}}\) and \(\simeq_{t}^{\mathrm{L}^{+}}\) from the previous sections, a third characterisation is added in this section. It is shown that two graphs are level-\(t\)\(\mathcal{S}_{+}\)-isomorphic (\(\mathcal{DNN}\)-isomorphic) if and only if certain matrix algebras associated to them are isomorphic. These
algebras will be identified as the algebras of homomorphism tensors for graphs from the families \(\mathcal{L}_{t}\) and \(\mathcal{L}_{t}^{+}\). The so called (partially) coherent algebras considered in this section are natural generalisations of the coherent algebra which are well-studied in the context of the 2-dimensional Weisfeiler-Leman algorithm [8].
#### Partially Coherent Algebras and \(\mathcal{S}_{+}\)-Isomorphism Maps
Let \(S\subseteq\mathbb{C}^{n^{t}\times n^{t}}\). A matrix algebra \(\mathcal{A}\subseteq\mathbb{C}^{n^{t}\times n^{t}}\) is _\(S\)-partially coherent_ if it is unital, self-adjoint, contains the all-ones matrix, and is closed under Schur products with any matrix in \(S\). A matrix algebra \(\mathcal{A}\subseteq\mathbb{C}^{n^{t}\times n^{t}}\) is _self-symmetrical_ if for every \(A\in\mathcal{A}\) and \(\sigma\in\mathfrak{S}_{2t}\) also \(A^{\sigma}\in\mathcal{A}\). Note that for \(t=1\), an algebra \(\mathcal{A}\) is self-symmetrical if for all \(A\in\mathcal{A}\) also \(A^{T}\in\mathcal{A}\).
Given a graph \(G\), construct its \(t\)-partially coherent algebra \(\widehat{\mathcal{A}}_{G}^{t}\) as the minimal self-symmetrical \(S\)-partially coherent algebra where \(S\) is the set of homomorphism tensors of \((t,t)\)-bilabelled atomic graphs for \(G\).
Two \(n\)-vertex graphs \(G\) and \(H\) are _partially \(t\)-equivalent_ if there is a _partial \(t\)-equivalence_, i.e. a vector space isomorphism \(\varphi\colon\widehat{\mathcal{A}}_{G}^{t}\to\widehat{\mathcal{A}}_{H}^{t}\) such that
1. \(\varphi(M^{*})=\varphi(M)^{*}\) for all \(M\in\widehat{\mathcal{A}}_{G}^{t}\),
2. \(\varphi(MN)=\varphi(M)\varphi(N)\) for all \(M,N\in\widehat{\mathcal{A}}_{G}^{t}\),
3. \(\varphi(I)=I\), \(\varphi(\boldsymbol{A}_{G})=\boldsymbol{A}_{H}\) for all \(\boldsymbol{A}\in\mathcal{A}_{t}\), and \(\varphi(J)=J\),
4. \(\varphi(\boldsymbol{A}_{G}\odot M)=\boldsymbol{A}_{H}\odot\varphi(M)\) for all \(\boldsymbol{A}\in\mathcal{A}_{t}\) and any \(M\in\widehat{\mathcal{A}}_{G}^{t}\).
5. \(\varphi(M^{\sigma})=\varphi(M)^{\sigma}\) for all \(M\in\widehat{\mathcal{A}}_{G}^{t}\) and all \(\sigma\in\mathfrak{S}_{2t}\).
The following theorem extends [27, Theorem 5.2]. Its proof is deferred to Appendix C.3.
Let \(t\geq 1\). Two graphs \(G\) and \(H\) are partially \(t\)-equivalent if and only if there is a level-\(t\)\(\mathcal{S}_{+}\)-isomorphism map from \(G\) to \(H\).
#### Coherent Algebras and \(\mathcal{DNN}\)-Isomorphism Maps
A matrix algebra \(\mathcal{A}\subseteq\mathbb{C}^{n\times n}\) is _coherent_ if it is unital, self-adjoint, contains the all-ones matrix and is closed under Schur products.
For \(t=1\), the 1-adjacency algebra as defined below is equal to the well-studied _adjacency algebra_ of a graph \(G\), cf. [8]. The latter is the smallest coherent algebra containing the adjacency matrix of the graph. The former is generated by the homomorphism tensors of \((1,1)\)-bilabelled atomic graphs. These graphs are depicted in Figure 3. Their homomorphism tensors are the all-ones matrix, the adjacency matrix of the graph, and the identity matrix.
Let \(t\geq 1\). The \(t\)-adjacency algebra \(\mathcal{A}_{G}^{t}\) of a graph \(G\) is the self-symmetrical coherent algebra generated by the homomorphism tensors of the atomic graphs \(\mathcal{A}_{t}\).
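For \(t=1\) the generators are the identity, the adjacency matrix, and the all-ones matrix, and the coherent closure can be computed by a naive linear-algebra iteration as sketched below; this is a dense, unoptimised illustration of the definition, not the algorithm used in the paper.

```python
import numpy as np

def orth_basis(mats, tol=1e-9):
    """Orthonormal basis (as matrices) of the linear span of the given matrices."""
    V = np.array([m.ravel() for m in mats])
    _, s, vh = np.linalg.svd(V, full_matrices=False)
    rank = int(np.sum(s > tol))
    return [vh[k].reshape(mats[0].shape) for k in range(rank)]

def coherent_closure(generators):
    """Basis of the smallest algebra containing the generators that is closed
    under transposition and Schur products (naive sketch for small examples)."""
    basis = orth_basis(generators)
    while True:
        new = list(basis) + [X.T for X in basis]
        for X in basis:
            for Y in basis:
                new += [X @ Y, X * Y]        # matrix product and Schur product
        new_basis = orth_basis(new)
        if len(new_basis) == len(basis):
            return new_basis
        basis = new_basis

A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)  # 4-cycle
gens = [np.eye(4), A, np.ones((4, 4))]
print(len(coherent_closure(gens)))   # 3: spanned by I, A and the distance-2 relation
```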
Two \(n\)-vertex graphs \(G\) and \(H\) are _\(t\)-equivalent_ if there is a _\(t\)-equivalence_, i.e. a vector space isomorphism \(\varphi\colon\mathcal{A}_{G}^{t}\to\mathcal{A}_{H}^{t}\) such that
1. \(\varphi(M^{*})=\varphi(M)^{*}\) for all \(M\in\mathcal{A}_{G}^{t}\),
2. \(\varphi(MN)=\varphi(M)\varphi(N)\) for all \(M,N\in\mathcal{A}_{G}^{t}\),
3. \(\varphi(I)=I\), \(\varphi(\boldsymbol{A}_{G})=\boldsymbol{A}_{H}\) for all \(\boldsymbol{A}\in\mathcal{A}_{t}\), and \(\varphi(J)=J\),
4. \(\varphi(M\odot N)=\varphi(M)\odot\varphi(N)\) for all \(M,N\in\mathcal{A}_{G}^{t}\).
Figure 3: The three atomic graphs in \(\mathcal{A}_{1}\).
5. \(\varphi(M^{\sigma})=\varphi(M)^{\sigma}\) _for all_ \(M\in\mathcal{A}_{G}^{t}\) _and all_ \(\sigma\in\mathfrak{S}_{2t}\)_._
The following Theorem 21 extends [27, Theorem 6.3]. Its proof is deferred to Appendix C.3.
Let \(t\geq 1\). Two graphs \(G\) and \(H\) are \(t\)-equivalent if and only if there is a level-\(t\)\(\mathcal{DN}\)-isomorphism map from \(G\) to \(H\).
## 4 Homomorphism Indistinguishability
Using techniques from [20], we finally establish a characterisation of when the level-\(t\) Lasserre relaxation of \(\mathrm{ISO}(G,H)\) is feasible in terms of homomorphism indistinguishability of \(G\) and \(H\). In order to do so, we introduce the graph classes \(\mathcal{L}_{t}\) and \(\mathcal{L}_{t}^{+}\). In Section 4.1, we relate \(\mathcal{L}_{t}\) and \(\mathcal{L}_{t}^{+}\) to the classes of graphs of bounded treewidth and pathwidth obtaining the results depicted in Figure 1. In Section 4.2, \(\mathcal{L}_{1}\) and \(\mathcal{L}_{1}^{+}\) are identified as the classes of outerplanar graphs and graphs of treewidth two, respectively.
**Definition 22**.: Let \(t\geq 1\). Write \(\mathcal{L}_{t}^{+}\) for the class of \((t,t)\)-bilabelled graphs generated by the set of atomic graphs \(\mathcal{A}_{t}\) under parallel composition, series composition, and the action of \(\mathfrak{S}_{2t}\) on the labels.
Write \(\mathcal{L}_{t}\subseteq\mathcal{L}_{t}^{+}\) for the class of \((t,t)\)-bilabelled graphs generated by the set of atomic graphs \(\mathcal{A}_{t}\) under parallel composition with graphs from \(\mathcal{A}_{t}\), series composition, and the action of \(\mathfrak{S}_{2t}\) on the labels.
Note that the only difference between \(\mathcal{L}_{t}\) and \(\mathcal{L}_{t}^{+}\) is that \(\mathcal{L}_{t}\) is closed under parallel composition with atomic graphs only. This reflects an observation by [20] relating the closure under arbitrary gluing products to non-negative solutions to systems of equations characterising homomorphism indistinguishability. Intuitively, one may use arbitrary Schur products, the algebraic counterparts of gluing, for a Vandermonde interpolation argument, cf. [19, Appendix B.4].
The following Observation 23 illustrates how the operations in Definition 22 can be used to generate more complicated graphs from the atomic graphs.
Let \(t\geq 1\). The class \(\mathcal{L}_{t}\) contains a bilabelled graph whose underlying unlabelled graph is isomorphic to the \(3t\)-clique \(K_{3t}\).
Proof.: Let \(\mathbf{E}\coloneqq\bigodot_{1\leq i<j\leq 2t}\mathbf{A}^{ij}\in\mathcal{A}_{t}\). The graph underlying \(\mathbf{E}\odot(\mathbf{E}\cdot\mathbf{E})\) is isomorphic to \(K_{3t}\).
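For \(t=1\) the construction in the proof can be checked directly: \(\mathbf{E}\) is the single-edge atomic graph, the homomorphism tensor of \(\mathbf{E}\odot(\mathbf{E}\cdot\mathbf{E})\) for a graph \(G\) is \(A\odot A^{2}\), and summing its entries counts homomorphisms from \(K_{3}\); the snippet below verifies this on a small example of our choosing.

```python
import numpy as np
from itertools import product

A = np.array([[0, 1, 1, 0], [1, 0, 1, 1], [1, 1, 0, 1], [0, 1, 1, 0]])   # diamond graph

# soe(A (Schur) A^2) = hom(K_3, G): both expressions count labelled triangles.
lhs = (A * (A @ A)).sum()
hom_K3 = sum(A[i, j] and A[j, k] and A[i, k]
             for i, j, k in product(range(4), repeat=3) if len({i, j, k}) == 3)
print(lhs, hom_K3)    # both equal 12 (two triangles, each with 3! labelled copies)
```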
The only missing implications of Theorems 9 and 10 follow from the next two theorems:
**Theorem 24**.: Let \(t\geq 1\). Two graphs \(G\) and \(H\) are homomorphism indistinguishable over \(\mathcal{L}_{t}\) if and only if they are partially \(t\)-equivalent.
**Theorem 25**.: Let \(t\geq 1\). Two graphs \(G\) and \(H\) are homomorphism indistinguishable over \(\mathcal{L}_{t}^{+}\) if and only if they are \(t\)-equivalent.
For the proofs of Theorems 24 and 25, we extend the framework developed by [20]. In this work, the authors introduced tools for constructing systems of equations characterising homomorphism indistinguishability over classes of labelled graphs. A requirement of these tools is that the graph class in question is _inner-product compatible_ [20, Definition 24]. This means that for every two labelled graphs \(\mathbf{R}\) and \(\mathbf{S}\) one can write the inner-product of their homomorphism vectors \(\mathbf{R}_{G}\) and \(\mathbf{S}_{G}\) as the sum-of-entries of some \(\mathbf{T}_{G}\) where \(\mathbf{T}\) is a labelled graph from the class. Due to the correspondence between combinatorial operations on
labelled graphs and algebraic operations on their homomorphism vectors, cf. Section 2.2, this is equivalent to the graph theoretic assumption that \(\operatorname{soe}(\boldsymbol{R}\odot\boldsymbol{S})=\operatorname{soe}( \boldsymbol{T})\), i.e. the unlabelled graph obtained by unlabelling the gluing product of \(\boldsymbol{R}\) and \(\boldsymbol{S}\) can be labelled such that the resulting labelled graph is in the class.
We extend this notion to bilabelled graphs. A class of \((t,t)\)-bilabelled graphs \(\mathcal{S}\) is said to be _inner-product compatible_ if for all \(\boldsymbol{R},\boldsymbol{S}\in\mathcal{S}\) there is a graph \(\boldsymbol{T}\in\mathcal{S}\) such that \(\operatorname{tr}(\boldsymbol{R}\cdot\boldsymbol{S}^{*})=\operatorname{soe}( \boldsymbol{T})\). This definition is inspired by the inner-product on \(\mathbb{C}^{n\times n}\) given by \(\langle A,B\rangle\coloneqq\operatorname{tr}(AB^{*})\).
The classes \(\mathcal{L}_{t}\) and \(\mathcal{L}_{t}^{+}\) are inner-product compatible.
Proof.: Since \(\mathcal{L}_{t}\) is closed under matrix products and taking transposes, it suffices to show that for every \(\boldsymbol{S}\in\mathcal{L}_{t}\) the graph \(\operatorname{tr}\boldsymbol{S}\) is the underlying unlabelled graph of some element of \(\mathcal{L}_{t}\). Indeed, for every \((t,t)\)-bilabelled graph \(\boldsymbol{F}\) it holds that \(\operatorname{tr}(\boldsymbol{F})=\operatorname{soe}(\boldsymbol{I}^{1,t+1}\odot\cdots\odot\boldsymbol{I}^{t,2t}\odot\boldsymbol{F})\) where the \(\boldsymbol{I}^{ij}\) are as in Definition 13. Since \(\mathcal{L}_{t}\) is closed under parallel composition with atomic graphs, the claim follows. For \(\mathcal{L}_{t}^{+}\), an analogous argument yields the claim.
The following Theorem 27, which extends the toolkit for constructing systems of equations characterising homomorphism indistinguishability over families of bilabelled graphs, is the bilabelled analogue of [20, Theorem 13]. Write \(\mathbb{C}\mathcal{S}_{G}\subseteq\mathbb{C}^{V(G)^{t}\times V(G)^{t}}\) for the vector space spanned by homomorphism tensors \(\boldsymbol{S}_{G}\) for \(\boldsymbol{S}\in\mathcal{S}\).
Let \(t\geq 1\) and \(\mathcal{S}\) be an inner-product compatible class of \((t,t)\)-bilabelled graphs containing \(\boldsymbol{J}\). For graphs \(G\) and \(H\), the following are equivalent:
1. \(G\) and \(H\) are homomorphism indistinguishable over \(\mathcal{S}\),
2. there exists a sum-preserving vector space isomorphism \(\varphi\colon\mathbb{C}\mathcal{S}_{G}\to\mathbb{C}\mathcal{S}_{H}\) such that \(\varphi(\boldsymbol{S}_{G})=\boldsymbol{S}_{H}\) for all \(\boldsymbol{S}\in\mathcal{S}\).
The proof of Theorem 27 is deferred to Appendix D. Theorems 24 and 25 follow from this theorem as described in Appendix D.
### The Classes \(\mathcal{L}_{t}\) and \(\mathcal{L}_{t}^{+}\) and Graphs of Bounded Treewidth
In this section, the classes \(\mathcal{L}_{t}\) and \(\mathcal{L}_{t}^{+}\) are compared to the classes of graphs of bounded treewidth and pathwidth. Figure 1 depicts the relationships between these classes. The first result, Lemma 28, gives an upper bound on the treewidth of graphs in \(\mathcal{L}_{t}^{+}\). The proof, which is by induction on the structure of elements in \(\mathcal{L}_{t}^{+}\) as given by Definition 22, is deferred to Appendix D.
**Lemma 28**.: _Let \(t\geq 1\). The treewidth of an unlabelled graph \(F\) underlying some \(\boldsymbol{F}=(F,\boldsymbol{u},\boldsymbol{v})\in\mathcal{L}_{t}^{+}\) is at most \(3t-1\)._
Lemma 28 in conjunction with Theorems 9 and 10 implies Theorems 2 and 3. As a corollary, this yields the upper bound in Theorem 1. Indeed, by Theorem 8, \(G\simeq_{t}^{\mathrm{SA}}H\) if and only if \(G\) and \(H\) are homomorphism indistinguishable over the class of graphs of treewidth at most \(t-1\). Hence, if \(G\simeq_{3t}^{\mathrm{SA}}H\) then \(G\simeq_{t}^{\mathrm{L}^{+}}H\) and in particular \(G\simeq_{t}^{\mathrm{L}}H\).
It remains to show the lower bound asserted by Theorem 1, i.e. that \(3t\) cannot be replaced by \(3t-1\) for any \(t\geq 1\). To that end, first observe that Observation 23 implies that the bound in Lemma 28 is tight. However, this syntactic property of the graph class \(\mathcal{L}_{t}\) does not suffice to derive the aforementioned semantic property of \(\simeq_{t}^{\mathrm{SA}}\) and \(\simeq_{t}^{\mathrm{L}}\). In fact, it could well be that for all graphs \(G\) and \(H\), if \(G\) and \(H\) are homomorphism indistinguishable over the graphs of treewidth at most \(3t-2\), then also \(\hom(K_{3t},G)=\hom(K_{3t},H)\), despite the fact that \(\operatorname{tw}K_{3t}>3t-2\).
That this does not hold is implied by a conjecture of the first author [31] which asserts that every minor-closed graph class \(\mathcal{F}\) which is closed under taking disjoint unions (_union-closed_) is _homomorphism distinguishing closed_, i.e. for all \(F\not\in\mathcal{F}\) there exist graphs \(G\) and \(H\) such that \(G\equiv_{\mathcal{F}}H\) but \(\hom(F,G)\neq\hom(F,H)\). Although generally open, this conjecture was proven by Daniel Neuen (personal communication) for the class of graphs of treewidth at most \(t\) for every \(t\) using techniques of [11, 31]. Theorem 29 implies the last assertion of Theorem 1.
**Theorem 29**.: _For every \(t\geq 1\), there exist graphs \(G\) and \(H\) such that \(G\simeq^{\mathrm{SA}}_{3t-1}H\) and \(G\not\simeq^{\mathrm{L}}_{t}H\)._
Proof.: Towards a contradiction, suppose that \(G\simeq^{\mathrm{SA}}_{3t-1}H\implies G\simeq^{\mathrm{L}}_{t}H\) for all graphs \(G\) and \(H\). By Theorem 8, \(G\simeq^{\mathrm{SA}}_{3t-1}H\) if and only if \(G\) and \(H\) are homomorphism indistinguishable over the class \(\mathcal{TW}_{3t-2}\) of graphs of treewidth at most \(3t-2\). By Observation 23 and Theorem 9, if \(G\equiv_{\mathcal{TW}_{3t-2}}H\) then \(G\equiv_{\mathcal{L}_{t}}H\) and in particular \(\hom(K_{3t},G)=\hom(K_{3t},H)\). As shown by Daniel Neuen (personal communication), the class \(\mathcal{TW}_{3t-2}\) is homomorphism distinguishing closed. As \(\operatorname{tw}K_{3t}=3t-1\), it follows that there exist graphs \(G\) and \(H\) with \(G\simeq^{\mathrm{SA}}_{3t-1}H\) and \(\hom(K_{3t},G)\neq\hom(K_{3t},H)\). In particular, \(G\not\simeq^{\mathrm{L}}_{t}H\) by Theorem 9.
It is worth noting that the classes of unlabelled graphs underlying the elements of \(\mathcal{L}_{t}\) and \(\mathcal{L}_{t}^{+}\) are themselves minor-closed and union-closed. Hence, they are subject to the aforementioned conjecture. Furthermore, by the Robertson-Seymour Theorem and [32], membership in \(\mathcal{L}_{t}\) and \(\mathcal{L}_{t}^{+}\) can be tested in polynomial time for every fixed \(t\geq 1\). The proof of Lemma 30 is deferred to Appendix D.2.
**Lemma 30**.: _Let \(t\geq 1\). The class of graphs underlying the elements of \(\mathcal{L}_{t}\) and the class of graphs underlying the elements of \(\mathcal{L}_{t}^{+}\) are minor-closed and union-closed._
The remainder of this section is dedicated to some further relations between the classes of graphs of bounded treewidth or pathwidth, \(\mathcal{L}_{t}\), and \(\mathcal{L}_{t}^{+}\). Note that these facts give independent proofs for the correspondence between the feasibility of the level-\(t\) Sherali-Adams relaxation (without non-negativity constraints), which corresponds to homomorphism indistinguishability over graphs of treewidth (pathwidth) at most \(t-1\), as proven by [13, 20], and the feasibility of the level-\(t\) Lasserre relaxation with and without non-negativity constraints.
First of all, it is easy to see that dropping the semidefiniteness constraint Equation (4) of the level-\(t\) Lasserre system of equations turns this system essentially into the level-\(2t\) Sherali-Adams system of equations without non-negativity constraints, e.g. as defined in [19, Appendix D.1]. This is paralleled by Lemma 31.
**Lemma 31**.: _Let \(t\geq 1\). For every graph \(F\) with \(\operatorname{pw}F\leq 2t-1\), there is a graph \(\mathbf{F}\in\mathcal{L}_{t}\) whose underlying unlabelled graph is isomorphic to \(F\)._
Furthermore, one may drop Equation (4) from the level-\(t\) Lasserre system of equations with non-negativity constraints to obtain the level-\(2t\) Sherali-Adams system of equations in its original form, i.e. with non-negativity constraints. This is paralleled by Lemma 32.
**Lemma 32**.: _Let \(t\geq 1\). For every graph \(F\) with \(\operatorname{tw}F\leq 2t-1\), there is a graph \(\mathbf{F}\in\mathcal{L}_{t}^{+}\) whose underlying unlabelled graph is isomorphic to \(F\)._
Since the diagonal entries of a positive semidefinite matrix are necessarily non-negative, Equation (4) implies that any solution (\(y_{I}\)) to the level-\(t\) Lasserre system of equations is such that \(y_{I}\geq 0\) for all \(I\in\binom{V(G)\times V(H)}{\leq t}\). Hence, such a solution is a solution to the level-\(t\) Sherali-Adams system of equations as well. This is paralleled by Lemma 33.
**Lemma 33**.: _Let \(t\geq 1\). For every graph \(F\) with \(\operatorname{tw}F\leq t-1\), there is a graph \(\mathbf{F}\in\mathcal{L}_{t}\) whose underlying unlabelled graph is isomorphic to \(F\)._
The proofs of Lemmas 31-33 are all by inductively constructing an element of \(\mathcal{L}_{t}^{+}\) using a tree decomposition of the given graph. They are deferred to Appendix D.1.
### The Classes \(\mathcal{L}_{1}\) and \(\mathcal{L}_{1}^{+}\)
The classes \(\mathcal{L}_{1}\) and \(\mathcal{L}_{1}^{+}\) can be identified as the class of outerplanar graphs and as the class of graphs of treewidth at most two, respectively. This yields Theorem 5. Proofs are deferred to Appendix D.3.
**Proposition 34**.: _The class of unlabelled graphs underlying an element of \(\mathcal{L}_{1}^{+}\) coincides with the class of graphs of treewidth at most two._
A graph \(F\) is _outerplanar_ if it does not have \(K_{4}\) or \(K_{2,3}\) as a minor. Equivalently, it is outerplanar if it has a planar drawing such that all its vertices lie on the same face [36].
**Proposition 35**.: _The class of unlabelled graphs underlying an element of \(\mathcal{L}_{1}\) coincides with the class of outerplanar graphs._
As a corollary of Proposition 35, we observe the following:
**Corollary 36**.: _If \(G\equiv_{\mathcal{L}_{1}}H\) then \(G\) is connected iff \(H\) is connected._
## 5 Deciding Exact Feasibility of the Lasserre Relaxation with Non-Negativity Constraints in Polynomial Time
This section is dedicated to proving Theorem 4. To that end, it is argued that \(\simeq_{t}^{\mathrm{L}^{+}}\) has equivalent characterisations in terms of logical equivalence and a colouring algorithm akin to the \(k\)-dimensional Weisfeiler-Leman algorithm [38]. This algorithm has polynomial running time. It is defined as follows:
**Definition 37**.: _Let \(t\geq 1\). For a graph \(G\), \(i\geq 1\), and \(\mathbf{r},\mathbf{s}\in V(G)^{t}\), define_
\[\begin{split}\mathsf{mwl}_{G}^{0}(\mathbf{r}\mathbf{s})&\coloneqq\mathrm{rel}_{G}(\mathbf{r}\mathbf{s}),\\ \mathsf{mwl}_{G}^{i-1/2}(\mathbf{r}\mathbf{s})&\coloneqq\left(\mathsf{mwl}_{G}^{i-1}(\sigma(\mathbf{r}\mathbf{s}))\ \big{|}\ \sigma\in\mathfrak{S}_{2t}\right),\\ \mathsf{mwl}_{G}^{i}(\mathbf{r}\mathbf{s})&\coloneqq\left(\mathsf{mwl}_{G}^{i-1/2}(\mathbf{r}\mathbf{s}),\left\{\!\!\left\{\left(\mathsf{mwl}_{G}^{i-1/2}(\mathbf{r}\mathbf{t}),\mathsf{mwl}_{G}^{i-1/2}(\mathbf{t}\mathbf{s})\right)\ \big{|}\ \mathbf{t}\in V(G)^{t}\right\}\!\!\right\}\right).\end{split}\]
_The \(\mathsf{mwl}_{G}^{i}\) for \(i\in\mathbb{N}\) define increasingly fine colourings of \(V(G)^{2t}\). Let \(\mathsf{mwl}_{G}^{\infty}\) denote the finest such colouring. Two graphs \(G\) and \(H\) are not distinguished by the \(t\)-dimensional \(\mathsf{mwl}\) algorithm if the multisets_
\[\left\{\!\!\left\{\,\mathsf{mwl}_{G}^{\infty}(\mathbf{r}\mathbf{s})\ \big{|}\ \mathbf{r},\mathbf{s}\in V(G)^{t}\,\right\}\!\!\right\}\quad\text{and}\quad\left\{\!\!\left\{\,\mathsf{mwl}_{H}^{\infty}(\mathbf{u}\mathbf{v})\ \big{|}\ \mathbf{u},\mathbf{v}\in V(H)^{t}\,\right\}\!\!\right\}\]
_are the same._
Since the finest colouring \(\mathsf{mwl}_{G}^{\infty}\) is reached in at most \(n^{2t}-1\) iterations for graphs on \(n\) vertices, for fixed \(t\) it can be tested in polynomial time whether two graphs are not distinguished by the \(t\)-dimensional \(\mathsf{mwl}\) algorithm. We are about to show that the latter happens if and only if the level-\(t\) Lasserre relaxation with non-negativity constraints is feasible. As a by-product, we obtain a logical characterisation for this equivalence relation.
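As an illustration, the following minimal sketch implements the \(t\)-dimensional \(\mathsf{mwl}\) refinement for graphs given as dictionaries mapping each vertex to its set of neighbours; the helper names (`atomic_type`, `not_distinguished`) and the joint renaming of colours are implementation choices rather than prescribed by Definition 37.

```python
# Sketch of the t-dimensional "mwl" colour refinement on pairs of t-tuples.
from itertools import product, permutations
from collections import Counter

def atomic_type(G, tup):
    """Initial colour mwl^0: equality and adjacency pattern of a 2t-tuple."""
    n = len(tup)
    eq = tuple(tup[i] == tup[j] for i in range(n) for j in range(i + 1, n))
    adj = tuple(tup[j] in G[tup[i]] for i in range(n) for j in range(i + 1, n))
    return (eq, adj)

def not_distinguished(G, H, t=1):
    """True iff G and H are not distinguished by the t-dimensional mwl algorithm."""
    graphs = {"G": G, "H": H}
    tuples = {k: list(product(list(g), repeat=t)) for k, g in graphs.items()}
    colour = {k: {(r, s): atomic_type(g, r + s)
                  for r in tuples[k] for s in tuples[k]}
              for k, g in graphs.items()}
    for _ in range(max(len(G), len(H)) ** (2 * t)):  # enough rounds to stabilise
        new = {}
        for k in graphs:
            # half step: record the colours of all permutations sigma(rs), sigma in S_{2t}
            half = {(r, s): tuple(colour[k][(p[:t], p[t:])]
                                  for p in permutations(r + s))
                    for (r, s) in colour[k]}
            # full step: append the multiset over intermediate t-tuples w
            new[k] = {(r, s): (half[(r, s)],
                               frozenset(Counter((half[(r, w)], half[(w, s)])
                                                 for w in tuples[k]).items()))
                      for (r, s) in colour[k]}
        # rename colours jointly so they remain comparable between G and H
        ids = {c: i for i, c in
               enumerate(set(new["G"].values()) | set(new["H"].values()))}
        colour = {k: {p: ids[c] for p, c in new[k].items()} for k in graphs}
    return Counter(colour["G"].values()) == Counter(colour["H"].values())

# Two isomorphic 3-vertex graphs (an edge plus an isolated vertex) are,
# as expected, not distinguished:
print(not_distinguished({0: {1}, 1: {0}, 2: set()},
                        {0: set(), 1: {2}, 2: {1}}))   # True
```

Since the colourings of both graphs are renamed consistently in every round, the final multisets are comparable, and \(n^{2t}\) rounds suffice for both colourings to stabilise.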
**Definition 38**.: _For \(t\geq 1\), let \(\mathsf{M}^{t}\) denote the fragment of first-order logic with counting quantifiers and at most \(3t\) variables comprising the following expressions:_
* \(x_{i}=x_{j}\) _and_ \(Ex_{i}x_{j}\) _for all_ \(i,j\in[3t]\)_,_
* _if_ \(\varphi,\psi\in\mathsf{M}^{t}\) _then_ \(\neg\varphi\)_,_ \(\varphi\wedge\psi\)_, and_ \(\varphi\vee\psi\) _are in_ \(\mathsf{M}^{t}\)_,_
* _if_ \(\varphi,\psi\in\mathsf{M}^{t}\) _and_ \(n\in\mathbb{N}\) _then_ \(\exists^{\geq n}\boldsymbol{y}\)_._ \(\varphi(\boldsymbol{x},\boldsymbol{y})\wedge\psi(\boldsymbol{y},\boldsymbol{z})\) _is in_ \(\mathsf{M}^{t}\)_. Here, the bold face letters_ \(\boldsymbol{x}\)_,_ \(\boldsymbol{y}\)_,_ \(\boldsymbol{z}\) _denote_ \(t\)_-tuples of distinct variables._
The semantics of the quantifier \(\exists^{\geq n}\boldsymbol{y}.\ \varphi(\boldsymbol{y})\) is that there exist at least \(n\) many \(t\)-tuples of vertices from the graph over which the formula is evaluated which satisfy \(\varphi\). The following Theorem 39 may be thought of as an analogue of Theorem 3 for \(\mathcal{L}^{+}_{t}\).
**Theorem 39**.: _Let \(t\geq 1\). For graphs \(G\) and \(H\), the following are equivalent:_
* \(G\) and \(H\) are not distinguished by the \(t\)-dimensional _mwl_ algorithm,
* \(G\) and \(H\) are homomorphism indistinguishable over \(\mathcal{L}^{+}_{t}\),
* \(G\) and \(H\) satisfy the same \(\mathsf{M}^{t}\)-sentences.
The proof of Theorem 39 is deferred to Appendix E. It is conceptually similar to arguments of [7, 14, 20]. As mentioned above, Theorem 39 implies Theorem 4.
## 6 Conclusion
We have established a characterisation of the feasibility of the level-\(t\) Lasserre relaxation with and without non-negativity constraints of the integer program \(\operatorname{ISO}(G,H)\) for graph isomorphism in terms of homomorphism indistinguishability over the graph classes \(\mathcal{L}_{t}\) and \(\mathcal{L}^{+}_{t}\). By analysing the treewidth of the graphs in \(\mathcal{L}_{t}\) and \(\mathcal{L}^{+}_{t}\) and invoking results from the theory of homomorphism indistinguishability, we have determined the precise number of Sherali-Adams levels needed so that their feasibility guarantees the feasibility of the level-\(t\) Lasserre relaxation. This concludes a line of research brought forward in [4]. For feasibility of the level-\(t\) Lasserre relaxation with non-negativity constraints, we have given, besides linear algebraic reformulations generalising the adjacency algebra of a graph, a polynomial time algorithm deciding this property.
An interesting extension of our work might be an efficient algorithm for computing an explicit partial \(t\)-equivalence between two graphs, cf. the definitions of partial \(t\)-equivalence and \(t\)-equivalence, or deciding that no such map exists. This would yield an efficient algorithm for deciding the exact feasibility of the Lasserre semidefinite program without non-negativity constraints, cf. [4].
Another object meriting further investigation is the maximum number of iterations needed by the mwl-algorithm to terminate. Recent work [17] suggests that the trivial bound of \(O(n^{2t})\) rounds is far from best possible. By virtue of Theorem 39, tighter bounds on this number would yield for every \(n\) proper subclasses of \(\mathcal{L}_{t}\) whose homomorphism counts suffice to determine whether \(G\simeq_{t}^{\mathrm{L}^{+}}H\) for \(n\)-vertex graphs \(G\) and \(H\).
2308.05608 | On a nonlocal two-phase flow with convective heat transfer | We study a system describing the dynamics of a two-phase flow of
incompressible viscous fluids influenced by the convective heat transfer of
Caginalp-type. The separation of the fluids is expressed by the order parameter
which is of diffuse interface and is known as the Cahn-Hilliard model. We shall
consider a nonlocal version of the Cahn-Hilliard model which replaces the
gradient term in the free energy functional into a spatial convolution operator
acting on the order parameter and incorporate with it a potential that is
assumed to satisfy an arbitrary polynomial growth. The order parameter is
influenced by the fluid velocity by means of convection, while the temperature
affects the interface via a modification of the Landau-Ginzburg free energy.
The fluid is governed by the Navier--Stokes equations, which are affected by the
order parameter and the temperature by virtue of the capillarity between the
two fluids. The temperature on the other hand satisfies a parabolic equation
that considers latent heat due to phase transition and is influenced by the
fluid via convection. The goal of this paper is to prove the global existence
of weak solutions and show that, for an appropriate choice of sequence of
convolutional kernels, the solutions of the nonlocal system converge to its
local version. | Šárka Nečasová, John Sebastian H. Simon | 2023-08-10T14:38:04Z | http://arxiv.org/abs/2308.05608v2 | # On a nonlocal two-phase flow with convective heat transfer
###### Abstract
We study a system describing the dynamics of a two-phase flow of incompressible viscous fluids influenced by the convective heat transfer of Caginalp-type. The separation of the fluids is expressed by the order parameter which is of diffuse interface and is known as the Cahn-Hilliard model. We shall consider a nonlocal version of the Cahn-Hilliard model which replaces the gradient term in the free energy functional into a spatial convolution operator acting on the order parameter and incorporate with it a potential that is assumed to satisfy an arbitrary polynomial growth. The order parameter is influenced by the fluid velocity by means of convection, while the temperature affects the interface via a modification of the Landau-Ginzburg free energy. The fluid is governed by the Navier-Stokes equations, which are affected by the order parameter and the temperature by virtue of the capillarity between the two fluids. The temperature, on the other hand, satisfies a parabolic equation that considers latent heat due to phase transition and is influenced by the fluid via convection. The goal of this paper is to prove the global existence of weak solutions and show that, for an appropriate choice of sequence of convolutional kernels, the solutions of the nonlocal system converge to its local version.
keywords: nonlocal Cahn-Hilliard equations, Boussinesq equations, well-posedness, nonlocal-to-local convergence. MSC [2020]: 45K05, 76D03, 76T06, 35B40
## 1 Introduction
The study of multi-phase flows has been of interest among experts due to their complexities and robust properties. One of the well-known models is the so-called Cahn-Hilliard equation which models a binary immiscible fluid that assumes a diffuse interface between the two fluids [1; 2]. Since then several modifications have been introduced to take into account the physical and biological implications of the model, see eg. [3; 4] for application to tumor growth, [5; 6] for binary fluids with unmatched densities, [7; 8] for binary magnetic fluids, [9] for separation of diblock copolymers, and [10] for image reconstruction.
Aside from modifying the original system or coupling it with other physical models to describe more physical (or biological) phenomena, the Cahn-Hilliard equations have been recast to account for long-range particle interactions on the interface of the two fluids. This version of the Cahn-Hilliard equations, known to many as the nonlocal Cahn-Hilliard equations, was introduced by G. Giacomin and J. Lebowitz [11; 12]. Such a model has been the subject of many expositions, among them the works [13; 14; 15; 16; 17; 18; 19]. These modifications serve as motivations for this article.
In particular, we will consider an incompressible two-phase flow undergoing phase separation due to diffusion and influenced by temperature by means of a Caginalp-type equation. The dynamics of the binary fluids described by the Cahn-Hilliard equations will make use of the aforementioned nonlocal system.
Let \(\Omega\subset\mathbb{R}^{d}\) be a fixed domain - with sufficiently smooth boundary \(\partial\Omega\) - which a two-phase fluid occupies, and \(T>0\) be a fixed final time. We denote by \((\mathbf{u},p):\Omega\times(0,T)\to\mathbb{R}^{d}\times\mathbb{R}\) the velocity-pressure pairing of the fluid, \(\theta:\Omega\times(0,T)\to\mathbb{R}\) the relative temperature around the critical temperature \(\theta_{c}=0\), and \(\varphi:\Omega\times(0,T)\to\mathbb{R}\) the (relative) concentration of the binary fluid.
The concentration \(\varphi\) is described by the Cahn-Hilliard equation with a transport term influenced by the average velocity \(\mathbf{u}\), which takes values \(\varphi\in[-1,1]\), where \(\varphi=1\) or \(\varphi=-1\) corresponds to the two different phases of the fluids. To be specific, the dynamics of \(\varphi\) are governed by
\[D_{t}\varphi=m\Delta\mu,\]
where \(D_{t}=\partial_{t}+\mathbf{u}\cdot\nabla\) is the material derivative due to the fluid velocity, \(\mu\) is the chemical potential, and \(m\) corresponds to the mobility of the interface. Such model - if one, for the meantime, ignores the contribution of the fluid velocity - is based on the so-called H-model which considers a diffuse interface between two interacting fluids [1, 20].
The average velocity and pressure of the fluids are governed by the Navier-Stokes equations and are influenced by the order parameter \(\varphi\) and the relative temperature \(\theta\) through a Korteweg-type force proportional to \((\mu-\ell_{c}\theta)\nabla\varphi\), where \(\ell_{c}\) is a parameter concerning latent heat [21, Appendix]. A Boussinesq approximation which incorporates a constant gravitational force with the linearized equation of state expressed as \(\ell(\varphi,\theta)\mathbf{g}=(\alpha_{0}+\alpha_{1}\varphi+\alpha_{2}\theta)\mathbf{g}\) is also assumed to affect the fluid. To be specific, the fluid is governed by the equation
\[D_{t}\mathbf{u}-\mathrm{div}(\nu(\varphi)2\mathrm{D}\mathbf{u})+\nabla p= \mathcal{K}(\mu-\ell_{c}\theta)\nabla\varphi+\ell(\varphi,\theta)\mathbf{g}+ \mathbf{q}.\]
Here \(\mathcal{K}\) is a capillarity constant, \(\nu(\varphi)>0\) denotes the fluid viscosity, \(\mathbf{q}\) denotes an external force and the operator \(\mathrm{D}\) is defined as \(\mathrm{D}\mathbf{u}=(\nabla\mathbf{u}+\nabla\mathbf{u}^{\top})/2\). Ignoring for the meantime the contribution of the temperature \(\theta\), the combination of the two equations previously introduced is the well-known Navier-Stokes/Cahn-Hilliard system which has been considered in numerous expositions, see for example the papers [5, 7, 22, 23, 24, 25, 26, 27] and the references therein.
The equation concerning the temperature is based on Caginalp's interpretation of free boundary problems derived from phase transition [28]. By introducing the quantity \(H(\theta,\varphi):=\theta-\ell_{h}\varphi\) we consider the dynamics governed by the transport equation
\[D_{t}H(\theta,\varphi)-\kappa\Delta\theta=\mathbf{g}\cdot\mathbf{u}+z,\]
where \(\ell_{h}\) is a constant related to the latent heat, \(\mathbf{g}\) is the gravitational force, and \(z\) is an external heat source. The Laplacian above comes from the assumption that the heat flux follows Fourier's thermal conduction law.
Before we go any further, let us talk about the chemical potential \(\mu\). Here, we note that the usual Landau-Ginzburg free energy functional is written as
\[\tilde{\mathcal{E}}(\varphi,\theta):=\int_{\Omega}\left(\frac{\xi}{2}|\nabla \varphi|^{2}+\eta F(\varphi)+\ell_{c}\theta\varphi\right)\mathrm{d}x,\]
where the chemical potential \(\mu\) is obtained as the first variation of \(\tilde{\mathcal{E}}(\cdot,\theta)\), i.e., \(\mu=-\xi\Delta\varphi+\eta F^{\prime}(\varphi)+\ell_{c}\theta\). Such a functional is used in [29, 30] to model a nonisothermal diffuse interface two-phase flow, where the authors showed the existence of solutions and its application to optimal control problems.
Recent developments, however, saw the applicability of nonlocal energy functionals, for example in tumor growth models [4] and in exhibiting a sharp interface without having to take the asymptotic limit of the diffusive constant [15]. In this paper, we shall modify the temperature-dependent Landau-Ginzburg free energy functional above into a nonlocal one by changing the gradient into a nonlocal spatial operator. To be specific, we shall use the nonlocal Landau-Ginzburg free energy functional
\[\mathcal{E}(\varphi,\theta):= \frac{1}{4}\int_{\Omega}\int_{\Omega}J(x-y)(\varphi(x)-\varphi(y ))^{2}\,\mathrm{d}x\,\mathrm{d}y+\int_{\Omega}\left(\eta F(\varphi(x))+\ell_{ c}\theta(x)\varphi(x)\right)\mathrm{d}x,\]
where \(J:\mathbb{R}^{d}\to\mathbb{R}\) is a sufficiently smooth function that satisfies \(J(x)=J(-x)\). The first variation of \(\mathcal{E}(\cdot,\theta)\) will then give us the chemical potential \(\mu=a\varphi-J*\varphi+\eta F^{\prime}(\varphi)+\ell_{c}\theta\), where
\[a(x):=\int_{\Omega}J(x-y)\,\mathrm{d}y\ \ \text{and}\ \ (J*\varphi)(x)=\int_{ \Omega}J(x-y)\varphi(y)\,\mathrm{d}y. \tag{1.1}\]
Such energy functional has been proven to take into account microscopic interactions that describe phase segregation [11, 12]. We also mention [17, 18] where the authors proved the convergence of the nonlocal system to the local one which also serve as a justification of the contention that \(\tilde{\mathcal{E}}\) is the macroscopic limit of the energy functional \(\mathcal{E}\).
Combining all these, we get the non-local Cahn-Hilliard-Boussinesq equations
\[\partial_{t}\varphi+\mathbf{u}\cdot\nabla\varphi=m\Delta\mu,\quad\mu=a \varphi-J*\varphi+\eta F^{\prime}(\varphi)+\ell_{c}\theta, \tag{1.2a}\] \[\partial_{t}\mathbf{u}+(\mathbf{u}\cdot\nabla)\mathbf{u}-\mathrm{div}(\nu( \varphi)2\mathrm{D}\mathbf{u})+\nabla p=\mathcal{K}(\mu-\ell_{c}\theta)\nabla \varphi+\ell(\varphi,\theta)\mathbf{g}+\mathbf{q},\] (1.2b) \[\partial_{t}\theta-\ell_{h}\partial_{t}\varphi+\mathbf{u}\cdot\nabla(\theta- \ell_{h}\varphi)-\kappa\Delta\theta=\mathbf{g}\cdot\mathbf{u}+z, \tag{1.2c}\]
all of which are satisfied in the space-time domain \(Q:=\Omega\times I\), where \(I=(0,T)\), with the divergence-free assumption \(\mathrm{div}\,\mathbf{u}=0\) in \(Q\), the boundary conditions \(\frac{\partial\mu}{\partial\mathbf{n}}=0\), \(\mathbf{u}=0\), and \(\frac{\partial\theta}{\partial\mathbf{n}}=0\) on \(\Gamma:=\partial\Omega\times I\), and the initial conditions \(\varphi(0)=\varphi_{0}\), \(\mathbf{u}(0)=\mathbf{u}_{0}\), and \(\theta(0)=\theta_{0}\) in \(\Omega\).
The purpose of this article is to show the existence of weak solutions through the spectral Galerkin method and to prove that, under appropriate conditions and choice of the convolutional kernel, solutions of the nonlocal system converge to its local version. We shall apply the techniques used in [31] for the Cahn-Hilliard system with periodic boundary conditions and singular free energy density, by [17] for the Cahn-Hilliard equations with singular potentials and \(W^{1,1}\) kernels, and by [6] for a Cahn-Hilliard-Navier-Stokes system with unmatched densities.
_The novelties in this article are as follows:_
(i) compared to the works of P. Colli et al. [16] and S. Frigeri et al. [19] we considered a nonisothermal system, where the influence of the temperature is based on G. Caginalp [28]; (ii) aside from modifying the local free energy functional into the nonlocal version in [29, 30], we considered a non-constant viscosity dependent on the order parameter, a generalized potential of polynomial growth, and considered not only the two-dimensional case but also delved into the system in three dimensions.
The subsequent parts of this paper are as follows: the next section is devoted to the mathematical setting, such as the functional spaces and the necessary embeddings. Section 3 deals with the existence of weak solutions for (1.2), where we use the spectral Galerkin method as a discretization whose solutions converge to those of the original continuous system. The final section establishes convergence of the nonlocal system to its local version by using the framework introduced in [32; 33] and the \(\Gamma\)-convergence in [34; 35].
## 2 Preliminaries
Let us introduce the function spaces upon which the analysis is anchored. Given a Banach space \(X\) with its dual \(X^{*}\), the duality pairing is simply denoted as \(\langle x^{*},x\rangle_{X}\) for \(x^{*}\in X^{*}\) and \(x\in X\). We denote by \(L^{p}(\Omega)\) the space of Lebesgue integrable functions of order \(p\geq 1\), and the usual Sobolev-Slobodeckij spaces are denoted as \(W^{s,p}(\Omega)\) with \(H^{s}(\Omega)=W^{s,2}(\Omega)\). For simplicity, we use \(\|\cdot\|\) and \((\cdot,\cdot)_{\Omega}\) to denote the norm and inner product in \(L^{2}(\Omega)\) or \(L^{2}(\Omega)^{d}\), for \(d=2,3\), \(\|\cdot\|_{L^{p}}\) for the norm in \(L^{p}(\Omega)\) or \(L^{p}(\Omega)^{d}\), and \(\|\cdot\|_{W^{s,p}}\) the norm in \(W^{s,p}(\Omega)\) or \(W^{s,p}(\Omega)^{d}\). To take into account the incompressibility of the fluid we utilize the following solenoidal spaces \(V_{\sigma}=\{\mathbf{u}\in H^{1}_{0}(\Omega)^{d}:\operatorname{div}\mathbf{u}=0\text{ in }\Omega\}\), and \(H_{\sigma}=\{\mathbf{u}\in L^{2}(\Omega)^{d}:\operatorname{div}\mathbf{u}=0\text{ in }\Omega,\,\mathbf{u}\cdot\mathbf{n}=0\text{ on }\partial\Omega\}\).
Let us define \(V_{s}:=D(B^{s/2})\), where \(B=-\Delta\) with homogeneous Neumann boundary conditions. We hence have
\[V_{2}=D(B)=\left\{\varphi\in H^{2}(\Omega):\frac{\partial\varphi}{\partial \mathbf{n}}=0\text{ on }\partial\Omega\right\}.\]
We also use the notation \(H:=V_{0}=L^{2}(\Omega)\) and \(V:=V_{1}=H^{1}(\Omega)\). We note that \(B:D(B)\subset H\to H\) is an unbounded linear operator in \(H\), and \(B^{-1}:H\to H\) is a self-adjoint compact operator on \(H\), which is also maximal monotone. We obtain a sequence of eigenvalues \(\nu_{j}\) with \(0<\nu_{1}\leq\nu_{2}\leq\ldots\leq\nu_{j}\to\infty\), with the associated eigenfunctions \(\psi_{j}\in D(B)\) such that \(B\psi_{j}=\nu_{j}\psi_{j}\) for all \(j\). The set \(\{\psi_{j}\}\) is an orthonormal eigenbasis for \(H\) while orthogonal in \(V\) and \(D(B)\).
Let us introduce the Stokes operator \(A:D(A)\subset H_{\sigma}\to H_{\sigma}\) defined as \(A=-P\Delta\), where \(P:L^{2}(\Omega)^{d}\to H_{\sigma}\) is the Leray projector. We note that \(D(A)=H^{2}(\Omega)^{d}\cap V_{\sigma}\) whenever \(\Omega\) is either of class \(\mathcal{C}^{2}\) or a convex polygonal domain. A sequence of eigenvalues \(\lambda_{j}\) with \(0\leq\lambda_{1}\leq\lambda_{2}\leq\ldots\leq\lambda_{j}\to\infty\) can be obtained with the associated eigenfunctions \(v_{j}\in D(A)\) so that \(Av_{j}=\lambda_{j}v_{j}\). The set \(\{v_{j}\}\) is an orthonormal eigenbasis for \(H_{\sigma}\) which is orthogonal in \(D(A)\).
Let \(\mathbf{u}\in V_{\sigma}\) and \(\varphi\in H\), the operator \(\mathcal{A}:V_{\sigma}\times H\to V_{\sigma}^{*}\) is defined as
\[\langle\mathcal{A}(\mathbf{u},\varphi),\mathbf{v}\rangle_{V_{\sigma}}=2(\nu( \varphi)\mathrm{D}\mathbf{u},\mathrm{D}\mathbf{v})_{\Omega}.\]
The trilinear form \(b:[H^{1}(\Omega)^{d}]^{3}\to\mathbb{R}\) coming out of the weak formulation of the Navier-Stokes equations is defined as \(b(\mathbf{u};\mathbf{v},\mathbf{w})=((\mathbf{u}\cdot\nabla)\mathbf{v}, \mathbf{w})_{\Omega}\). Such trilinear form is continuous and known to satisfy \(b(\mathbf{u};\mathbf{v},\mathbf{w})=-b(\mathbf{u};\mathbf{w},\mathbf{v})\) for all \(\mathbf{u},\mathbf{v},\mathbf{w}\in V_{\sigma}\). From \(b\), we also define \(\mathcal{B}:V_{\sigma}\times V_{\sigma}\to V_{\sigma}^{*}\) as \(\langle\mathcal{B}(\mathbf{u},\mathbf{v}),\mathbf{w}\rangle_{V_{\sigma}}=b( \mathbf{u};\mathbf{v},\mathbf{w})\) for \(\mathbf{u},\mathbf{v},\mathbf{w}\in V_{\sigma}\). From which we shall use the notation \(\mathcal{B}(\mathbf{u},\mathbf{u}):=\mathcal{B}(\mathbf{u})\) for simplicity.
To take into account the transport terms involving the scalar functions, we introduce the operators \(\mathcal{C}_{1}:V_{\sigma}\times V\to V^{*}\) and \(\mathcal{C}_{2}:V\times V\to V_{\sigma}^{*}\) respectively defined as
\[\langle\mathcal{C}_{1}(\mathbf{v},\varphi),\psi\rangle_{V}=( \mathbf{v}\cdot\nabla\varphi,\psi)_{\Omega},\] \[\langle\mathcal{C}_{2}(\varphi,\psi),\mathbf{v}\rangle_{V_{\sigma}} =(\mathbf{v}\cdot\nabla\varphi,\psi)_{\Omega}=(\psi\nabla\varphi,\mathbf{v})_{ \Omega}.\]
The following inequalities are well-known but we reiterate them here for completeness.
**Lemma 2.1**.: _Let \(\Omega\subset\mathbb{R}^{n}\) be an open and bounded domain that is at least of Lipschitzian class._
1. _Rellich-Kondrachov: If \(1\leq mp<n\) then, setting \(p^{*}=np/(n-mp)\), the embedding \(W^{m,p}(\Omega)\hookrightarrow L^{p^{*}}(\Omega)\) is continuous, while the embedding \(W^{m,p}(\Omega)\hookrightarrow L^{q}(\Omega)\) is compact whenever \(1\leq q<p^{*}\). If \(mp\geq n\) then we have the compact embedding \(W^{m,p}(\Omega)\hookrightarrow L^{q}(\Omega)\) whenever \(1\leq q<\infty\)._
2. _Gagliardo-Nirenberg: Let_ \(1\leq p_{1},p_{2},p_{3}\leq+\infty\)_,_ \(j,m\in\mathbb{N}\) _with_ \(j<m\) _and_ \(\theta\in[0,1]\) _satisfy the relation_ \[\frac{1}{p_{1}}=\theta\left(\frac{1}{p_{3}}-\frac{m}{n}\right)+\frac{1-\theta }{p_{2}},\quad\frac{j}{m}\leq\theta\leq 1.\] _Then if_ \(u\in L^{p_{2}}(\Omega)\cap W^{m,p_{3}}(\Omega)\) _we get, for some_ \(c>0\) _independent of_ \(u\)_,_ \[\|u\|_{L^{p_{1}}}\leq c\|u\|_{W^{m,p_{3}}}^{\theta}\|u\|_{L^{p_{2}}}^{1-\theta}.\]
3. _Interpolation in_ \(L^{p}\)_: Let_ \(u\in L^{p_{1}}(\Omega)\)_,_ \(p_{2}<p_{1}\) _and_ \(p_{\theta}=p_{1}p_{2}/[(1-\theta)p_{1}+\theta p_{2}]\)_. There exists_ \(c>0\) _independent of_ \(u\) _such that_ \[\|u\|_{L^{p_{\theta}}}\leq c\|u\|_{L^{p_{1}}}^{\theta}\|u\|_{L^{p_{2}}}^{1- \theta}.\]
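For instance, choosing \(j=0\), \(m=1\), \(n=2\), \(p_{2}=p_{3}=2\) and \(p_{1}=4\) in item (ii) above forces \(\theta=1/2\) and recovers the Ladyzhenskaya inequality \(\|u\|_{L^{4}}\leq c\|u\|_{W^{1,2}}^{1/2}\|u\|^{1/2}\), which is the key ingredient behind the two-dimensional estimate (2.2) below.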
By virtue of the Gagliardo-Nirenberg inequality, the following properties hold true for any \(\mathbf{u},\mathbf{v},\mathbf{w}\in V_{\sigma}\):
\[|b(\mathbf{u};\mathbf{v},\mathbf{w})| \leq c\|\mathbf{u}\|_{H_{\sigma}}^{1/2}\|\mathbf{u}\|_{V_{\sigma}}^{1/2}\|\mathbf{v}\|_{V_{\sigma}}\|\mathbf{w}\|_{V_{\sigma}},\qquad\text{for }d=3, \tag{2.1}\] \[|b(\mathbf{u};\mathbf{v},\mathbf{w})| \leq c\|\mathbf{u}\|_{H_{\sigma}}^{1/2}\|\mathbf{u}\|_{V_{\sigma}}^{1/2}\|\mathbf{v}\|_{V_{\sigma}}\|\mathbf{w}\|_{H_{\sigma}}^{1/2}\|\mathbf{w}\|_{V_{\sigma}}^{1/2},\qquad\text{for }d=2. \tag{2.2}\]
Functions defined on the time interval \(I\) with values in a real Banach space \(X\) shall also be utilized. We denote by \(C(I;X)\) the functions which are continuous from \(I\) to \(X\), upon which we use the norm \(\|u\|_{C(I;X)}:=\sup\limits_{t\in\overline{I}}\|u(t)\|_{X}\). The Bochner spaces \(L^{p}(I;X)\) with the norm
\[\|u\|_{L^{p}(I;X)}=\begin{cases}\left(\int_{I}\|u(t)\|_{X}\,\mathrm{d}t\right) ^{1/p}&\text{if }p<\infty,\\ \operatorname*{ess}\sup\limits_{t\in I}\|u(t)\|_{X}&\text{if }p=\infty, \end{cases}\]
shall also be utilized, as well as the space \(W^{m,p}(I;X):=\{u\in L^{p}(I;X):\partial_{t}^{j}u\in L^{p}(I;X),0\leq j\leq m\}\).
We recall Aubin-Lions-Simon embedding. Let \(X,Y,Z\) be Banach spaces such that the embeddings \(X\hookrightarrow Y\) and \(Y\hookrightarrow Z\) are compact and continuous, respectively. Given \(1\leq p,q\leq\infty\), the space \(W^{p,q}(X,Z):=\{u\in L^{p}(I;X):\partial_{t}u\in L^{q}(I;Z)\}\) is compactly embedded to \(L^{p}(I;Y)\) if \(p<+\infty\). On the other hand, if \(p=+\infty\) and \(q>1\) the embedding \(W^{p,q}(X,Z)\hookrightarrow C(\overline{I};Y)\) is compact.
## 3 Existence of weak solution
In this section we provide the existence of weak solutions to system (1.2). By weak solution we use the following notion:
**Definition 3.1**.: _Suppose that \(\varphi_{0}\in H\) with \(F(\varphi_{0})\in L^{1}(\Omega)\), \(\mathbf{u}_{0}\in H_{\sigma}\), and \(\theta_{0}\in H\). We say that \([\varphi,\mathbf{u},\theta]\) is a weak solution to (1.2) (with the boundary condition incorporated) on \([0,T]\) corresponding to the initial conditions if_
* \([\varphi,\mathbf{u},\theta]\) _including_ \(\mu\) _satisfies_ \[\begin{cases}\varphi\in L^{\infty}(I;H)\cap L^{2}(I;V),\\ \mu\in L^{2}(I;V),\\ \partial_{t}\varphi\in L^{4/d}(I;V^{*}),\\ \end{cases}\] \[\begin{cases}\mathbf{u}\in L^{\infty}(I;H_{\sigma})\cap L^{2}(I;V_{ \sigma}),\\ \partial_{t}\mathbf{u}\in L^{4/d}(I;V^{*}_{\sigma}),\\ \end{cases}\] \[\begin{cases}\theta\in L^{\infty}(I;H)\cap L^{2}(I;V),\\ \partial_{t}\theta\in L^{4/d}(I;V^{*}),\end{cases}\]
* _letting_ \(\rho(x,\varphi):=a(x)\varphi+F^{\prime}(\varphi)\)_, the following system of variational problems holds:_ \[\begin{split}&\langle\partial_{t}\varphi(t),\psi\rangle_{V}+(\mathbf{u}(t)\cdot\nabla\varphi(t),\psi)_{\Omega}+(m(\varphi)\nabla\rho(t),\nabla\psi)_{\Omega}\\ &=(m(\varphi)\nabla(J*\varphi(t)),\nabla\psi)_{\Omega}-\ell_{c}(m(\varphi)\nabla\theta(t),\nabla\psi)_{\Omega},\end{split}\] (3.1a) \[\begin{split}&\langle\partial_{t}\mathbf{u}(t),\mathbf{v}\rangle_{V_{\sigma}}+((\mathbf{u}(t)\cdot\nabla)\mathbf{u}(t),\mathbf{v})_{\Omega}+2(\nu(\varphi)\mathrm{D}\mathbf{u}(t),\mathrm{D}\mathbf{v})\\ &=\mathcal{K}(\mathbf{v}\cdot\nabla\varphi(t),\mu(t)-\ell_{c}\theta(t))_{\Omega}+(\ell(\varphi(t),\theta(t))\mathbf{g},\mathbf{v})_{\Omega}+\langle\mathbf{q},\mathbf{v}\rangle_{V_{\sigma}},\end{split}\] (3.1b) \[\begin{split}&\langle\partial_{t}\theta(t),\vartheta\rangle_{V}-\ell_{h}\langle\partial_{t}\varphi(t),\vartheta\rangle_{V}+\kappa(\nabla\theta(t),\nabla\vartheta)\\ &=(\mathbf{u}(t)\cdot\nabla(\ell_{h}\varphi(t)-\theta(t)),\vartheta)_{\Omega}+(\mathbf{g}\cdot\mathbf{u}(t),\vartheta)_{\Omega}+\langle z,\vartheta\rangle_{V}\end{split}\] (3.1c) _for all_ \(\psi\in V\), \(\mathbf{v}\in V_{\sigma}\), \(\vartheta\in V\) _and almost every_ \(t\in(0,T)\);
* _the initial conditions hold in weak sense, i.e._ \[(\varphi(t),\psi)_{\Omega}\rightarrow(\varphi_{0},\psi)_{\Omega} \text{ as }t\to 0\quad\forall\psi\in V,\] (3.2) \[(\mathbf{u}(t),\mathbf{v})_{\Omega}\rightarrow(\mathbf{u}_{0}, \mathbf{v})_{\Omega}\text{ as }t\to 0\quad\forall\mathbf{v}\in V_{\sigma},\] (3.3) \[(\theta(t),\vartheta)_{\Omega}\rightarrow(\theta_{0},\vartheta)_{\Omega} \text{ as }t\to 0\quad\forall\vartheta\in V.\] (3.4)
Note that \((\mathbf{u}(t)\cdot\nabla\varphi(t),\psi)_{\Omega}=-(\mathbf{u}(t)\cdot\nabla\psi,\varphi(t))_{\Omega}\), since \(\operatorname{div}\mathbf{u}(t)=0\) and \(\mathbf{u}(t)\) vanishes on the boundary. This implies that, by taking \(\psi=1\) in (3.1a), we get \(\langle\partial_{t}\varphi(t),1\rangle_{V}=0\). Hence, the total mass of the order parameter is conserved.
To be able to provide such existence, we have to impose the following assumptions on the kernel \(J\), the viscosity \(\nu\), the potential \(F\), and the external forces \(\mathbf{q}\) and \(z\):
* \(J\in W^{1,1}(\mathbb{R}^{d})\), \(\quad J(x)=J(-x)\), and \(a(x)\geq 0\) a.e. \(x\in\Omega\).
* We assume that the mobility is constant and, for simplicity, set \(m(\varphi)=1\), while the viscosity function \(\nu\) is locally Lipschitz on \(\mathbb{R}\) and there exist \(\underline{\nu},\overline{\nu}>0\) such that \(\underline{\nu}\leq\nu(s)\leq\overline{\nu}\) for all \(s\in\mathbb{R}\).
* \(F\in C^{2,1}_{loc}(\mathbb{R})\) and that there exists \(c_{0}>0\) such that \(F^{\prime\prime}(s)+a(x)\geq c_{0}\), for all \(s\in\mathbb{R}\) and a.e. \(x\in\Omega\).
* There exists \(c_{1}>\frac{1}{2}\|J\|_{L^{1}(\mathbb{R}^{d})}\) and \(c_{2}\in\mathbb{R}\) such that \(F(s)\geq c_{1}s^{2}-c_{2}\), for all \(s\in\mathbb{R}\).
* There exists \(c_{3}>0\), \(c_{4}\geq 0\), and \(p\in(1,2]\) such that \(|F^{\prime}(s)|^{p}\leq c_{3}|F(s)|+c_{4}\) for all \(s\in\mathbb{R}\).
* \(\mathbf{q}\in L^{2}(I;V_{\sigma}^{*})\), and \(z\in L^{2}(I;V^{*})\).
We mention that (A1) is a standard assumption for the nonlocal Cahn-Hilliard equations which properly serves our purpose of establishing existence of weak solutions and is satisfied by the sequence of kernels used for establishing the nonlocal-to-local convergence; nevertheless, we mention [14] for a slightly stricter assumption and [19, Remark 1] for a weaker assumption.
Note that (A3) implies that \(F\) can be written as \(F(s)=G(s)-\frac{a^{*}}{2}s^{2}\), where \(a^{*}=\|a\|_{L^{\infty}(\Omega)}\) and \(G\in C^{2,1}(\mathbb{R})\) is a strictly convex function. Assumption (A5), on the other hand, allows \(F\) to have an arbitrary polynomial growth.
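For instance, the standard double-well potential \(F(s)=\frac{1}{4}(s^{2}-1)^{2}\) fits into this setting: \(F^{\prime}(s)=s^{3}-s\) and \(F^{\prime\prime}(s)=3s^{2}-1\geq-1\), so (A3) holds as soon as \(a(x)\geq 1+c_{0}\) for a.e. \(x\in\Omega\) and some \(c_{0}>0\); (A4) holds for any fixed \(c_{1}>\frac{1}{2}\|J\|_{L^{1}(\mathbb{R}^{d})}\) with a suitable \(c_{2}\), since the quartic term dominates \(c_{1}s^{2}\); and (A5) holds with \(p=4/3\), since \(|F^{\prime}(s)|^{4/3}\leq c_{3}|F(s)|+c_{4}\) for suitable constants \(c_{3},c_{4}>0\).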
We also mention Korn's inequality which will be useful in the subsequent analyses
**Lemma 3.2**.: _There exists a constant \(c_{\mathrm{K}}>0\) such that the inequalities \(\|\mathrm{D}\mathbf{v}\|\leq\|\mathbf{v}\|_{V_{\sigma}}\leq c_{\mathrm{K}}\|\mathrm{D}\mathbf{v}\|\) hold for any \(\mathbf{v}\in V_{\sigma}\)._
Lastly, let us introduce the total energy for the system which is given by:
\[\begin{split} 2\mathbb{E}(\varphi,\mathbf{u},\theta):=& \frac{1}{\ell_{c}}\left\{\frac{1}{2}\int_{\Omega}\int_{\Omega}J(x-y)( \varphi(x)-\varphi(y))^{2}\,\mathrm{d}y\,\mathrm{d}x+2\int_{\Omega}F(\varphi( x))\,\mathrm{d}x\right\}\\ &+\frac{1}{\mathcal{K}\ell_{c}}\int_{\Omega}|\mathbf{u}(x)|^{2}\, \mathrm{d}x+\frac{1}{\ell_{h}}\int_{\Omega}|\theta(x)|^{2}\,\mathrm{d}x,\end{split} \tag{3.5}\]
and the dissipation functional denoted and written as:
\[\mathbb{D}(\varphi,\mu,\mathbf{u},\theta):=\frac{1}{\ell_{c}}\|\nabla\mu\|^{ 2}+\frac{2}{\mathcal{K}\ell_{c}}\|\sqrt{\nu(\varphi)}\mathrm{D}\mathbf{u}\|^ {2}+\frac{\kappa}{\ell_{h}}\|\nabla\theta\|^{2} \tag{3.6}\]
We also use the following notation for the nonlocal energy
\[\mathbb{E}_{nl}(\varphi):=\frac{1}{2}\int_{\Omega}\int_{\Omega}J(x-y)(\varphi (x)-\varphi(y))^{2}\,\mathrm{d}y\,\mathrm{d}x.\]
The proof of existence of weak solutions that we shall provide is based on that of [16, Theorem 1]. We provide a comprehensive proof for completeness, as well as to simplify the computations we go through for the asymptotic analysis in the next section. We also point out that, instead of beginning with a higher regularity for the initial data of the order parameter, as done in the aforementioned reference, we directly use a Yosida approximation for the initial data. The promised existence of weak solutions is summarized and proven below.
**Theorem 3.3**.: _Suppose that the assumptions on the initial data are as in Definition 3.1. Additionally, assume that (A1)-(A6) hold. Then, for any \(T>0\) there exists a weak solution \([\varphi,\mathbf{u},\theta]\) to the system (1.2) on \([0,T]\). Furthermore, the following energy inequality holds for almost every \(t>0\)_
\[\begin{split}&\mathbb{E}(\varphi(t),\mathbf{u}(t),\theta(t))+\int_{0 }^{t}\mathbb{D}(\varphi(s),\mu(s),\mathbf{u}(s),\theta(s))\,\mathrm{d}s\\ &\leq\mathbb{E}(\varphi_{0},\mathbf{u}_{0},\theta_{0})+\int_{0}^{t }\big{\{}\langle\mathbf{q}(s),\mathbf{u}(s)\rangle_{V_{\sigma}}+(\ell(\varphi (s),\theta(s))\mathbf{g},\mathbf{u}(s))_{\Omega}\\ &\quad+\langle z(s),\theta(s)\rangle_{V}+(\mathbf{g}\cdot \mathbf{u}(s),\theta(s))_{\Omega}\big{\}}\,\mathrm{d}s.\end{split} \tag{3.7}\]
Proof.: The existence of solutions is established using the spectral Galerkin method. We project the system (3.1a)-(3.1c) onto the finite dimensional spaces spanned by the orthonormal eigenbases of \(H\) and \(H_{\sigma}\). To be precise, we use the finite dimensional spaces \(H^{n}:=\operatorname{span}\{\psi_{j}\}_{j=1}^{n}\) and \(H_{\sigma}^{n}:=\operatorname{span}\{\mathbf{v}_{j}\}_{j=1}^{n}\). We also utilize the orthogonal projections \(P_{\sigma}^{n}:H_{\sigma}\to H_{\sigma}^{n}\) and \(P^{n}:H\to H^{n}\) respectively defined as \(P_{\sigma}^{n}\mathbf{u}=\sum_{j=1}^{n}(\mathbf{u},\mathbf{v}_{j})_{\Omega}\mathbf{v}_{j}\) and \(P^{n}\varphi=\sum_{j=1}^{n}(\varphi,\psi_{j})_{\Omega}\psi_{j}\) for any \(\mathbf{u}\in H_{\sigma}\) and \(\varphi\in H\).
The projected variational problem now reads
\[\begin{split}&\langle\partial_{t}\varphi^{n}(t),\psi\rangle_{V}+(\mathbf{u}^{n}(t)\cdot\nabla\varphi^{n}(t),\psi)_{\Omega}+(\nabla\rho^{n}(t),\nabla\psi)_{\Omega}\\ &=(\nabla(J*\varphi^{n}(t)),\nabla\psi)_{\Omega}-\ell_{c}(\nabla\theta^{n}(t),\nabla\psi)_{\Omega},\end{split}\tag{3.8a}\]
\[\begin{split}&\langle\partial_{t}\mathbf{u}^{n}(t),\mathbf{v}\rangle_{V_{\sigma}}+((\mathbf{u}^{n}(t)\cdot\nabla)\mathbf{u}^{n}(t),\mathbf{v})_{\Omega}+2(\nu(\varphi^{n})\mathrm{D}\mathbf{u}^{n}(t),\mathrm{D}\mathbf{v})\\ &=\mathcal{K}(\mathbf{v}\cdot\nabla\varphi^{n}(t),\mu^{n}(t)-\ell_{c}\theta^{n}(t))_{\Omega}+(\ell(\varphi^{n}(t),\theta^{n}(t))\mathbf{g},\mathbf{v})_{\Omega}+\langle\mathbf{q}^{n}(t),\mathbf{v}\rangle_{V_{\sigma}},\end{split}\tag{3.8b}\]
\[\langle\partial_{t}\theta^{n}(t),\vartheta\rangle_{V}-\ell_{h} \langle\partial_{t}\varphi^{n}(t),\vartheta\rangle_{V}+\kappa(\nabla\theta^{ n}(t),\nabla\vartheta) \tag{3.8c}\] \[=(\mathbf{u}^{n}(t)\cdot\nabla(\ell_{h}\varphi^{n}(t)-\theta^{n}( t)),\vartheta)_{\Omega}+(\mathbf{g}\cdot\mathbf{u}^{n}(t),\vartheta)_{ \Omega}+\langle z^{n}(t),\vartheta\rangle_{V}\]
with \(\rho(\cdot,\varphi^{n}):=a(\cdot)\varphi^{n}+F^{\prime}(\varphi^{n})\), \(\mu^{n}=P^{n}(\rho(\cdot,\varphi^{n})-J*\varphi^{n}+\ell_{c}\theta^{n})\), \(\varphi^{n}(0)=\varphi_{0}^{n}\), \(\mathbf{u}^{n}(0)=\mathbf{u}_{0}^{n}\), and \(\theta^{n}(0)=\theta_{0}^{n}\). Here the discrete initial conditions for the fluid velocity and temperature are obtained using the projections, i.e., \(\mathbf{u}_{0}^{n}=P_{\sigma}^{n}\mathbf{u}_{0}\) and \(\theta_{0}^{n}=P^{n}\theta_{0}\), while the initial condition for order parameter is obtained as \(\varphi_{0}^{n}=(I+\frac{1}{n}B)^{-1}\varphi_{0}\in D(B)\).
Using the ansatz
\[\varphi^{n}(t)=\sum_{j=1}^{n}a^{j}(t)\psi_{j},\quad\mathbf{u}^{n}(t)=\sum_{j =1}^{n}b^{j}(t)\mathbf{v}_{j},\quad\theta^{n}(t)=\sum_{j=1}^{n}c^{j}(t)\psi_{j}\]
system (3.8a)-(3.8c) can be written as a system of ODEs involving the variables \(\boldsymbol{a}(t):=[a^{1}(t),a^{2}(t),\dots,a^{n}(t)]^{\top}\), \(\boldsymbol{b}(t):=[b^{1}(t),b^{2}(t),\dots,b^{n}(t)]^{\top}\), and \(\boldsymbol{c}(t):=[c^{1}(t),c^{2}(t),\dots,c^{n}(t)]^{\top}\), whose existence in the space \(C^{1}([0,t^{n}];\mathbb{R}^{n})\), for some \(t^{n}\in(0,+\infty]\), can be established quite easily using, for example, the Cauchy-Lipschitz theorem.
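As an illustration, the following is a small numerical sketch of a one-dimensional nonlocal Cahn-Hilliard equation with homogeneous Neumann data, keeping only the order parameter (no velocity and no temperature) and using cosine pseudo-spectral collocation instead of the Galerkin projection used in the proof; the kernel \(J\), the potential \(F(s)=\frac{1}{4}(s^{2}-1)^{2}\), and all parameter values are illustrative choices, selected so that \(F^{\prime\prime}+a\) remains positive in the spirit of (A3).

```python
# Toy 1-D pseudo-spectral sketch of  d/dt phi = Laplacian(mu),
#   mu = a*phi - J*phi + F'(phi),  with Neumann data, no flow, no temperature.
import numpy as np
from scipy.fft import dct, idct

L, n, dt, steps = 1.0, 64, 2e-6, 5000
x = (np.arange(n) + 0.5) * L / n                 # cell-centred grid on (0, L)
k = np.pi * np.arange(n) / L                     # Neumann (cosine) wavenumbers

J = lambda r: 20.0 * np.exp(-(r / 0.1) ** 2)     # smooth even kernel (illustrative)
K = J(x[:, None] - x[None, :]) * (L / n)         # quadrature matrix for (J*phi)(x_i)
a = K.sum(axis=1)                                # a(x) = int_Omega J(x - y) dy
Fp = lambda s: s ** 3 - s                        # F'(s) for F(s) = (s^2 - 1)^2 / 4

def laplacian(f):
    """Spectral Neumann Laplacian via the discrete cosine transform."""
    return idct(-(k ** 2) * dct(f, norm="ortho"), norm="ortho")

rng = np.random.default_rng(0)
phi = 0.05 * rng.standard_normal(n)              # small random initial datum
mass0 = phi.mean()
for _ in range(steps):
    mu = a * phi - K @ phi + Fp(phi)             # nonlocal chemical potential
    phi = phi + dt * laplacian(mu)               # explicit Euler step

print("mass drift:", abs(phi.mean() - mass0))    # total mass should be conserved
```

Note that the zeroth cosine mode of \(\Delta\mu\) vanishes, so the total mass of \(\varphi\) is conserved up to rounding, mirroring the conservation property noted after Definition 3.1.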
We now derive _a priori_ estimates that establish \(t^{n}=T\), and that the elements \(\varphi^{n}\), \(\mathbf{u}^{n}\) and \(\theta^{n}\) are uniformly bounded in appropriate norms. For the time being, we shall drop the time variable \(t\) for simplicity. Let us begin by taking \(\psi=\mu^{n}(t)\) in (3.8a) to get
\[(\partial_{t}\varphi^{n},\mu^{n})_{\Omega}+(\mathbf{u}^{n}\cdot\nabla\varphi^ {n},\mu^{n})_{\Omega}+(\nabla\rho^{n},\nabla\mu^{n})_{\Omega}=(\nabla(J* \varphi^{n}),\nabla\mu^{n})_{\Omega}-\ell_{c}(\nabla\theta^{n},\nabla\mu^{n})_ {\Omega}. \tag{3.9}\]
Here, we used the notation \(\rho^{n}(t):=P^{n}\rho(\cdot,\varphi^{n}(t))=\mu^{n}(t)+P^{n}(J*\varphi^{n}(t)) -\ell_{c}P^{n}(\theta^{n})\).
Furthermore, the first term on the left-hand side of (3.9) can be written by using the explicit expression for \(\mu^{n}\) and using the fact that \((\phi_{1},J*\phi_{2})_{\Omega}=(\phi_{2},J*\phi_{1})_{\Omega}\) from the property \(J(x)=J(-x)\). Indeed, we have
\[\begin{split}&(\partial_{t}\varphi^{n},\mu^{n})_{\Omega}=( \partial_{t}\varphi^{n},a\varphi^{n}+F^{\prime}(\varphi^{n})-J*\varphi^{n}+\ell _{c}\theta^{n})_{\Omega}\\ &=\frac{1}{2}\frac{d}{dt}\left(\int_{\Omega}a|\varphi^{n}(x)|^{2} \,\mathrm{d}x+2\int_{\Omega}F(\varphi^{n}(x))\,\mathrm{d}x-\int_{\Omega} \varphi^{n}(x)(J*\varphi^{n}(x))\,\mathrm{d}x\right)+\ell_{c}(\partial_{t} \varphi^{n},\theta^{n})_{\Omega}\\ &=\frac{1}{2}\frac{d}{dt}\left(\mathbb{E}_{nl}(\varphi^{n})+2 \int_{\Omega}F(\varphi^{n}(x))\,\mathrm{d}x\right)+\ell_{c}(\partial_{t} \varphi^{n},\theta^{n})_{\Omega}.\end{split} \tag{3.10}\]
Using the definition of \(\rho^{n}\), and (3.10) we rewrite (3.9) as
\[\begin{split}&\frac{1}{2}\frac{d}{dt}\left(\mathbb{E}_{nl}(\varphi^{n})+2 \int_{\Omega}F(\varphi^{n}(x))\,\mathrm{d}x\right)+\|\nabla\mu^{n}\|^{2}\\ &=(\mathbf{u}^{n}\cdot\nabla\mu^{n},\varphi^{n})_{\Omega}+( \nabla J*\varphi^{n},\nabla\mu^{n})_{\Omega}-(\nabla P^{n}(J*\varphi^{n}), \nabla\mu^{n})_{\Omega}-\ell_{c}(\partial_{t}\varphi^{n},\theta^{n})_{\Omega}. \end{split} \tag{3.11}\]
Let us now test (3.8b) with \(\mathbf{v}=\mathbf{u}^{n}(t)\) to get
\[\begin{split}&\frac{1}{2}\frac{d}{dt}\|\mathbf{u}^{n}\|^{2}+2\| \sqrt{\nu(\varphi)}\mathrm{D}\mathbf{u}^{n}\|^{2}=\mathcal{K}(\mathbf{u}^{n} \cdot\nabla\varphi^{n},\mu^{n}-\ell_{c}\theta^{n})_{\Omega}+(\ell(\varphi^{n},\theta^{n})\mathbf{g},\mathbf{u}^{n})_{\Omega}+\langle\mathbf{q}^{n}, \mathbf{u}^{n}\rangle_{V_{\sigma}}\end{split} \tag{3.12}\]
Similarly, substituting \(\vartheta=\theta^{n}(t)\) into (3.8c) gives us
\[\begin{split}&\frac{1}{2}\frac{d}{dt}\|\theta^{n}\|^{2}+\kappa\| \nabla\theta^{n}\|^{2}=\ell_{h}(\partial_{t}\varphi^{n},\theta^{n})_{\Omega}+ \ell_{h}(\mathbf{u}^{n}\cdot\nabla\varphi^{n},\theta^{n})_{\Omega}+(\mathbf{ g}\cdot\mathbf{u}^{n},\theta^{n})_{\Omega}+\langle z^{n},\theta^{n}\rangle_{V} \end{split} \tag{3.13}\]
To reconcile the computations above, we divide the equations (3.11), (3.12) and (3.13) by the constants \(\ell_{c}\), \(\mathcal{K}\ell_{c}\) and \(\ell_{h}\), respectively, and add the results together. This process yields
\[\begin{split}&\frac{d}{dt}\mathbb{E}(\varphi^{n}(t),\mathbf{u}^{n} (t),\theta^{n}(t))+\mathbb{D}(\varphi^{n}(t),\mu^{n}(t),\mathbf{u}^{n}(t), \theta^{n}(t))\\ &=\frac{1}{\ell_{c}}\left\{(\nabla J*\varphi^{n}(t),\nabla\mu^{n} (t))_{\Omega}-(\nabla P^{n}(J*\varphi^{n}(t)),\nabla\mu^{n}(t))_{\Omega} \right\}\\ &\quad+\frac{1}{\mathcal{K}\ell_{c}}\left\{(\ell(\varphi^{n}(t), \theta^{n}(t))\mathbf{g},\mathbf{u}^{n}(t))_{\Omega}+\langle\mathbf{q}^{n}(t),\mathbf{u}^{n}(t)\rangle_{V_{\sigma}}\right\}\\ &\quad+\frac{1}{\ell_{h}}\left\{(\mathbf{g}\cdot\mathbf{u}^{n}(t ),\theta^{n}(t))_{\Omega}+\langle z^{n}(t),\theta^{n}(t)\rangle_{V}\right\} \end{split} \tag{3.14}\]
The next step is to obtain some estimates for the right-hand side of (3.14). Let us begin with the terms multiplied by \(1/\ell_{c}\):
\[\begin{split}&|(\nabla J*\varphi^{n}(t),\nabla\mu^{n}(t))_{ \Omega}-(\nabla P^{n}(J*\varphi^{n}(t)),\nabla\mu^{n}(t))_{\Omega}|\\ &\quad\leq\|\nabla J\|_{L^{1}}\|\varphi^{n}(t)\|\|\nabla\mu^{n}(t )\|+\|\nabla P^{n}(J*\varphi^{n}(t))\|\|\nabla\mu^{n}(t)\|\\ &\quad\leq\|\nabla J\|_{L^{1}}\|\varphi^{n}(t)\|\|\nabla\mu^{n}(t )\|+\|B^{1/2}P^{n}(J*\varphi^{n}(t))\|\|\nabla\mu^{n}(t)\|\\ &\quad\leq\|\nabla J\|_{L^{1}}\|\varphi^{n}(t)\|\|\nabla\mu^{n}(t )\|+\|\nabla J*\varphi^{n}(t)\|\|\nabla\mu^{n}(t)\|\\ &\quad\leq c\|J\|_{W^{1,1}}\|\varphi^{n}(t)\|\|\nabla\mu^{n}(t)\| \leq\frac{1}{2}\|\nabla\mu^{n}(t)\|^{2}+\frac{c}{2}\|J\|_{W^{1,1}}^{2}\| \varphi^{n}(t)\|^{2}\end{split} \tag{3.15}\]
In the computations above, we utilized Hölder's inequality as well as Young's convolution inequality. The last estimate, on the other hand, is achieved using Young's inequality. Similarly, the terms with \(1/\mathcal{K}\ell_{c}\) are estimated with Hölder's and Young's inequalities, and the constancy of the gravity \(\mathbf{g}\). The computation is shown below:
\[\begin{split}&|(\ell(\varphi^{n}(t),\theta^{n}(t))\mathbf{g}, \mathbf{u}^{n}(t))_{\Omega}+\langle\mathbf{q}^{n}(t),\mathbf{u}^{n}(t) \rangle_{V_{\sigma}}|\\ &\quad\leq\|\ell(\varphi^{n}(t),\theta^{n}(t))\mathbf{g}\|\| \mathbf{u}^{n}(t)\|+\|\mathbf{q}^{n}(t)\|_{V_{\sigma}^{\prime}}\|\mathbf{u}^{n}( t)\|_{V_{\sigma}}\\ &\quad\leq c\left(1+\frac{\alpha\mathcal{K}}{2}\|\varphi^{n}(t)\|^ {2}+\frac{\mathcal{K}\ell_{c}}{4\ell_{h}}\|\theta^{n}(t)\|^{2}+\frac{1}{4}\| \mathbf{u}^{n}(t)\|^{2}\right)+\frac{c_{\mathrm{K}}}{4\underline{\nu}}\| \mathbf{q}^{n}(t)\|_{V_{\sigma}^{\prime}}^{2}+\frac{\underline{\nu}}{c_{ \mathrm{K}}}\|\mathbf{u}^{n}(t)\|_{V_{\sigma}}^{2},\end{split} \tag{3.16}\]
where \(c>0\) depends on \(\mathcal{K}\), \(\ell_{h}\), \(\ell_{c}\), \(\alpha\) (which is defined as \(\alpha:=2c_{1}-\|J\|_{L^{1}}>0\) based on assumption (A4)) and \(\mathbf{g}\); the constant \(c_{\mathrm{K}}\), on the other hand, is the one appearing in Korn's inequality. Lastly, we employ the same techniques on the terms multiplied by \(1/\ell_{h}\):
\[\begin{split}&|(\mathbf{g}\cdot\mathbf{u}^{n}(t),\theta^{n}(t))_{ \Omega}+\langle z^{n}(t),\theta^{n}(t)\rangle_{V}|\\ &\quad\leq c\left(\frac{1}{4}\|\theta^{n}(t)\|^{2}+\frac{\ell_{h }}{4\mathcal{K}\ell_{c}}\|\mathbf{u}^{n}(t)\|^{2}\right)+\frac{1}{2\kappa}\|z^ {n}(t)\|_{V^{*}}^{2}+\frac{\kappa}{2}\|\theta^{n}(t)\|_{V}^{2}.\end{split} \tag{3.17}\]
Plugging estimates (3.15), (3.16) and (3.17) into (3.14), and using the lower bound for the viscosity and Korn's inequality, yields
\[\begin{split}&\frac{d}{dt}\mathbb{E}(\varphi^{n}(t),\mathbf{u}^{n}(t),\theta^{n}(t))+\widehat{\mathbb{D}}(\mu^{n}(t),\mathbf{u}^{n}(t),\theta^{n}(t))\\ &\leq c\left(1+\frac{\alpha}{2\ell_{c}}\|\varphi^{n}(t)\|^{2}+\frac{1}{2\ell_{h}}\|\theta^{n}(t)\|^{2}+\frac{1}{2\mathcal{K}\ell_{c}}\|\mathbf{u}^{n}(t)\|^{2}\right)\\ &\quad+\frac{c}{2}\|J\|_{W^{1,1}}^{2}\|\varphi^{n}(t)\|^{2}+\frac{c_{\mathrm{K}}}{4\underline{\nu}\mathcal{K}\ell_{c}}\|\mathbf{q}^{n}(t)\|_{V^{*}_{\sigma}}^{2}+\frac{1}{2\kappa\ell_{h}}\|z^{n}(t)\|_{V^{*}}^{2},\end{split} \tag{3.18}\]
where \(\widehat{\mathbb{D}}(\mu,\mathbf{u},\theta):=\frac{1}{2\ell_{c}}\|\nabla\mu\|^{2}+\frac{\underline{\nu}}{c_{\mathrm{K}}\mathcal{K}\ell_{c}}\|\mathbf{u}\|_{V_{\sigma}}^{2}+\frac{\kappa}{2\ell_{h}}\|\nabla\theta\|^{2}\).
To this end, let us note that the contribution of \(\varphi^{n}\) in the total energy functional \(\mathbb{E}\) can be estimated from below. First, we can rewrite such expression using (1.1) and bound it below using Young's convolution inequality:
\[\begin{split}&\frac{1}{2}\int_{\Omega}\int_{\Omega}J(x-y)( \varphi^{n}(x)-\varphi^{n}(y))^{2}\,\mathrm{d}x\,\mathrm{d}y+2\int_{\Omega}F( \varphi^{n}(x))\,\mathrm{d}x\\ &\quad=\int_{\Omega}a(x)\varphi^{n}(x)^{2}\,\mathrm{d}x-\int_{ \Omega}(J*\varphi^{n})(x)\varphi^{n}(x)\,\mathrm{d}x+2\int_{\Omega}F(\varphi^ {n}(x))\,\mathrm{d}x\\ &\geq\int_{\Omega}a(x)\varphi^{n}(x)^{2}\,\mathrm{d}x-\|J\|_{L^{1 }}\int_{\Omega}\varphi^{n}(x)^{2}\,\mathrm{d}x+2\int_{\Omega}F(\varphi^{n}(x) )\,\mathrm{d}x.\end{split}\]
From assumption (A4) and the non-negativity of \(a(\cdot)\) we further bound the left-hand side of the expression above from below:
\[\begin{split}&\frac{1}{2}\int_{\Omega}\int_{\Omega}J(x-y)( \varphi^{n}(x)-\varphi^{n}(y))^{2}\,\mathrm{d}x\,\mathrm{d}y+2\int_{\Omega}F( \varphi^{n}(x))\,\mathrm{d}x\\ &\quad\geq\int_{\Omega}a(x)\varphi^{n}(x)^{2}\,\mathrm{d}x-\|J\|_ {L^{1}}\int_{\Omega}\varphi^{n}(x)^{2}\,\mathrm{d}x+2\int_{\Omega}F(\varphi^{n }(x))\,\mathrm{d}x\\ &\quad\geq(2c_{1}-\|J\|_{L^{1}})\int_{\Omega}\varphi^{n}(x)^{2} \,\mathrm{d}x-2c_{2}|\Omega|=:\alpha\|\varphi^{n}\|^{2}-c.\end{split} \tag{3.19}\]
Integrating (3.18) over \((0,t)\) for \(t\in(0,t^{n}]\) and utilizing (3.19) - after some rearrangement - gives us
\[\begin{split}&\widehat{\mathbb{E}}(\varphi^{n}(t),\mathbf{u}^{n}(t),\theta^{n}(t))+\int_{0}^{t}\widehat{\mathbb{D}}(\mu^{n}(s),\mathbf{u}^{n}(s),\theta^{n}(s))\,\mathrm{d}s\\ &\leq\mathbb{E}(\varphi^{n}(0),\mathbf{u}^{n}(0),\theta^{n}(0))+\frac{c_{\mathrm{K}}}{4\underline{\nu}\mathcal{K}\ell_{c}}\int_{0}^{t}\|\mathbf{q}^{n}(s)\|_{V^{*}_{\sigma}}^{2}\,\mathrm{d}s+\frac{1}{2\kappa\ell_{h}}\int_{0}^{t}\|z^{n}(s)\|_{V^{*}}^{2}\,\mathrm{d}s\\ &\quad+c\left(1+\int_{0}^{t}\left\{\frac{c}{2}\|J\|_{W^{1,1}}^{2}\|\varphi^{n}(s)\|^{2}+\widehat{\mathbb{E}}(\varphi^{n}(s),\mathbf{u}^{n}(s),\theta^{n}(s))\right\}\mathrm{d}s\right),\end{split}\]
where \(\widehat{\mathbb{E}}(\varphi,\mathbf{u},\theta):=\frac{\alpha}{2\ell_{c}}\|\varphi\| ^{2}+\frac{1}{2\ell_{h}}\|\theta\|^{2}+\frac{1}{2\mathcal{K}\ell_{c}}\|\mathbf{ u}\|^{2}\). Gronwall's inequality assures us of the existence of a constant \(c>0\) such that
\[\sup_{t\in(0,t^{n}]}\widehat{\mathbb{E}}(\varphi^{n}(t),\mathbf{ u}^{n}(t),\theta^{n}(t))+\int_{0}^{t^{n}}\widehat{\mathbb{D}}(\mu^{n}(s), \mathbf{u}^{n}(s),\theta^{n}(s))\,\mathrm{d}s \tag{3.20}\] \[\leq ce^{cT}\left(1+\mathbb{E}(\varphi^{n}(0),\mathbf{u}^{n}(0), \theta^{n}(0))+\|\mathbf{q}\|_{L^{2}(I;V_{\sigma}^{*})}+\|z\|_{L^{2}(I;V^{*})} \right).\]
Note that the constant \(c>0\) above is independent of \(n\), so that a further estimate on \(\mathbb{E}(\varphi^{n}(0),\mathbf{u}^{n}(0),\theta^{n}(0))\) that is independent of \(n\) will yield uniform boundedness of the left-hand side of (3.20), which implies
\[\|\varphi^{n}\|_{L^{\infty}(I;H)}\leq M, \tag{3.21}\] \[\|\nabla\mu^{n}\|_{L^{2}(I;L^{2}(\Omega)^{2})}\leq M, \tag{3.22}\] \[\|\mathbf{u}^{n}\|_{L^{\infty}(I;H_{\sigma})\cap L^{2}(I;V_{\sigma})}\leq M, \tag{3.23}\] \[\|\theta^{n}\|_{L^{\infty}(I;H)}\leq M, \tag{3.24}\] \[\|\nabla\theta^{n}\|_{L^{2}(I;L^{2}(\Omega)^{2})}\leq M. \tag{3.25}\]
Indeed, from the definition of the orthogonal projections \(P^{n}\) and \(P^{n}_{\sigma}\) whose norms are bounded by \(1\) in the linear spaces \(\mathcal{L}(H,H)\) and \(\mathcal{L}(H_{\sigma},H_{\sigma})\), respectively, we see that \(\|\mathbf{u}^{n}(0)\|_{H_{\sigma}}\leq\|\mathbf{u}_{0}\|_{H_{\sigma}}\) and \(\|\theta^{n}(0)\|_{H}\leq\|\theta_{0}\|_{H}\). For the contribution of the order parameter, we first note that the Neumann operator \(B\) is a maximal monotone operator hence \(\varphi_{0}^{n}\to\varphi_{0}\) in \(H\). Hence, there exists \(c>0\) independent of \(n\) such that
\[\mathbb{E}_{nl}(\varphi_{0}^{n})+2\int_{\Omega}F(\varphi_{0}^{n} (x))\,\mathrm{d}x\leq c\|J\|_{L^{1}}\|\varphi_{0}^{n}\|^{2}+2\int_{\Omega}F( \varphi_{0}^{n}(x))\,\mathrm{d}x\] \[\leq c\|J\|_{L^{1}}\|\varphi_{0}\|^{2}+2\int_{\Omega}F(\varphi_{0 }^{n}(x))\,\mathrm{d}x.\]
The next challenge lies in controlling the nonlinear part by the initial data. As previously remarked, we can write \(F\) - by virtue of (A3) - as \(F(s)=G(s)-\frac{a^{*}}{2}s^{2}\), hence
\[\int_{\Omega}F(\varphi_{0}^{n}(x))\,\mathrm{d}x=\int_{\Omega}G(\varphi_{0}^{n }(x))\,\mathrm{d}x-\frac{a^{*}}{2}\|\varphi_{0}^{n}\|^{2}.\]
Notably, the function \(h=G^{\prime}\) is monotonically increasing, and we may suppose that it satisfies \(h(0)=0\). If we multiply \(h(\varphi_{0}^{n})\) by \(-(\varphi_{0}-\varphi_{0}^{n})=-\frac{1}{n}B\varphi_{0}^{n}\) and integrate over \(\Omega\), one gets
\[\int_{\Omega}h(\varphi_{0}^{n})(\varphi_{0}^{n}-\varphi_{0})\, \mathrm{d}x =-\frac{1}{n}\int_{\Omega}h(\varphi_{0}^{n})B\varphi_{0}^{n}\, \mathrm{d}x=-\frac{1}{n}\int_{\Omega}\left\{\nabla h(\varphi_{0}^{n})\cdot \nabla\varphi_{0}^{n}\right\}\mathrm{d}x\] \[=-\frac{1}{n}\int_{\Omega}\left\{h^{\prime}(\varphi_{0}^{n})| \nabla\varphi_{0}^{n}|^{2}\right\}\mathrm{d}x\leq 0.\]
Expanding \(G\) about \(\varphi_{0}^{n}\), the convexity of \(G\) then implies
\[\int_{\Omega}G(\varphi_{0}^{n})\,\mathrm{d}x\leq\int_{\Omega}G(\varphi_{0})+h (\varphi_{0}^{n})(\varphi_{0}^{n}-\varphi_{0})\,\mathrm{d}x\leq\int_{\Omega}G (\varphi_{0})\,\mathrm{d}x.\]
Using Fatou's lemma, we get
\[\limsup_{n\to\infty}\int_{\Omega}F(\varphi_{0}^{n})\,\mathrm{d}x\leq\int_{ \Omega}G(\varphi_{0})\,\mathrm{d}x-\frac{a^{*}}{2}\|\varphi_{0}\|^{2}=\int_{ \Omega}F(\varphi_{0})\,\mathrm{d}x.\]
In summary, we get the energy estimate
\[\sup_{t\in[0,T]}\widehat{\mathbb{E}}(\varphi^{n}(t),\mathbf{u}^{n}(t),\theta^{n}(t))+\int_{0}^{T}\widehat{\mathbb{D}}(\mu^{n}(s),\mathbf{u}^{n}(s), \theta^{n}(s))\,\mathrm{d}s\] \[\leq ce^{cT}\left(1+\mathbb{E}(\varphi_{0},\mathbf{u}_{0},\theta_ {0})+\frac{c_{\mathrm{K}}}{4\underline{\nu}\mathcal{K}\ell_{c}}\|\mathbf{q}\|_ {L^{2}(I;V_{\sigma}^{*})}+\frac{1}{2\kappa\ell_{h}}\|z\|_{L^{2}(I;V^{*})}\right)\]
which validates (3.21)-(3.25).
On the other hand, if we test (3.8c) with \(\vartheta=1\) we get
\[\left|\int_{\Omega}\theta^{n}(x)\,\mathrm{d}x\right|\leq c\left(\|\theta_{0} \|_{H}+\|\mathbf{u}^{n}\|_{L^{\infty}(I;H_{\sigma})}+\|z\|_{L^{2}(I;V^{*})} \right).\]
The inequality above, together with (3.23) and (3.25), implies
\[\|\theta^{n}\|_{L^{\infty}(I;H)\cap L^{2}(I;V)}\leq M. \tag{3.26}\]
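In passing from the bound on the mean value and (3.25) to (3.26) we implicitly use the Poincare-Wirtinger inequality; a brief sketch of this step, where \(c_{P}>0\) denotes the Poincare-Wirtinger constant of \(\Omega\) and \(\widehat{\theta^{n}}(t)\) the spatial average of \(\theta^{n}(t)\), reads
\[\|\theta^{n}(t)\|\leq\|\theta^{n}(t)-\widehat{\theta^{n}}(t)\|+\|\widehat{\theta^{n}}(t)\|\leq c_{P}\|\nabla\theta^{n}(t)\|+\frac{1}{\sqrt{|\Omega|}}\left|\int_{\Omega}\theta^{n}(t,x)\,\mathrm{d}x\right|,\]
and it suffices to square and integrate in time.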
We now derive some estimates for \(\varphi^{n}\) in \(L^{2}(I;V)\). First, we use the definition of \(\mu^{n}\), assumption (A3), Young's inequality for convolutions, and Young's inequality to get
\[(\nabla\varphi^{n},\nabla\mu^{n})_{\Omega} \tag{3.27}\] \[=(\nabla\varphi^{n},(F^{\prime\prime}(\varphi^{n})+a)\nabla\varphi^{n}+\varphi^{n}\nabla a-\nabla J*\varphi^{n}+\ell_{c}\nabla\theta^{n})_{\Omega}\] \[\geq c_{0}\|\nabla\varphi^{n}\|^{2}-2\|J\|_{W^{1,1}}\|\nabla\varphi^{n}\|\|\varphi^{n}\|+\ell_{c}(\nabla\varphi^{n},\nabla\theta^{n})\] \[\geq\frac{3c_{0}}{4}\|\nabla\varphi^{n}\|^{2}-\frac{4}{c_{0}}\|J\|_{W^{1,1}}^{2}\|\varphi^{n}\|^{2}+\ell_{c}(\nabla\varphi^{n},\nabla\theta^{n})\]
Using Holder's and Young's inequalities on \((\nabla\varphi^{n},\nabla\mu^{n})_{\Omega}\), on the other hand, one gets \((\nabla\varphi^{n},\nabla\mu^{n})_{\Omega}\leq(c_{0}/4)\|\nabla\varphi^{n}\| ^{2}+(1/c_{0})\|\nabla\mu^{n}\|^{2}\). Combining this with (3.27) we have
\[\frac{c_{0}}{2}\|\nabla\varphi^{n}\|^{2}\leq\left|\frac{1}{c_{0}}\|\nabla\mu^{n}\|^{2}+\frac{4}{c_{0}}\|J\|_{W^{1,1}}^{2}\|\varphi^{n}\|^{2}-\ell_{c}(\nabla\varphi^{n},\nabla\theta^{n})\right| \tag{3.28}\] \[\leq\frac{1}{c_{0}}\|\nabla\mu^{n}\|^{2}+\frac{4}{c_{0}}\|J\|_{W^{1,1}}^{2}\|\varphi^{n}\|^{2}+\frac{\ell_{c}^{2}}{c_{0}}\|\nabla\theta^{n}\|^{2}+\frac{c_{0}}{4}\|\nabla\varphi^{n}\|^{2}.\]
Moving the last term on the right-hand side of (3.28) to the other side, utilizing the total mass conservation of the order parameter, and the inequalities (3.21), (3.22) and (3.26) we get the uniform boundedness of \(\varphi^{n}\) in \(L^{2}(I;V)\), i.e.,
\[\|\varphi^{n}\|_{L^{2}(I;V)}\leq M. \tag{3.29}\]
To strengthen (3.22), let us prove that the average of \(\mu^{n}\) over \(\Omega\), which we denote by \(\widehat{\mu^{n}}:=\frac{1}{|\Omega|}\int_{\Omega}\mu^{n}\,\mathrm{d}x\), is uniformly bounded as well. Indeed, by the definition of \(\mu^{n}\) and by assumption (A5) we have
\[\left|\int_{\Omega}\mu^{n}\,\mathrm{d}x\right| =\left|\int_{\Omega}P^{n}(a\varphi^{n}-J*\varphi^{n}+F^{\prime}( \varphi^{n})+\ell_{c}\theta^{n})\,\mathrm{d}x\right|\] \[\leq\int_{\Omega}|F^{\prime}(\varphi^{n})|\,\mathrm{d}x+\ell_{c} \sqrt{|\Omega|}\|\theta^{n}\|\] \[\leq c_{3}\int_{\Omega}|F(\varphi^{n})|\,\mathrm{d}x+c_{4}|\Omega| +\ell_{c}\sqrt{|\Omega|}\|\theta^{n}\|.\]
Here we used the fact that \(\int_{\Omega}P^{n}(a\varphi^{n}-J*\varphi^{n})\,\mathrm{d}x=0\). To close the estimate above we note that \(\|F(\varphi^{n})\|_{L^{1}}\) can be shown to be uniformly bounded by integrating (3.18) over \((0,T)\), using Gronwall's inequality and using the upper bound for \(\mathbb{E}(\varphi^{n}(0),\mathbf{u}^{n}(0),\theta^{n}(0))\). The boundedness above, together with (3.22) and the Poincare-Wirtinger inequality, implies
\[\|\mu^{n}\|_{L^{2}(I;V)}\leq M. \tag{3.30}\]
Before we move on, we mention that, by virtue of assumption (A5), \(\|\rho(\cdot,\varphi^{n})\|_{L^{\infty}(I;L^{p}(\Omega))}\) is uniformly bounded, where \(p\in(1,2]\) is as in the aforementioned assumption. Indeed, we have
\[\|\rho(\cdot,\varphi^{n})\|_{L^{p}} \leq\|a\|_{L^{\infty}}\|\varphi^{n}\|_{L^{p}}+\left(\int_{\Omega} |F^{\prime}(\varphi^{n}(x))|^{p}\,\mathrm{d}x\right)^{1/p}\] \[\leq\|a\|_{L^{\infty}}\|\varphi^{n}\|+\left(\int_{\Omega}c_{3}|F (\varphi^{n}(x))|+c_{4}\,\mathrm{d}x\right)^{1/p}\] \[\leq\|a\|_{L^{\infty}}\|\varphi^{n}\|+c_{3}\|F(\varphi^{n})\|_{L ^{1}}^{1/p}+c_{4}|\Omega|^{1/p}.\]
The uniform boundedness follows from the boundedness of \(\|F(\varphi^{n})\|_{L^{1}}\) and estimate (3.21), i.e.
\[\|\rho(\cdot,\varphi^{n})\|_{L^{\infty}(I;L^{p}(\Omega))}\leq M. \tag{3.31}\]
Our next aim is to establish bounds for the time derivatives of the variables. Let us begin with the time derivative of the fluid velocity \(\mathbf{u}^{n}\). We rewrite (3.8b) as
\[\begin{split}&\langle\partial_{t}\mathbf{u}^{n}(t),\mathbf{v}\rangle_{V_{\sigma}}+\langle P_{\sigma}^{n}\mathcal{B}(\mathbf{u}^{n}(t)),\mathbf{v}\rangle_{V_{\sigma}}+\langle P_{\sigma}^{n}\mathcal{A}(\mathbf{u}^{n}(t),\varphi^{n}(t)),\mathbf{v}\rangle_{V_{\sigma}}\\ &=\mathcal{K}\langle P_{\sigma}^{n}\mathcal{C}_{2}(\varphi^{n}(t),\mu^{n}(t)-\ell_{c}\theta^{n}(t)),\mathbf{v}\rangle_{V_{\sigma}}+\langle P_{\sigma}^{n}\ell(\varphi^{n}(t),\theta^{n}(t))\mathbf{g},\mathbf{v}\rangle_{\Omega}+\langle P_{\sigma}^{n}\mathbf{q}(t),\mathbf{v}\rangle_{V_{\sigma}}.\end{split} \tag{3.32}\]
Using the antisymmetry of \(b(\mathbf{u};\cdot,\cdot)\) for \(\mathbf{u}\in V_{\sigma}\), and the estimates (2.1) and (2.2) we get
\[\begin{split}\|P_{\sigma}^{n}\mathcal{B}(\mathbf{u}^{n}(t))\|_{V_{\sigma}^{*}}&\leq\inf_{\begin{subarray}{c}\mathbf{v}\in V_{\sigma}\\ \|\mathbf{v}\|_{V_{\sigma}}=1\end{subarray}}|b(\mathbf{u}^{n}(t);\mathbf{u}^{n}(t),\mathbf{v})|\\ &\leq\inf_{\begin{subarray}{c}\mathbf{v}\in V_{\sigma}\\ \|\mathbf{v}\|_{V_{\sigma}}=1\end{subarray}}c\left\{\begin{matrix}\|\,\mathbf{u}^{n}(t)\|_{H_{\sigma}}^{1/2}\,\|\,\mathbf{u}^{n}(t)\|_{V_{\sigma}}^{3/2}\,\|\,\mathbf{v}\,\|_{V_{\sigma}}&\text{for }d=3\\ \|\,\mathbf{u}^{n}(t)\|_{H_{\sigma}}\,\|\,\mathbf{u}^{n}(t)\|_{V_{\sigma}}\,\|\,\mathbf{v}\,\|_{V_{\sigma}}&\text{for }d=2\end{matrix}\right.\\ &\leq c\left\{\begin{matrix}\|\,\mathbf{u}^{n}(t)\|_{H_{\sigma}}^{1/2}\,\|\,\mathbf{u}^{n}(t)\|_{V_{\sigma}}^{3/2}&\text{for }d=3\\ \|\,\mathbf{u}^{n}(t)\|_{H_{\sigma}}\,\|\,\mathbf{u}^{n}(t)\|_{V_{\sigma}}&\text{for }d=2.\end{matrix}\right.\end{split}\]
For \(d=3\), we thus see that
\[\|P_{\sigma}^{n}\mathcal{B}(\mathbf{u}^{n})\|_{L^{4/3}(I;V_{\sigma}^{*})}\leq\|\mathbf{u}^{n}\|_{L^{\infty}(I;H_{\sigma})}^{1/2}\|\mathbf{u}^{n}\|_{L^{2}(I;V_{\sigma})}^{3/2}\leq M^{2}, \tag{3.33}\]
while when \(d=2\) we have
\[\|P_{\sigma}^{n}\mathcal{B}(\mathbf{u}^{n})\|_{L^{2}(I;V_{\sigma}^{*})}\leq\|\mathbf{u}^{n}\|_{L^{\infty}(I;H_{\sigma})}\|\mathbf{u}^{n}\|_{L^{2}(I;V_{\sigma})}\leq M^{2}. \tag{3.34}\]
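The time exponents in (3.33) and (3.34) simply record the integrability of the right-hand side of the previous estimate; for instance, for \(d=3\),
\[\left(\int_{0}^{T}\left(\|\mathbf{u}^{n}(t)\|_{H_{\sigma}}^{1/2}\|\mathbf{u}^{n}(t)\|_{V_{\sigma}}^{3/2}\right)^{4/3}\mathrm{d}t\right)^{3/4}\leq\|\mathbf{u}^{n}\|_{L^{\infty}(I;H_{\sigma})}^{1/2}\left(\int_{0}^{T}\|\mathbf{u}^{n}(t)\|_{V_{\sigma}}^{2}\,\mathrm{d}t\right)^{3/4},\]
and analogously, with time exponent \(2\), when \(d=2\).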
For the viscosity term we use Holder's inequality and assumption (A2) for both \(d=2,3\) to get
\[\begin{split}\|P_{\sigma}^{n}\mathcal{A}(\mathbf{u}^{n}(t),\varphi^{n}(t))\|_{V_{\sigma}^{*}}&=\inf_{\begin{subarray}{c}\mathbf{v}\in V_{\sigma}\\ \|\mathbf{v}\|_{V_{\sigma}}=1\end{subarray}}2(\nu(\varphi^{n}(t))\mathrm{D}\mathbf{u}^{n}(t),\mathrm{D}\mathbf{v})_{\Omega}\\ &\leq 2\inf_{\begin{subarray}{c}\mathbf{v}\in V_{\sigma}\\ \|\mathbf{v}\|_{V_{\sigma}}=1\end{subarray}}\|\nu(\varphi^{n}(t))\mathrm{D}\mathbf{u}^{n}(t)\|\|\mathrm{D}\mathbf{v}\|\leq 2\overline{\nu}\|\,\mathbf{u}^{n}(t)\|_{V_{\sigma}}.\end{split} \tag{3.35}\]
This implies that \(\|P_{\sigma}^{n}\mathcal{A}(\mathbf{u}^{n},\varphi^{n})\|_{L^{2}(I;V_{\sigma}^{*})}\leq M\).
For the term that takes into account the capillarity we first deal with the case \(d=3\) as follows:
\[\begin{split}\|P_{\sigma}^{n}\mathcal{C}_{2}(\varphi^{n}(t),\mu^{ n}(t)-\ell_{c}\theta^{n}(t))\|_{V_{\sigma}^{*}}&=\inf_{\begin{subarray}{c} \mathbf{v}\in V_{\sigma}\\ \|\mathbf{v}\|_{V_{\sigma}}=1\end{subarray}}|\langle P_{\sigma}^{n} \mathcal{C}_{2}(\ell_{c}\theta^{n}(t)-\mu^{n}(t),\varphi^{n}(t)),\mathbf{v} \rangle_{V_{\sigma}}|\\ &=\inf_{\begin{subarray}{c}\mathbf{v}\in V_{\sigma}\\ \|\mathbf{v}\|_{V_{\sigma}}=1\end{subarray}}|\langle\varphi^{n}(t)\nabla(\mu^ {n}(t)-\ell_{c}\theta^{n}(t)),\mathbf{v}\rangle_{\Omega}|\\ &\leq\inf_{\begin{subarray}{c}\mathbf{v}\in V_{\sigma}\\ \|\mathbf{v}\|_{V_{\sigma}}=1\end{subarray}}\|\varphi^{n}(t)\|_{L^{3}}\| \nabla(\mu^{n}(t)-\ell_{c}\theta^{n}(t))\|\|\mathbf{v}\|_{L^{6}}\\ &\leq\inf_{\begin{subarray}{c}\mathbf{v}\in V_{\sigma}\\ \|\mathbf{v}\|_{V_{\sigma}}=1\end{subarray}}\|\varphi^{n}(t)\|_{L^{3}}\| \nabla(\mu^{n}(t)-\ell_{c}\theta^{n}(t))\|\|\mathbf{v}\|_{V_{\sigma}}.\end{split} \tag{3.36}\]
We note that we used Holder's inequality to reach the third line, while the last inequality follows from the continuous embedding \(W^{1,2}(\Omega)\hookrightarrow L^{6}(\Omega)\) guaranteed by the Sobolev embedding theorem. Now, the interpolation of \(L^{p}\) spaces, with \(\theta=\frac{1}{2}\), \(p_{1}=6\) and \(p_{2}=2\), gives us
\[\|\varphi\|_{L^{3}}\leq c\|\varphi\|_{L^{6}}^{1/2}\|\varphi\|^{1/2}. \tag{3.37}\]
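For the reader's convenience, the exponent in (3.37) can be read off directly from the interpolation identity
\[\frac{1}{3}=\frac{\theta}{6}+\frac{1-\theta}{2}\quad\Longleftrightarrow\quad\theta=\frac{1}{2}.\]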
Using (3.37) in (3.36), together with the continuous embedding \(W^{1,2}(\Omega)\hookrightarrow L^{6}(\Omega)\), leads to
\[\begin{split}\|P_{\sigma}^{n}\mathcal{C}_{2}(\varphi^{n}(t),\mu^{ n}(t)-\ell_{c}\theta^{n}(t))\|_{V_{\sigma}^{*}}&\leq c\| \varphi^{n}(t)\|^{1/2}\|\varphi^{n}(t)\|_{L^{6}}^{1/2}\|\nabla(\mu^{n}(t)-\ell _{c}\theta^{n}(t))\|\\ &\leq c\|\varphi^{n}(t)\|^{1/2}\|\varphi^{n}(t)\|_{V}^{1/2}\| \nabla(\mu^{n}(t)-\ell_{c}\theta^{n}(t))\|.\end{split}\]
For the next computations, we employ Young's inequality \(\alpha\beta\leq\frac{\varepsilon}{p_{1}}\alpha^{p_{1}}+\frac{1}{p_{2}\varepsilon^{p_{2}/p_{1}}}\beta^{p_{2}}\), which holds for any \(\varepsilon,\alpha,\beta>0\) and \(1/p_{1}+1/p_{2}=1\). Taking \(p_{1}=3\), \(p_{2}=3/2\), \(\alpha=\|\varphi^{n}(t)\|_{V}^{2/3}\), and \(\beta=\|\nabla(\mu^{n}(t)-\ell_{c}\theta^{n}(t))\|^{4/3}\) and from the estimates (3.21), (3.22), (3.26) and (3.29) we get the following estimates for the three dimensional case:
\[\begin{split}&\|P_{\sigma}^{n}\mathcal{C}_{2}(\varphi^{n},\mu^{ n}-\ell_{c}\theta^{n})\|_{L^{4/3}(I;V_{\sigma}^{*})}\\ &=c\left(\int_{0}^{T}\left(\|\varphi^{n}(t)\|^{\frac{1}{2}}\| \varphi^{n}(t)\|_{V}^{\frac{1}{2}}\|\nabla(\mu^{n}(t)-\ell_{c}\theta^{n}(t))\| \right)^{\frac{4}{3}}\mathrm{d}t\right)^{\frac{3}{4}}\\ &\leq c\|\varphi^{n}\|_{L^{\infty}(I;H)}^{\frac{1}{2}}\left(\int_{ 0}^{T}\|\varphi^{n}(t)\|_{V}^{\frac{2}{3}}\|\nabla(\mu^{n}(t)-\ell_{c}\theta^ {n}(t))\|^{\frac{4}{3}}\mathrm{d}t\right)^{\frac{3}{4}}\\ &\leq c\|\varphi^{n}\|_{L^{\infty}(I;H)}^{\frac{1}{2}}\left\{\| \varphi^{n}\|_{L^{2}(I;V)}^{\frac{3}{2}}+\|\nabla(\mu^{n}-\ell_{c}\theta^{n} (t))\|_{L^{2}(I;L^{2}(\Omega)^{2})}^{\frac{3}{2}}\right\}\\ &\leq M^{2}.\end{split} \tag{3.38}\]
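Let us briefly record why these choices are consistent: \(p_{1}=3\) and \(p_{2}=3/2\) are conjugate, and
\[\left(\|\varphi^{n}(t)\|_{V}^{2/3}\right)^{3}=\|\varphi^{n}(t)\|_{V}^{2},\qquad\left(\|\nabla(\mu^{n}(t)-\ell_{c}\theta^{n}(t))\|^{4/3}\right)^{3/2}=\|\nabla(\mu^{n}(t)-\ell_{c}\theta^{n}(t))\|^{2},\]
so both powers produced by Young's inequality are integrable in time by (3.22), (3.26) and (3.29).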
For the two dimensional case, we rewrite \((\mu^{n}-\ell_{c}\theta^{n})\nabla\varphi^{n}\) as
\[(\mu^{n}-\ell_{c}\theta^{n})\nabla\varphi^{n}=\nabla\left(F(\varphi^{n})+a( \cdot)\frac{(\varphi^{n})^{2}}{2}\right)-\nabla a(\cdot)\frac{(\varphi^{n})^{2} }{2}-(J*\varphi^{n})\nabla\varphi^{n}. \tag{3.39}\]
This allows us to have
\[\begin{split}&\int_{0}^{T}((\mu^{n}(t)-\ell_{c}\theta^{n}(t))\nabla\varphi^{n}(t),\mathbf{v}(t))_{\Omega}\,\mathrm{d}t\\ &\leq\int_{0}^{T}|(\nabla a\frac{(\varphi^{n}(t))^{2}}{2},\mathbf{v}(t))_{\Omega}|\,\mathrm{d}t+\int_{0}^{T}|((\nabla J*\varphi^{n}(t))\varphi^{n}(t),\mathbf{v}(t))_{\Omega}|\,\mathrm{d}t\\ &\leq c(\|\nabla a\|_{L^{\infty}}+\|\nabla J\|_{L^{1}})\int_{0}^{T}\|\varphi^{n}(t)\|_{L^{4}}^{2}\|\,\mathbf{v}(t)\|_{H_{\sigma}}\,\mathrm{d}t\\ &\leq c(\|\nabla a\|_{L^{\infty}}+\|\nabla J\|_{L^{1}})\|\varphi^{n}\|_{L^{4}(I;L^{4}(\Omega))}^{2}\|\,\mathbf{v}\,\|_{L^{2}(I;V_{\sigma})}.\end{split} \tag{3.40}\]
From Gagliardo-Nirenberg inequality, (3.21) and (3.29), we infer that
\[\|\varphi^{n}\|_{L^{4}(I;L^{4}(\Omega))}^{4}\leq\int_{0}^{T}\| \varphi^{n}(t)\|_{L^{2}}^{2}\|\nabla\varphi^{n}(t)\|_{L^{2}}^{2}\,\mathrm{d}t \leq\|\varphi^{n}\|_{L^{\infty}(I;H)}^{2}\|\varphi^{n}\|_{L^{2}(I;V)}^{2}\leq M ^{4}. \tag{3.41}\]
Therefore,
\[\|P_{\sigma}^{n}\mathcal{C}_{2}(\varphi^{n},\mu^{n}-\ell_{c} \theta^{n})\|_{L^{2}(I;V_{\sigma}^{*})}\leq M^{2}. \tag{3.42}\]
The remaining terms in (3.32) are trivially handled and it can be easily shown that
\[\|P_{\sigma}^{n}\ell(\varphi^{n}(t),\theta^{n}(t))\mathbf{g}\|_{ L^{2}(I;V_{\sigma}^{*})}\leq 1+2M, \tag{3.43}\] \[\|P_{\sigma}^{n}\mathbf{q}(t)\|_{L^{2}(I;V_{\sigma}^{*})}\leq 1+ \|\mathbf{q}(t)\|_{L^{2}(I;V_{\sigma}^{*})}, \tag{3.44}\]
where (3.43) is achieved from (3.21) and (3.26), while (3.44) follows from the fact that \(P_{\sigma}^{n}\in\mathcal{L}(V_{\sigma}^{*},V_{\sigma}^{*})\). Combining the estimates (3.33), (3.34), (3.35), (3.38), (3.42), (3.43), and (3.44), we get the estimate
\[\|\partial_{t}\mathbf{u}^{n}\|_{L^{q}(I;V_{\sigma}^{*})}\leq c(1+ (1+M)M+\|\mathbf{q}(t)\|_{L^{2}(I;V_{\sigma}^{*})}) \tag{3.45}\]
for some constant \(c>0\), and where \(q=4/3\) if \(d=3\), and \(q=2\) if \(d=2\).
Now let us derive estimates for the time derivative of the order parameter. To do this we first rewrite (3.8a) as
\[\langle\partial_{t}\varphi^{n}(t),\psi\rangle_{V_{s}}+\langle P^{ n}\mathcal{C}_{1}(\mathbf{u}^{n}(t),\varphi^{n}(t)),\psi\rangle_{V_{s}}+ \langle P^{n}B(\mu^{n}(t)),\psi\rangle_{V_{s}}=0,\]
with \(\mu^{n}=\rho(\cdot,\varphi^{n})-J*\varphi^{n}+\ell_{c}\theta^{n}\). Before going further, let us mention that the estimates will depend strongly on \(p\in(1,2]\) from assumption (A5) due to the appearance of \(\rho(\cdot,\varphi^{n})\) in the definition of \(\mu^{n}\), which we have proven to be bounded in \(L^{\infty}(I;L^{p}(\Omega))\). Due to the limited regularity of \(\rho(\cdot,\varphi^{n})\) we are compelled to consider test functions \(\psi\in V_{s}\) for some \(s\geq 2\) that will allow us to utilize the Sobolev embedding \(H^{s-2}(\Omega)\hookrightarrow L^{p^{\prime}}(\Omega)\), where \(p^{\prime}\) is the Holder conjugate of \(p\). If such an embedding holds then Holder's inequality, (A1), and (3.31) will imply
\[|\langle P^{n}B(\mu^{n}(t)),\psi\rangle_{V_{s}}| \leq|(\nabla\rho(\cdot,\varphi^{n}),\nabla\psi)_{\Omega}|+|( \nabla J*\varphi^{n},\nabla\psi)_{\Omega}|+\ell_{c}|(\nabla\theta^{n},\nabla \psi)_{\Omega}|\] \[\leq|(\rho(\cdot,\varphi^{n}),\Delta\psi)_{\Omega}|+\|\nabla J\| _{L^{1}}\|\varphi^{n}\|\|\nabla\psi\|+\ell_{c}|(\theta^{n},\Delta\psi)_{\Omega}|\] \[\leq\|\rho(\cdot,\varphi^{n})\|_{L^{p}}\|\Delta\psi\|_{L^{p^{ \prime}}}+\|\nabla J\|_{L^{1}}\|\varphi^{n}\|\|\psi\|_{V_{s}}+\ell_{c}\| \theta^{n}\|\|\Delta\psi\|_{L^{p^{\prime}}}\] \[\leq(\|\rho(\cdot,\varphi^{n})\|_{L^{p}}+\|\nabla J\|_{L^{1}}\| \varphi^{n}\|+\ell_{c}\|\theta^{n}\|)\|\psi\|_{V_{s}}.\]
Using (3.21), (3.26) and (3.31), we thus get
\[\|P^{n}B(\mu^{n}(t))\|_{L^{\infty}(I;V_{s}^{*})}\leq M.\]
Let us now turn to the term \(\langle P^{n}\mathcal{C}_{1}(\mathbf{u}^{n}(t),\varphi^{n}(t)),\psi\rangle_{V_{s}}\). We know that the embedding \(H^{s-2}(\Omega)\hookrightarrow L^{p^{\prime}}(\Omega)\), needed to maintain the validity of the previous computation, holds if any of the following cases is true:
* for \(2\leq p^{\prime}\leq+\infty\) if \((s-2)2>d\),
* for \(2\leq p^{\prime}<+\infty\) if \((s-2)2=d\),
* for \(2\leq p^{\prime}\leq 2d/(d-(s-2)2)\) if \((s-2)2<d\).
The first two cases instantly yield a good estimate for \(\langle P^{n}\mathcal{C}_{1}(\mathbf{u}^{n}(t),\varphi^{n}(t)),\psi\rangle_{V_{s}}\). Indeed, from Holder's inequality, and from the fact that \((s-2)2\geq d\) implies \((s-1)2>d\), which validates the embedding \(H^{s-1}(\Omega)\hookrightarrow L^{\infty}(\Omega)\), we get
\[\begin{split}|\langle P^{n}\mathcal{C}_{1}(\mathbf{u}^{n}(t), \varphi^{n}(t)),\psi\rangle_{V_{s}}|&=|(\mathbf{u}^{n}\cdot \nabla\psi,\varphi^{n})_{\Omega}|\leq\|\mathbf{u}^{n}\|\|\nabla\psi\|_{L^{ \infty}}\|\varphi^{n}\|\\ &\leq\|\mathbf{u}^{n}\|\|\nabla\psi\|_{H^{s-1}}\|\varphi^{n}\| \leq\|\mathbf{u}^{n}\|\|\varphi^{n}\|\|\psi\|_{V_{s}}\end{split} \tag{3.46}\]
The third case and the fact that \(p^{\prime}\) is the Holder conjugate of \(p\) imply that it suffices to consider \(\frac{(4-d)p+2d}{2p}\leq s<\frac{d+4}{2}\); indeed, the requirement \(p^{\prime}\leq 2d/(d-(s-2)2)\) is equivalent to \(s\geq 2+\frac{d(2-p)}{2p}=\frac{(4-d)p+2d}{2p}\). The arbitrariness of \(p\in(1,2]\) also leads us to consider the cases \(p\in(1,d/(d-1))\), \(p=d/(d-1)\), and, exclusively for \(d=3\), \(p\in(3/2,2]\). Let us first consider the following cases:
* \(\frac{(4-d)p+2d}{2p}\leq s\) and \(p\in(1,d/(d-1))\),
* \(\frac{(4-d)p+2d}{2p}<s\) and \(p=d/(d-1)\).
We see that the scenarios above imply \((s-1)2>d\), hence the embedding \(H^{s-1}(\Omega)\hookrightarrow L^{\infty}(\Omega)\). Following the same computations as in (3.46), we get
\[|\langle P^{n}\mathcal{C}_{1}(\mathbf{u}^{n}(t),\varphi^{n}(t)),\psi\rangle_{V _{s}}|\leq\|\mathbf{u}^{n}\|\|\varphi^{n}\|\|\psi\|_{V_{s}}\]
Using (3.21) and (3.23) we see that
\[\|P^{n}\mathcal{C}_{1}(\mathbf{u}^{n}(t),\varphi^{n}(t))\|_{L^{\infty}(I;V_{s }^{*})}\leq M^{2},\]
if either \((s-2)2\geq d\), \(\frac{(4-d)p+2d}{2p}\leq s\) and \(p\in(1,d/(d-1))\), or \(\frac{(4-d)p+2d}{2p}<s\) and \(p=d/(d-1)\).
The case \(\frac{(4-d)p+2d}{2p}=s\) and \(p=d/(d-1)\) implies \((s-1)2=d\) which gives us the embedding \(H^{s-1}(\Omega)\hookrightarrow L^{r}(\Omega)\) for any \(2<r<+\infty\). Using Holder's inequality, the embedding previously mentioned, and the interpolation in \(L^{p}\) with \(\theta=(r-4)/r\), \(p=2\), and \(q=4\), we arrive at the following computation:
\[\begin{split}|\langle P^{n}\mathcal{C}_{1}(\mathbf{u}^{n}(t), \varphi^{n}(t)),\psi\rangle_{V_{s}}|&\leq\|\mathbf{u}^{n}\|\| \nabla\psi\|_{L^{r}}\|\varphi^{n}\|_{L^{\frac{2r}{r-2}}}\\ &\leq\|\mathbf{u}^{n}\|\|\nabla\psi\|_{H^{s-1}}\|\varphi^{n}\|^{ \frac{r-4}{r}}\|\varphi^{n}\|_{L^{4}}^{\frac{4}{r}}\\ &\leq\|\mathbf{u}^{n}\|\|\psi\|_{V_{s}}\|\varphi^{n}\|^{\frac{r-4 }{r}}\|\varphi^{n}\|_{V}^{\frac{4}{r}}.\end{split}\]
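The interpolation exponent used above can be verified directly: solving
\[\frac{r-2}{2r}=\frac{\theta}{2}+\frac{1-\theta}{4}\]
for \(\theta\) gives \(\theta=\frac{r-4}{r}\) and \(1-\theta=\frac{4}{r}\), which are exactly the powers appearing on \(\|\varphi^{n}\|\) and \(\|\varphi^{n}\|_{L^{4}}\).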
Using (3.21), (3.23) and (3.29), we get that the case currently being considered yields
\[\|P^{n}\mathcal{C}_{1}(\mathbf{u}^{n},\varphi^{n})\|_{L^{\frac{r}{2}}(I;V_{s}^{*})} \leq\left(\int_{0}^{T}(\|\mathbf{u}^{n}(t)\|\|\varphi^{n}(t)\|^{\frac{r-4}{r}}\|\varphi^{n}(t)\|_{V}^{\frac{4}{r}})^{\frac{r}{2}}\,\mathrm{d}t\right)^{2/r}\] \[\leq\|\mathbf{u}^{n}\|_{L^{\infty}(I;H_{\sigma})}\|\varphi^{n}\|_{L^{\infty}(I;H)}^{\frac{r-4}{r}}\|\varphi^{n}\|_{L^{2}(I;V)}^{\frac{4}{r}}\leq M^{2}.\]
Lastly, for \(\frac{3}{2}<p\leq 2\) and \(s=\frac{(4-d)p+2d}{2p}=\frac{1}{2}+\frac{3}{p}\) we get that \((s-1)2<3=d\). This implies the embedding \(H^{s-1}(\Omega)\hookrightarrow L^{r}(\Omega)\) with \(r=\frac{3p}{2p-3}\). By letting \(r^{\prime}>1\) be such that \(\frac{1}{r}+\frac{1}{r^{\prime}}=\frac{1}{2}\), together with \(L^{p}\) interpolation (with \(\theta=\frac{20p-30}{9p}\), \(p_{1}=5\), and \(p_{2}=2\)), and the embedding \(H^{1}(\Omega)\hookrightarrow L^{5}(\Omega)\) we get the following estimate
\[|\langle P^{n}\mathcal{C}_{1}(\mathbf{u}^{n}(t),\varphi^{n}(t)), \psi\rangle_{V_{s}}| \leq\|\mathbf{u}^{n}\|\|\nabla\psi\|_{L^{r}}\|\varphi^{n}\|_{L^{ r^{\prime}}}\] \[\leq\|\mathbf{u}^{n}\|\|\nabla\psi\|_{H^{s-1}}\|\varphi^{n}\|^{ \frac{30-11p}{9p}}\|\varphi^{n}\|_{L^{5}}^{\frac{10}{9}\frac{2p-3}{p}}\] \[\leq\|\mathbf{u}^{n}\|\|\psi\|_{V_{s}}\|\varphi^{n}\|^{\frac{30-11 p}{9p}}\|\varphi^{n}\|_{V}^{\frac{10}{9}\frac{2p-3}{p}}.\]
From the \(L^{\infty}(I;H_{\sigma})\) and \(L^{\infty}(I;H)\) estimates of \(\mathbf{u}^{n}\) and \(\varphi^{n}\), respectively, and the \(L^{2}(I;V)\) estimate of \(\varphi^{n}\) from (3.29) we infer that
\[\|P^{n}\mathcal{C}_{1}(\mathbf{u}^{n},\varphi^{n})\|_{L^{\frac{9 p}{5(2p-3)}}(I;V_{s}^{*})} \leq\left(\int_{0}^{T}\left(\|\mathbf{u}^{n}(t)\|\|\varphi^{n}\|^ {\frac{30-11p}{9p}}\|\varphi^{n}(t)\|_{V}^{\frac{10}{9}\frac{2p-3}{p}}\right) ^{\frac{9p}{5(2p-3)}}\,\mathrm{d}t\right)^{\frac{5(2p-3)}{9p}}\] \[\leq\|\mathbf{u}^{n}\|_{L^{\infty}(I;H_{\sigma})}\|\varphi^{n}\| _{L^{\infty}(I;H)}^{\frac{30-11p}{9p}}\|\varphi^{n}\|_{L^{2}(I;V)}^{\frac{10} {9}\frac{2p-3}{p}}.\]
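The time exponent \(\frac{9p}{5(2p-3)}\) appearing here (and in (3.48) below) is dictated by the power of the \(V\)-norm, since
\[\frac{10}{9}\,\frac{2p-3}{p}\cdot\frac{9p}{5(2p-3)}=2,\]
so that after integrating in time the \(V\)-norm of \(\varphi^{n}\) is raised exactly to the power \(2\), which is controlled by (3.29).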
From all these we finally get the following estimates for \(\partial_{t}\varphi^{n}\).
\[\|\partial_{t}\varphi^{n}\|_{L^{\infty}(I;V_{s}^{*})\cap L^{\frac{r}{2}}(I;V_ {\frac{d+2}{2}}^{*})}\leq M+M^{2}, \tag{3.47}\]
where \(r>2\), provided that any of the following cases holds:
* \((s-2)2\geq d\),
* \(1<p<\frac{d}{d-1}\) with \(\frac{d+4}{2}>s\geq\frac{(4-d)p+2d}{2p}\), or
* \(p=\frac{d}{d-1}\) with \(\frac{d+4}{2}>s>\frac{(4-d)p+2d}{2p}\).
Lastly, if \(d=3\) and \(\frac{3}{2}<p\leq 2\) we have
\[\|\partial_{t}\varphi^{n}\|_{L^{\frac{9p}{5(2p-3)}}(I;V_{s}^{*})}\leq M+M^{2}. \tag{3.48}\]
For the time derivative of the temperature, we rewrite (3.8c) as
\[\begin{split}&\langle\partial_{t}H(\theta^{n}(t),\varphi^{n}(t)), \vartheta\rangle_{V_{s}}+\langle P^{n}\mathcal{C}_{1}(\mathbf{u}^{n}(t),H( \theta^{n}(t),\varphi^{n}(t))),\vartheta\rangle_{V_{s}}\\ &+\kappa\langle P^{n}\mathcal{B}\theta^{n}(t),\vartheta\rangle_{V_ {s}}=\langle P^{n}(\mathbf{g}\cdot\mathbf{u}^{n}(t)),\vartheta\rangle_{V_{s} }+\langle z^{n}(t),\vartheta\rangle_{V_{s}},\end{split} \tag{3.49}\]
where \(H(\theta^{n}(t),\varphi^{n}(t))=\theta^{n}(t)-\ell_{h}\varphi^{n}(t)\). Evidently, we also have to work in the space \(V_{s}\) due to the appearance of the time derivative of the order parameter. Of course, the linear parts of (3.49) can be handled conveniently, and in fact -- because of the embedding \(H^{s-2}(\Omega)\hookrightarrow L^{2}(\Omega)\) -- we have
\[\|P^{n}\mathcal{B}\theta^{n}(t)\|_{V_{s}^{*}} =\inf_{\begin{subarray}{c}\vartheta\in V_{s}\\ \|\vartheta\|_{V_{s}}=1\end{subarray}}|\langle P^{n}\mathcal{B}\theta^{n}(t ),\vartheta\rangle_{V_{s}}|=\inf_{\begin{subarray}{c}\vartheta\in V_{s}\\ \|\vartheta\|_{V_{s}}=1\end{subarray}}|(\theta^{n}(t),\Delta\vartheta)_{ \Omega}|\] \[=\inf_{\begin{subarray}{c}\vartheta\in V_{s}\\ \|\vartheta\|_{V_{s}}=1\end{subarray}}\|\theta^{n}(t)\|\|\Delta\vartheta\| \leq\|\theta^{n}(t)\|.\]
Hence, we obtain from (3.26)
\[\|P^{n}\mathcal{B}\theta^{n}\|_{L^{\infty}(I;V_{s}^{*})}\leq\| \theta^{n}\|_{L^{\infty}(I;H)}\leq M.\]
Holder's inequality, the constancy of the gravitational parameter \(\mathbf{g}\) and (3.23), on the other hand, give us
\[\|P^{n}(\mathbf{g}\cdot\mathbf{u}^{n})\|_{L^{\infty}(I;V_{s}^{*}) }\leq c\|\mathbf{u}^{n}\|_{L^{\infty}(I;H_{\sigma})}\leq M.\]
For the transport term, we shall take advantage of the embedding \(H^{s-1}(\Omega)\hookrightarrow L^{r}(\Omega)\) for some \(2<r\leq+\infty\), as illustrated in the previous computations for the time derivative of \(\varphi^{n}\). For this purpose, we shall skip the parts already covered above and only mention the values of \(r\) that we use.
**Case 1: \(r=+\infty\)**. Before we begin, let us mention that due to (3.21), (3.26) and (3.29) we get that \(H(\theta^{n},\varphi^{n})\) belongs to \(L^{\infty}(I;H)\cap L^{2}(I;V)\), and in fact we have
\[\|H(\theta^{n},\varphi^{n})\|_{L^{\infty}(I;H)\cap L^{2}(I;V)} \leq M. \tag{3.50}\]
This bound is global and is not restricted to the current case.
Going back, we see that by utilizing Holder's inequality, we get
\[\|P^{n}\mathcal{C}_{1}(\mathbf{u}^{n}(t),H(\theta^{n}(t),\varphi ^{n}(t)))\|_{V_{s}^{*}} =\inf_{\begin{subarray}{c}\vartheta\in V_{s}\\ \|\vartheta\|_{V_{s}}=1\end{subarray}}|\langle P^{n}\mathcal{C}_{1}( \mathbf{u}^{n}(t),H(\theta^{n}(t),\varphi^{n}(t))),\vartheta\rangle_{V_{s}}|\] \[=\inf_{\begin{subarray}{c}\vartheta\in V_{s}\\ \|\vartheta\|_{V_{s}}=1\end{subarray}}|(\mathbf{u}^{n}(t)\cdot\nabla\vartheta,H(\theta^{n}(t),\varphi^{n}(t)))_{\Omega}|\] \[\leq\inf_{\begin{subarray}{c}\vartheta\in V_{s}\\ \|\vartheta\|_{V_{s}}=1\end{subarray}}\|\mathbf{u}^{n}(t)\|\|\nabla\vartheta\| _{L^{\infty}}\|H(\theta^{n}(t),\varphi^{n}(t))\|\] \[\leq\|\mathbf{u}^{n}(t)\|\|H(\theta^{n}(t),\varphi^{n}(t))\|.\]
From (3.21) and (3.23) we hence achieve
\[\|P^{n}\mathcal{C}_{1}(\mathbf{u}^{n}(t),H(\theta^{n}(t),\varphi ^{n}(t)))\|_{L^{\infty}(I;V_{s}^{*})}\leq M^{2}.\]
**Case 2: \(2<r<+\infty\)**. This particular case is divided into two subcases. The first one is when \(\frac{(4-d)p+2d}{2p}=s\) and \(p=d/(d-1)\), in which case \(2<r<+\infty\) can be chosen arbitrarily. In this instance, we see that
\[\|P^{n}\mathcal{C}_{1}(\mathbf{u}^{n}(t),H(\theta^{n}(t),\varphi^{n}( t)))\|_{V_{s}^{*}} =\inf_{\begin{subarray}{c}\vartheta\in V_{s}\\ \|\vartheta\|_{V_{s}}=1\end{subarray}}|\langle P^{n}\mathcal{C}_{1}( \mathbf{u}^{n}(t),H(\theta^{n}(t),\varphi^{n}(t))),\vartheta\rangle_{V_{s}}|\] \[=\inf_{\begin{subarray}{c}\vartheta\in V_{s}\\ \|\vartheta\|_{V_{s}}=1\end{subarray}}|(\mathbf{u}^{n}(t)\cdot\nabla\vartheta,H(\theta^{n}(t),\varphi^{n}(t)))_{\Omega}|\] \[\leq\inf_{\begin{subarray}{c}\vartheta\in V_{s}\\ \|\vartheta\|_{V_{s}}=1\end{subarray}}\|\mathbf{u}^{n}(t)\|\|\nabla\vartheta \|_{L^{r}}\|H(\theta^{n}(t),\varphi^{n}(t))\|_{L^{\frac{2r}{r-2}}}\] \[\leq\|\mathbf{u}^{n}(t)\|\|H(\theta^{n}(t),\varphi^{n}(t))\|^{ \frac{r-4}{r}}\|H(\theta^{n}(t),\varphi^{n}(t))\|^{\frac{4}{r}}_{V}.\]
Here, the last inequality is achieved by additionally employing \(L^{p}\)-interpolation with \(\theta=(r-4)/r\), \(p_{1}=4\) and \(p_{2}=2\), and the embedding \(H^{1}(\Omega)\hookrightarrow L^{4}(\Omega)\). We thus get
\[\|P^{n}\mathcal{C}_{1}(\mathbf{u}^{n},H(\theta^{n},\varphi^{n}))\|_{L^{\frac{r}{2}}(I;V_{s}^{*})}\] \[\leq\left(\int_{0}^{T}\left(\|\mathbf{u}^{n}(t)\|\|H(\theta^{n}(t),\varphi^{n}(t))\|^{\frac{r-4}{r}}\|H(\theta^{n}(t),\varphi^{n}(t))\|^{\frac{4}{r}}_{V}\right)^{\frac{r}{2}}\mathrm{d}t\right)^{2/r}\] \[\leq\|\mathbf{u}^{n}\|_{L^{\infty}(I;H_{\sigma})}\|H(\theta^{n},\varphi^{n})\|^{\frac{r-4}{r}}_{L^{\infty}(I;H)}\|H(\theta^{n},\varphi^{n})\|^{\frac{4}{r}}_{L^{2}(I;V)}\leq M^{2},\]
where the last inequality follows from (3.23) and (3.50).
The second subcase is when \(d=3\) and \(\frac{3}{2}<p\leq 2\), which gives us a particular value \(r=\frac{3p}{2p-3}\). By additionally utilizing \(L^{p}\) interpolation with \(\theta=\frac{30-11p}{9p}\), \(p_{1}=5\), and \(p_{2}=2\) and the embedding \(H^{1}(\Omega)\hookrightarrow L^{5}(\Omega)\) we get
\[\|P^{n}\mathcal{C}_{1}(\mathbf{u}^{n}(t),H(\theta^{n}(t),\varphi^{n}(t)))\|_{V _{s}^{*}} =\inf_{\begin{subarray}{c}\vartheta\in V_{s}\\ \|\vartheta\|_{V_{s}}=1\end{subarray}}|\langle P^{n}\mathcal{C}_{1}(\mathbf{u} ^{n}(t),H(\theta^{n}(t),\varphi^{n}(t))),\vartheta\rangle_{V_{s}}|\] \[=\inf_{\begin{subarray}{c}\vartheta\in V_{s}\\ \|\vartheta\|_{V_{s}}=1\end{subarray}}|(\mathbf{u}^{n}(t)\cdot\nabla\vartheta,H(\theta^{n}(t),\varphi^{n}(t)))_{\Omega}|\] \[\leq\inf_{\begin{subarray}{c}\vartheta\in V_{s}\\ \|\vartheta\|_{V_{s}}=1\end{subarray}}\|\mathbf{u}^{n}(t)\|\|\nabla\vartheta\| _{L^{r}}\|H(\theta^{n}(t),\varphi^{n}(t))\|_{L^{\frac{2r}{r-2}}}\] \[\leq\|\mathbf{u}^{n}(t)\|\|H(\theta^{n}(t),\varphi^{n}(t))\|^{ \frac{30-11p}{9p}}\|H(\theta^{n}(t),\varphi^{n}(t))\|^{\frac{10}{9}\frac{2p-3} {p}}_{V}\]
We further estimate with respect to the time variable as follows:
\[\|P^{n}\mathcal{C}_{1}(\mathbf{u}^{n},H(\theta^{n},\varphi^{n}))\|_{L^{\tilde{p}}(I;V_{s}^{*})}\] \[\leq\left(\int_{0}^{T}\left(\|\mathbf{u}^{n}(t)\|\|H(\theta^{n}(t),\varphi^{n}(t))\|^{\frac{30-11p}{9p}}\|H(\theta^{n}(t),\varphi^{n}(t))\|^{\frac{10}{9}\frac{2p-3}{p}}_{V}\right)^{\tilde{p}}\mathrm{d}t\right)^{\frac{1}{\tilde{p}}}\] \[\leq\|\mathbf{u}^{n}\|_{L^{\infty}(I;H_{\sigma})}\|H(\theta^{n},\varphi^{n})\|^{\frac{30-11p}{9p}}_{L^{\infty}(I;H)}\|H(\theta^{n},\varphi^{n})\|^{\frac{10}{9}\frac{2p-3}{p}}_{L^{2}(I;V)}\leq M^{2},\]
where \(\tilde{p}=\frac{9p}{5(2p-3)}\).
Finally, the computations above give us the estimate for the temperature variable \(\partial_{t}\theta^{n}\)
\[\|\partial_{t}\theta^{n}\|_{L^{\infty}(I;V_{s}^{*})\cap L^{\frac{r}{2}}(I;V_{ \frac{d+2}{2}}^{*})}\leq M+M^{2} \tag{3.51}\]
where \(r>2\), provided that any of the following cases holds:
* \((s-2)2\geq d\),
* \(1<p<\frac{d}{d-1}\) with \(\frac{d+4}{2}>s\geq\frac{(4-d)p+2d}{2p}\), or
* \(p=\frac{d}{d-1}\) with \(\frac{d+4}{2}>s>\frac{(4-d)p+2d}{2p}\).
And if \(d=3\) and \(\frac{3}{2}<p\leq 2\) we have
\[\|\partial_{t}\theta^{n}\|_{L^{\frac{9p}{5(2p-3)}}(I;V_{s}^{*})}\leq M+M^{2}. \tag{3.52}\]
The estimates (3.21)-(3.26), (3.29), (3.30) and (3.31) imply the existence of \(\varphi\in L^{\infty}(I;H)\cap L^{2}(I;V)\), \(\mu\in L^{2}(I;V)\), \(\rho\in L^{\infty}(I;L^{p}(\Omega))\), \(\mathbf{u}\in L^{\infty}(I;H_{\sigma})\cap L^{2}(I;V_{\sigma})\) and \(\theta\in L^{\infty}(I;H)\cap L^{2}(I;V)\) for which -- up to a subsequence -- the following properties hold
\[\varphi^{n}\rightharpoonup\varphi \text{in }L^{2}(I;V) \tag{3.53}\] \[\varphi^{n}\rightharpoonup\varphi \text{in }L^{\infty}(I;H)\] (3.54) \[\mu^{n}\rightharpoonup\mu \text{in }L^{2}(I;V)\] (3.55) \[\rho(\cdot,\varphi^{n})\rightharpoonup\rho \text{in }L^{\infty}(I;L^{p}(\Omega))\] (3.56) \[\mathbf{u}^{n}\rightharpoonup\mathbf{u} \text{in }L^{2}(I;V_{\sigma})\] (3.57) \[\mathbf{u}^{n}\rightharpoonup\mathbf{u} \text{in }L^{\infty}(I;H_{\sigma})\] (3.58) \[\theta^{n}\rightharpoonup\theta \text{in }L^{2}(I;V)\] (3.59) \[\theta^{n}\rightharpoonup\theta \text{in }L^{\infty}(I;H). \tag{3.60}\]
Estimates (3.45), (3.47), (3.48), (3.51) and (3.52) further give us the convergences of the time derivatives:
\[\partial_{t}\mathbf{u}^{n}\rightharpoonup\partial_{t}\mathbf{u}\text{ in }L^{4/d}(I;V_{\sigma}^{*}),\quad\partial_{t}\varphi^{n}\rightharpoonup \partial_{t}\varphi\text{ and }\partial_{t}\theta^{n}\rightharpoonup\partial_{t}\theta \text{ in }L^{q}(I;V_{s}^{*}) \tag{3.61}\]
for \(q=\frac{9p}{5(2p-3)}\) and \(s=\frac{p+6}{2p}\) if \(d=3\) and \(\frac{3}{2}<p\leq 2\), and for \(q=\frac{r}{2}\) and \(s=\frac{d+2}{2}\) -- with \(r>2\) taken arbitrarily -- if \(p=\frac{d}{d-1}\) and \(d=2,3\),
\[\partial_{t}\varphi^{n}\rightharpoonup\partial_{t}\varphi,\qquad\partial_{t} \theta^{n}\rightharpoonup\partial_{t}\theta \text{in }L^{\infty}(I;V_{s}^{*}) \tag{3.62}\]
for either \((s-2)2\geq d\), \(1<p<\frac{d}{d-1}\) with \(\frac{d+4}{2}>s\geq\frac{(4-d)p+2d}{2p}\), or \(p=\frac{d}{d-1}\) with \(\frac{d+4}{2}>s>\frac{(4-d)p+2d}{2p}\).
Taking advantage of Aubin-Lions-Simon embedding theorems would give us the compact embedding \(W^{2,4/d}(V_{\sigma},V_{\sigma}^{*})\hookrightarrow L^{2}(I;H_{\sigma})\). Similarly, the embedding \(W^{2,q}(V,V_{s}^{*})\hookrightarrow L^{2}(I;H)\) holds for the following cases
* \(q=\frac{9p}{5(2p-3)}\) and \(s=\frac{p+6}{2p}\) if \(d=3\) and \(\frac{3}{2}<p\leq 2\);
* \(q=\frac{r}{2}\), \(r>2\) taken arbitrarily, and \(s=\frac{d+2}{2}\) if \(p=\frac{d}{d-1}\), \(d=2,3\); and
* \(q=+\infty\) if either \((s-2)2\geq d\); \(1<p<\frac{d}{d-1}\) with \(\frac{d+4}{2}>s\geq\frac{(4-d)p+2d}{2p}\); or \(p=\frac{d}{d-1}\) with \(\frac{d+4}{2}>s>\frac{(4-d)p+2d}{2p}\).
From these, we get the following strong convergences
\[\mathbf{u}^{n}\to\mathbf{u}\,\,\text{in}\,\,L^{2}(I;H_{\sigma}),\quad\varphi^{n} \to\varphi\,\,\text{in}\,\,L^{2}(I;H),\quad\theta^{n}\to\theta\,\,\text{in}\,\,L ^{2}(I;H) \tag{3.63}\]
all of which converge a.e. in \(\Omega\times(0,T)\).
The passage to the limit for the order parameter and the fluid velocity follows that in [16]. Although such steps are straightforward to establish, we highlight the crucial points in proving that the triple \([\varphi,\mathbf{u},\theta]\) is a weak solution of the system in the sense of Definition 3.1. Firstly, we point out some direct consequences of (3.63).
* By definition of \(\rho(\cdot,\varphi^{n})\), the regularity of \(F\), and the pointwise convergence of the order parameter, \(\rho(\cdot,\varphi^{n})\to a(\cdot)\varphi+F^{\prime}(\varphi)\) a.e. in \(\Omega\times(0,T)\), from which we also infer \(\rho=a(\cdot)\varphi+F^{\prime}(\varphi)\).
* The strong convergence of the order parameter in \(L^{2}(I;H)\) implies \(J*\varphi^{n}\to J*\varphi\) in \(L^{2}(I;V)\). Such convergence and the definition of \(\mu^{n}\) imply \(\mu=\rho-J*\varphi+\ell_{c}\theta\). Indeed, for any \(\psi\in H^{n}\) and \(\chi\in C_{0}^{\infty}(0,T)\), we get \[\int_{0}^{T}(\mu^{n},\psi)_{\Omega}\chi\,\mathrm{d}t =\int_{0}^{T}(\rho(\cdot,\varphi^{n})-J*\varphi^{n}+\ell_{c}\theta^{n},\psi)_{\Omega}\chi\,\mathrm{d}t\] \[\to\int_{0}^{T}(\rho-J*\varphi+\ell_{c}\theta,\psi)_{\Omega}\chi\,\mathrm{d}t,\] where we used the pointwise convergence of \(\rho\), the strong convergence of the convolution, the strong convergence of the temperature variable in \(L^{2}(I;H)\), and the density of \(\operatorname{span}\{\psi_{\mathrm{j}}\}\) in \(V\). We thus infer that \(\mu=\rho-J*\varphi+\ell_{c}\theta\) from (3.55), and that \(\rho\in L^{2}(I;V)\).
Moving forward, we multiply (3.8a), (3.8b), (3.8c) by \(\chi,\mathfrak{w},\omega\in C_{0}^{\infty}(0,T)\), and integrate over the interval \((0,T)\) and take advantage of (3.53)-(3.63). Since some of the terms in the resulting integral equation can be handled quite easily, we highlight only some of the parts which we deemed to be crucial.
* The term \((\nabla\rho(\cdot,\varphi^{n}),\nabla\psi)_{\Omega}\) is handled by passing the derivative to the test function, i.e. \(-(\rho(\cdot,\varphi^{n}),\Delta\psi)\), and utilizing (3.56) with the knowledge that \(\psi\in V_{s}\) and \(H^{s-2}\hookrightarrow L^{p^{\prime}}\).
* Due to the convergence \(J*\varphi^{n}\to J*\varphi\) in \(L^{2}(I;V)\) we also achieve convergence for the term \((\nabla J*\varphi^{n},\nabla\psi)_{\Omega}\).
* The expression involving the transport term \((\mathbf{u}^{n}(t)\cdot\nabla\varphi^{n}(t),\psi)_{\Omega}\) can be established to converge to \((\mathbf{u}(t)\cdot\nabla\varphi(t),\psi)_{\Omega}\) by utilizing the convergences (3.57), (3.53) and (3.63). The same argument is used to show the convergence of the transport term occurring in the equation for the temperature, i.e., the term \((\mathbf{u}^{n}(t)\cdot\nabla(\theta^{n}(t)-\ell_{h}\varphi^{n}(t)),\vartheta )_{\Omega}\).
* The strong convergence of the order parameter in (3.63), the imposed properties of \(\nu(\cdot)\) in (A2), and the dominated convergence theorem imply \(\nu(\varphi^{n})\to\nu(\varphi)\) in \(L^{p}(I;L^{p}(\Omega))\) for any \(1\leq p<+\infty\). From the convergence of the viscosity term above and (3.57) we infer that \(\nu(\varphi^{n})\mathrm{D}\mathbf{u}^{n}\rightharpoonup\nu(\varphi)\mathrm{D}\mathbf{u}\) in \(L^{2}(I;L^{2}(\Omega)^{d\times d})\). The weak and strong convergences of the velocity vector in \(L^{2}(I;V_{\sigma})\) and \(L^{2}(I;H_{\sigma})\), respectively, give us the convergence for the trilinear form in (3.8b) [36, Lemma III.3.2].
The linear terms can be handled quite easily using the convergences we have in hand. For the terms concerning the time derivative, we pass the time derivatives to the functions \(\chi,\mathfrak{w},\omega\in C^{\infty}_{0}(0,T)\) and use (3.63). From these we see that the triple \([\varphi,\mathbf{u},\theta]\) satisfies (3.1a), (3.1b) and (3.1c) by virtue of the density of \(\mathrm{span}\{\mathbf{v}_{\mathrm{j}}\}\) and \(\mathrm{span}\{\psi_{\mathrm{j}}\}\) in \(V_{\sigma}\) and \(V\).
Now we suppose \(\chi,\mathfrak{w},\omega\in C^{\infty}([0,T])\) with values equal to one at the initial time \(t=0\) and equal to zero at the terminal time \(t=T\), and apply the same passage to the limit. Multiplying (3.1a), (3.1b) and (3.1c) by the same functions and applying integration by parts eventually leads to
\[(\varphi(0)-\varphi_{0},\psi)_{\Omega}=0 \forall\psi\in V,\] \[(\mathbf{u}(0)-\mathbf{u}_{0},\mathbf{v})_{\Omega}=0 \forall\mathbf{v}\in V_{\sigma},\] \[(\theta(0)-\theta_{0},\vartheta)_{\Omega}=0 \forall\vartheta\in V.\]
We also mention that due to the Aubin-Lions lemma we have the embeddings \(W^{+\infty,4/d}(H_{\sigma},V_{\sigma}^{*})\hookrightarrow C(\overline{I};H_{\sigma})\) and \(W^{+\infty,q}(H,V_{s}^{*})\hookrightarrow C(\overline{I};H)\). These validate the evaluation of the weak solutions at the initial time \(t=0\), and show that (3.2), (3.3) and (3.4) are satisfied.
The improved regularity of \(\rho\in L^{2}(I;V)\) lets us write (3.1a) as
\[\langle\partial_{t}\varphi(t),\psi\rangle_{V}+\langle\mathcal{C}_{ 1}(\mathbf{u}(t),\varphi(t)),\psi\rangle_{V}+\langle B\mu(t),\psi\rangle_{V}=0 \forall\psi\in V.\]
The transport term can thus be estimated -- following analogous arguments as in (3.38) and (3.42) -- as
\[\|\mathcal{C}_{1}(\mathbf{u},\varphi)\|_{L^{4/d}(I;V^{*})}\leq M^{ 2}.\]
This improves the regularity of the time derivative as well, i.e. \(\partial_{t}\varphi\in L^{4/d}(I;V^{*})\). The time derivative of the temperature inherits the improvement gained by the order parameter as well, i.e. \(\partial_{t}\theta\in L^{4/d}(I;V^{*})\).
Another consequence of the embeddings above is that up to a subsequence we have \(\varphi^{n}(t)\to\varphi(t)\) in \(H\) and almost everywhere in \(\Omega\). Consequently, by virtue of Fatou's lemma
\[\int_{\Omega}F(\varphi(t))\,\mathrm{d}x\leq\liminf_{n\to\infty} \int_{\Omega}F(\varphi^{n}(t))\,\mathrm{d}x. \tag{3.64}\]
We also have, from the weak lower semi-continuity of the nonlocal energy \(\mathbb{E}_{nl}\) and the weak convergence of \(\varphi^{n}\) to \(\varphi\),
\[\mathbb{E}_{nl}(\varphi(t))\leq\liminf_{n\to\infty}\mathbb{E}_{nl}(\varphi^{n}(t)). \tag{3.65}\]
We recall that \(J*\varphi^{n}\to J*\varphi\) in \(L^{2}(I;V)\) which implies \(P^{n}(J*\varphi^{n})\to J*\varphi\) in \(L^{2}(I;V)\), and \(\sqrt{\nu(\varphi^{n})}\mathrm{D}\mathbf{u}^{n}\rightharpoonup\sqrt{\nu( \varphi)}\mathrm{D}\mathbf{u}\) in \(L^{2}(I;L^{2}(\Omega)^{d\times d})\). Integrating (3.14) over \([0,t]\), utilizing (3.64), (3.65) and (3.53)-(3.60), and taking advantage of the weak lower semi-continuity of norms, we achieve the energy inequality (3.7).
## 4 Nonlocal-to-Local Convergence
The purpose of this section is to show that, under an appropriate choice of the kernel \(J\), solutions of the nonlocal system converge to solutions of its local version. Notably, we must determine in which sense this convergence takes place. We also recall that the only assumption we imposed on the kernel is (A1), so it is imperative that this assumption is not violated by the particular kinds of kernels we shall consider.
Suppose \(\gamma\in(0,d-1)\) and let \(\eta\in C^{1}([0,+\infty);[0,+\infty))\) be such that the map \(s\mapsto|\eta^{\prime}(s)|s^{d-1-\gamma}\) is in \(L^{1}(\mathbb{R}^{+})\) and \(\eta\) satisfies the renormalization
\[\int_{0}^{+\infty}\eta(s)s^{d+1-\gamma}\,\mathrm{d}s=\frac{2}{C_{d}},\text{ where }C_{d}:=\int_{S^{d-1}}|\sigma\cdot e_{1}|^{2}\,\mathrm{d}\mathcal{H}^{d-1}(\sigma).\]
We define the family of mollifiers \((\eta_{\varepsilon})_{\varepsilon>0}\) as \(\eta_{\varepsilon}(s)=\frac{1}{\varepsilon^{d}}\eta(s/\varepsilon)\) for \(s\geq 0\), from which we define the kernel \(J_{\varepsilon}:\mathbb{R}^{d}\to\mathbb{R}\) by \(J_{\varepsilon}(x)=\frac{1}{\varepsilon^{2-\gamma}}\frac{\eta_{\varepsilon}(|x|)}{|x|^{\gamma}}\). Evidently, for any \(\varepsilon>0\) we have \(J_{\varepsilon}(x)=J_{\varepsilon}(-x)\) for all \(x\in\Omega\) and \(a_{\varepsilon}(x)=\int_{\Omega}J_{\varepsilon}(x-y)\,\mathrm{d}y\geq 0\) for almost every \(x\in\Omega\). Meanwhile, from [17, Lemma 3.1] we infer that \(J_{\varepsilon}\in W^{1,1}(\mathbb{R}^{d})\).
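We remark that the renormalization imposed on \(\eta\) is designed precisely so that the directional second moments of \(J_{\varepsilon}\) are normalized independently of \(\varepsilon\): a direct computation in polar coordinates gives, for any unit vector \(e\in\mathbb{R}^{d}\),
\[\frac{1}{2}\int_{\mathbb{R}^{d}}J_{\varepsilon}(z)|z\cdot e|^{2}\,\mathrm{d}z=\frac{C_{d}}{2\varepsilon^{2-\gamma}}\int_{0}^{+\infty}\eta_{\varepsilon}(s)s^{d+1-\gamma}\,\mathrm{d}s=\frac{C_{d}}{2}\int_{0}^{+\infty}\eta(t)t^{d+1-\gamma}\,\mathrm{d}t=1.\]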
For a given \(\varepsilon>0\) we now define the nonlocal energy functional corresponding to the kernel \(J_{\varepsilon}\) as
\[\mathbb{E}_{nl}^{\varepsilon}(\varphi):=\frac{1}{2}\int_{\Omega}\int_{\Omega }J_{\varepsilon}(x-y)(\varphi(x)-\varphi(y))^{2}\,\mathrm{d}y\,\mathrm{d}x.\]
For each \(\varepsilon>0\) and each kernel \(J_{\varepsilon}\), Theorem 3.3 assures us of the existence of a solution of (1.2) in the sense of Definition 3.1, which we shall denote by \([\varphi_{\varepsilon},\mathbf{u}_{\varepsilon},\theta_{\varepsilon}]\), together with the chemical potential \(\mu_{\varepsilon}:=a_{\varepsilon}\varphi_{\varepsilon}-J_{\varepsilon}*\varphi_{\varepsilon}+F^{\prime}(\varphi_{\varepsilon})+\ell_{c}\theta_{\varepsilon}\). We also denote by \(\mathbb{E}^{\varepsilon}\) the total energy of the system (1.2), which we recall can be written as in (3.5) but now with the kernel \(J_{\varepsilon}\).
Our goal is to establish that the solutions \([\varphi_{\varepsilon},\mathbf{u}_{\varepsilon},\theta_{\varepsilon}]\) converge, as \(\varepsilon\to 0\), to a triple \([\widetilde{\varphi},\widetilde{\mathbf{u}},\widetilde{\theta}]\) that solves the local version of system (1.2). In particular, we shall show that \([\widetilde{\varphi},\widetilde{\mathbf{u}},\widetilde{\theta}]\) satisfies, in a weak sense, a Cahn-Hilliard-Boussinesq system similar to the one proposed in [29]:
\[\partial_{t}\varphi+\mathbf{u}\cdot\nabla\varphi=\Delta\mu,\quad\mu=-\Delta\varphi+F^{\prime}(\varphi)+\ell_{c}\theta, \tag{4.1a}\] \[\partial_{t}\mathbf{u}+(\mathbf{u}\cdot\nabla)\mathbf{u}-\operatorname{div}(2\nu(\varphi)\mathrm{D}\mathbf{u})+\nabla p=\mathcal{K}(\mu-\ell_{c}\theta)\nabla\varphi+\ell(\varphi,\theta)\mathbf{g}+\mathbf{q},\] (4.1b) \[\partial_{t}\theta-\ell_{h}\partial_{t}\varphi+\mathbf{u}\cdot\nabla(\theta-\ell_{h}\varphi)-\kappa\Delta\theta=\mathbf{g}\cdot\mathbf{u}+z. \tag{4.1c}\]
with the incompressibility condition \(\operatorname{div}\mathbf{u}=0\) in \(Q\), the incorporated initial conditions \(\varphi(0)=\varphi_{0}\), \(\mathbf{u}(0)=\mathbf{u}_{0}\), and \(\theta(0)=\theta_{0}\) in \(\Omega\) and closed with the boundary conditions \(\frac{\partial\mu}{\partial\mathbf{n}}=0\), \(\mathbf{u}=0\), and \(\frac{\partial\theta}{\partial\mathbf{n}}=0\) on \(\Gamma\). We mention that the total energy for the local system can be written as
\[2\widetilde{\mathbb{E}}(\varphi,\mathbf{u},\theta):=\frac{1}{\ell_{c}}\int_{ \Omega}\frac{1}{2}|\nabla\varphi(x)|^{2}+F(\varphi(x))\,\mathrm{d}x+\frac{1}{ \mathcal{K}\ell_{c}}\int_{\Omega}|\mathbf{u}(x)|^{2}\,\mathrm{d}x+\frac{1}{\ell _{h}}\int_{\Omega}|\theta(x)|^{2}\,\mathrm{d}x, \tag{4.2}\]
while the local energy is defined for any \(\varphi\in V\) as
\[\mathbb{E}_{l}(\varphi):=\frac{1}{2}\int_{\Omega}|\nabla\varphi(x)|^{2}\, \mathrm{d}x.\]
Finally, the purpose of this section is to establish the following result.
**Theorem 4.1**.: _Given \(\varepsilon>0\) let \([\varphi_{\varepsilon,0},\mathbf{u}_{\varepsilon,0},\theta_{\varepsilon,0}]\in H \times H_{\sigma}\times H\), and assume that there exists \(m_{\Omega}\in(-1,1)\) such that \(\widehat{\varphi_{\varepsilon,0}}=m_{\Omega}\) for any \(\varepsilon>0\). Suppose that there exists \([\widetilde{\varphi}_{0},\widetilde{\mathbf{u}}_{0},\widetilde{\theta}_{0}] \in V\times H_{\sigma}\times H\) such that the following convergences hold:_
\[\varphi_{\varepsilon,0}\to\widetilde{\varphi}_{0}\text{ in }H, \quad\mathbf{u}_{\varepsilon,0}\to\widetilde{\mathbf{u}}_{0}\text{ in }H_{\sigma},\quad\theta_{ \varepsilon,0}\to\widetilde{\theta}_{0}\text{ in }H\] \[\mathbb{E}^{\varepsilon}(\varphi_{\varepsilon,0},\mathbf{u}_{ \varepsilon,0},\theta_{\varepsilon,0})\to\widetilde{\mathbb{E}}(\widetilde{ \varphi}_{0},\widetilde{\mathbf{u}}_{0},\widetilde{\theta}_{0}).\]
_If for any \(\varepsilon>0\), \([\varphi_{\varepsilon},\mathbf{u}_{\varepsilon},\theta_{\varepsilon}]\) solves (1.2) in weak sense with initial conditions \([\varphi_{\varepsilon,0},\mathbf{u}_{\varepsilon,0},\theta_{\varepsilon,0}]\), then as \(\varepsilon\to 0\)_
\[\varphi_{\varepsilon}\rightharpoonup\widetilde{\varphi}\text{ in }L^{2}(I;V) \tag{4.3}\] \[\varphi_{\varepsilon}\rightharpoonup\widetilde{\varphi}\text{ in }L^{\infty}(I;H)\] (4.4) \[\varphi_{\varepsilon}\to\widetilde{\varphi}\text{ in }L^{2}(I;H) \text{ and a.e. in }\Omega\times(0,T)\] (4.5) \[\mathbf{u}_{\varepsilon}\rightharpoonup\widetilde{\mathbf{u}}\text{ in }L^{2}(I;V_{\sigma})\] (4.6) \[\mathbf{u}_{\varepsilon}\rightharpoonup\widetilde{\mathbf{u}}\text{ in }L^{\infty}(I;H_{\sigma})\] (4.7) \[\mathbf{u}_{\varepsilon}\to\widetilde{\mathbf{u}}\text{ in }L^{2}(I;H_{\sigma}) \text{ and a.e. in }\Omega\times(0,T)\] (4.8) \[\theta_{\varepsilon}\rightharpoonup\widetilde{\theta}\text{ in }L^{2}(I;V)\] (4.9) \[\theta_{\varepsilon}\rightharpoonup\widetilde{\theta}\text{ in }L^{\infty}(I;H)\] (4.10) \[\theta_{\varepsilon}\to\widetilde{\theta}\text{ in }L^{2}(I;H) \text{ and a.e. in }\Omega\times(0,T) \tag{4.11}\]
_where \([\widetilde{\varphi},\widetilde{\mathbf{u}},\widetilde{\theta}]\) solves (4.1) in weak sense with \([\widetilde{\varphi}(0),\widetilde{\mathbf{u}}(0),\widetilde{\theta}(0)]=[ \widetilde{\varphi}_{0},\widetilde{\mathbf{u}}_{0},\widetilde{\theta}_{0}]\) and_
\[\widetilde{\varphi}\in W^{+\infty,4/d}(H;V^{*})\cap L^{2}(I;V) \cap L^{2}(I;V_{2}) \tag{4.12}\] \[\widetilde{\mathbf{u}}\in W^{+\infty,4/d}(H_{\sigma};V^{*}_{ \sigma})\cap L^{2}(I;V_{\sigma})\] (4.13) \[\widetilde{\theta}\in W^{+\infty,4/d}(H;V^{*})\cap L^{2}(I;V) \tag{4.14}\]
_where \(\widetilde{\mu}=-\Delta\widetilde{\varphi}+F^{\prime}(\widetilde{\varphi})+ \ell_{c}\widetilde{\theta}\in L^{2}(I;H)\). Furthermore, for any \(t\in[0,T]\) the following energy inequality holds_
\[\begin{split}&\widetilde{\mathbb{E}}(\widetilde{\varphi}(t), \widetilde{\mathbf{u}}(t),\widetilde{\theta}(t))+\int_{0}^{t}\mathbb{D}( \widetilde{\varphi}(s),\widetilde{\mu}(s),\widetilde{\mathbf{u}}(s), \widetilde{\theta}(s))\,\mathrm{d}s\\ &\leq\widetilde{\mathbb{E}}(\widetilde{\varphi}_{0},\widetilde{ \mathbf{u}}_{0},\widetilde{\theta}_{0})+\int_{0}^{t}\big{\{}\langle\mathbf{q}(s ),\widetilde{\mathbf{u}}(s)\rangle_{V_{\sigma}}+(\ell(\widetilde{\varphi}(s), \widetilde{\theta}(s))\mathbf{g},\widetilde{\mathbf{u}}(s))_{\Omega}\\ &\quad+\langle z(s),\widetilde{\theta}(s)\rangle_{V}+(\mathbf{g }\cdot\widetilde{\mathbf{u}}(s),\widetilde{\theta}(s))_{\Omega}\big{\}}\, \mathrm{d}s.\end{split} \tag{4.15}\]
By saying \([\widetilde{\varphi},\widetilde{\mathbf{u}},\widetilde{\theta}]\) solves (4.1) in weak sense, we mean it to satisfy the variational problem
\[\langle\partial_{t}\widetilde{\varphi}(t),\psi\rangle_{V}+(\widetilde{ \mathbf{u}}(t)\cdot\nabla\widetilde{\varphi}(t),\psi)_{\Omega}+(\nabla \widetilde{\mu}(t),\nabla\psi)_{\Omega}=0 \tag{4.16a}\] \[\begin{split}&\langle\partial_{t}\widetilde{\mathbf{u}}(t), \mathbf{v}\rangle_{V_{\sigma}}+((\widetilde{\mathbf{u}}(t)\cdot\nabla) \widetilde{\mathbf{u}}(t),\mathbf{v})_{\Omega}+2(\nu(\varphi)\mathrm{D} \widetilde{\mathbf{u}}(t),\mathrm{D}\mathbf{v})\\ &=\mathcal{K}(\mathbf{v}\cdot\!\nabla\widetilde{\varphi}(t),( \widetilde{\mu}(t)-\ell_{c}\widetilde{\theta}(t)))_{\Omega}+(\ell(\widetilde{ \varphi}(t),\widetilde{\theta}(t))\mathbf{g},\mathbf{v})_{\Omega}+\langle \mathbf{q}(t),\mathbf{v}\rangle_{V_{\sigma}}\end{split} \tag{4.16b}\]
\[\begin{split}&\langle\partial_{t}\widetilde{\theta}(t),\vartheta\rangle_ {V}-\ell_{h}\langle\partial_{t}\widetilde{\varphi}(t),\vartheta\rangle_{V}+ \kappa(\nabla\widetilde{\theta}(t),\nabla\vartheta)\\ &=(\widetilde{\mathbf{u}}(t)\cdot\nabla(\ell_{h}\widetilde{ \varphi}(t)-\widetilde{\theta}(t)),\vartheta)_{\Omega}+(\mathbf{g}\cdot \widetilde{\mathbf{u}}(t),\vartheta)_{\Omega}+\langle z(t),\vartheta\rangle_ {V}\end{split} \tag{4.16c}\]
for all \(\psi\in V\), \(\mathbf{v}\in V_{\sigma}\), \(\vartheta\in V\) and almost every \(t\in(0,T)\).
To proceed with the proof of the theorem above we shall need the following Lemma which was proven in [17, Lemma 3.3].
**Lemma 4.2**.: _If \(\varphi_{1},\varphi_{2}\in V\) then_
\[\lim_{\varepsilon\to 0}\mathbb{E}_{nl}^{\varepsilon}(\varphi_{1})= \mathbb{E}_{l}(\varphi_{1}), \tag{4.17}\] \[\lim_{\varepsilon\to 0}\int_{\Omega}(a_{\varepsilon}(x) \varphi_{1}(x)-J_{\varepsilon}*\varphi_{1}(x))\varphi_{2}(x)\,\mathrm{d}x=\int _{\Omega}\nabla\varphi_{1}(x)\cdot\nabla\varphi_{2}(x)\,\mathrm{d}x. \tag{4.18}\]
_Furthermore, if \(\{\varphi_{\varepsilon}\}\subset H\) is a sequence that converges strongly to \(\varphi\in H\) in \(H\) then_
\[\mathbb{E}_{l}(\varphi)\leq\liminf_{\varepsilon\to 0}\mathbb{E}_{nl}^{ \varepsilon}(\varphi_{\varepsilon}). \tag{4.19}\]
We mention that, due to the computations in the previous section, the proof of the main theorem is simplified, mainly because the arguments establishing the uniform boundedness of the discretized solutions can be carried out analogously for the solutions of the nonlocal system with the newly specified convolution kernels. We are now in a position to prove the main result of this section.
Proof of Theorem 4.1.: Since every solution \([\varphi_{\varepsilon},\mathbf{u}_{\varepsilon},\theta_{\varepsilon}]\) satisfies the energy inequality (3.7), following the same arguments used to reach (3.20) we get
\[\sup_{t\in[0,T]}\widehat{\mathbb{E}}(\varphi_{\varepsilon}(t),\mathbf{u}_{ \varepsilon}(t),\theta_{\varepsilon}(t))+\int_{0}^{T}\widehat{\mathbb{D}}( \mu_{\varepsilon}(s),\mathbf{u}_{\varepsilon}(s),\theta_{\varepsilon}(s))\, \mathrm{d}s\leq M,\]
for some constant \(M>0\) independent of \(\varepsilon>0\). This provides uniform boundedness of \(\{\varphi_{\varepsilon}\}\) and \(\{\theta_{\varepsilon}\}\) in \(L^{\infty}(I;H)\), of \(\{\mathbf{u}_{\varepsilon}\}\) in \(L^{\infty}(I;H_{\sigma})\cap L^{2}(I;V_{\sigma})\), and of \(\{\nabla\mu_{\varepsilon}\}\) in \(L^{2}(I;L^{2}(\Omega)^{2})\). Testing (3.1c) with \(\vartheta=1\) and integrating over \((0,T)\) we get
\[\left|\int_{\Omega}\theta_{\varepsilon}\,\mathrm{d}x\right|\leq c\left(\| \mathbf{u}_{\varepsilon}\|_{L^{\infty}(I;H_{\sigma})}+\|z\|_{L^{2}(I;V^{*})} \right),\]
and thus the uniform boundedness of \(\{\theta_{\varepsilon}\}\) in \(L^{2}(I;V)\).
Following the same steps as in the proof of Theorem 3.3, we can show that (3.18) holds but with the superscript \(n\) replaced with the subscript \(\varepsilon\) and the superscripts on the external force \(\mathbf{q}^{n}\) and external heat source \(z^{n}\) removed. Taking the integral over \((0,T)\) on such estimate and using the assumption on convergence of the nonlocal total energy to the local total energy evaluated at the initial data shows that \(\{F(\varphi_{\varepsilon})\}\) is uniformly bounded in \(L^{\infty}(I;L^{1}(\Omega))\). From assumption (A5), we thus get
\[\begin{split}\left|\int_{\Omega}\mu_{\varepsilon}\,\mathrm{d}x\right|&=\left|\int_{\Omega}a_{\varepsilon}\varphi_{\varepsilon}-J_{\varepsilon}*\varphi_{\varepsilon}+F^{\prime}(\varphi_{\varepsilon})+\ell_{c}\theta_{\varepsilon}\,\mathrm{d}x\right|\\ &\leq c_{3}\int_{\Omega}|F(\varphi_{\varepsilon})|\,\mathrm{d}x+c_{4}|\Omega|+\ell_{c}\sqrt{|\Omega|}\|\theta_{\varepsilon}\|.\end{split}\]
which implies uniform boundedness of \(\{\mu_{\varepsilon}\}\) in \(L^{2}(I;V)\).
From these, we establish the existence of \([\widetilde{\varphi},\widetilde{\mathbf{u}},\widetilde{\theta}]\) and \(\widetilde{\mu}\) such that (4.3), (4.4), (4.6), (4.7), (4.9), (4.10) and \(\mu_{\varepsilon}\rightharpoonup\widetilde{\mu}\) in \(L^{2}(I;V)\). Following analogous arguments as in the well-posedness of the nonlocal system, we derive uniform boundedness for the time derivatives of the variables \([\varphi_{\varepsilon},\mathbf{u}_{\varepsilon},\theta_{\varepsilon}]\), i.e.
\[\|\partial_{t}\varphi_{\varepsilon}\|_{L^{4/d}(I;V^{*})}+\|\partial_{t} \mathbf{u}_{\varepsilon}\|_{L^{4/d}(I;V^{*}_{\sigma})}+\|\partial_{t}\theta_{ \varepsilon}\|_{L^{4/d}(I;V^{*})}\leq M. \tag{4.20}\]
Due to Aubin-Lions-Simon we infer that \(W^{+\infty,4/d}(H_{\sigma},V^{*}_{\sigma})\hookrightarrow C(\overline{I};H_{\sigma})\), \(W^{+\infty,4/d}(H,V^{*})\hookrightarrow C(\overline{I};H)\), \(W^{2,4/d}(V_{\sigma},V^{*}_{\sigma})\hookrightarrow L^{2}(I;H_{\sigma})\) and \(W^{2,4/d}(V,V^{*})\hookrightarrow L^{2}(I;H)\) are compact, which gives us the strong and a.e. convergences (4.5), (4.8) and (4.11). The convergences we have in hand are, by passing to the limit, enough to establish that \([\widetilde{\varphi},\widetilde{\mathbf{u}},\widetilde{\theta}]\) satisfies (4.16a), (4.16b) and (4.16c).
Multiplying the second equation in (1.2a) by \(G^{\prime}(\varphi_{\varepsilon})\), integrating over \(\Omega\), and rearranging the resulting integral, one gets, by virtue of the Holder and Young inequalities,
\[\int_{\Omega}|G^{\prime}(\varphi_{\varepsilon}(x))|^{2}\,\mathrm{ d}x+\frac{1}{2}\int_{\Omega}\int_{\Omega}J_{\varepsilon}(x-y)\delta G^{ \prime}_{\varphi}(x,y)(\varphi_{\varepsilon}(x)-\varphi_{\varepsilon}(y))\, \mathrm{d}x\,\mathrm{d}y\] \[=\int_{\Omega}(\mu_{\varepsilon}(x)+a^{*}\varphi_{\varepsilon}(x )-\ell_{c}\theta_{\varepsilon}(x))G^{\prime}(\varphi_{\varepsilon}(x))\, \mathrm{d}x\leq M+\frac{1}{2}\int_{\Omega}|G^{\prime}(\varphi_{\varepsilon}(x ))|^{2}\,\mathrm{d}x,\]
where \(M>0\) is a constant independent of \(\varepsilon>0\) and \(\delta G^{\prime}_{\varphi}(x,y):=(G^{\prime}(\varphi_{\varepsilon}(x))-G^{\prime}(\varphi_{\varepsilon}(y)))\). Since \(G\) is a strictly convex function, we see that the second term on the left-hand side above is nonnegative, and thus the uniform boundedness of \(\{F^{\prime}(\varphi_{\varepsilon})\}\) in \(L^{2}(I;H)\). This implies the existence of \(\mathbf{f}\in L^{2}(I;H)\) such that \(F^{\prime}(\varphi_{\varepsilon})\rightharpoonup\mathbf{f}\) in \(L^{2}(I;H)\). Furthermore, by virtue of Egorov's theorem (see footnote in [13, p. 1093]), the strong convergence (4.5) implies \(F^{\prime}(\varphi_{\varepsilon})\to F^{\prime}(\widetilde{\varphi})\) a.e., and hence in \(L^{2}(I;L^{2}(\Omega))\). On the other hand, since \(\mu_{\varepsilon}\) and \(\theta_{\varepsilon}\) are bounded in \(L^{2}(I;H)\) independently of \(\varepsilon>0\), we get uniform boundedness of \(a_{\varepsilon}(x)\varphi_{\varepsilon}-J_{\varepsilon}*\varphi_{\varepsilon}\) in \(L^{2}(I;H)\), which implies the existence of \(\mathbf{g}\in L^{2}(I;H)\) such that \(a_{\varepsilon}(x)\varphi_{\varepsilon}-J_{\varepsilon}*\varphi_{\varepsilon}\rightharpoonup\mathbf{g}\) in \(L^{2}(I;H)\).
Referring to [17, Lemma 3.2], \(\mathbb{E}_{nl}^{\varepsilon}\) is Gateaux differentiable, and its derivative can be computed for any \(\varphi,\psi\) as
\[\langle D\mathbb{E}_{nl}^{\varepsilon}(\varphi),\psi\rangle =\frac{1}{2}\int_{\Omega}\int_{\Omega}J_{\varepsilon}(x-y)(\varphi (x)-\varphi(y))(\psi(x)-\psi(y))\,\mathrm{d}x\,\mathrm{d}y\] \[=\int_{\Omega}(a_{\varepsilon}(x)\varphi(x)-J_{\varepsilon}* \varphi(x))\psi(x)\,\mathrm{d}x.\]
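The second equality above uses the symmetry \(J_{\varepsilon}(x-y)=J_{\varepsilon}(y-x)\): expanding the product and exchanging the roles of \(x\) and \(y\) in two of the four resulting terms gives
\[\frac{1}{2}\int_{\Omega}\int_{\Omega}J_{\varepsilon}(x-y)(\varphi(x)-\varphi(y))(\psi(x)-\psi(y))\,\mathrm{d}x\,\mathrm{d}y=\int_{\Omega}\int_{\Omega}J_{\varepsilon}(x-y)(\varphi(x)-\varphi(y))\psi(x)\,\mathrm{d}y\,\mathrm{d}x,\]
and the right-hand side equals \(\int_{\Omega}(a_{\varepsilon}(x)\varphi(x)-(J_{\varepsilon}*\varphi)(x))\psi(x)\,\mathrm{d}x\), with the convolution understood as an integral over \(\Omega\).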
The second order Gateaux derivative as a bilinear form in \(H\) can also be computed for any \(\varphi,\psi_{1},\psi_{2}\) as
\[\langle D^{2}\mathbb{E}_{nl}^{\varepsilon}(\varphi),[\psi_{1},\psi_{2}]\rangle =\frac{1}{2}\int_{\Omega}\int_{\Omega}J_{\varepsilon}(x-y)(\psi_{1}(x)-\psi_{1 }(y))(\psi_{2}(x)-\psi_{2}(y))\,\mathrm{d}x\,\mathrm{d}y.\]
The nonnegativity of \(J_{\varepsilon}\) thus implies that \(\langle D^{2}\mathbb{E}_{nl}^{\varepsilon}(\varphi),[\psi,\psi]\rangle\geq 0\) for any \(\varphi,\psi\in H\); indeed, for \(\varphi,\psi\in H\) we get \[\langle D^{2}\mathbb{E}_{nl}^{\varepsilon}(\varphi),[\psi,\psi]\rangle=\frac{1}{2}\int_{\Omega}\int_{\Omega}J_{\varepsilon}(x-y)(\psi(x)-\psi(y))^{2}\,\mathrm{d}x\,\mathrm{d}y\geq 0.\] Letting \(\psi\in L^{2}(I;V)\) be such that \(\widehat{\psi}=m_{\Omega}\), we take the Taylor expansion of \(\mathbb{E}_{nl}^{\varepsilon}\) about \(\varphi_{\varepsilon}\) so that
\[\int_{0}^{T}\mathbb{E}_{nl}^{\varepsilon}(\psi(t))\,\mathrm{d}t= \int_{0}^{T}\mathbb{E}_{nl}^{\varepsilon}(\varphi_{\varepsilon}(t))+\langle D \mathbb{E}_{nl}^{\varepsilon}(\varphi_{\varepsilon}(t)),(\psi(t)-\varphi_{ \varepsilon}(t))\rangle \tag{4.21}\] \[\quad+\frac{1}{2}\langle D^{2}\mathbb{E}_{nl}^{\varepsilon}( \widetilde{\varphi}),(\psi(t)-\varphi_{\varepsilon}(t))^{2}\rangle\,\mathrm{d}t\] \[\geq\int_{0}^{T}\mathbb{E}_{nl}^{\varepsilon}(\varphi_{ \varepsilon}(t))+(a_{\varepsilon}(x)\varphi_{\varepsilon}(t)-J_{\varepsilon }*\varphi_{\varepsilon}(t),\psi(t)-\varphi_{\varepsilon}(t))_{\Omega}\, \mathrm{d}t.\]
Taking advantage of the convergences (4.17) and (4.19) in Lemma 4.2, Fatou's lemma, and the convergence \(\varphi_{\varepsilon}\to\widetilde{\varphi}\) in \(C(\overline{I};H)\), we infer that
\[\int_{0}^{T}\mathbb{E}_{l}(\widetilde{\varphi}(t))+(\mathbf{g}(t),\psi(t)- \widetilde{\varphi}(t))_{\Omega}\,\mathrm{d}t\leq\int_{0}^{T}\mathbb{E}_{l}( \psi(t))\,\mathrm{d}t \tag{4.22}\]
This implies that \(\mathbf{g}\in\partial\mathbb{E}_{l}(\widetilde{\varphi})\), from which we further infer -- by additionally considering the differentiability and convexity of \(\mathbb{E}_{l}\) -- that \(\langle\mathbf{g},\psi\rangle=\langle D\mathbb{E}_{l}(\widetilde{\varphi}), \psi\rangle\) for any \(\psi\in L^{2}(I;V)\), i.e.
\[\int_{0}^{T}\int_{\Omega}\mathbf{g}\psi\,\mathrm{d}x\,\mathrm{d}t=\int_{0}^{T} \int_{\Omega}\nabla\widetilde{\varphi}\cdot\nabla\psi\,\mathrm{d}x\,\mathrm{ d}t, \tag{4.23}\]
and we further conclude that \(\mathbf{g}=-\Delta\widetilde{\varphi}\) in \(L^{2}(I;H)\) and \(\frac{\partial\widetilde{\varphi}}{\partial\mathbf{n}}=0\) almost everywhere on \(\Gamma\); see [18, pages 142-143] for more details.
Summarizing the computations above, we were able to show that \(\widetilde{\mu}=-\Delta\widetilde{\varphi}+F^{\prime}(\widetilde{\varphi})+ \ell_{c}\widetilde{\theta}\) in \(L^{2}(I;H)\) and that \(\widetilde{\varphi}\in L^{2}(I;V_{2})\).
Finally, passing to the limit shows that equations (4.16a)-(4.16c) hold, while Lemma 4.2, the weak lower semicontinuity of norms, the assumption on the energy convergence of the initial data, and the convergences established above show that (4.15) holds.
## Acknowledgement
S. N. and J.S.H.S. have been supported by Praemium Academiae of S. Necasova. Moreover, S. N. has been supported by the Czech Science Foundation (GACR) through project GA22-01591S. The Institute of Mathematics CAS is supported by RVO:67985840.
|
2310.10729 | Flattening of the EFT-Hedron: Supersymmetric Positivity Bounds and the
Search for String Theory | We examine universal positivity constraints on $2 \to 2$ scattering in 4d
planar $N=4$ supersymmetric Yang-Mills theory with higher-derivative
corrections. We present numerical evidence that the convex region of allowed
Wilson coefficients (the ``EFT-hedron'') flattens completely along about
one-third of its dimensions when an increasing number of constraints on the
spectral density from crossing-symmetry are included. Our analysis relies on
the formulation of the positivity constraints as a linear optimization problem,
which we implement using two numerical solvers, SDPB and CPLEX. Motivated by
the flattening, we propose a novel partially resummed low-energy expansion of
the $2 \to 2$ amplitude. As part of the analysis, we provide additional
evidence in favor of the conjecture [1] that the Veneziano amplitude is the
only amplitude compatible with both S-matrix bootstrap constraints and string
monodromy. | Justin Berman, Henriette Elvang, Aidan Herderschee | 2023-10-16T18:00:05Z | http://arxiv.org/abs/2310.10729v1 | # Flattening of the EFT-Hedron: Supersymmetric Positivity Bounds and the Search for String Theory
###### Abstract
We examine universal positivity constraints on \(2\to 2\) scattering in 4d planar \(\mathcal{N}=4\) supersymmetric Yang-Mills theory with higher-derivative corrections. We present numerical evidence that the convex region of allowed Wilson coefficients (the "EFT-hedron") flattens completely along about one-third of its dimensions when an increasing number of constraints on the spectral density from crossing-symmetry are included. Our analysis relies on the formulation of the positivity constraints as a linear optimization problem, which we implement using two numerical solvers, SDPB and CPLEX. Motivated by the flattening, we propose a novel partially resummed low-energy expansion of the \(2\to 2\) amplitude. As part of the analysis, we provide additional evidence in favor of the conjecture [1] that the Veneziano amplitude is the only amplitude compatible with both S-matrix bootstrap constraints and string monodromy.
## 1 Introduction
* 2 Amplitudes in \({\cal N}=4\) SYM + h.d.
* 2.1 \({\cal N}=4\) Superamplitude
* 2.2 Low-Energy Ansatz
* 2.3 Examples
* 2.3.1 Veneziano Amplitude
* 2.3.2 1-loop Contribution from the Coulomb Branch
* 2.3.3 Infinite Spin Tower
* 3 Dispersive Representation
* 3.1 Assumptions
* 3.2 Dispersive Representation of Wilson Coefficients
* 3.3 Basic Consequences
* 3.4 Null Constraints
* 4 Bounds as an Optimization Problem
* 4.1 Formulation as an Optimization Problem
* 4.2 Implementation in SDPB
* 4.3 Implementation in CPLEX
* 5 Allowed Regions
* 5.1 Examples
* 5.2 Comparison of SDPB and CPLEX
* 6 Veneziano from String Monodromy
* 6.1 String Monodromy
* 6.2 Bootstrapping Veneziano
* 7 Flattening of the EFT-hedron
* 7.1 Flattening Conjecture
* 7.2 Evidence for Flattening
* 7.3 Good EFT-hedron "Coordinates"
* 8 Discussion
* A Convergence of Numerical Results
* A.1 Convergence without Monodromy
* A.2 Convergence with Monodromy
## 1 Introduction
In the framework of effective field theory (EFT), high-energy physics can be encoded into local higher-derivative operators. If the UV theory is known, the massive degrees of freedom can be integrated out to determine the Wilson coefficients of these operators in the EFT description. In contrast, from a purely low-energy perspective that ignores any details about the UV physics, the coefficients can take on any values. However, even without knowing specific details of the UV theory, such as its spectrum or couplings, fundamental physical principles -- locality, unitarity, and suitable assumptions about the high-energy behavior of the full scattering amplitude -- impose non-trivial constraints on the allowed values of the Wilson coefficients. These basic high-energy assumptions facilitate a dispersive representation of the Wilson coefficients which implies that they must lie in a convex region sometimes called the "EFT-hedron" [2]. The allowed region of Wilson coefficients has been explored recently for theories of massless particles, including for scalars [2; 3; 4], massless pions [5; 6; 7; 8; 9; 10], photons [11; 12; 13; 14], and gravitons [15; 16].
In this paper, we derive universal bounds on 4d planar \(\mathcal{N}=4\) super Yang-Mills (SYM) theory with higher-derivative corrections. This is done using the \(2\to 2\) scattering amplitude, assuming a weak coupling approximation that allows us to suppress loops of the massless SYM states. Using the 4-point supersymmetry Ward identities, we show that the low-energy expansion of the \(2\to 2\) color-ordered amplitude must take the form1
Footnote 1: Our 4-point Mandelstam variables are \(s=-(p_{1}+p_{2})^{2}\), \(t=-(p_{1}+p_{3})^{2}\), and \(u=-(p_{1}+p_{4})^{2}\), treating all momenta as outgoing.
\[A[zz\bar{z}\bar{z}]=-\frac{s}{u}+s^{2}\sum_{k=0}^{\infty}\sum_{q=0}^{k}a_{k,q} \,s^{k-q}\,u^{q}\,, \tag{1}\]
where \(z\) and \(\bar{z}\) are conjugate scalars of the massless \(\mathcal{N}=4\) SYM spectrum and maximal supersymmetry requires \(a_{k,k-q}=a_{k,q}\) ("SUSY crossing symmetry"). The \(a_{k,q}\) are in 1-1 correspondence with the coefficients of the local single-trace \(\mathcal{N}=4\) higher-derivative 4-field operators.
Additionally assuming a mass-gap and a Froissart-like bound, we derive a dispersive representation for the Wilson coefficients \(a_{k,q}\) for all \(k,q\). We also derive two types of sum rules (or "null constraints") resulting from a supersymmetric version of crossing symmetry. Including both sum rules in the analysis gives optimal bounds for any finite cutoff \(k_{\rm max}\) on Mandelstam terms in the low-energy ansatz (1) and a spin cut-off \(\ell_{\rm max}\). The choice
of \(k_{\rm max}\) corresponds to including all local \({\cal N}=4\) SUSY compatible 4-point operators of the schematic form \({\rm tr}(D^{2k+4}z^{2}\bar{z}^{2})\) with \(k\leq k_{\rm max}\) in the analysis. For example, \(k_{\rm max}=8\) includes operators with up to 20 derivatives in the analysis. The higher \(k_{\rm max}\), the stronger the bounds tend to be.
The dispersive representation of the \(a_{k,q}\)'s allows us to derive bounds on ratios of Wilson coefficients. For example, we find that \(a_{0,0}\), the coefficient of the \({\cal N}=4\) supersymmetrization of \({\rm tr}F^{4}\sim{\rm tr}(D^{4}z^{2}\bar{z}^{2})\), must be bigger than any other Wilson coefficient, so it is natural to bound ratios \(\bar{a}_{k,q}\equiv a_{k,q}/a_{0,0}\). The full allowed region is then a convex subregion within the hypercube \(0\leq\bar{a}_{k,q}\leq 1\). Although we do not utilize the analytic EFT-hedron bounds of [2; 4], we still loosely refer to the allowed region as the "\({\cal N}=4\) supersymmetric EFT-hedron" or simply the "EFT-hedron".
To determine the bounds on (projections of) the supersymmetric EFT-hedron, we formulate the constraints as a linear optimization problem. We use two well-established linear programming solvers to numerically determine these bounds. One program is the semi-definite programming code, SDPB, developed by Simmons-Duffin for the purpose of the conformal bootstrap [17; 18]. The second solver is CPLEX, a commercial code maintained by IBM [19]. SDPB has previously been used for positivity bounds, see for example [3; 4; 7; 9], but to our knowledge this is the first time CPLEX is used in the context of the S-matrix bootstrap. The main purpose of using both methods is to have non-trivial checks on, and comparisons of, the numerical results. We find excellent agreement between the two methods. CPLEX runs faster for the precision needed to illustrate the large-scale bounds of the allowed regions in plots, but when high-precision results are needed, SDPB is the more efficient and reliable choice. Most of the plots in the paper were generated with SDPB.
The study of the \({\cal N}=4\) supersymmetric EFT-hedron is partially motivated by string theory. Specifically, the Veneziano tree amplitude for massless Type-I open superstring scattering must be contained within the \({\cal N}=4\) SUSY EFT-hedron.2 Investigating this space may shed light on the unique properties of string theory. One long-term goal is to explore what fundamental conditions isolate the open string as the only viable UV completion of low-energy \({\cal N}=4\) SYM or, more generally, of even just YM theory, at tree-level.
Footnote 2: With the \(1/\sqrt{\alpha^{\prime}}\) as the mass gap, the Veneziano amplitude corresponds to a single point in the supersymmetric EFT-hedron. Unlike the string loop-amplitudes, the string tree amplitude is not sensitive to details of the compactification from 10d to 4d.
As a step in this direction, the authors of [1] studied the combination of EFT-hedron bounds with the string monodromy relations [20; 21; 22; 23; 24],
\[0=A[2134]+e^{i\pi\alpha^{\prime}s}A[1234]+e^{-i\pi\alpha^{\prime}t}A[1324]\,. \tag{2}\]
When imposed on the low-energy ansatz (1), the monodromy relations (2) fix particular linear combinations of the Wilson coefficients \(a_{k,q}\), while an infinite set of coefficients remain unfixed, e.g. at the lowest orders \(a_{1,0}\), \(a_{3,0}\), and \(a_{4,1}\). The authors of [1] showed that when combined with the EFT-hedron bounds of [2], \(a_{1,0}\) and \(a_{3,0}\) were numerically fixed to be within about a percent of the string values and \(a_{4,1}\) within about 50%. They went on to
propose that string monodromy, together with positivity bounds, would isolate the open string.
As part of our analysis, we revisit the monodromy+EFT-hedron proposal of [1] and extend the results up to 20th derivative order. Using SDPB (along with some CPLEX cross-checks), we show that these additional constraints now bring \(a_{1,0}\) and \(a_{3,0}\) to within less than \(0.01\%\) of their string values. More generally, we find that the allowed regions for the coefficients unfixed by monodromy become tiny islands around the open string values. The islands continue to shrink as \(k_{\rm max}\) is increased. This leads to the expectation that the islands will reduce to a point in the limit of \(k_{\rm max}\to\infty\).
Working to \(k_{\rm max}=8\), we find the following bounds on the eight lowest Wilson coefficients left unfixed by monodromy relations:
\[\begin{array}{ll}\mbox{\bf SDPB bounds}&\mbox{\bf String Value $a_{k,q}^{\rm str}$}\\ 1.201982\leq a_{1,0}\leq 1.202061&a_{1,0}^{\rm str}=\zeta_{3}\approx 1.202057\\ 1.036923\leq a_{3,0}\leq 1.036937&a_{3,0}^{\rm str}=\zeta_{5}\approx 1.036928\\ 0.04053\leq a_{4,1}\leq 0.04063&a_{4,1}^{\rm str}=\frac{3}{4}\zeta_{6}-\frac{1}{2}\zeta_{3}^{2}\approx 0.04054\\ 1.0083481\leq a_{5,0}\leq 1.0083495&a_{5,0}^{\rm str}=\zeta_{7}\approx 1.0083493\\ 0.008649\leq a_{6,1}\leq 0.008729&a_{6,1}^{\rm str}=\frac{5}{4}\zeta_{8}-\zeta_{3}\zeta_{5}\approx 0.008651\\ 1.00200830\leq a_{7,0}\leq 1.00200891&a_{7,0}^{\rm str}=\zeta_{9}\approx 1.00200839\\ 0.00031\leq a_{7,2}\leq 0.00041&a_{7,2}^{\rm str}=\frac{7}{4}\zeta_{6}\zeta_{3}+\frac{1}{6}\zeta_{3}^{3}-\frac{9}{4}\zeta_{4}\zeta_{5}-3\zeta_{2}\zeta_{7}+\frac{28}{3}\zeta_{9}\approx 0.00032\\ 0.00203\leq a_{8,1}\leq 0.00212&a_{8,1}^{\rm str}=\frac{7}{4}\zeta_{10}-\frac{1}{2}\zeta_{5}^{2}-\zeta_{3}\zeta_{7}\approx 0.00204\,.\end{array} \tag{3}\]
Going to higher \(k_{\rm max}\) to get even stronger bounds is in principle straightforward and just requires more computation time. We find that the bounds shrink toward zero as a power-law (or faster) in \(k_{\rm max}\), so this supports the proposal of [1] that string monodromy combined with positivity bounds single out the Veneziano amplitude.
The string monodromies impose linear relations among the Wilson coefficients. From a geometric perspective, in the space of \(a_{k,q}\)'s these relations define a higher-dimensional "plane"; we call this space the _monodromy plane_. Meanwhile, at finite \(k_{\rm max}\), the positivity bounds give an allowed region, the supersymmetric EFT-hedron, which has co-dimension zero in the space of SUSY crossing-symmetric Wilson coefficients. When monodromy and positivity isolate a small island at finite \(k_{\rm max}\), that is the statement that the monodromy plane and the supersymmetric EFT-hedron intersect each other in a small volume. The claim that this small volume of allowed values of Wilson coefficients shrinks to a point with increasing \(k_{\rm max}\) is then the statement that the monodromy plane intersects the supersymmetric EFT-hedron at a single point in the limit \(k_{\rm max}\to\infty\). That one point has the values of the Wilson coefficients corresponding to the Veneziano amplitude.
The intersection at a point could happen in two distinct ways, as illustrated at the cartoon level in Figure 1: either the monodromy plane is tangent to the allowed EFT-hedron region in the limit \(k_{\rm max}\to\infty\) or the monodromy plane intersects the interior of the finite-\(k_{\rm max}\) EFT-hedron in a manner such that, as \(k_{\rm max}\to\infty\), the EFT-hedron flattens, leading to a point of intersection between the two spaces.
One of the core results of this paper is numerical evidence for the conjecture that the allowed space of Wilson coefficients flattens out to a space of lower dimensionality in the limit of \(k_{\rm max}\to\infty\); i.e. that the second of the two geometric options in Figure 1 is correct. This flattening conjecture implies that in the \(k_{\rm max}\to\infty\) limit, all theories are boundary theories and there are much stronger constraints among Wilson coefficients than one might naively have anticipated.
To show this, we move the monodromy plane around so that it intersects the EFT-hedron at different points determined by randomly generated linear combinations of known allowed models. If the first picture in Figure 1 is correct, doing so would lead to islands that do not continue to shrink with increasing \(k_{\rm max}\). What we find is that these allowed islands do indeed continue to shrink.
Specifically, we find evidence that fixing two-thirds of the Wilson coefficients and imposing locality, unitary, and the Froissart bound is sufficient to fix the remaining one-third of Wilson coefficients. Thus, the flattening suggests that there is a "better" low-energy representation of the EFT amplitude than the standard one in (1), perhaps even one in which certain combinations of Mandelstam polynomials have been resummed. The parameters should split up into two sets: those corresponding to coordinates along the flattened EFT-hedron (we call these _monovariables_\(r_{i}^{(k)}\)) and those transverse to it, \(A_{i}^{(\bar{k})}=a_{1,0},a_{3,0},a_{4,1},a_{5,0},a_{6,1}\), etc. This is illustrated in Figure 2. Our analysis suggests that we rewrite the EFT amplitude as
\[A[zz\bar{z}\bar{z}]=-\frac{s}{u}+s^{2}\bigg{(}\sum_{k,i}r_{i}^{(k)}P_{i}^{(k) }(s,u)+\sum_{k,i}A_{i}^{(k)}Q_{i}^{(k)}(s,u)\bigg{)}\,, \tag{4}\]
where \(P_{i}^{(k)}(s,u)=P_{i}^{(k)}(u,s)\) are specific symmetric degree-\(k\) polynomials in \(s\) and \(u\). The \(Q_{i}^{(k)}(s,u)=Q_{i}^{(k)}(u,s)\) are infinite sums of \(s\),\(u\) symmetric polynomial terms whose lowest-order terms are degree \(k\). The key point of flattening is the claim that for any choice of monovariables \(r_{i}^{(k)}\) in the EFT-hedron, the positivity constraints of the S-matrix bootstrap
Figure 1: 3D cartoon of how the monodromy line (blue) could intersect with an EFT-hedron of codimension zero (left) or nonzero (right).
uniquely fix all coefficients \(A_{i}^{(k)}\). At large \(k\), the monovariables \(r_{i}^{(k)}\) account for two-thirds of all the variables: thus one only needs to specify two-thirds of all the EFT coefficients to know the whole low-energy expansion. Importantly, we also find evidence that there does exist a form of the amplitude (4) in which the \(Q_{i}^{(k)}(s,u)\) can be resummed. The answer is surprisingly simple and of the form
\[\sum_{k,i}A_{i}^{(k)}Q_{i}^{(k)}(s,u)=\frac{\sin(\pi t)}{\pi}\sum_{k,i}\tilde{ A}_{i}^{(k)}\mathcal{S}_{i}^{(k)}(s,t,u)\,, \tag{5}\]
where \(t=-s-u\) and \(\mathcal{S}_{i}^{(k)}\) represents the degree-\(k\) Mandelstam polynomials that are fully symmetric in \(s,t,u\). The coefficients \(\tilde{A}_{i}^{(k)}\) are finite linear combinations of the \(A_{i}^{(k)}\). We have verified this ansatz up to 20th order in the Mandelstam variables.3
Footnote 3: Some readers may recognize the RHS of (5) as an ansatz that trivially solves the string monodromy relations without restricting the coefficients \(\tilde{A}_{i}^{(k)}\). We discuss this in Section 7.3.
Let us come back to the statement that monodromy and positivity constraints combine to single out the open string tree amplitude. Since the string monodromy relations arise from the worldsheet description of the string, it seems dissatisfactory to impose them in order to isolate the Veneziano amplitude. However, monodromy relations can be shown [25] to arise also in purely field theoretic context, namely from scalar bi-adjoint (BAS) effective field theory. The BAS EFT appears in the tree-level double-copy where it can be used as a way to generate the higher-derivative corrections to other theories; relevant for us here is the double-copy relation
\[(\mathcal{N}=4\text{ SYM EFT})=(\text{BAS EFT})\otimes_{\text{FT}}(\text{ pure }\mathcal{N}=4\text{ SYM})\,, \tag{6}\]
The conjecture of [25] is that the most general (S)YM EFT tree amplitude obtained by
the double-copy (1.6) automatically satisfies the string monodromy relations. The results obtained here, expanding on the earlier results of [1], then lead to the conjecture that _among all the 4-point \(\mathcal{N}=4\) SYM EFT tree amplitudes obtained from the double-copy (1.6), the unique one compatible with unitarity, locality, the existence of a mass-gap, and the Froissart bound is the Veneziano open-string tree amplitude_.
The above conjecture brings the assumption of monodromy constraints a step down toward a more purely low-energy effective field theory approach. It would of course be very interesting to have assumptions that are even more fundamental than the existence of the EFT double-copy relation (1.6) and that is a goal of future work.
Finally, let us note that with SDPB, it is also possible to extract the spectrum of theories that lie at the boundaries of the allowed space.4 One might expect that when we numerically fix Wilson coefficients to be close to their string values, the spectrum of the extremal theory closely mirrors that of string theory. We find that the spectra for these extremal theories do match some of the leading Regge trajectories, but that there are also spurious states that do not match the open string spectrum. These presumably disappear at higher \(k_{\rm max}\) and \(\ell_{\rm max}\). We leave a more detailed analysis of these numerical spectra to the future.
Footnote 4: We thank Jan Albert and David Poland for useful discussions related to this topic.
**The paper is organized as follows.** We start in Section 2 by deriving the constraints of \(\mathcal{N}=4\) supersymmetry on the \(2\to 2\) scattering amplitude. Next, in Section 3, we state the technical assumptions, then derive the dispersive representation of the Wilson coefficients as well as null constraints on the spectral density from SUSY crossing symmetry. In Section 4 we formulate the optimization problems and briefly discuss the implementations in SDPB and CPLEX. Readers familiar with the dispersive arguments may choose to skip ahead to the core results.
In Section 5, we explore some of the simplest bounds and offer brief comparisons of SDPB and CPLEX. The main takeaway from this section is there is no sign that the Veneziano amplitude should lie at a kink or any other particular feature of the bounds in these projections.
The analysis with monodromy imposed as additional null constraints is presented in Section 6. In Section 7, we change the monodromy constraints to "monovariable constraints" and present numerical evidence supporting the conjecture that the supersymmetric EFT-hedron flattens in the \(k_{\rm max}\to\infty\) limit. We also introduce the novel partially-resummed parameterization of the low-energy amplitude. We conclude with a discussion and future outlook in Section 8. The Appendix contains technical discussions of the numerical implementation.
**Note added:** while preparing this paper, we became aware of partially overlapping results in [26] in which the authors find even stronger bounds on the \(a_{1,0}\), \(a_{3,0}\), and \(a_{4,1}\) Wilson coefficients when the string monodromy relation is imposed.
## 2 Amplitudes in \(\mathcal{N}=4\) SYM + h.d.
In this section, we derive an ansatz for the low-energy expansion of the \(2\to 2\) scattering amplitudes in \(\mathcal{N}=4\) supersymmetric Yang-Mills EFT with gauge group \(SU(N)\) in the strict large-\(N\) limit. We provide some examples of different UV completions that give non-zero Wilson coefficients in the low-energy theory.
### \(\mathcal{N}=4\) Superamplitude
The massless \(\mathcal{N}=4\) vector supermultiplet consists of 16 states: two gluon helicity states \(g^{\pm}\), four pairs of positive and negative helicity gluinos \(\lambda^{A}\) and \(\lambda^{ABC}\), and three pairs of complex scalars \(z^{AB}\). The on-shell states transform in fully antisymmetric irreducible representations of the \(SU(4)_{R}\) R-symmetry group; \(A,B,C=1,2,3,4\) are R-indices.
The scattering amplitudes of an \(\mathcal{N}=4\) SYM EFT can be encoded into on-shell superamplitudes. At 4-point, we write
\[\mathcal{A}_{4}=\delta^{8}(\tilde{Q})\frac{[12]^{2}}{\langle 34\rangle^{2}}\,f(s, u)\quad\text{ with }\quad\delta^{8}(\tilde{Q})=\frac{1}{2^{4}}\prod_{A=1}^{4}\sum_{i,j=1}^{4} \langle ij\rangle\eta_{iA}\eta_{jA}\,. \tag{1}\]
The ordering of the external states is understood to be 1234 unless otherwise specified. The on-shell superspace formalism with the Grassmann variables \(\eta_{iA}\) can be found in Chapter 4 of [27, 28]. To project out component amplitudes from the superamplitude, one takes derivatives with respect to the Grassmann variables \(\eta_{iA}\) to match the R-indices of the \(i\)th state. A positive helicity gluon corresponds to the \(SU(4)_{R}\) singlet with no indices, whereas the negative helicity gluon corresponds to the singlet with all four R-indices, i.e. \(g^{-}=g^{1234}\). Thus, projecting out the 4-gluon amplitude from (1) gives
\[A[++--]=[12]^{2}\langle 34\rangle^{2}f(s,u)\,, \tag{2}\]
where \(\pm\) is shorthand for the gluon helicity states. In pure (S)YM theory, the tree-level Parke-Taylor gluon amplitude is
\[A^{\text{YM}}[++--]=\frac{\langle 34\rangle^{4}}{\langle 12\rangle\langle 23 \rangle\langle 34\rangle\langle 41\rangle}=-\frac{[12]^{2}\langle 34\rangle^{2} }{su}\,, \tag{3}\]
so \(f(s,u)=-1/(su)\) in pure (S)YM.
Consider a pair of conjugate scalars \(z=z^{12}\) and \(\bar{z}=z^{34}\) of the massless \(\mathcal{N}=4\) supermultiplet. Projecting out three different 4-scalar amplitudes from the superamplitude (1), we find
\[A[zz\bar{z}\bar{z}]=s^{2}f(s,u)\,,\quad A[z\bar{z}z\bar{z}]=t^{2}f(s,u)=A[\bar{ z}z\bar{z}z]\,. \tag{4}\]
Cyclicity requires \(A_{4}[2341]=A_{4}[1234]\), so, together with the supersymmetry requirement \(A_{4}[z\bar{z}z\bar{z}]=A_{4}[\bar{z}z\bar{z}z]\) from (4), we see that \(f\) must be symmetric in \(s\) and \(u\):
\[f(u,s)=f(s,u)\,. \tag{5}\]
We call this equality "crossing symmetry". It is clearly satisfied by the Parke-Taylor amplitude, but it must hold for the full amplitude as well.
### Low-Energy Ansatz
On-shell, local, higher-derivative operators are in 1-1 correspondence with polynomial terms in \(f(s,u)\), subject to momentum conservation \(s+t+u=0\). Hence, in the low-energy expansion, the most general form5 of the 4-point amplitude is
Footnote 5: We exclude pole terms \((u/s+s/u)\) or \((1/s+1/u)\) in \(f(s,u)\) because by (2.4) they would imply that \(A[zz\bar{z}\bar{z}]\) has residues of order \(s^{2}\) and \(s^{3}\) in the \(u\)-channel corresponding to exchanges of massless spin 2 and 3 states. Alternatively, one can argue the absence of these pole terms by the fact that there exist no \(\mathcal{N}=4\) SUSY compatible 3-point interactions made from the \(\mathcal{N}=4\) SYM fields.
\[f(s,u)=-\frac{1}{su}+\sum_{0\leq q\leq k}a_{k,q}s^{k-q}u^{q}\,. \tag{2.6}\]
This assumes a weak-coupling limit in which we exclude contributions from loops of massless particles which would generate logarithms in the low-energy ansatz (2.6) and running of the EFT couplings.
Of particular interest to us is the 4-scalar amplitude \(A[zz\bar{z}\bar{z}]\). By (2.4), the most general ansatz for this component is
\[\boxed{A[zz\bar{z}\bar{z}]=-\frac{s}{u}+s^{2}\sum_{0\leq q\leq k}a_{k,q}\,s^{k -q}\,u^{q}\,.} \tag{2.7}\]
Since not all higher-derivative operators are compatible with \(\mathcal{N}=4\) supersymmetry, the coefficients \(a_{k,q}\) are restricted. Specifically, the crossing relation (2.5) requires us to impose
\[\boxed{\text{Crossing / SUSY: \ \ \ }a_{k,k-q}=a_{k,q}\ \ \ \text{for all \ }0\leq q\leq k\,.} \tag{2.8}\]
The \(a_{k,q}\) are Wilson coefficients for (linear combinations of) the on-shell local operators compatible with supersymmetry. The factor of \(s^{2}\) multiplying the sum in (2.7) means that no interaction with less than four derivatives contributes to this amplitude, i.e. there are no \(\mathcal{N}=4\) compatible interactions of the form \(\text{tr}(z^{2}\bar{z}^{2})\) and \(\text{tr}(D^{2}z^{2}\bar{z}^{2})\).6 This is simply the statement that \(\text{tr}(F^{4})\) is the lowest-dimensional \(\mathcal{N}=4\) supersymmetric higher-derivative operator available in the vector sector. Indeed, \(a_{0,0}\) is the coefficient of \(\text{tr}(F^{4})\), \(a_{1,0}=a_{1,1}\) is the coefficient of the unique \(\mathcal{N}=4\) SUSY compatible operator \(\text{tr}(D^{2}F^{4})\), etc.
Footnote 6: \(\mathcal{N}=4\) SYM does, of course, have local 4-scalar interactions, but these have a different R-symmetry index structure, for example \(z^{12}z^{23}z^{34}z^{4}\); i.e. they involve two different pairs of conjugate scalars, not just one.
### Examples
Here, we present relevant examples of amplitudes compatible with the SUSY crossing constraint (2.5).
#### 2.3.1 Veneziano Amplitude
The Veneziano amplitude for tree-level scattering of massless open superstrings is unitary [29; 30] and compatible with \(\mathcal{N}=4\) supersymmetry upon restriction to 4d. Projecting to two pairs of massless external scalars, the Veneziano amplitude is
\[A^{\text{str}}[zz\bar{z}\bar{z}]=-(\alpha^{\prime}s)^{2}\frac{\Gamma(-\alpha^{ \prime}s)\Gamma(-\alpha^{\prime}u)}{\Gamma(1-\alpha^{\prime}(s+u))}. \tag{9}\]
Expanding in small \(\alpha^{\prime}s\) and \(\alpha^{\prime}u\) we find
\[A^{\text{str}}[zz\bar{z}\bar{z}]=-\frac{s}{u}+s^{2}\bigg{(}\zeta_{2}\alpha^{ \prime 2}+\zeta_{3}\alpha^{\prime 3}(s+u)+\zeta_{4}\alpha^{\prime 4}\big{(}s^{2}+u^{2 }\big{)}+\frac{1}{4}\zeta_{4}\alpha^{\prime 4}su+\dots\bigg{)}\, \tag{10}\]
where \(\zeta_{n}=\zeta(n)\) denotes the Riemann zeta value. We can read off
\[a^{\text{str}}_{0,0}=\zeta_{2}\,\alpha^{\prime 2}\,,\quad a^{\text{str}}_{1,0 }=a^{\text{str}}_{1,1}=\zeta_{3}\,\alpha^{\prime 3}\,,\quad a^{\text{str}}_{2,0 }=a^{\text{str}}_{2,2}=\zeta_{4}\,\alpha^{\prime 4}\,,\quad a^{\text{str}}_{2,1}= \tfrac{1}{4}\zeta_{4}\,\alpha^{\prime 4}\,,\ \ \text{etc.} \tag{11}\]
from the comparison to the general ansatz (7).
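As a quick cross-check of (2.11) (an illustrative sketch added here, not part of the original analysis), the expansion (2.10) can be generated symbolically from the identity \(\log\Gamma(1-x)=\gamma x+\sum_{n\geq 2}\zeta_{n}x^{n}/n\), which gives \(\Gamma(-s)\Gamma(-u)/\Gamma(1-s-u)=\exp(P)/(su)\) with \(P=\sum_{n\geq 2}\zeta_{n}\,(s^{n}+u^{n}-(s+u)^{n})/n\), setting \(\alpha^{\prime}=1\):

```python
import sympy as sp

s, u = sp.symbols('s u')
K = 3  # extract a_{k,q} for k <= K (alpha' = 1)

# Gamma(-s)Gamma(-u)/Gamma(1-s-u) = exp(P)/(s*u), with
# P = sum_{n>=2} zeta(n) * (s^n + u^n - (s+u)^n) / n
P = sum(sp.zeta(n)*(s**n + u**n - (s + u)**n)/n for n in range(2, K + 3))

# P starts at quadratic order, so truncating exp(P) at m = (K+2)//2
# keeps every term of total degree <= K + 2
E = sum(P**m/sp.factorial(m) for m in range((K + 2)//2 + 1))

# A[z z zbar zbar] = -(s/u) exp(P) = -s/u + s^2 * sum_{k,q} a_{k,q} s^{k-q} u^q,
# so the Wilson-coefficient generating polynomial is -(exp(P) - 1)/(s*u)
F = sp.expand(-(E - 1)/(s*u))

for k in range(K + 1):
    for q in range(k + 1):
        akq = F.coeff(s, k - q).coeff(u, q)
        print(f"a_{k},{q} = {akq} ~ {float(akq):.6f}")
```

Running this reproduces \(a_{0,0}=\zeta_{2}\), \(a_{1,0}=a_{1,1}=\zeta_{3}\), \(a_{2,0}=a_{2,2}=\zeta_{4}\), and \(a_{2,1}=\zeta_{4}/4\approx 0.2706\), in agreement with (2.11).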
#### 2.3.2 1-loop Contribution from the Coulomb Branch
Consider one-loop contributions from BPS states on the Coulomb branch as an example of a UV completion [31; 32; 33].7 We start with \(\mathcal{N}=4\) SYM with a \(SU(N^{\prime})\) gauge group and go onto the Coulomb branch such that the gauge symmetry is broken to \(SU(N)\times SU(N^{\prime\prime})\) with \(N^{\prime}=N+N^{\prime\prime}\). We restrict the external states to be massless states transforming in the adjoint of the \(SU(N)\) sector. The massive states that transform in the (anti-)fundamental of \(SU(N)\) and (anti-)fundamental of \(SU(N^{\prime\prime})\) couple quadratically to the massless external states and therefore start contributing only at 1-loop order.
Footnote 7: We thank Enrico Hermann for suggesting this example.
The loop contributions of the massive states of \(\mathcal{N}=4\) SYM on the Coulomb branch do not include bubble or triangle integrals (see for example [34] and [35]), so the only contribution is from box-diagrams. The explicit contribution of a single massive BPS state with mass \(m\) running in the loop is
\[A^{\text{1-loop}}[zz\bar{z}\bar{z}]=\frac{6s^{2}}{\pi^{2}}\int\frac{d^{4}\ell} {[s_{\ell}-m^{2}][s_{\ell,1}-m^{2}][s_{\ell,12}-m^{2}][s_{\ell,123}-m^{2}]}\,. \tag{12}\]
This box-diagram was shown in [36] to be given by an Appell's hypergeometric function of two variables, \(F_{3}\):
\[\begin{split} A^{\text{1-loop}}[zz\bar{z}\bar{z}]& =\frac{s^{2}}{m^{4}}F_{3}\Big{(}1,1,1,1;\frac{5}{2}\Big{|}\frac{s} {4m^{2}},\frac{u}{4m^{2}}\Big{)},\\ &=\frac{s^{2}\Gamma(5/2)}{m^{4}}\sum_{j,l=0}^{\infty}\frac{\Gamma (1+l)\Gamma(1+j)}{\Gamma(5/2+j+l)}\left(\frac{s}{4m^{2}}\right)^{j}\left( \frac{u}{4m^{2}}\right)^{l}\,.\end{split} \tag{13}\]
The Wilson coefficients are
\[a_{0,0} =\frac{1}{m^{4}}\,,\quad a_{1,0}=a_{1,1}=\frac{1}{10}\frac{1}{m^{6}} \,,\quad a_{2,0}=a_{2,2}=\frac{1}{70}\frac{1}{m^{8}}\,,\quad a_{2,1}=\frac{1}{14 0}\frac{1}{m^{8}}\,,\] \[a_{3,0} =a_{3,3}=\frac{1}{420}\frac{1}{m^{10}}\,,\quad a_{3,1}=a_{3,2}= \frac{1}{1260}\frac{1}{m^{10}}\,,\quad\text{etc.} \tag{14}\]
Note that we have dropped overall factors in the box-diagram and tuned the normalization of the amplitude to make \(a_{0,0}=1/m^{4}\). The dispersive representation bounds ratios of Wilson coefficients, so the overall scaling does not matter.
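As a sanity check on (2.14) (added here for convenience, not part of the original text), reading off the coefficient of \((s/4m^{2})^{j}(u/4m^{2})^{l}\) in (2.13) with \(k=j+l\) and \(q=l\) gives the closed form \(a_{k,q}=\frac{\Gamma(5/2)\,\Gamma(1+q)\,\Gamma(1+k-q)}{4^{k}\,\Gamma(5/2+k)\,m^{2k+4}}\), which reproduces the rational values listed above:

```python
from math import gamma
from fractions import Fraction

def a(k, q):
    # coefficient of s^{k-q} u^q in (2.13), in units m = 1 (so a_{0,0} = 1)
    return gamma(2.5)*gamma(1 + q)*gamma(1 + k - q)/(4**k*gamma(2.5 + k))

for k, q in [(0, 0), (1, 0), (2, 0), (2, 1), (3, 0), (3, 1)]:
    print(f"a_{k},{q} =", Fraction(a(k, q)).limit_denominator(10**6))
# prints 1, 1/10, 1/70, 1/140, 1/420, 1/1260, matching eq. (2.14)
```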
#### 2.3.3 Infinite Spin Tower
Another amplitude that satisfies the crossing constraint (8) is
\[A^{\text{IST}}[zz\bar{z}\bar{z}]=-\frac{s}{u}+\frac{s^{2}}{\left(m^{2}-s\right) \left(m^{2}-u\right)}\,. \tag{15}\]
The coefficients of the low-energy expansion are
\[a_{k,q}=\frac{1}{m^{2k+4}}\quad\text{for all $k,q$}\,. \tag{16}\]
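For completeness (a small derivation added here), (2.16) follows from expanding the massive propagators in (2.15) as geometric series in \(s/m^{2}\) and \(u/m^{2}\),

\[\frac{s^{2}}{(m^{2}-s)(m^{2}-u)}=\frac{s^{2}}{m^{4}}\sum_{j,l\geq 0}\Big{(}\frac{s}{m^{2}}\Big{)}^{j}\Big{(}\frac{u}{m^{2}}\Big{)}^{l}=s^{2}\sum_{0\leq q\leq k}\frac{s^{k-q}u^{q}}{m^{2k+4}}\,,\]

so every Wilson coefficient takes the same value \(1/m^{2k+4}\).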
The \(A^{\text{IST}}\)-amplitude tends to show up as an allowed solution in S-matrix bootstrap analyses [3; 7]. However, it has an unsuppressed infinite tower of higher spin states, all with the same mass, so it is not expected to arise from a physical theory even though it is not explicitly forbidden by our assumptions.
## 3 Dispersive Representation
We study the full color-ordered \(\mathcal{N}=4\) SYM EFT scalar amplitude
\[A(s,u)=A[zz\bar{z}\bar{z}]\,, \tag{17}\]
with supersymmetry constraints and a low-energy expansion as discussed in the previous section. In this section, we exploit the expected analytic structure of the amplitude to derive positivity bounds for the Wilson coefficients \(a_{k,q}\) of the low-energy expansion. We summarize the technical assumptions in Section 3.1, then derive the dispersive representation of the Wilson coefficients \(a_{k,q}\) in Section 3.2. The final result is given in equation (12), and the most basic consequences are discussed in Section 3.3. In Section 3.4 we derive additional "null constraints" on the Wilson coefficients.
### Assumptions
We make the following set of assumptions:
1. The gauge group has large rank, so we can work in the large-\(N\) limit. This ensures that the color-ordered amplitude (17) has no \(t\)-channel poles or discontinuities.
2. The theory admits a weak coupling description. This means that we can ignore loops of massless particles and take the low-energy expansion of the amplitude to be (7).
3. The theory has a mass gap, \(M_{\rm gap}\), such that there are no states with nonzero mass below \(M_{\rm gap}\).
4. The amplitude admits a partial wave decomposition \[A(s,u)=16\pi\sum_{\ell=0}^{\infty}(2\ell+1)\,a_{\ell}(s)\,P_{\ell}\big{(}\cos( \theta)\big{)}\,,\] (10) where \(\cos(\theta)=1+2u/s\) and the Legendre polynomials \(P_{\ell}\) are labeled by the spin \(\ell\). Crucially, unitarity requires \(\text{Im}\big{(}a_{\ell}(s)\big{)}\geq 0\).8 Footnote 8: \(\text{Im}(a_{\ell}(s))\) is also bounded from above, but we do not impose the upper bound in our analysis.
5. For fixed \(u<0\) and sufficiently large \(|s|\), the amplitude is analytic in \(s\) away from the real axis in the complex \(s\)-plane.
6. The amplitude obeys a Froissart-Martin-like bound: \[\text{fixed }u<0\text{: }\lim_{s\to\infty}\frac{A(s,u)}{s^{2}}=0\,,\] (11) fixed \[t<0\text{: }\lim_{s\to\infty}\frac{A(s,-s-t)}{s^{2}}=0\.\]
A rigorous derivation of Property 5 for general theories is not currently known, but it does hold at all orders in perturbation theory [37; 2; 3]. Property 6 can be shown to hold with assumptions about the UV behavior of the theory [38; 39]: it was argued in [2] that if the amplitude is analytic and polynomially bounded as \(A(s,u)<s^{N}\) for any \(N\) at large \(s\), then (11) follows from unitarity.
### Dispersive Representation of Wilson Coefficients
Each of the Wilson coefficients in the low-energy expansion (7) of \(A(s,u)\) can be extracted by the contour integral
\[a_{k,q}=\frac{1}{q!}\frac{\partial^{q}}{\partial u^{q}}\int_{\mathcal{C}^{ \star}}\frac{ds^{\prime}}{2\pi i}\frac{A(s^{\prime},u)}{s^{\prime k-q+3}} \bigg{|}_{u=0}\, \tag{12}\]
where the contour \(\mathcal{C}^{\star}\) is a small circle surrounding \(s=0\) in the complex \(s\)-plane. The "\(+3\)" in the power of \(s^{\prime}\) in the denominator accounts for the factor of \(s^{2}\) in the low-energy ansatz (7). Together with the assumption (11), the "\(+3\)" ensures that the contour deformation described in Figure 3 has vanishing contribution from the contour at infinity for any \(0\leq q\leq k\). Therefore, we get
\[a_{k,q}=\frac{1}{q!}\frac{\partial^{q}}{\partial u^{q}}\left(\frac{1}{\pi} \int\frac{ds^{\prime}}{s^{\prime k-q+3}}\,\text{Im}A(s^{\prime},u)\right) \bigg{|}_{u=0} \tag{13}\]
for all \(0\leq q\leq k\). Here we used that the discontinuity of the amplitude is proportional to its imaginary part,9\(2i\text{Im}[A(s,u)]=A(s+i\epsilon,u)-A(s-i\epsilon,u)\). There are no \(t\)-channel contributions because we work in the planar limit and no \(u\)-channel contributions because we work at fixed \(u<0\).
Footnote 9: For simplicity of the presentation, we have absorbed single-particle contributions into the definition of the discontinuity. The single particle contributions can be treated separately, as done in Refs. [2; 3], but they are eventually absorbed into the spectral function and make no practical difference for the final form of the dispersive representation.
Next, we use the partial wave decomposition,
\[\text{Im}(A)=16\pi\sum_{\ell=0}(2\ell+1)\,\text{Im}(a_{\ell}(s^{\prime}))\,P_{ \ell}\bigg{(}1+\frac{2u}{s^{\prime}}\bigg{)}. \tag{3.6}\]
The Legendre polynomials can be written
\[P_{\ell}\big{(}1+2\delta\big{)}=\sum_{q=0}^{\ell}v_{\ell,q}\delta^{q}\quad \text{with}\ \ v_{\ell,q}=\frac{\prod_{a=1}^{q}\big{[}\ell(\ell+1)-a(a-1)\big{]}}{(q!)^{2}}\,, \tag{3.7}\]
where \(v_{\ell,q}\geq 0\) for \(\ell\geq q\) and we define \(v_{\ell,q}=0\) for \(q>\ell\). Since the only dependence on \(u\) enters (3.5) via the Legendre polynomials, taking \(q\)\(u\)-derivatives and then setting \(u=0\) picks out the coefficient \(v_{\ell,q}\). Hence, after a change of integration variable, \(s^{\prime}=M^{2}\), (3.5) becomes
\[a_{k,q}=\sum_{\ell=0}\int_{M^{2}_{\text{gap}}}^{\infty}dM^{2}\,\rho_{\ell}(M^{ 2})\left(\frac{1}{M^{2}}\right)^{k+3}v_{\ell,q}\, \tag{3.8}\]
where
\[\rho_{\ell}(M^{2})=16(2\ell+1)\,\text{Im}\big{(}a_{\ell}(M^{2})\big{)}. \tag{3.9}\]
Unitarity requires \(\rho_{\ell}(M^{2})\geq 0\) and this places non-trivial restrictions on the \(a_{k,q}\).
Figure 3: The contour deformation that converts (3.4) to (3.5). The contribution from the arc at infinity vanishes due to Property 6. The contour around the branch cut can be identified with the discontinuity of the \(s\)-channel branch-cut. We only include a single simple pole explicitly in the figure, but there can be an infinite number of massive simple poles on the real positive \(s\)-axis.
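The product formula (3.7) for the \(v_{\ell,q}\) can be checked directly against the Taylor expansion of the Legendre polynomials; the following short SymPy sketch (added here as an illustration, not part of the original text) confirms it for \(\ell\leq 6\):

```python
import sympy as sp

d = sp.symbols('delta')

def v(ell, q):
    # v_{ell,q} of eq. (3.7): prod_{a=1}^{q} [ell(ell+1) - a(a-1)] / (q!)^2
    if q > ell:
        return sp.Integer(0)
    num = sp.Integer(1)
    for a in range(1, q + 1):
        num *= ell*(ell + 1) - a*(a - 1)
    return num/sp.factorial(q)**2

for ell in range(7):
    P = sp.expand(sp.legendre(ell, 1 + 2*d))
    for q in range(ell + 1):
        assert P.coeff(d, q) == v(ell, q), (ell, q)
print("v_{l,q} matches the delta^q coefficients of P_l(1 + 2*delta) for l <= 6")
```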
It is useful to rewrite (3.9) in terms of dimensionless quantities.10 To make the Wilson coefficients dimensionless, we multiply (3.8) by \((M_{\rm gap}^{2})^{(k+2)}\) and redefine the \(a_{k,q}\) as
Footnote 10: Equivalently, we could set \(M_{\rm gap}=1\).
\[(M_{\rm gap}^{2})^{(k+2)}a_{k,q}\to a_{k,q}. \tag{3.10}\]
We then define
\[x=\frac{M_{\rm gap}^{2}}{M^{2}}\quad\text{and}\quad p_{\ell}(x)=x\,\rho_{\ell} \big{(}M_{\rm gap}^{2}/x\big{)}\geq 0 \tag{3.11}\]
in terms of which (3.8) becomes
\[\boxed{\quad a_{k,q}=\sum_{\ell=0}\int_{0}^{1}dx\,p_{\ell}(x)\,x^{k}\,v_{\ell,q },\quad\quad p_{\ell}(x)\geq 0\.} \tag{3.12}\]
This is the dispersive representation of the Wilson coefficients that we use to derive bounds in the following sections. Physically, equation (3.12) relates the individual low-energy Wilson coefficient to the integral over the high-energy spectrum.
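As a simple illustration of (3.12) (added here, not in the original text): a single narrow state of spin \(\ell_{0}\) and mass \(M\geq M_{\rm gap}\) corresponds to \(p_{\ell}(x)=g\,\delta_{\ell\ell_{0}}\,\delta(x-x_{0})\) with coupling \(g\geq 0\) and \(x_{0}=M_{\rm gap}^{2}/M^{2}\in(0,1]\), and it contributes

\[a_{k,q}=g\,x_{0}^{k}\,v_{\ell_{0},q}\,;\]

a general spectrum is a positive superposition of such contributions, and all of the bounds derived below follow from this structure.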
### Basic Consequences
It is immediately clear from (3.11) and (3.12) that all Wilson coefficients have to be non-negative,
\[a_{k,q}\geq 0\,. \tag{3.13}\]
Further, since \(0\leq x\leq 1\) in (3.12), we must have
\[a_{k^{\prime},q}\leq a_{k,q}\quad\text{for}\quad k\leq k^{\prime}. \tag{3.14}\]
We can now use the crossing conditions, \(a_{k,k-q}=a_{k,q}\), along with (3.14) to see that
\[\begin{split} a_{0,0}\geq a_{1,0}\geq a_{2,0}&\geq a _{3,0}\ldots\\ a_{1,1}\geq a_{2,1}&\geq a_{3,1}\ldots\\ a_{3,2}&\ldots\\ \vdots\end{split} \tag{3.15}\]
Thus, \(a_{0,0}\) is the largest Wilson coefficient, so if \(a_{0,0}=0\), all other \(a_{k,q}\)'s must vanish. In other words, unless the supersymmetrization of the operator \({\rm tr}F^{4}\) is included, there can be no other higher-derivative operators.
Given a set of Wilson coefficients \(a_{k,q}\) with a valid dispersive representation (3.12), a new
set of Wilson coefficients defined by
\[\forall k,q:\ a^{\prime}_{k,q}=\lambda a_{k,q},\quad\lambda>0 \tag{3.16}\]
also trivially admits a valid dispersive representation. Therefore, the bounds only apply to ratios of Wilson coefficients. Since \(a_{0,0}\) is the largest Wilson coefficient, it is natural to study the bounds on the ratios \(a_{k,q}/a_{0,0}\). Each such ratio must obey
\[0\leq\frac{a_{k,q}}{a_{0,0}}\leq 1\,. \tag{3.17}\]
The more detailed shape of the higher-dimensional bounded space of allowed Wilson coefficients is studied using numerical methods in the following sections. For optimal bounds, we need to incorporate additional constraints, as discussed next.
### Null Constraints
When the dispersive representation (3.12) is plugged into the SUSY crossing condition \(a_{k,q}-a_{k,k-q}=0\), we find the following "null constraint" on \(p_{\ell}(x)\):
\[\forall\ k,q\colon\quad\sum_{\ell=0}\int_{0}^{1}dx\,p_{\ell}(x)\,\mathcal{X}_ {k,q}^{\ell,x}=0\quad\text{with}\quad\mathcal{X}_{k,q}^{\ell,x}=x^{k}\big{[}v _{\ell,q}-v_{\ell,k-q}\big{]}\,. \tag{3.18}\]
An additional null constraint arises from a version of the dispersive argument implemented for fixed \(t\) rather than fixed \(u\). It takes the form
\[\begin{split}\forall\ k,q\colon&\quad\sum_{\ell} \int_{0}^{1}dx\,p_{\ell}(x)\,\mathcal{Y}_{k,q}^{\ell,x}=0\\ \text{with}&\quad\mathcal{Y}_{k,q}^{\ell,x}=x^{k} \left[v_{\ell,q}-(-1)^{\ell}\sum_{q^{\prime}=0}^{k}(-1)^{q^{\prime}}v_{\ell,q ^{\prime}}\left(\binom{q^{\prime}}{k-q}+\binom{q^{\prime}}{q}\right)\right]. \end{split} \tag{3.19}\]
We derive this relation below. It can be thought of as the supersymmetric version of the crossing symmetry sum rule found in [7] for the four-pion amplitude.11 Note that the \(\mathcal{X}_{k,q}^{\ell,x}\) and \(\mathcal{Y}_{k,q}^{\ell,x}\) null constraints are not all linearly independent. For example, at a given \(k\), only the null constraints from \(\mathcal{Y}_{k,q}^{\ell,x}\) with \(q\leq\lfloor k/2\rfloor\) are linearly independent when the \(\mathcal{X}_{k,q}^{\ell,x}\) null constraints are imposed for all \(q\leq k\). Physically, we can interpret (3.18) and (3.19) as non-trivial constraints from maximal supersymmetry on the spectrum of intermediate states.
Footnote 11: The sum rules in Eq. (3.19) are particular linear combinations of those given in Ref. [7]. For example, \(\mathcal{Y}_{2,1,l}^{\ell,x\ \text{ours}}=2\mathcal{Y}_{2,0,l}^{\ell,x\ \text{helix}}- \mathcal{Y}_{2,1,l}^{\ell,x\ \text{helix}}\).
**Derivation of the Null Constraint (3.19).**
The core idea necessary to derive the null constraint (3.19) is that there is a fundamentally
new representation of the \(a_{k,q}\) when working at fixed \(t\) instead of fixed \(u\). To start, we define
\[b_{k,q}=\frac{1}{q!}\frac{\partial^{q}}{\partial t^{q}}\int_{\mathcal{C}^{*}} \frac{ds^{\prime}}{2\pi i}\frac{A(s^{\prime},-s^{\prime}-t)}{s^{\prime k-q+3}} \bigg{|}_{t=0}. \tag{3.20}\]
The low-energy expansion of the amplitude identifies the \(b_{k,q}\) as the Wilson coefficients in the representation,
\[A(s,-s-t)=\frac{s}{s+t}+s^{2}\sum_{0\leq q\leq k}b_{k,q}s^{k-q}t^{q}\,, \tag{3.21}\]
and hence the \(b_{k,q}\) are related to the \(a_{k,q}\) of (2.7) as
\[a_{k,q}=\sum_{q^{\prime\prime}=q}^{k}(-1)^{q^{\prime\prime}}\binom{q^{\prime \prime}}{q}\,b_{k,q^{\prime\prime}}. \tag{3.22}\]
Performing the same contour deformation as before, we find a contribution from both the \(u\)- and \(s\)-channel branch-cuts:
\[\begin{split}\oint_{\mathcal{C}^{*}}\frac{ds^{\prime}}{2\pi i} \frac{A(s^{\prime},-s^{\prime}-t)}{s^{\prime k-q+3}}&=\frac{1} {\pi}\int_{M_{\rm gap}^{2}}^{\infty}ds^{\prime}\,\frac{\operatorname{Im}A(s^ {\prime},-s^{\prime}-t)}{s^{\prime k-q+3}}\\ &-\frac{1}{\pi}\int_{-\infty}^{-M_{u}^{2}-t}ds^{\prime}\,\frac{ \operatorname{Im}A(s^{\prime},-s^{\prime}-t)}{s^{\prime k-q+3}}\,,\end{split} \tag{3.23}\]
where \(M_{\rm gap}^{2}\) is the start of the cut / lowest mass in the \(s\)-channel and \(M_{u}^{2}\) is the start of the \(u\)-channel cut / lowest mass in the \(u\)-channel. We make no assumptions regarding \(M_{u}^{2}\) at this stage in the calculation. For the \(u\)-channel cut, we use that
\[A(s^{\prime},-s^{\prime}-t)=s^{\prime 2}f(s^{\prime},-s^{\prime}-t)=s^{\prime 2 }f(-s^{\prime}-t,s^{\prime})=\frac{s^{\prime 2}}{(s^{\prime}+t)^{2}}A(-s^{ \prime}-t,s^{\prime}) \tag{3.24}\]
where \(\mathcal{N}=4\) supersymmetry requires the crossing symmetry (2.5) for \(f\). A change of variables \(s^{\prime}\to-s^{\prime}-t\) gives
\[\begin{split}\oint_{\mathcal{C}^{*}}\frac{ds^{\prime}}{2\pi i} \frac{A(s^{\prime},-s^{\prime}-t)}{s^{\prime k-q+3}}&=\frac{1}{ \pi}\int_{M_{\rm gap}^{2}}^{\infty}ds^{\prime}\,\frac{\operatorname{Im}A(s^{ \prime},-s^{\prime}-t)}{s^{\prime k-q+3}}\\ &-\frac{1}{\pi}\int_{M_{u}^{2}}^{\infty}ds^{\prime}\,\frac{1}{s^{ \prime 2}}\,\frac{\operatorname{Im}A(s^{\prime},-s^{\prime}-t)}{(-s^{\prime}-t)^{k- q+1}}\,.\end{split} \tag{3.25}\]
Now the integrand in the second line is over positive \(s^{\prime}\) and we know that the discontinuity in the \(s\)-channel cannot begin below \(M_{\rm gap}^{2}\), so we can replace \(M_{u}^{2}\) with \(M_{\rm gap}^{2}\).
Next, we use the partial wave expansion for \(\operatorname{Im}A(s^{\prime},-s^{\prime}-t)\) at fixed \(t\),
\[A(s,-s-t)=16\pi\sum_{\ell=0}^{\infty}(-1)^{\ell}(2\ell+1)\,a_{\ell}(s)\,P_{ \ell}\Big{(}1+\frac{2t}{s}\Big{)}\,, \tag{3.26}\]
where we have used that \(P_{\ell}(-x)=(-1)^{\ell}P_{\ell}(x)\). The dispersive representation for \(b_{k,q}\) then
becomes
\[\begin{split} b_{k,q}=\frac{1}{q!}\frac{\partial^{q}}{\partial t^{q}} \bigg{(}&\sum_{\ell=0}^{\infty}\int_{M_{\rm gap}^{2}}^{\infty}dM^{2} \frac{(-1)^{\ell}\rho_{\ell}(M^{2})}{M^{2(k-q+3)}}P_{\ell}\left(1+\frac{2t}{M^ {2}}\right)\\ &-\sum_{\ell=0}^{\infty}\int_{M_{\rm gap}^{2}}^{\infty}dM^{2} \frac{(-1)^{\ell}\rho_{\ell}(M^{2})}{M^{4}(-M^{2}-t)^{k-q+1}}P_{\ell}\left(1+ \frac{2t}{M^{2}}\right)\bigg{)}\bigg{|}_{t=0}\.\end{split} \tag{3.27}\]
We make the \(b_{k,q}\) dimensionless by rescaling them with powers of \(M_{\rm gap}^{2}\) as in (3.10), and we change integration variable from \(M^{2}\) to \(x\) as in (3.11). The result is independent of \(M_{\rm gap}\) and can be written
\[\begin{split} b_{k,q}&=\frac{1}{q!}\frac{\partial^{ q}}{\partial t^{q}}\bigg{[}\sum_{\ell=0}^{\infty}\int_{0}^{1}dx\ (-1)^{\ell}\,p_{\ell}(x)\,x^{k-q}M_{\rm gap}^{2q}\Bigg{(}1-\frac{(-1)^{k-q+1}} {\left(1+\frac{xt}{M_{\rm gap}^{2}}\right)^{k-q+1}}\Bigg{)}P_{\ell}\Big{(}1+ \frac{2xt}{M_{\rm gap}^{2}}\Big{)}\bigg{]}\bigg{|}_{t=0},\\ &=\sum_{\ell=0}^{\infty}\int_{0}^{1}dx\ p_{\ell}(x)(-1)^{\ell}x^ {k}\left[v_{\ell,q}+(-1)^{k}\sum_{q^{\prime}=0}^{q}(-1)^{-q^{\prime}}\binom{k- q^{\prime}}{q-q^{\prime}}v_{\ell,q^{\prime}}\right]\,.\end{split} \tag{3.28}\]
Finally, we plug the dispersive representation (3.12) for \(a_{k,q}\) and (3.28) for \(b_{k,q}\) into (3.22). Using the binomial product identity
\[\sum_{q^{\prime\prime}=q}^{k}(-1)^{q^{\prime\prime}}\binom{q^{\prime\prime}}{ q}\binom{k-q^{\prime}}{q^{\prime\prime}-q^{\prime}}=(-1)^{k}\binom{q^{\prime}}{k- q}\, \tag{3.29}\]
we arrive at the \(\mathcal{Y}_{k,q}^{\ell,x}\) null constraints (3.19).
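The binomial identity (3.29) used in the final step can be verified symbolically; a minimal SymPy check (added here for convenience):

```python
import sympy as sp

def lhs(k, q, qp):
    # left-hand side of eq. (3.29)
    return sum((-1)**qpp*sp.binomial(qpp, q)*sp.binomial(k - qp, qpp - qp)
               for qpp in range(q, k + 1))

for k in range(8):
    for q in range(k + 1):
        for qp in range(k + 1):
            assert lhs(k, q, qp) == (-1)**k*sp.binomial(qp, k - q), (k, q, qp)
print("identity (3.29) holds for all 0 <= q, q' <= k <= 7")
```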
## 4 Bounds as an Optimization Problem
The dispersive representation (3.12), along with the null constraints, bounds the region of allowed Wilson coefficients. The space is projective since we place bounds only on ratios of Wilson coefficients. Moreover, the allowed region is convex since any positive sum of allowed coefficients must again be allowed. We refer to the convex space of allowed coefficients as the "supersymmetric EFT-hedron", even though the way we determine the bounds is different from the moment map approaches in [2] and [4].
Since the space of Wilson coefficients has a large dimension, we typically study projections of the supersymmetric EFT-hedron into a plane in order to visualize the bounds. Determining optimal bounds of such projections can be formulated as an optimization problem suitable for linear and semi-definite programming as we show in Section 4.1. We use the semi-definite program SDPB [17; 18] and the IBM program CPLEX [19] to numerically compute near-optimal bounds.
### Formulation as an Optimization Problem
For a projection of the supersymmetric EFT-hedron to the \((a_{k,q}/a_{0,0},a_{k^{\prime},q^{\prime}}/a_{0,0})\)-plane, we determine the allowed range of \(a_{k,q}/a_{0,0}\) for a given fixed value of \(a_{k^{\prime},q^{\prime}}/a_{0,0}=R\) subject to the null constraints (3.18) and (3.19). This is implemented by writing the dispersive representation and null constraints in a vector equation
\[\vec{V}=\sum_{\ell=0}^{\ell_{\rm max}}\int_{0}^{1}dx\,p_{\ell}(x)\,\vec{E}_{ \ell,x} \tag{4.1}\]
where
\[\vec{V}=\begin{pmatrix}a_{0,0}\\ a_{k,q}\\ a_{k^{\prime},q^{\prime}}-Ra_{0,0}\\ \sum_{\ell}\int dx\,p_{\ell}(x)\mathcal{Y}_{0,0}^{\ell,x}\\ \vdots\\ \sum_{\ell}\int_{0}^{1}dx\,p_{\ell}(x)\mathcal{X}_{1,0}^{\ell,x}\\ \vdots\end{pmatrix},\ \ \ \ \ \ \vec{E}_{\ell,x}=\begin{pmatrix}1\\ x^{k}v_{\ell,q}\\ x^{k^{\prime}}v_{\ell,q^{\prime}}-R\\ 1-2(-1)^{\ell}\\ \vdots\\ x(v_{\ell,0}-v_{\ell,1})\\ \vdots\end{pmatrix} \tag{4.2}\]
The first two rows encode the dispersive representations of \(a_{0,0}\) and \(a_{k,q}\). The third row enforces the condition \(a_{k^{\prime},q^{\prime}}=Ra_{0,0}\) as a null constraint together with all the SUSY crossing null constraints in the fourth row and down. We include the linearly independent \(\mathcal{X}_{k,q}^{\ell,x}\) and \(\mathcal{Y}_{k,q}^{\ell,x}\) null constraints for all \(0\leq q\leq k\) up to some maximum value for \(k\), \(k_{\rm max}\); this corresponds to considering constraints from local operators in the higher-derivative expansion up to and including \(2k_{\rm max}+4\) derivatives. For practical implementation, the sum over spins is truncated at some maximum value \(\ell_{\rm max}\). The bounds we derive consequently depend on the choice of \(k_{\rm max}\) and \(\ell_{\rm max}\).
Consider the relation \(\sum\int dx\,p_{\ell}(x)=a_{0,0}\); since all \(p_{\ell}(x)\) are positive, each \(p_{\ell}(x)\) is bounded from above by \(a_{0,0}\) and can only reach that value if all the other \(p_{\ell}(x)\)'s vanish. The geometric interpretation of (4.1) is then that (projectively mod \(a_{0,0}\)) the vector \(\vec{V}\) must lie inside the convex region whose vertices are determined by the \(\vec{E}_{i}\)'s. Our goal is to find the maximum allowed value of the 2nd component of \(\vec{V}\) subject to the constraint of \(a_{k^{\prime},q^{\prime}}/a_{0,0}=R\) and the null constraints.
The maximization problem can be brought to the standard form for linear optimization as follows. Introduce a vector \(\vec{\alpha}\) of the same length as \(\vec{V}\),
\[\vec{\alpha}=(A,\,-1,\,\alpha_{3},\alpha_{4},\ldots) \tag{4.3}\]
and dot it into (4.1) to get
\[\vec{\alpha}\cdot\vec{V}=\sum_{\ell}\int dx\,\,p_{\ell}(x)\,\,\vec{\alpha} \cdot\vec{E}_{\ell,x}\,. \tag{4.4}\]
Imposing the null constraints gives \(\vec{\alpha}\cdot\vec{V}=A\,a_{0,0}-a_{k,q}\). Hence, _if_ the righthand side of (4.4) is positive, we get
\[\frac{a_{k,q}}{a_{0,0}}\leq A. \tag{4.5}\]
Thus, \(A\) is the upper bound on allowed values of \(a_{k,q}/a_{0,0}\) on the support of the null constraints. One can then argue that the problem of maximizing \(a_{k,q}/a_{0,0}\) subject to the null constraints is equivalent to _minimizing_\(A\) subject to the positivity constraints
\[\vec{\alpha}\cdot\vec{E}_{\ell,x}\geq 0\ \ \text{for all $\ell=0,1,\ldots,\ell_{\max}$ and $0\leq x\leq 1$}. \tag{4.6}\]
The parameterization of \(\vec{\alpha}\) in (4.3) is such that the optimization of \(A\) under the inequalities (4.6) imposes the null constraints of (4.2).
To summarize, the linear optimization problem is: find \(\vec{\alpha}\) such that \(A=\vec{\alpha}\cdot(1,0,0,\ldots)\) is minimized subject to \(\vec{\alpha}\cdot\vec{E}_{\ell,x}\geq 0\) for all \(\ell\) up to \(\ell_{\max}\) and all \(0\leq x\leq 1\). The relevant part of the output \(\vec{\alpha}\) is the first component \(A\), because that tells us the maximally allowed value of \(a_{k,q}/a_{0,0}\) subject to the null constraints. The setup (4.1)-(4.2) can be adjusted to compute both upper and lower bounds on the Wilson coefficients \(a_{k,q}/a_{0,0}\). Additional null constraints, such as monodromy conditions and variants thereof, can also be included; see Section 6.
### Implementation in SDPB
SDPB takes as input a finite set of vertex vectors, \(\vec{E}_{a,x^{\prime}}\), labeled by the discrete index \(a\). Each element of the vector is a polynomial in a variable \(x^{\prime}\) that is assumed to take values between zero and infinity. SDPB numerically solves for the optimal solution \(\vec{\alpha}\) subject to the positivity constraints \(\vec{\alpha}\cdot\vec{E}_{a,x^{\prime}}\geq 0\) for all \(a\) and \(x^{\prime}\).
Our optimization problem is not quite of this form because our \(x\) ranges over \(0\leq x\leq 1\), so we define \(x\) in terms of \(x^{\prime}\) as
\[x\equiv\frac{1}{1+x^{\prime}}\,. \tag{4.7}\]
Furthermore, because the elements of the SDPB vertex vectors must be polynomial in \(x^{\prime}\), we rescale our vertex vectors as
\[\vec{E}_{\ell,x}\to(1+x^{\prime})^{k_{\max}}\vec{E}_{\ell,x^{\prime}}. \tag{4.8}\]
This can also be thought of as simply rescaling \(p_{\ell}(x)\).
Now our optimization problem can be directly implemented in SDPB. For example, suppose
we are maximizing \(a_{2,1}/a_{0,0}\) while fixing \(a_{2,0}/a_{0,0}=R\). The corresponding \(\vec{V}\) is given by
\[\vec{V}=\begin{pmatrix}a_{0,0}\\ a_{2,1}\\ a_{2,0}-Ra_{0,0}\\ \sum_{\ell}\int_{0}^{1}dx\,p_{\ell}(x)\mathcal{Y}_{0,0}^{\ell,x}\\ \sum_{\ell}\int_{0}^{1}dx\,p_{\ell}(x)\mathcal{Y}_{1,0}^{\ell,x}\\ \vdots\\ \sum_{\ell}\int_{0}^{1}dx\,p_{\ell}(x)\mathcal{X}_{1,0}^{\ell,x}\\ \vdots\end{pmatrix} \tag{4.9}\]
and, specifically for \(k_{\rm max}=2\), the \(\vec{E}_{\ell,x}\)-vectors become
\[\vec{E}_{\ell,x}=\begin{pmatrix}1\\ x^{2}\\ x^{2}v_{\ell,1}-R\\ 1-2(-1)^{\ell}\\ \left(1-(-1)^{\ell}(1-2\ell(\ell+1))\right)x\\ \vdots\\ x(v_{\ell,0}-v_{\ell,1})\\ \vdots\end{pmatrix}\rightarrow\begin{pmatrix}(1+x^{\prime})^{2}\\ 1\\ v_{\ell,1}-R(1+x^{\prime})^{2}\\ (1+x^{\prime})^{2}(1-2(-1)^{\ell})\\ (1+x^{\prime})\left(1-(-1)^{\ell}(1-2\ell(\ell+1))\right)\\ \vdots\\ (1+x^{\prime})(v_{\ell,0}-v_{\ell,1})\\ \vdots\end{pmatrix} \tag{4.10}\]
In Appendix A, we discuss the algorithm's sensitivity to the choice of \(\ell_{\rm max}\).
### Implementation in CPLEX
In addition to using SDPB, we also compute bounds using the linear programming solver CPLEX. Unlike semi-definite programming, for which we can input vectors \(\vec{E}_{\ell,x}\) with a continuous variable \(x\), CPLEX needs input vectors with discrete values of \(x\). Therefore, we discretize the mass-spectrum in the integral over \(x=M_{\rm gap}^{2}/M^{2}\) in (4.1) by selecting a set of \(x_{\rm max}\) values \(0<x_{1}<x_{2}<\ldots<x_{x_{\rm max}}=1\) and approximating the integral as a sum. We introduce a collective index \(i=(x_{n_{i}},\ell_{i})\) that allows us to combine the sums over the mass-spectrum and the spins \(\ell\), so that (4.1) becomes
\[\vec{V}=\sum_{\ell=0}^{\ell_{\rm max}}\int_{0}^{1}dx\,p_{\ell}(x)\,\vec{E}_{ \ell,x}\;\;\rightarrow\;\;\vec{V}=\sum_{i}p_{i}\vec{E}_{i}. \tag{4.11}\]
Because of the mass discretization, CPLEX underestimates the bounds compared to SDPB for given \(k_{\rm max}\) and \(\ell_{\rm max}\). The finer the discretization (i.e. greater values of \(x_{\rm max}\)), the closer the CPLEX bounds are to the SDPB bounds. We provide some representative examples in Section 5.2.
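To make the discretized primal problem concrete, (4.11) can be prototyped with an open-source LP solver in a few lines. The sketch below is an illustration added here (it is not the CPLEX setup used in the paper): it uses scipy's linprog, keeps only the \(\mathcal{X}_{k,q}\) null constraints up to \(k_{\rm max}=4\), and works on a coarse \((\ell,x)\) grid. It maximizes and minimizes \(a_{2,0}/a_{0,0}\) at fixed \(a_{1,0}/a_{0,0}=R\); refining the grid and adding the \(\mathcal{Y}_{k,q}\) constraints pushes the output toward the bounds \(R^{2}\leq a_{2,0}/a_{0,0}\leq R\) quoted in Section 5.

```python
import numpy as np
from scipy.optimize import linprog

def v(ell, q):
    # Legendre coefficient v_{ell,q} of eq. (3.7)
    if q > ell:
        return 0.0
    out = 1.0
    for a in range(1, q + 1):
        out *= (ell*(ell + 1) - a*(a - 1))/a**2
    return out

# grid of (spin, x = M_gap^2/M^2) points; one variable p_i >= 0 per grid point
ell_max, n_x = 20, 60
cols = [(l, x) for l in range(ell_max + 1) for x in np.linspace(1e-3, 1.0, n_x)]
col = lambda fun: np.array([fun(l, x) for (l, x) in cols])

kmax, R = 4, 0.5                                   # fix a_{1,0}/a_{0,0} = R
A_eq = [col(lambda l, x: 1.0),                     # normalization a_{0,0} = 1
        col(lambda l, x: x*v(l, 0) - R)]           # a_{1,0} - R a_{0,0} = 0
for k in range(1, kmax + 1):                       # crossing nulls X_{k,q}, q < k/2
    for q in range((k + 1)//2):
        A_eq.append(col(lambda l, x, k=k, q=q: x**k*(v(l, q) - v(l, k - q))))
b_eq = [1.0, 0.0] + [0.0]*(len(A_eq) - 2)

obj = col(lambda l, x: x**2*v(l, 0))               # objective: a_{2,0}
for sgn, name in [(-1.0, "max"), (1.0, "min")]:
    res = linprog(sgn*obj, A_eq=np.array(A_eq), b_eq=b_eq,
                  bounds=(0, None), method="highs")
    print(name, "a_{2,0}/a_{0,0} =", sgn*res.fun if res.success else res.message)
```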
## 5 Allowed Regions
In this section, we give examples of allowed regions and compare SDPB with CPLEX. We study how the bounds depend on the number of higher-derivative operators included in the analysis. Recall that \(k\) labels local \(\mathcal{N}=4\) SUSY operators of the schematic form \(\mathrm{tr}D^{2k}F^{4}\sim\mathrm{tr}(D^{2k+4}z^{2}\bar{z}^{2})\), so including operators with \(k\leq k_{\mathrm{max}}\) corresponds to including scalar field operators with up to and including \(2k_{\mathrm{max}}+4\) derivatives. The \(a_{k,q}\) are the Wilson coefficients, with \(q\) labeling the different independent \(\mathcal{N}=4\) SUSY operators at order \(k\). For each \(k_{\mathrm{max}}\), the choice of upper bound on spins, \(\ell_{\mathrm{max}}\), is made to ensure the bounds converge as a function of \(\ell_{\mathrm{max}}\) to the desired numerical precision. Examples of such benchmarking are given in Appendix A.
To compare with known amplitudes, such as the open string and other examples in Section 2.3, we perform the rescaling (3.10), \(a_{k,q}\to a_{k,q}M_{\mathrm{gap}}^{2k+4}\), to make the Wilson coefficients dimensionless in units of the mass gap.
Section 5.1 presents examples of bounds on the lowest-dimension Wilson coefficients, and in Section 5.2 we compare results of SDPB and CPLEX.
### Examples
We found in Section 3.3 that \(a_{0,0}\) is the largest Wilson coefficient, and it is therefore natural to focus on bounds on the ratios \(a_{k,q}/a_{0,0}\). To simplify the notation, we define
\[\bar{a}_{k,q}\equiv\frac{a_{k,q}}{a_{0,0}}\quad\text{with}\quad\;0\leq\bar{a}_ {k,q}\leq 1\,. \tag{5.1}\]
To visualize the bounds on the multi-dimensional space of Wilson coefficients \(\bar{a}_{k,q}\), we project onto 2-dimensional regions \((\bar{a}_{k,q},\bar{a}_{k^{\prime},q^{\prime}})\). In these 2d plots, the Veneziano amplitude (2.9)-(2.11) with \(M_{\mathrm{gap}}^{2}=1/\alpha^{\prime}\) is shown as a **red dot**. With \(a_{0,0}=\zeta_{2}\) for the open string, the lowest \(\bar{a}_{k,q}\) values are
\[\text{Veneziano:}\quad\bar{a}_{1,0}=\frac{\zeta_{3}}{\zeta_{2}}\approx 0.73\,, \quad\bar{a}_{2,0}=\frac{\zeta_{4}}{\zeta_{2}}\approx 0.66\,,\quad\bar{a}_{3,0 }=\frac{\zeta_{5}}{\zeta_{2}}\approx 0.63\,,\quad\text{etc.} \tag{5.2}\]
Varying \(M_{\mathrm{gap}}^{2}\alpha^{\prime}\) between 0 and 1 gives a set of Wilson coefficients that must also lie in the allowed region. These values for the open string are shown as the **red dashed curves** in the plots.
The Coulomb branch 1-loop amplitude from Section 2.3.2 with \(M_{\mathrm{gap}}=m\) has
\[\text{1-loop Coulomb:}\quad\bar{a}_{1,0}=\frac{1}{10}=0.1\,,\quad\bar{a}_{2,0 }=\frac{1}{70}\approx 0.014\,,\quad\bar{a}_{3,0}=\frac{1}{420}\approx 0.0024\,, \quad\text{etc.} \tag{5.3}\]
and is shown as a **blue dot**. Since the Coulomb branch 1-loop amplitude has Wilson coefficients \(\bar{a}_{k,q}\) that are numerically very small, especially with increasing \(k\), we only include the Coulomb point in plots for \(k\leq 3\). For the same reason, we do not include the curves of the 1-loop amplitudes with \(M_{\mathrm{gap}}/m\) varying between 0 and 1, though they too must lie within the allowed region.
\((\bar{a}_{k,0},\bar{a}_{k^{\prime},0})\) **Regions.** Analytic bounds on the space of Wilson coefficients, such as Hankel matrix and cyclic polytope constraints, were derived in [2] and extended in [4]. In general, for given finite12\(k_{\rm max}\) and \(\ell_{\rm max}\), the collection of these analytic bounds tends to overestimate the allowed regions compared to the bounds found with numerical methods such as CPLEX or SDPB. However, in the special case of projections onto the \((\bar{a}_{k,0},\bar{a}_{k^{\prime},0})\) planes, a finite subset of Hankel constraints implies the region is bounded by
Footnote 12: It is possible that the bounds would be equivalent in the limit of \(k_{\rm max},\ell_{\rm max}\to\infty\).
\[\bar{a}_{k,0}^{k^{\prime}/k}\leq\bar{a}_{k^{\prime},0}\leq\bar{a}_{k,0}\quad \mbox{for $k\leq k^{\prime}$}\,, \tag{5.4}\]
which agrees with the SDPB/CPLEX numerical results. The bounds (5.4) are independent of \(k_{\rm max}\). Figure 4 displays the projections into the \((\bar{a}_{1,0},\bar{a}_{2,0})\) and \((\bar{a}_{2,0},\bar{a}_{3,0})\) planes as examples of such regions. These plots also show the locations of the Veneziano amplitude and the 1-loop Coulomb branch within the region.
The infinite spin tower amplitude discussed in Section 2.3.3 has Wilson coefficients
\[a_{k,q}=\left(\frac{M_{\rm gap}}{m}\right)^{2k+4}\qquad\Longrightarrow\qquad (\bar{a}_{k,0})^{\frac{1}{k}}=(\bar{a}_{k^{\prime},0})^{\frac{1}{k^{\prime}}}\,. \tag{5.5}\]
This saturates the lower bound on the region (5.4).
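As a small numerical illustration (assuming only the values quoted in (5.2)), the Veneziano point satisfies the bound (5.4), and the spin-tower coefficients sit exactly on its lower edge:

```python
from mpmath import zeta

# Veneziano values from (5.2): abar_{k,0} = zeta(k+2)/zeta(2)
abar = {k: float(zeta(k + 2) / zeta(2)) for k in (1, 2, 3)}
k, kp = 1, 3
assert abar[k] ** (kp / k) <= abar[kp] <= abar[k]          # bound (5.4)

# Infinite spin tower: abar_{k,0} = r^{2k} with r = M_gap/m, so (abar_{k,0})^{1/k}
# is k-independent and saturates the lower edge of (5.4).
r = 0.7
tower = {k: r ** (2 * k) for k in (1, 3)}
assert abs(tower[1] ** (1 / 1) - tower[3] ** (1 / 3)) < 1e-12
print(abar, tower)
```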
Note that \(M_{\rm gap}=m\) corresponds to the point (1,1) in _any_ 2d projection \((\bar{a}_{k,q},\bar{a}_{k^{\prime},q^{\prime}})\), so our
2d plots always include the (1,1) point. Similarly, the extreme limit \(M_{\rm gap}\ll m\) corresponds to (0,0) in any such 2d projection; that is the limit of the \({\cal N}=4\) SUSY operator \({\rm tr}F^{4}\) having a coupling that dominates every other operator. Because (0,0) and (1,1) are included in all plots, convexity of the allowed region implies that the diagonal \(\bar{a}_{k,q}=\bar{a}_{k^{\prime},q^{\prime}}\) is also included. In general it need not correspond to a bound of the region, though it does for the \((\bar{a}_{k,0},\bar{a}_{k^{\prime},0})\) projections.
**The \((\bar{a}_{2,0},\bar{a}_{2,1})\) Region.** The bounds on the \((\bar{a}_{k,0},\bar{a}_{k^{\prime},0})\) regions were independent of \(k_{\rm max}\), but for general projections \((\bar{a}_{k,q},\bar{a}_{k^{\prime},q^{\prime}})\) the bounds depend on \(k_{\rm max}\) and we are interested in how they converge as \(k_{\rm max}\to\infty\). For that reason, we study the bounds for increasing \(k_{\rm max}\), with the choice limited only by computation time.
The simplest example of these \(k_{\rm max}\) dependent regions is the \(\bar{a}_{2,1}\) vs. \(\bar{a}_{2,0}\) projection, which we display in Figure 5. The bounds shown were obtained with both SDPB and CPLEX whose results are visually indistinguishable in these plots. A more detailed comparison of the SDPB and CPLEX numerics is presented in Section 5.2. Benchmarking for the choices of \(\ell_{\rm max}\) is discussed in Appendix A.
The numerical results indicate that string theory with \(\alpha^{\prime}M_{\rm gap}^{2}=1\) is close to, but not on, the boundary of this projection. Moreover, for \(k_{\rm max}\leq 15\), there is no indication of a kink near the string.
Figure 5: Left: The allowed regions for the projection to the \((\bar{a}_{2,0},\bar{a}_{2,1})\) plane. The orange bounds are for \(k_{\rm max}=4\) and \(\ell_{\rm max}=200\), while the violet bound is \(k_{\rm max}=10\) and \(\ell_{\rm max}=300\). (Taking \(\ell_{\rm max}\) higher results in differences at order \(10^{-4}\) or less, not visible in the plot.) The red dot marks the Veneziano amplitude and the blue dot the 1-loop Coulomb amplitude with coefficients (5.2) and (5.3), respectively.
Right: Zoom-in on the bounds near the Veneziano amplitude (red) to compare the \(k_{\rm max}=4\) and 10 bounds with the \(k_{\rm max}=15\) bounds obtained with \(\ell_{\rm max}=800\). The green dot shows the maximum allowed value of \(\bar{a}_{2,0}\) for \(k_{\rm max}=20\) and \(\ell_{\rm max}=600\) when \(\bar{a}_{2,1}\) is fixed at the string value. These results give no indication that the bounds converge to the string as \(k_{\rm max}\to\infty\).
There does appear to be a kink on the \(\bar{a}_{2,0}\)-axis, namely where the lower bound on \(\bar{a}_{2,1}\) goes from being zero to non-zero. As \(k_{\rm max}\) increases, the kink moves slowly to lower values of \(\bar{a}_{2,0}\); for \(k_{\rm max}=15\), it is at \(\bar{a}_{2,0}\) slightly below 0.6, but it is not clear what it asymptotes to for \(k_{\rm max}\to\infty\).13
Footnote 13: The amplitude
\[A^{\rm NF}[zz\bar{z}\bar{z}]=-\frac{s}{u}+\frac{s^{2}}{2M_{\rm gap}^{2}}\left( \frac{1}{M_{\rm gap}^{2}-s}+\frac{1}{M_{\rm gap}^{2}-u}\right) \tag{5.6}\]
has Wilson coefficients \(a_{0,0}=1\), \(a_{k,0}=a_{k,k}=1/2\) for \(k>0\), and \(a_{k,q}=0\) for \(0<q<k\). As such, it is a candidate for the point \((1/2,0)\) in any \((\bar{a}_{k,0},\bar{a}_{k^{\prime},q})\) projection. However, (5.6) does not satisfy the Froissart bound (3.3). (This is similar to the “spin 1 theory” discussed in the pion-bootstrap [7].) One could speculate that the cusp approaches \((1/2,0)\) in the limit of \(k_{\rm max}\to\infty\), but at large \(k,k^{\prime}\), the Veneziano amplitude has \((\bar{a}_{k,0},\bar{a}_{k^{\prime},1})\to(6/\pi^{2},0)\approx(0.608,0)\), so that proposal seems implausible for all \(k,k^{\prime}\).
**The \((\bar{a}_{3,0},\bar{a}_{3,1})\) and \((\bar{a}_{4,1},\bar{a}_{4,2})\) Regions.** We also consider the \(\bar{a}_{3,1}\) vs. \(\bar{a}_{3,0}\) and \(\bar{a}_{4,2}\) vs. \(\bar{a}_{4,1}\) projections in Figure 6. In both cases, the string again lies close to, but not on, the boundary. The \(\bar{a}_{3,1}\) vs. \(\bar{a}_{3,0}\) projection is qualitatively similar to the \((\bar{a}_{2,0},\bar{a}_{2,1})\) projection. In particular, it also shows indications of a kink on the horizontal axis, in this case near \(\bar{a}_{3,0}\sim 0.6\).
The lower bound in the \((\bar{a}_{4,1},\bar{a}_{4,2})\) projection is qualitatively different from the previous two in that it does not include points on the horizontal axis and there is no indication of a kink. It is noteworthy that the allowed region is very slim: this implies a strong correlation between the allowed coefficients of the corresponding \({\cal N}=4\) SUSY \({\rm tr}D^{8}F^{4}\) operators.
### Comparison of SDPB and CPLEX
Computing the bounds in both SDPB and CPLEX provides a cross-check on the numerical methods. We find excellent agreement between these techniques.
Figure 6: The allowed regions in the \((\bar{a}_{3,0},\bar{a}_{3,1})\) and \((\bar{a}_{4,1},\bar{a}_{4,2})\) projections for \(k_{\rm max}=4,\ell_{\rm max}=200\) (orange) and \(k_{\rm max}=10,\ell_{\rm max}=300\) (purple). The red dot represents the Veneziano amplitude.
As an example, the bounds in Figure 5 were computed with both SDPB and CPLEX. Figure 7 shows the difference between the upper and lower bounds for \(k_{\rm max}=10\) and \(\ell_{\rm max}=300\) as obtained by both methods, using \(x_{\rm max}=300\) for CPLEX.
Because of the discretization, CPLEX underestimates the allowed space slightly compared to SDPB, but the difference becomes increasingly small with increasing discretization parameter \(x_{\rm max}\). This is illustrated in Figure 8 which shows that the CPLEX bounds converge to the SDPB result as a power law in \(x_{\rm max}\).
In terms of computation time, CPLEX with lower values of \(x_{\rm max}\) runs faster than SDPB. However, for high precision results, higher \(x_{\rm max}\) is needed and the time-advantage goes away. For high-precision results, we find SDPB faster and more reliable. In the remainder of the paper, all plots are made with SDPB while CPLEX is used for basic "sanity-checks".
Figure 7: Left: Minimum (top) and maximum (bottom) \(\bar{a}_{2,1}\) calculated with SDPB (blue) and CPLEX (orange) at \(\ell_{\rm max}=x_{\rm max}=300\). While SDPB is represented as a continuous curve, the code is run at the same set of points as CPLEX and the points are then joined so they can be distinguished from the CPLEX results.
Right: The absolute difference between SDPB and CPLEX for the points given on the left for \(x_{\rm max}=300\) (orange) and \(x_{\rm max}=500\) (green). As expected, SDPB gives a slightly larger allowed region because it does not rely on discretizing \(x\), and the agreement becomes better as we increase \(x_{\rm max}\).
## 6 Veneziano from String Monodromy
### String Monodromy
The tree-level amplitudes in Type-I string theory can be written as period integrals multiplied by a universal pre-factor. Specifically at 4-point, we have
\[A[z_{1}z_{2}\bar{z}_{3}\bar{z}_{4}] =-\frac{\alpha^{\prime}s^{2}}{t}\,\int_{0}^{1}dz\,z^{-\alpha^{\prime}s-1}(1-z)^{-\alpha^{\prime}u-1}\,, \tag{103}\] \[A[z_{1}\bar{z}_{3}z_{2}\bar{z}_{4}] =\frac{\alpha^{\prime}s^{2}}{t}\,\int_{1}^{\infty}dz\,z^{-\alpha^{\prime}s-1}(z-1)^{-\alpha^{\prime}u-1}\,,\] \[A[z_{2}z_{1}\bar{z}_{3}\bar{z}_{4}] =\frac{\alpha^{\prime}s^{2}}{t}\,\int_{-\infty}^{0}dz\,(-z)^{-\alpha^{\prime}s-1}(1-z)^{-\alpha^{\prime}u-1}\,.\]
Here and below, \(\bar{z}\) are the complex \(\mathcal{N}=4\) scalars introduced in Section 2.1. \(A[z_{1}z_{2}\bar{z}_{3}\bar{z}_{4}]\) is the Veneziano amplitude (10) and the two other amplitudes are the color rearranged versions of it.
The three amplitudes (103) differ only by their integration region. A contour deformation [20; 21; 22; 23; 24] relates the three amplitudes linearly to each other, with monodromy factors picked up at \(z=0\) and \(z=1\). The resulting 4-point _string monodromy relation_ is
\[0=A[z_{2}z_{1}\bar{z}_{3}\bar{z}_{4}]+e^{i\pi\alpha^{\prime}s}A[z_{1}z_{2}\bar {z}_{3}\bar{z}_{4}]+e^{-i\pi\alpha^{\prime}t}A[z_{1}\bar{z}_{3}z_{2}\bar{z}_{4}] \tag{104}\]
Now using the SUSY Ward identities (5) and that \(A[z_{1}z_{2}\bar{z}_{3}\bar{z}_{4}]=s^{2}f(s,u)\), where \(f\) is real, we can write the real and imaginary parts of (104) as
\[0 =f(s,t)+\cos(\pi\alpha^{\prime}s)f(s,u)+\cos(\pi\alpha^{\prime}t) f(t,u)\,, \tag{105}\] \[0 =\sin(\pi\alpha^{\prime}s)f(s,u)-\sin(\pi\alpha^{\prime}t)f(t,u)\,.\]
Let us impose the monodromy relation on the low-energy expansion of the \(\mathcal{N}=4\) SUSY EFT. We plug in the low-energy ansatz (6), along with the SUSY crossing constraints (8), and solve (63) order by order in the Mandelstam expansion. This fixes particular linear combinations of Wilson coefficients as shown in Table 1. There and in the remainder of this section we set \(\alpha^{\prime}=1\) and \(M_{\rm gap}=1\).
Figure 8: We show that the convergence of the CPLEX bounds to the SDPB bounds goes as a power law in \(x_{\rm max}\) for both the maximum (left) and minimum (right) \(\bar{a}_{2,1}\) with \(\bar{a}_{2,0}=3/4\) fixed and \(\ell_{\rm max}=300\).
The monodromy relations do not fix all Wilson coefficients. The Wilson coefficients _unfixed_ by monodromy with \(k\leq 8\) are
\[a_{1,0}\,,\ \ a_{3,0}\,,\ \ a_{4,1}\,,\ \ a_{5,0}\,,\ \ a_{6,1}\,,\ \ a_{7,0}\,,\ \ a_{7,2}\,,\ \ a_{8,1}\,. \tag{64}\]
Comparing to the Veneziano amplitude (with \(\alpha^{\prime}=1\)), these monodromy-unfixed coefficients all involve \(\zeta_{\rm odd}\): we have
\[\begin{split}& a_{1,0}^{\rm str}=\zeta_{3}\,,\ \ a_{3,0}^{\rm str}=\zeta_{5}\,,\ \ a_{4,1}^{\rm str}=\tfrac{3}{4}\zeta_{6}-\tfrac{1}{2}\zeta_{3}^{2}\,,\ \ a_{5,0}^{\rm str}=\zeta_{7}\,,\ \ a_{6,1}^{\rm str}= \tfrac{5}{4}\zeta_{8}-\zeta_{3}\zeta_{5}\,,\\ & a_{7,0}^{\rm str}=\zeta_{9}\,,\ \ a_{7,2}^{\rm str}=-\tfrac{7}{4} \zeta_{6}\zeta_{3}+\tfrac{1}{6}\zeta_{3}^{3}-\tfrac{9}{4}\zeta_{4}\zeta_{5}-3 \zeta_{2}\zeta_{7}+\tfrac{28}{3}\zeta_{9}\,,\ \ a_{8,1}^{\rm str}=\tfrac{7}{4}\zeta_{10}- \tfrac{1}{2}\zeta_{5}^{2}-\zeta_{3}\zeta_{7}\,.\end{split} \tag{65}\]
The monodromy relations only "know" \(\pi\), i.e. \(\zeta_{\rm even}\), so they cannot fix the \(\zeta_{\rm odd}\)-dependence in the amplitude.
### Bootstrapping Veneziano
Huang, Liu, Rodina, and Wang [1] found numerical evidence that when a subset of analytic EFT-hedron bounds from [2] were combined with the monodromy constraints, \(a_{1,0}\), \(a_{3,0}\), and \(a_{4,1}\) were within \(1.5\%\), \(0.2\%\), and \(53\%\) of the string values (65).14
\begin{table}
\begin{tabular}{l l l} linear combination fixed & string value & monovariable \\ \hline \(a_{0,0}\) & \(=\zeta_{2}=\tfrac{\pi^{2}}{6}\) & \(r_{0}^{(0)}\) \\ \(a_{2,0}\) & \(=\zeta_{4}=\tfrac{\pi^{4}}{90}\) & \(r_{1}^{(2)}\) \\ \(a_{2,1}\) & \(=\tfrac{1}{4}\zeta_{4}=\tfrac{\pi^{4}}{360}\) & \(r_{2}^{(2)}\) \\ \(a_{3,1}-2a_{3,0}+\zeta_{2}\,a_{1,0}\) & \(=0\) & \(r_{3}^{(3)}\) \\ \(a_{4,0}\) & \(=\zeta_{6}=\tfrac{\pi^{6}}{945}\) & \(r_{4}^{(4)}\) \\ \(a_{4,2}-2a_{4,1}\) & \(=-\tfrac{1}{16}\zeta_{6}=-\tfrac{\pi^{6}}{15120}\) & \(r_{5}^{(4)}\) \\ \(a_{5,1}-3a_{5,0}+\zeta_{2}a_{3,0}+\zeta_{4}a_{1,0}\) & \(=0\) & \(r_{6}^{(5)}\) \\ \(a_{5,2}-5a_{5,0}+2\zeta_{2}\,a_{3,0}+\tfrac{5}{4}\zeta_{4}\,a_{1,0}\) & \(=0\) & \(r_{7}^{(5)}\) \\ \end{tabular}
\end{table}
Table 1: The string monodromy relation (62) fixes particular linear combination of the Wilson coefficients \(a_{k,q}\) in the supersymmetric ansatz (7)-(8) as shown here up to \(k=5\) with \(\alpha^{\prime}=1\). The monovariables were introduced in Section 1 and are reviewed in Section 7.
To extend the results of [1], we include the monodromy constraints in Table 1 as additional null constraints in the formulation of the linearized optimization problem in (4.1)-(4.2) and work systematically up to \(k_{\rm max}=8\).
Starting with the \((\bar{a}_{1,0},\bar{a}_{3,0})\) region, we know from Section 5.1 that _without_ the monodromy constraints, the allowed region is bounded as \(\bar{a}_{1,0}^{3}\leq\bar{a}_{3,0}\leq\bar{a}_{1,0}\). This is the blue region in the top-left plot in Figure 9. In that same plot, the orange region is the allowed region found with SDPB when monodromy constraints are imposed to order \(k_{\rm max}=3\). The red dot within the \(k_{\rm max}=3\) monodromy region is the Veneziano amplitude. Zooming in on the orange \(k_{\rm max}=3\) monodromy region, we increase \(k_{\rm max}\) up to \(8\), as progressively shown in the three other plots in Figure 9, to see how a smaller and smaller island around the Veneziano amplitude is isolated. This progression indicates that the intersection of the monodromy plane and the allowed supersymmetric EFT-hedron region shrinks to a point in the limit \(k_{\rm max}\to\infty\) as anticipated by the authors of [1].
Figure 9: Regions allowed by SDPB bounds on the \(\bar{a}_{1,0}\) vs. \(\bar{a}_{3,0}\) when monodromy and crossing are imposed up to a given \(k_{\rm max}\) with \(\ell_{\rm max}=800\). The blue region on the top left is the exact allowed region without monodromies imposed. The red dot marks the Veneziano amplitude.
A similar result is found for the other coefficients (6.4) that were unrestricted by monodromy. At \(k_{\rm max}=8\), the bounds we find (working at \(\ell_{\rm max}=800\)) are
\[\begin{array}{ll}\mbox{\bf SDPB}\;\mbox{\bf bounds}&\mbox{\bf String Value}\\ 1.201982\leq a_{1,0}\leq 1.202061&1.202057\\ 1.036923\leq a_{3,0}\leq 1.036937&1.036928\\ 0.04053\leq a_{4,1}\leq 0.04063&0.04054\\ 1.0083481\leq a_{5,0}\leq 1.0083495&1.0083493\\ 0.008649\leq a_{6,1}\leq 0.008729&0.008651\\ 1.00200830\leq a_{7,0}\leq 1.00200891&1.00200839\\ 0.00031\leq a_{7,2}\leq 0.00041&0.00032\\ 0.00203\leq a_{8,1}\leq 0.00212&0.00204\end{array} \tag{6.6}\]
Our bounds bring \(a_{1,0}\), \(a_{3,0}\), and \(a_{4,1}\) within \(0.0066\%\), \(0.0013\%\), and \(0.24\%\) of the string value. The shrinking of the allowed ranges with increasing \(k_{\rm max}\) is visualized for the first five coefficients in Figure 10.
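The "String Value" column follows directly from the \(\zeta\)-value expressions for the string quoted earlier; a short numerical check (a minimal sketch using mpmath) reproduces the quoted numbers:

```python
from mpmath import zeta, mpf

z = lambda n: zeta(n)
string = {
    "a_{1,0}": z(3),
    "a_{3,0}": z(5),
    "a_{4,1}": mpf(3) / 4 * z(6) - mpf(1) / 2 * z(3) ** 2,
    "a_{5,0}": z(7),
    "a_{6,1}": mpf(5) / 4 * z(8) - z(3) * z(5),
    "a_{7,0}": z(9),
    "a_{8,1}": mpf(7) / 4 * z(10) - mpf(1) / 2 * z(5) ** 2 - z(3) * z(7),
}
for name, val in string.items():
    print(f"{name} = {float(val):.7f}")
# a_{1,0} = 1.2020570, a_{3,0} = 1.0369278, a_{4,1} = 0.0405369, ...
```

Each of these values indeed lies inside the corresponding SDPB window quoted above.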
## 7 Flattening of the EFT-hedron
In the previous section, we provided new numerical evidence that the supersymmetric EFT-hedron constraints together with monodromy select an island that shrinks around the open string Veneziano amplitude, as first proposed in [1]. In this section, we explore the geometric consequences of this phenomenon and present evidence for flattening of the allowed space. We also reparameterize the low-energy expansion of the amplitude to coefficients motivated by the flattening and show that it can be partially resummed.
### Flattening Conjecture
From a geometric perspective, we can think of the linear monodromy constraints, listed at lowest orders in Table 1, as defining a higher-dimensional plane in the space of Wilson coefficients. We call this the "monodromy plane". The claim of [1] is that the monodromy plane and the supersymmetric EFT-hedron intersect each other at a point in the limit \(k_{\rm max}\to\infty\) and that this point corresponds to the Veneziano amplitude.
We discussed in the Introduction the two different ways the intersection may happen: as illustrated in Figure 1, either the monodromy plane is tangent to the EFT-hedron or the EFT-hedron must flatten in such a way that the intersection with the monodromy plane shrinks to a point. To assess which option is realized, imagine taking the monodromy plane in Figure 1 and shifting it around. If the monodromy plane were tangent to SUSY EFT-hedron, some shifts would give no solution at all while others would result in convergence to a finite size region of parameters, unlike the continued shrinking towards a point. On the
other hand, if the EFT-hedron itself is flattening, then the shifted monodromy plane should continue to intersect the space at a single point. In Section 7.2, we vary the monodromy plane in a controlled way and find evidence that the latter option is realized: the EFT-hedron becomes increasingly narrow as \(k_{\rm max}\) gets larger.
The result is that the supersymmetric EFT-hedron must be flattening in certain directions when \(k_{\rm max}\) increases. Specifically, at large \(k_{\rm max}\), the number of independent Wilson coefficients increases as \({\cal O}(k_{\rm max}^{2}/4)\) after imposing the SUSY crossing constraints. Hence, the "naive" dimension of the SUSY EFT-hedron is \({\cal O}(k_{\rm max}^{2}/4)\) at large \(k_{\rm max}\). The monodromy relations fix linear relations among \({\cal O}(k_{\rm max}^{2}/6)\) of these coefficients, thus leaving \(1/3\) of the Wilson coefficients unfixed.
Figure 10: SDPB bounds with the monodromy imposed. As \(k_{\rm max}\) is increased, the allowed range of each Wilson coefficient shrinks. These bounds were found with \(\ell_{\rm max}=500\) for \(k_{\rm max}=2,3,\ldots,7\) and \(\ell_{\rm max}=800\) for \(k_{\rm max}=8\).
### Evidence for Flattening
The monodromy relations fix certain linear combinations of the Wilson coefficients to particular values. If we simply change those values, we can move the monodromy plane in a controlled way. To do so, define the linear combinations fixed by monodromies to be "monovariables" \(r_{i}^{(k)}\), where \(k\) denotes the largest \(k\) value for any \(a_{k,q}\) that appears in the linear combination. It follows from Table 1 that:
\[r_{0}^{(0)}=a_{0,0}\,,\quad r_{1}^{(2)}=a_{2,0}\,,\quad r_{2}^{(2)}=a_{2,1}\,, \quad r_{3}^{(3)}=a_{3,1}-2a_{3,0}+\zeta_{2}\,a_{1,0}\,,\ldots \tag{112}\]
Note that we use the linear combinations of the monodromy relations with \(\alpha^{\prime}=1\) for simplicity. We could have reintroduced the scale as \(M_{\rm gap}\) or another mass \(m\). For string theory, the monovariables take on the values shown in Table 1. Changing these values changes the underlying theory, and the new constraints need no longer correspond to any relation among color-ordered amplitudes.
One way to systematically generate new examples of monovariables is to exploit convexity of the SUSY EFT-hedron and use linear combinations of known models to construct more general points inside the allowed space. This way, the values of the remaining unfixed \(a_{k,q}\)'s are also known, which provides a check on the numerical bootstrap.
To test the flattening of the SUSY EFT-hedron, we consider linear combinations of the infinite spin tower amplitude, the Veneziano amplitude, and the one-loop Coulomb branch amplitude. To be specific, we consider an ansatz of the form
\[A[zz\bar{z}\bar{z}]=\frac{-s}{u}+s^{2} \Bigg{(}\int_{1}^{\infty}dm^{2}\,\rho_{m^{2}}^{(1)}\left[\frac{1} {(m^{2}-s)(m^{2}-u)}\right] \tag{113}\] \[\qquad+\int_{1}^{\infty}dm^{2}\,\frac{\rho_{m^{2}}^{(2)}}{m^{4}} \left[\frac{m^{4}}{su}-\frac{\Gamma(-s/m^{2})\Gamma(-u/m^{2})}{\Gamma(1+t/m^{2 })}\right]\] \[\qquad+\int_{1}^{\infty}dm^{2}\,\frac{\rho_{m^{2}}^{(3)}}{m^{4}} \,F_{3}\Big{(}1,1,1,1;\frac{5}{2}\Big{|}\frac{s}{4m^{2}},\frac{u}{4m^{2}} \Big{)}\Bigg{)}\,,\]
where we work with \(M_{\rm gap}^{2}=1\). The ansatz in (113) obeys crossing symmetry by construction. Positivity \(\rho_{m^{2}}^{(I)}\geq 0\) is ensured by the densities being randomized positive sums over \(\delta\)-functions:
\[\rho_{m^{2}}^{(I)}=\sum_{i}a_{i}^{(I)}\,\delta\big{(}m^{2}-m_{(I),i}^{2}\big{)} \quad\text{with}\quad a_{i}^{(I)}\geq 0\,. \tag{114}\]
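A minimal sketch of how such randomized positive densities can be drawn for the numerical tests below (the helper name and the specific ranges are illustrative choices, not fixed by the construction):

```python
import numpy as np

def sample_density(num_deltas, rng, m2_max=30.0):
    """Draw one rho^{(I)}: positive weights a_i and masses m_i^2 above M_gap^2 = 1."""
    m2 = rng.uniform(1.0, m2_max, size=num_deltas)   # mass spectrum above the gap
    a = rng.uniform(0.0, 1.0, size=num_deltas)       # positive coefficients a_i >= 0
    return m2, a

rng = np.random.default_rng(7)
# One spectrum/weight pair per term in the ansatz above: I = 1, 2, 3.
densities = {I: sample_density(num_deltas=3, rng=rng) for I in (1, 2, 3)}
print(densities)
```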
As an example of the procedure, let us choose the values
\[\text{Test Example:}\quad\begin{array}{c|ccc|ccc}\rho_{m^{2}}^{(I)}&m_{(I),1}^{2}&m_{(I),2}^{2}&m_{(I),3}^{2}&a_{1}^{(I)}&a_{2}^{(I)}&a_{3}^{(I)}\\ \hline\rho_{m^{2}}^{(1)}&7&19&21&\frac{80}{53}&\frac{98}{57}&\frac{81}{23}\\ \rho_{m^{2}}^{(2)}&1&4&28&\frac{90}{1012}&\frac{9}{5c_{5}}&\frac{63}{19c_{2}}\\ \rho_{m^{2}}^{(3)}&3&15&23&\frac{1}{103}&\frac{2}{77}&\frac{4}{91}\end{array} \tag{115}\]
Expanding (7.2) with these choices for \(\rho_{m^{2}}^{(I)}\) gives the Wilson coefficients15
Footnote 15: In contrast, the numerical string values (2.11) are
\[a_{0,0}^{\rm str}=1.64493\,,\ \ a_{1,0}^{\rm str}=1.20206\,,\ \ a_{2,0}^{\rm str}=1.08 232\,,\ \ a_{2,1}^{\rm str}=0.270581\,.\]
so the test model is in a different part of parameter space.
\[a_{0,0}=1.05265\,,\ \ \ \ a_{1,0}=0.676907\,,\ \ \ \ a_{2,0}=0.591605\,,\ \ \ \ a_{2,1}=0.148397\,,\ \ \ldots \tag{7.5}\]
which lead to monovariable values
\[\frac{r_{1}^{(2)}}{r_{0}^{(0)}}=0.562015\,,\ \ \ \frac{r_{2}^{(2)}}{r_{0}^{(0)}}=0. 140974\,,\ \ \ \frac{r_{3}^{(3)}}{r_{0}^{(0)}}=0.038116\,,\ \ \ \frac{r_{4}^{(4)}}{r_{0}^{(0)}}=0.523818\,,\ \ \ldots \tag{7.6}\]
Figure 11: Plots of the allowed \((\bar{a}_{1,0},\bar{a}_{3,0})\) region when \(r_{i}^{(k)}/r_{0}\) variables and crossing are imposed up to a given \(k_{\rm max}=8\) and \(\ell_{\rm max}=800\) for the test example specified in equation (7.4). The Veneziano amplitude is indicated with a red dot, whereas the test model is shown as a black dot. Qualitatively this is very similar to the string monodromy case in Figure 9, but quantitatively the islands isolate a different point in the SUSY EFT-hedron.
Imposing (7.6) as null constraints along with the \(\mathcal{X}\) and \(\mathcal{Y}\) crossing constraints (3.18)-(3.19) isolates islands of the allowed space that decrease in size as we increase \(k_{\rm max}\), as shown in Figure 11, just like the case when the actual monodromy relations isolate an island around the string in Figure 9. We use SDPB to fix the non-monovariables to the ranges (\(k_{\rm max}=8\)):
\[\begin{array}{ll}\mbox{\bf SDPB bounds}&\mbox{\bf Model Value}\\ 0.676864<a_{1,0}<0.676913&0.676907\\ 0.562915<a_{3,0}<0.562928&0.562921\\ 0.021981<a_{4,1}<0.022031&0.021983\\ 0.5463085<a_{5,0}<0.5463095&0.5463094\\ 0.004685<a_{6,1}<0.004729&0.004687\\ 0.5428093<a_{7,0}<0.5428099&0.5428094\\ 0.00017274<a_{7,2}<0.00022540&0.00017362\\ 0.0011034<a_{8,1}<0.0011495&0.0011039\,.\end{array} \tag{7.7}\]
Here “Model Value” is the value of the coefficient for the theory we constructed in (7.4). For the first three cases, SDPB gets within 0.007%, 0.002%, and 0.22%, respectively, of the known model value.
We have run multiple other test theories for which the \(\rho_{m^{2}}^{(I)}\) are chosen to be random positive sums of hundreds of different delta functions with a randomly generated mass spectrum above \(M_{\rm gap}=1\). A sample of these test theories is shown in Figure 12 to illustrate how varying the monovariables allows us to intersect the \((\bar{a}_{1,0},\bar{a}_{3,0})\) plane in widely different locations.
Figure 12: Locations of a selection of test models in the \((\bar{a}_{1,0},\bar{a}_{3,0})\) plane. The colors match those in Figure 13. The red dot marks the Veneziano amplitude.
Remarkably, we find a similar behavior as above for each of the test theories: the SDPB bounds narrow in on the known values for each of the Wilson coefficients left unfixed by the monovariable constraints. To illustrate this, consider the interval lengths of the SDPB bounds
\[L_{k,q}=(a_{k,q})_{\text{max}}-(a_{k,q})_{\text{min}}\,. \tag{7.8}\]
For a sample of test models, Figure 13 shows how the \(L_{k,q}\)'s for the \(a_{1,0}\), \(a_{3,0}\), \(a_{4,1}\), and \(a_{5,0}\) tend to zero as \(k_{\text{max}}\) is increased. For comparison, the string is shown in red. These log-log plots indicate that each \(L_{k,q}\to 0\) at least as a power law in \(k_{\text{max}}\).
### Good EFT-hedron "Coordinates"
The flattening of the allowed space shows that there are stronger constraints among certain combinations of Wilson coefficients than one would naively have expected. This suggests that there is a different low-energy expansion that makes these correlations more manifest. To work towards such an alternate representation of the amplitude, we start with the general low-energy ansatz (2.7) and use the monovariable definitions in Table 1 to bring the monovariables \(r_{i}^{(k)}\) directly into the parameterization of the amplitude. Thus, in (2.7),
Figure 13: Interval lengths \(L_{k,q}\), as defined in (7.8), as a function of \(k_{\text{max}}\) for a sample of models of the form (7.2) found with SDPB at \(\ell_{\text{max}}=500\). “Test 0” corresponds to the example test model (7.4), while the other test theories are created from randomly generated values for \(a_{i}^{(I)}\) and \(m_{(I),i}^{2}\). It is illustrated in Figure 12 where these models lie in the \((\bar{a}_{1,0},\bar{a}_{3,0})\) plane.
we replace
\[a_{0,0}\to r_{0}^{(0)}\,,\quad a_{2,0}\to r_{1}^{(2)}\,,\quad a_{2,1}\to r_{2}^{(2 )}\,,\quad a_{3,1}\to 2a_{3,0}-\zeta_{2}a_{1,0}+r_{3}^{(3)}\,, \tag{111}\] \[a_{4,0}\to r_{4}^{(4)}\,,\quad a_{4,2}\to 2a_{4,1}+r_{5}^{(4)}\,, \quad a_{5,1}\to 3a_{5,0}-\zeta_{2}a_{3,0}-\zeta_{4}a_{1,0}+r_{6}^{(5)}\,, \quad\text{etc.}\]
We organize the terms in the amplitude into two groups: those with monovariable coefficients \(r_{i}^{(k)}\), which each multiply a simple degree \(k\) polynomial symmetric in \(s\) and \(u\), and those with the remaining \(a_{k,q}\) variables which each multiply an infinite tower of \(s\)-\(u\) symmetric polynomials starting at degree \(k\). Specifically, we find
\[A[zz\bar{z}\bar{z}]=-\frac{s}{u}+s^{2}\bigg{(}\sum_{k,i}r_{i}^{(k)}P_{i}^{(k)} (s,u)+\sum_{k,i}A_{i}^{(k)}Q_{i}^{(k)}(s,u)\bigg{)}\,, \tag{112}\]
where
\[\sum_{k,i}r_{i}^{(k)}P_{i}^{(k)}(s,u)= r_{0}^{(0)}+r_{1}^{(2)}(s^{2}+u^{2})+r_{2}^{(2)}su+r_{3}^{(3)} su(s+u)+r_{5}^{(4)}s^{2}u^{2}+ \tag{113}\] \[r_{4}^{(4)}(s^{4}+u^{4})+r_{6}^{(5)}su(s^{3}+u^{3})+r_{7}^{(5)}s ^{2}u^{2}(s+u)+\ldots\]
and
\[\sum_{k,i}A_{i}^{(k)}Q_{i}^{(k)}(s,u) \tag{114}\] \[= a_{1,0}(s+u)\bigg{[}1-\zeta_{2}su-\zeta_{4}su(s^{2}+\tfrac{1}{4} su+u^{2})-\zeta_{6}su(s^{4}-s^{3}u-\tfrac{33}{16}s^{2}u^{2}-su^{3}+u^{4})+ \ldots\bigg{]}\] \[+a_{3,0}(s+u)\bigg{[}(s^{2}+su+u^{2})-\zeta_{2}su(s^{2}+su+u^{2} )-\zeta_{4}su(s^{4}-s^{3}u-\tfrac{9}{4}s^{2}u^{2}-su^{3}+u^{4})+\ldots\bigg{]}\] \[+a_{4,1}su(s+u)^{2}\bigg{[}1-\zeta_{2}su-\zeta_{4}su(s^{2}+ \tfrac{1}{4}su+u^{2})+\ldots\bigg{]}\] \[+a_{5,0}(s+u)\bigg{[}(s^{2}+su+u^{2})^{2}-\zeta_{2}su(s^{4}-s^{3} u-3s^{2}u^{2}-su^{3}+u^{4})+\ldots\bigg{]}\] \[+a_{6,1}su(s+u)^{2}\bigg{[}(s^{2}+su+u^{2})-\zeta_{2}su(s^{2}+su+u ^{2})+\ldots\bigg{]}\] \[+\ldots\]
The coefficients in the expression (114) can be shifted while preserving the Mandelstam polynomial of lowest degree in each \(Q_{i}^{(k)}(s,u)\). For example, taking \(a_{3,0}\to\tilde{a}_{3,0}-\zeta_{2}a_{1,0}\) changes the term \(-\zeta_{2}su\) in the first line of (114) to \(-\zeta_{2}(s+u)^{2}=-\zeta_{2}t^{2}\) while also modifying higher powers in the Mandelstams multiplying \(a_{1,0}\). Next, take \(a_{5,0}\to\tilde{a}_{5,0}-\tfrac{3}{4}\zeta_{4}a_{1,0}\) to make the \(\zeta_{4}\)-terms in the first line of (114) only depend on \(t\). Doing this repeatedly, we find that the series of Mandelstam terms that multiply \(a_{1,0}\) only depends on \(t\). Moreover, the terms are easily recognized as those in the series expansion of \(\sin(\pi t)/\pi\). Thus, after these basic reparametrizations, we find evidence that the coefficient of \(a_{1,0}\) resums to \(\sin(\pi t)/\pi\). Remarkably, a similar set of shifts also works for the higher-order coefficients \(\tilde{a}_{3,0}\), \(\tilde{a}_{4,1}\) etc, bringing each of them to a form that can be resummed to \(\sin(\pi t)/\pi\) times a fully symmetric polynomial in \(s,t,u\) of degree \(k-1\). We have checked this explicitly to \(k=20\) and find
that
\[\sum_{k,i}A_{i}^{(k)}Q_{i}^{(k)}(s,u) = -\frac{1}{\pi}\sin(\pi t)\bigg{[}\tilde{a}_{1,0}+\tilde{a}_{3,0}\, \sigma_{2}+\tilde{a}_{4,1}\,\sigma_{3}+\tilde{a}_{5,0}\,\sigma_{2}^{2}+\tilde{a }_{6,1}\,\sigma_{2}\sigma_{3} \tag{7.13}\] \[\qquad\qquad\qquad+\tilde{a}_{7,0}\,\sigma_{2}^{3}+\tilde{a}_{7,2}\,\sigma_{3}^{2}+\tilde{a}_{8,1}\,\sigma_{2}^{2}\sigma_{3}+\ldots\bigg{]}\,,\]
where we have defined
\[\sigma_{2}=\frac{1}{2}(s^{2}+t^{2}+u^{2})\quad\text{and}\quad\sigma_{3}=-stu \tag{7.14}\]
and the sum continues over all the independent Mandelstam polynomials \(\sigma_{2}^{n}\sigma_{3}^{m}\) fully symmetric in \(s,t,u\). The coefficients in (7.13) are related to those in (7.12) via finite shifts:
\[\tilde{a}_{1,0}= a_{1,0}\,, \tag{7.15}\] \[\tilde{a}_{3,0}= a_{3,0}+\zeta_{2}\,a_{1,0}\,,\] \[\tilde{a}_{4,1}= a_{4,1}\,,\] \[\tilde{a}_{5,0}= a_{5,0}+\zeta_{2}\,a_{3,0}+\frac{7}{4}\zeta_{4}\,a_{1,0}\,,\] \[\tilde{a}_{6,1}= a_{6,1}+\zeta_{2}\,a_{4,1}\,,\] \[\tilde{a}_{7,0}= a_{7,0}+\zeta_{2}\,a_{5,0}+\frac{7}{4}\zeta_{4}\,a_{3,0}+ \frac{31}{16}\zeta_{6}\,a_{1,0}\,,\] \[\tilde{a}_{7,2}= a_{7,2}-9a_{7,0}+3\zeta_{2}\,a_{5,0}+\frac{9}{4}\zeta_{4}\,a_{3,0} +\frac{9}{4}\zeta_{6}\,a_{1,0}\,,\] \[\tilde{a}_{8,1}= a_{8,1}+\zeta_{2}\,a_{6,1}+\frac{7}{4}\zeta_{4}\,a_{4,1}\,, \quad\text{ etc}\]
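As a low-order sanity check of this resummation, one can verify symbolically that after the shift \(a_{3,0}\to\tilde{a}_{3,0}-\zeta_{2}a_{1,0}\) the degree-\(\leq 3\) terms multiplying \(a_{1,0}\) in (114) collapse onto the expansion of \(-\sin(\pi t)/\pi\). A minimal sympy sketch:

```python
import sympy as sp

s, u = sp.symbols("s u")
t = -s - u                      # s + t + u = 0
z2 = sp.pi**2 / 6               # zeta_2

# Degree-<=3 pieces of the brackets multiplying a_{1,0} and a_{3,0} in (114)
B10 = (s + u) * (1 - z2 * s * u)
B30 = (s + u) * (s**2 + s * u + u**2)

# Shift a_{3,0} = atilde_{3,0} - zeta_2 * a_{1,0}: the series multiplying a_{1,0}
# becomes B10 - zeta_2 * B30.
shifted = sp.expand(B10 - z2 * B30)

# Compare with -sin(pi t)/pi expanded to the same order: -t + pi^2 t^3 / 6
target = sp.expand(-t + sp.pi**2 * t**3 / 6)
print(sp.simplify(shifted - target))    # 0
```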
The point of this different parameterization of the low-energy amplitude is that for any choice of monovariables \(r_{i}^{(k)}\) in the allowed region, the large-\(k_{\text{max}}\) limit of the S-matrix bootstrap will fix the coefficients \(\tilde{a}_{k,q}\) in the partially resummed symmetric Mandelstam polynomial expression (7.13), as illustrated in Figure 1 and tested in examples in Section 7.2. What this tells us about the UV theory remains a question for the future.
However, we can make some sense of the parameterization in (7.10) and (7.13). Recall that the motivation for the monovariables came from the string monodromy relations (6.2). The Wilson coefficients that appear in \(\sum_{k,i}A_{i}^{(k)}Q_{i}^{(k)}(s,u)\) are those that were left unfixed by the monodromy relations. With that in mind, it is straightforward to see that any function \(f\) of the form
\[f(s,u)=\sin(\pi t)\,g(s,t,u)\,,\quad\,s+t+u=0 \tag{7.16}\]
where \(g\) is fully symmetric in \(s,t,u\), solves the string monodromy relations (6.3). Expanding \(g\) in the most general polynomial form then gives precisely the partially resummed form of \(\sum_{k,i}A_{i}^{(k)}Q_{i}^{(k)}(s,u)\) in (7.13).
Including the general monovariables has nothing to do with the string monodromy relations: their definition was inspired by the monodromy relations, but rather than taking the fixed values of the string (as given in Table 1), they can be chosen freely in the allowed region, and for general choices there need not be any linear amplitude relations associated with the change of variables.16
Footnote 16: The Veneziano amplitude can also be written as
\[A^{\text{str}}[zz\bar{z}\bar{z}]=(\alpha^{\prime}s)^{2}\frac{\sin(\pi\alpha^{ \prime}t)}{\pi}\Gamma(-\alpha^{\prime}s)\Gamma(-\alpha^{\prime}t)\Gamma(- \alpha^{\prime}u). \tag{7.17}\]
In this form, its low-energy expansion takes the form of (7.13) with some particular set of \(\tilde{a}^{\prime}_{k,q}\) coefficients. That \(\tilde{a}^{\prime}_{k,q}\) basis includes dependencies on \(a_{k,q}\) Wilson coefficients that are fixed by monodromy such that the monovariables, some \(\tilde{r}^{\prime(k)}_{i}\), which would appear in (7.11), are all identically zero for the string. This is not the basis that we have chosen because we want to explicitly separate those values we fix with the monodromy constraints from those we do not.
## 8 Discussion
We have studied universal bounds in a simple theory: planar \(\mathcal{N}=4\) SYM with higher-derivative corrections. Supersymmetry allows us to derive dispersive representations for all nonzero Wilson coefficients and the resulting bounds are studied using numerical implementation in SDPB and CPLEX.
A key finding in this paper is the evidence that the EFT-hedron flattens out in the \(k_{\text{max}}\to\infty\) limit. This leads us to conjecture that imposing positivity and fixing two-thirds of the Wilson coefficients (asymptotically) fixes the remaining third of Wilson coefficients. We numerically cross-checked the conjecture for a collection of randomly generated theories in Section 7.2. One moral of this story is that bounds on large numbers of Wilson coefficients are significantly stronger than one might naively expect. It would be interesting to understand what this phenomenon tells us about the UV theory. The novel, partially resummed expansion of the low-energy amplitude may be a step in that direction.
Our analysis shares many features with the pion-bootstrap papers [7; 8]. One difference is that supersymmetry effectively imposes additional constraints and allows us to bound all nonzero Wilson coefficients without having to assume a stronger Froissart bound. As an example of similar results, Figure 1 of [7] shares qualitative features with our \((\bar{a}_{2,0},\bar{a}_{2,1})\) plot in Figure 5.
In the Introduction, we mentioned that the input of string monodromy can be reframed as the assumption that the \(\mathcal{N}=4\) SYM EFT amplitudes are obtained from a double copy (1.6). Let us elaborate on how this can be seen as more of a "pure field theory" input.
Consider the double-copy
\[\begin{split}\text{(YM EFT)}&=\text{(BAS EFT)}\otimes_{\text{FT}}\text{(pure YM)}\,,\\ A_{n}^{\text{YM EFT}}[\alpha]&=\sum_{\beta,\gamma} m_{n}[\alpha|\beta]\,S_{n}[\beta|\gamma]\,A_{n}^{\text{YM}}[\gamma]\,,\end{split} \tag{8.1}\]
where BAS EFT stands for a general Bi-Adjoint Scalar (BAS) model with local higher-derivative interactions and the subscript "FT" indicates that double-copy is done in the field theory limit; i.e. the double-copy kernel \(S_{n}[\beta|\gamma]\) adds no higher-derivative terms. The
BAS EFT tree amplitudes are doubly-color ordered, \(m_{n}[\alpha|\beta]\). In the double-copy (8.1), a choice of \((n-3)!\) out of \((n-1)!\) inequivalent color-orderings for \(\beta\) and \(\gamma\) must be summed over. In order for the result, \(A_{n}^{\text{YM EFT}}\), to be independent of these choices (and thereby validate the double-copy as a map between field theories), it was argued in [40] that the amplitudes \(m_{n}[\alpha|\beta]\) must obey the same linear relations on the second color-structure as the pure YM amplitudes, namely the Kleiss-Kuijf (KK) and Bern-Carrasco-Johansson (BCJ) field theory relations. It was shown in [25] that imposing, as required above, the KK and BCJ relations on the second color-ordering of \(m_{n}[\alpha|\beta]\) implies (as checked to 36th order the derivative expansion at 4-point) that the \(m_{n}[\alpha|\beta]\) amplitude obeys a second set of linear relations on the first color-ordering that are highly constrained by locality. These new relations can be identified as the low-energy expansion of the monodromy relations! Since the YM EFT amplitude \(A_{n}^{\text{YM EFT}}[\alpha]\) inherits its color-structure from \(m_{n}[\alpha|\beta]\), we conclude that any amplitude constructed this way necessarily obeys the low-energy expansion of the monodromy relations.
Adding \(\mathcal{N}=4\) supersymmetry to the double-copy (8.1) gives (1.6) and, putting all the above information together, we arrive at the claim stated in the Introduction: among the 4-point amplitudes that arise from (1.6), the unique one that is compatible with unitarity, locality, and the Froissart bound is the Veneziano open string tree amplitude.
There are a number of possible future directions. Of particular interest would be to investigate how one might isolate string theory in the \(\mathcal{N}=4\) supersymmetric EFT-hedron in a more generic way. Other than imposing monodromy or that the low-energy amplitude must satisfy some double-copy constraints, there could be purely physical assumptions one can add that either uniquely pick out the tree-level open string amplitude or clearly place the string at a corner of the allowed region. Finding such physical assumptions would provide insight into what distinguishes string theory, at least from the low-energy perspective.
We found evidence that fixing about two-thirds of the Wilson coefficients as monovariables is sufficient to show that the EFT-hedron flattens in the \(k_{\text{max}}\to\infty\) limit. It is possible that fixing just some subset of the monovariables is enough to fix all other coefficients. Finding the lower bound on the number of monovariables would reveal the most efficient parameterization of the flattened EFT-hedron.
It is unclear how generic the flattening phenomenon is. For example, it would be interesting to examine whether flattening also occurs in the pion bootstrap [5; 6; 7; 8; 9; 10] or for abelian scalar models as in [3]. A theory with reduced or no supersymmetry has more independent Wilson coefficients and the positivity bounds may be significantly more complex as well. Therefore, one could also examine whether flattening occurs in \(\mathcal{N}=8\) supergravity. In our analysis, the monodromy relations were helpful for identifying which directions the flattening happens along. The challenge of studying flattening in other cases, especially those without color-structure, is that there is no obvious candidate for a replacement of monodromy relations.
## Acknowledgements
We would like to thank Jan Albert, Enrico Hermann, Loki Lin, Andrew Neitzke, Leonardo Rastelli, David Poland, and Nick Geiser for useful comments and discussions. HE and JB are supported in part by DE-SC0007859. AH was supported in part by a Rackham Predoctoral Fellowship from the University of Michigan and in part by the Simons Foundation.
## Appendix A Convergence of Numerical Results
The semi-definite and linear programming algorithms described in Sections 4.2 and 4.3 approximate the EFT-hedron from the inside due to the truncation in \(\ell\) and, in the case of linear programming, \(x_{\rm max}\). In Section 5.2, we discussed how CPLEX approaches the SDPB results as we increase \(x_{\rm max}\) for given \(k_{\rm max}\) and \(\ell_{\rm max}\). In this Appendix, we illustrate the dependence of the SDPB results on \(\ell_{\rm max}\) and describe how we find accurate bounds on Wilson coefficients.
### Convergence without Monodromy
Because of the spin cut-off \(\ell_{\rm max}\), the bounds on ratios of Wilson coefficients are approximated from the inside of the allowed region for given fixed \(k_{\rm max}\). With higher \(k_{\rm max}\), i.e. more null constraints, it is necessary to increase \(\ell_{\rm max}\) to get accurate results. To find the necessary \(\ell_{\rm max}\), we compute the bound for a given Wilson coefficient by increasing \(\ell_{\rm max}\) until we obtain convergence as a function of \(\ell_{\rm max}\). The computation time typically increases linearly with \(\ell_{\rm max}\). Practically, for our plots we use the value of \(\ell_{\rm max}\) that matches the asymptotic bound with the precision needed.
For example, in Fig. 14, we stop computing the maximal values for \(a_{2,0}\) at \(\ell_{\rm max}=600\). We fit the points up to \(\ell_{\rm max}\) to a power law function
\[a_{2,0}^{\rm max}(\ell_{\rm max})=\frac{A}{\ell_{\rm max}^{\gamma}}+b \tag{104}\]
to find the asymptotic value, \(b\). When we plot the point with \(a_{2,0}=b\) at the string value on the right hand side of Fig. 5, there is no visual difference to the value computed at \(\ell_{\rm max}=600\). In this sense, the \(\ell_{\rm max}=600\) value is precise enough for our plots. This is the general technique we use to determine the minimal \(\ell_{\rm max}\) to use when computing bounds.
Figure 14: Values of the maximal \(a_{2,0}/a_{0,0}\) at \(k_{\rm max}=20\) when \(a_{2,1}/a_{0,0}\) is fixed to its string value from points with \(\ell_{\rm max}\) between 200 and 600. The orange curve is a fit of these points as a power law and the asymptotic value is given as a gray, dashed line.
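A minimal sketch of this extrapolation step (using scipy's `curve_fit` on illustrative, made-up bound values — the actual inputs are the SDPB outputs shown in Fig. 14):

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(ell_max, A, gamma, b):
    return A / ell_max**gamma + b

# Illustrative (made-up) maximal a_{2,0}/a_{0,0} values at increasing ell_max.
ell = np.array([200, 300, 400, 500, 600], dtype=float)
bounds = np.array([0.66177, 0.66096, 0.66063, 0.66045, 0.66034])

popt, _ = curve_fit(power_law, ell, bounds, p0=(1.0, 1.5, 0.66))
A_fit, gamma_fit, b_fit = popt
print(f"asymptotic bound b = {b_fit:.5f}, decay exponent gamma = {gamma_fit:.2f}")
```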
### Convergence with Monodromy
Adding in monodromy constraints increases the needed \(\ell_{\rm max}\). Figure 15 shows a plot of the maximum and minimum of \(a_{1,0}\) given by SDPB as a function of \(\ell_{\rm max}\) at \(k_{\rm max}=8\) with monodromy constraints imposed. It is clear that the lower and upper bounds converge starting around \(\ell_{\rm max}=800\) and \(\ell_{\rm max}=500\), respectively. Other plots for upper bounds on \(a_{3,0}\) for \(k_{\rm max}=3,5,7,8\) are given in Figure 16. These plots show how a too-low \(\ell_{\rm max}\) yields nonsensical, non-monotonically increasing results. Therefore, for each \(k_{\rm max}\), one needs to take \(\ell_{\rm max}\) large enough to get extremization values that converge properly. We cross-checked results between CPLEX and SDPB at higher \(\ell_{\rm max}\) and found agreement. As discussed in Sections 4.3 and 5.2, for CPLEX, one similarly has to benchmark the fineness of discretization, \(x_{\rm max}\).
|
2303.10611 | Rethinking Dual-Domain Undersampled MRI reconstruction: domain-specific
design from the perspective of the receptive field | Undersampled MRI reconstruction is crucial for accelerating clinical
scanning. Dual-domain reconstruction network is performant among SoTA deep
learning methods. In this paper, we rethink dual-domain model design from the
perspective of the receptive field, which is needed for image recovery and
K-space interpolation problems. Further, we introduce domain-specific modules
for dual-domain reconstruction, namely k-space global initialization and
image-domain parallel local detail enhancement. We evaluate our modules by
translating a SoTA method DuDoRNet under different conventions of MRI
reconstruction including image-domain, dual-domain, and reference-guided
reconstruction on the public IXI dataset. Our model DuDoRNet+ achieves
significant improvements over competing deep learning methods. | Ziqi Gao, S. Kevin Zhou | 2023-03-19T09:15:50Z | http://arxiv.org/abs/2303.10611v2 | # DuDoRNeXt: A hybrid model for dual-domain undersampled MRI reconstruction
###### Abstract
Undersampled MRI reconstruction is crucial for accelerating clinical scanning procedures. Recent deep learning methods for MRI reconstruction adopt CNN or ViT as the backbone, which does not exploit the complementary properties of CNN and ViT. In this paper, we propose DuDoRNeXt, whose backbone hybridizes CNN and ViT in a domain-specific, intra-stage way. Besides our hybrid vertical layout design, we introduce domain-specific modules for dual-domain reconstruction, namely image-domain parallel local detail enhancement and k-space global initialization. We evaluate different conventions of MRI reconstruction including image-domain, k-space-domain, and dual-domain reconstruction with a reference protocol on the IXI dataset and an in-house multi-contrast dataset. DuDoRNeXt achieves significant improvements over competing deep learning methods.
Keywords: Dual-domain MRI reconstruction · Vision Transformer · Neural network · Hybrid model
## 1 Introduction
Magnetic resonance imaging (MRI) is a non-invasive and flexible imaging modality widely used in clinical practice. A key difficulty in utilizing MRI is that complete K-space measurements lead to prohibitively long acquisition times, while fewer measurements lead to aliasing and blurring in the image. Undersampled MRI reconstruction aims to reconstruct the high-quality, clean MRI image from its low-quality, aliased counterpart. Previously, Compressed Sensing (CS) and Parallel Imaging (PI) accelerated MRI acquisition by a factor of 2-3. Since the revolutionary work [27], convolutional neural networks (CNN) have become the primary workhorse for undersampled MRI reconstruction.
Many CNN-based methods [31, 14, 23] focus on elaborate architecture designs such as residual learning [10] and dense connections[12] stemming from UNet [3] or other existing baseline methods. Customization of conventional CNNs further benefited MRI reconstruction, including K-space data consistency (DC) [25, 22], dual-domain recurrent learning [5, 34], over-complete representation [9]. Recently, Transformer [26, 4] has been considered as an alternative to CNN. Its Multi-head Self-Attention (MSA) mechanism captures long-range interactions among
contexts globally or inside local windows [17]. As the success of Transformer is now indisputable in computer vision, Transformer has shown great potential for undersampled MRI reconstruction as well [13, 33, 8, 7, 19].
Although performant, ViTs have not fully replaced CNNs, as ViTs require a larger amount of training data due to a lower inductive bias and need longer training schedules [30, 32]. Furthermore, CNNs and ViTs have different emphases. Fig. 1 reveals their distinctive emphases on MRI reconstruction. DuDoRNet is a CNN model while Dual-SwinIR mostly consists of Swin Transformer blocks. DuDoRNet extracts small, isolated features better (the red box) while Dual-SwinIR generates sharper results for large structural details (the green box). Finally, DuDoRNet generates (sometimes wrong) higher-frequency details, which are seen especially in soft tissues, while Dual-SwinIR generates a smoother image. In recent works decomposing the Transformer, from basic theory [20] to empirical network design [18, 28], a potential direction for modernizing deep learning models arises: **hybridizing CNNs and ViTs**. While several works [30, 32, 20] in computer vision show the effectiveness of hybrid structures, there is no research systematically studying a hybrid model for MRI reconstruction.
In our study, (1) we systematically study designing hybrid models for MRI reconstruction and propose a vertical layout design of hybrid MRI reconstruction models under a computational constraint; (2) we propose **DuDoRNeXt** with several domain-specific modules for MRI reconstruction, including image-domain parallel local detail enhancement and k-space global initiation; and (3) we test our model on multiple settings of MRI reconstruction, including reconstruction guided by a reference protocol. The public IXI-dataset and in-house multi-contrast MRI
dataset are used for evaluation and ablation study, respectively. The results show that our models exceed baseline comparison methods in all settings.
Figure 1: (a) Ground truth and the reconstructed T2 images by a CNN model DuDoRNet, a ViT model Dual-SwinIR, and our hybrid model DuDoRNeXt. The green box is an instance of ROI with rich structures while the red one contains a small, isolated object. (b)(c) Zoom-in of two ROIs. (d) PSNR distribution of 36 reconstructed testing images.
## 2 Method
### Undersampled MRI Reconstruction
Let \(k_{u}\in\mathbb{C}^{mn}\) and \(k_{f}\in\mathbb{C}^{mn}\) be the undersampled and fully-sampled k-space signal respectively; \(i_{u}\in\mathbb{C}^{mn}\), \(i_{f}\in\mathbb{C}^{mn}\), and \(i_{r}\in\mathbb{C}^{mn}\) be the undersampled, fully-sampled and reconstructed image signal respectively; \(M\in\mathbb{R}^{mn}\) be the binary k-space mask for acceleration.
The undersampled MRI reconstruction can be formulated as an image recovery problem [27, 9, 1, 15, 13] with K-space DC as a regularisation term[25]:
\[\operatorname*{arg\,min}_{\theta_{i}}\left(\left\|i_{f}-\mathcal{P}_{i}\left( i_{u};\theta_{i}\right)\right\|_{2}^{2}\right.\left.+\lambda\left\|k_{u}-M \odot\mathcal{F}\left(\mathcal{P}_{i}\left(i_{u};\theta_{i}\right)\right) \right\|_{2}^{2}\right), \tag{1}\]
where \(\mathcal{F}\) is 2D discrete Fourier Transform and the approximation function \(\mathcal{P}_{x}(\cdot;\theta_{x})\) is used to predict a reconstructed signal \(x_{r}\) given its parameter \(\theta_{x}\) and any undersampled input. A few works [34, 5, 19] leverage the relationship between image and k-space domain using Fourier Transform pairs \(\left(\mathcal{F},\mathcal{F}^{-1}\right)\) and transform MRI reconstruction into a multivariable optimization problem described as
\[\begin{split}\operatorname*{arg\,min}_{\theta_{i},\theta_{k}}& \left(\left\|k_{f}-\mathcal{P}_{k}\left(\mathcal{F}\left(\mathcal{P}_{i} \left(i_{u};\theta_{i}\right)\right);\theta_{k}\right)\right\|_{2}^{2}+ \left\|i_{f}-\mathcal{P}_{i}\left(\mathcal{F}^{-1}\left(\mathcal{P}_{k} \left(k_{u};\theta_{k}\right)\right);\theta_{i}\right)\right\|_{2}^{2}\\ &+\left.\lambda\left\|k_{u}-M\odot\mathcal{F}\left(\mathcal{P}_{i }\left(\mathcal{F}^{-1}\left(\mathcal{P}_{k}\left(k_{u};\theta_{k}\right) \right);\theta_{i}\right)\right)\right\|_{2}^{2}\right),\end{split} \tag{2}\]
and solve it using a dual-domain recurrent learning strategy [34].
Recently, a growing number of works [21, 29, 6, 34, 33, 19] utilize a fast-to-acquire fully-sampled auxiliary MRI protocol to guide the reconstruction of a slow protocol.
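For concreteness, a minimal PyTorch sketch of the image-domain objective in Eq. (1) is given below; the tensor names and the stand-in `recon_net` for \(\mathcal{P}_{i}\) are our illustrative assumptions rather than a description of any particular model:

```python
import torch

def dc_regularized_loss(recon_net, i_u, i_f, k_u, mask, lam=0.1):
    """Image-recovery loss with the k-space data-consistency term of Eq. (1).

    i_u, i_f: complex undersampled / fully-sampled images, shape (B, H, W)
    k_u:      undersampled k-space measurements, shape (B, H, W)
    mask:     binary sampling mask M, shape (B, H, W) or (1, H, W)
    """
    i_r = recon_net(i_u)                               # P_i(i_u; theta_i)
    k_r = torch.fft.fft2(i_r, norm="ortho")            # F(P_i(i_u))
    recon_term = (i_f - i_r).abs().pow(2).mean()       # ||i_f - P_i(i_u)||^2
    dc_term = (k_u - mask * k_r).abs().pow(2).mean()   # ||k_u - M . F(P_i(i_u))||^2
    return recon_term + lam * dc_term
```

The dual-domain objective in Eq. (2) adds the analogous k-space term with \(\mathcal{P}_{k}\) and couples the two networks through the Fourier transform pair.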
### DuDoRNeXt: Towards Hybridizing CNNs and ViTs
**Design motivation.** The design of DuDoRNeXt is motivated by two findings: (i) ViTs benefit dual domain reconstruction with larger receptive fields and (ii) vertical layout design works better when hybridizing CNNs and ViTs in an intra-stage manner. The details behind these motivations are elaborated in supplementary materials.
**Architecture description.** Compared with DuDoRNet[34], we improve dual domain recurrent blocks using an intra-stage CNN-ViT hybrid strategy and customize global and local structure based on domain-specific properties. Our model is illustrated in Figure 2, including domain-specific Shallow Feature Extraction (X-SFE), Global Feature Refinement(GFR) and 4-stage domain-specific hybrid
building blocks (X-BB) as backbone. Global residual learning and global feature fusion are preserved in our model. The overall pipeline goes as follows:
\[F_{-1}=Conv^{3}(x_{u}),F_{0}^{C}=Conv^{3}(F_{-1}). \tag{3}\]
where \(Conv^{k}\) denotes a convolution operation with kernel size \(k*k\) and \(F_{-1}\) denotes the first extracted feature used for global residual learning. \(F_{0}^{C}\) denotes the intermediate extracted feature by convolution. The second extracted feature \(F_{0}\) is the input of X-\(S_{i}\). For image domain, \(F_{0}=F_{0}^{C}\). For K-space, a Global Initiation Module (K-GLIM) is introduced based on the idea of a sketchy K-space initiation: \(F_{0}=P_{K-GLIM}(F_{0}^{C})=P_{K-GLIM}(Conv^{3}(F_{-1}))\). Global Feature Fusion (GFF) fuses features from X-BB's and is input for global residual learning:
\[F_{i}=P_{X-S_{i}}(F_{i}-1),x_{r}=P_{GFR}(F_{-1}+P_{GFF}(concat(F_{1},F_{2},F_{3 },F_{4}))), \tag{4}\]
where GFF consists of 1x1 and 3x3 convolutions and GFR consists of two 3x3 convolutions, creating the refined reconstructed image/K-space \(x_{r}\).
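A minimal PyTorch sketch of the pipeline in Eqs. (3)-(4) is given below; module names, hidden sizes, and output channels are illustrative assumptions, and the four X-BB stages are left as identity placeholders since their internals are described next:

```python
import torch
import torch.nn as nn

class RecurrentBlockSkeleton(nn.Module):
    """Skeleton of one recurrent block: SFE -> 4 building blocks -> GFF -> GFR."""

    def __init__(self, in_ch=2, feat=64, num_stages=4):
        super().__init__()
        self.sfe1 = nn.Conv2d(in_ch, feat, 3, padding=1)              # Conv^3 -> F_{-1}
        self.sfe2 = nn.Sequential(nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        # Placeholders for the X-BB stages described below.
        self.stages = nn.ModuleList([nn.Identity() for _ in range(num_stages)])
        self.gff = nn.Sequential(                                     # 1x1 then 3x3 conv
            nn.Conv2d(num_stages * feat, feat, 1),
            nn.Conv2d(feat, feat, 3, padding=1),
        )
        self.gfr = nn.Sequential(                                     # two 3x3 convs
            nn.Conv2d(feat, feat, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(feat, in_ch, 3, padding=1),
        )

    def forward(self, x_u):
        f_m1 = self.sfe1(x_u)         # F_{-1}, used for global residual learning
        f = self.sfe2(f_m1)           # F_0 (K-GLIM would be inserted here for k-space)
        feats = []
        for stage in self.stages:
            f = stage(f)              # F_i = P_{X-BB_i}(F_{i-1})
            feats.append(f)
        fused = self.gff(torch.cat(feats, dim=1))
        return self.gfr(f_m1 + fused)  # refined reconstruction x_r

x_r = RecurrentBlockSkeleton()(torch.randn(1, 2, 224, 224))
print(x_r.shape)  # torch.Size([1, 2, 224, 224])
```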
**X-BB: Domain-specific hybrid building block.** The design of X-BB is shown in the lower right part of Figure 2. It is constructed by vertically stacking Dilated Residual Dense Block (DRDB) and Spatial Local Transformer Block followed by a Hybrid Local Feature Fusion (HLFF).
Following DuDoRNet, we use DRDB for effective local feature extraction, while our DRDB composes a smaller feature pyramid using one fewer atrous convolution layer, with dilation rates of 1, 2, and 4, expressed by: \(F_{i}^{C}=P_{DRDB}(F_{i-1})\), where \(F_{i}^{C}\) denotes convolutional features in the \(i_{th}\) stage. Then, a Spatial Local Transformer Block is used to aggregate convolution features \(F_{i}^{C}\) in a large local window, whose main part is Windowed Multi-head Self-Attention (W-MSA):
\[F_{i}^{H^{\prime}}=W-MSA(LN(WinEmb(F_{i}^{C},w)))+WinEmb(F_{i}^{C},w), \tag{5}\]
Figure 2: Framework of a recurrent block of DuDoRNeXt. It is constructed by a domain-specific Shallow Feature Extraction (X-SFE), Global Feature Refinement(GFR) and 4-stage domain-specific hybrid building blocks(X-BB). Building blocks’ color follow the notation in the color table. Convolution is followed by ReLU unless noted.
\[F_{i}^{H}=W-MSA(LN(WinEmb(F_{i}^{C},w)))+WinEmb(F_{i}^{C},w)+F_{i}^{CE}. \tag{8}\]
**K-GLIM: K-space global initiation.** Missing K-space recovery is an interpolation problem. MSAs of larger receptive fields are placed at the end of each stage, making interpolation difficult for earlier layers. To this end, we propose a K-space global initialization module (K-GLIM) placed at the beginning of DuDoRNeXt. K-GLIM is applied on the transformed features of two convolution layers in SFE, providing channel-wise interaction and a global view. The main part of K-GLIM is a Channel-wise Multi-head Self-attention (C-MSA). Instead of performing pixel-level or patch-level attention in a conventional spatial attention way, C-MSA is performed on the transpose of pixel-level tokens. Similar
to MSA, C-MSA is an extension of Self-Attention (C-SA) in which \(h\) times SA operations are run. C-SA can be expressed as \(\mathbf{z}_{j}^{c}=\sum_{i}\text{Softmax}\left(\frac{\mathbf{Q}^{T}\mathbf{K}}{\sqrt{d}} \right)_{i}\mathbf{V}_{i,j}^{T}\), and its computational complexity is \(O(6(hw)C^{2})\), linear to \((hw)\). C-MSA naturally captures global information and interactions for visual recognition tasks and complements (windowed-)spatial attention[2]. Compared with channel-wise convolution, it is data-specific and fine-grained.
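A minimal PyTorch sketch of the channel-wise self-attention underlying C-MSA (single head only; the 1x1-convolution projections and the scaling choice are our illustrative assumptions):

```python
import torch
import torch.nn as nn

class ChannelSelfAttention(nn.Module):
    """Single-head channel-wise self-attention (C-SA): the C x C attention map
    is built from transposed pixel-level tokens, so cost is linear in H*W."""

    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Conv2d(dim, dim, 1, bias=False)   # 1x1 convs as Q/K/V projections
        self.to_k = nn.Conv2d(dim, dim, 1, bias=False)
        self.to_v = nn.Conv2d(dim, dim, 1, bias=False)
        self.scale = dim ** -0.5                         # sketch choice for 1/sqrt(d)

    def forward(self, x):                                # x: (B, C, H, W)
        b, c, h, w = x.shape
        q = self.to_q(x).flatten(2)                      # (B, C, HW): one token per channel
        k = self.to_k(x).flatten(2)
        v = self.to_v(x).flatten(2)
        attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)   # (B, C, C)
        out = attn @ v                                   # (B, C, HW)
        return out.view(b, c, h, w)

y = ChannelSelfAttention(64)(torch.randn(2, 64, 32, 32))
print(y.shape)  # torch.Size([2, 64, 32, 32])
```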
## 3 Experiment
### Settings and Results
**Dataset and training.** Our evaluation is carried out on the Multi-Contrast IXI dataset1. We use all 575 subjects with paired T2-PD and uniformly sample 14 slices from each subject volume. We split the dataset patient-wise into training, validation and testing sets with a ratio of \(7:1:2\), corresponding to 5628 training images, 812 validation images, and 1610 test images per protocol. Images are edge-cropped from 256x256 to 224x224. Code is written in Pytorch and experiments are performed using an NVIDIA GeForce RTX 3090 with 24GB memory. As for the rest, we follow the same experiment settings as DuDoRNet[34], including the loss function. As a result, our experiments are run under the best setup of DuDoRNet. It is conceivable that more extensive hyper-parameter searches may further improve the performance of our hybrid model.
Footnote 1: [https://brain-development.org/ixi-dataset/](https://brain-development.org/ixi-dataset/), CC BY-SA 3.0 license
**Performance evaluation.** We compare our method with baseline deep learning methods in three conventions of MRI reconstruction: image-domain [24, 34, 6, 9, 13], dual-domain [24, 34, 33] and reference-protocol-guided dual-domain reconstruction [29, 6, 34, 33, 19]. All reference-guided methods except DuDoRNet are self-implemented and examined without multi-modal fusion modules, for controlled backbone comparisons. We further group these methods by their backbones: [6, 29] adopt Dense-Unet, while [33, 19, 13] share similar Swin-Transformer backbones derived from SwinIR [16]. [33] proposed k-space filling using the reference protocol for initial de-aliasing in self-supervised reconstruction, yet it does not improve the model's performance when fully supervised.
All models besides UNet [24] have \(\approx 1\)M parameters, obtained by tuning only the hidden dimensions; UNet has 2M parameters. All models are recurred twice and a DC layer is added at the end of each recurrent block. The recurrence of the local residual blocks in OUCR [9] is set to 5, complying with their default setting; the DCs at the end of their local recurrent structure are discarded while those at the end of their global recurrent structure are preserved, for fairness. All models are trained for 100 epochs, and the hyperparameters of each method are tuned on the validation set with the test data held out for final evaluation. We consider a Cartesian sampling pattern with acceleration rates ranging from 4 to 8; the center sampling fraction \(R_{acs}\) is set to 0.125. Peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) are used as the quantitative evaluation metrics.
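For reference, a minimal sketch of a Cartesian undersampling mask of this kind is given below: a fraction \(R_{acs}=0.125\) of the central phase-encoding lines is always kept, and further lines are drawn at random so that the total number of sampled lines matches the acceleration rate. The exact sampling convention used in our experiments may differ; this is only meant to make the setting concrete.

```python
import numpy as np

def cartesian_mask(n_lines: int = 224, acceleration: int = 4,
                   center_fraction: float = 0.125, seed: int = 0) -> np.ndarray:
    """Boolean mask over phase-encoding lines: fully sampled centre + random lines."""
    rng = np.random.default_rng(seed)
    mask = np.zeros(n_lines, dtype=bool)
    n_center = int(round(n_lines * center_fraction))
    n_total = n_lines // acceleration                 # lines kept in total
    start = (n_lines - n_center) // 2
    mask[start:start + n_center] = True               # auto-calibration region
    remaining = np.flatnonzero(~mask)
    extra = max(n_total - n_center, 0)
    mask[rng.choice(remaining, size=extra, replace=False)] = True
    return mask

mask = cartesian_mask(acceleration=8)
print(mask.sum(), "of", mask.size, "lines sampled")    # at 8x only the centre is kept
```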
**Results on undersampled MRI reconstruction.** In Table 1, we demonstrate PD reconstruction evaluations using \(\times 4\), \(\times 6\) and \(\times 8\) acceleration in three common approaches of MRI reconstruction: image-domain, dual-domain, and T2-guided dual-domain MRI reconstruction. The best results under the same setting and acceleration rate are colored red. Our method achieves **the best performance** under all settings and acceleration rates. Notice that only the center region in K-space is sampled with \(\times 8\) acceleration, an extreme case to the disadvantage of K-space recovery networks. While other dual-domain models show performance drops compared with their image-domain counterparts, DuDoRNeXt maintains its superior performance.
Results on \(\times 5\) and \(\times 7\) acceleration and qualitative comparisons, which show similar performance, are provided in the supplementary materials. Fig. 1 visualizes the reconstructed images from different models, with zoom-in comparisons.
### Ablation Study
To isolate the various components of our hybrid model, we carry out an ablation study using an in-house MRI dataset of 20 patients with pre-aligned T1 and T2, under the setting of reference-guided (T1) dual-domain reconstruction. All other experimental settings are the same as those above.
**Hybrid strategy.** Firstly, we evaluate our hybrid strategy from two perspectives: reconstruction performance and runtime. To evaluate the hybrid strategy alone, we compare our model without the domain-specific modules against naive inter-stage hybrid models. The evaluation is summarized in Figure 3. Hybridizing CNN and ViT improves performance even with a naive strategy, while our hybrid strategy achieves the best performance-runtime tradeoff.
\begin{table}
\begin{tabular}{l|l l l|l l l|l l l} \hline \hline \multirow{2}{*}{Acceleration} & \multicolumn{3}{l|}{4x} & \multicolumn{3}{l|}{6x} & \multicolumn{3}{l}{8x} \\ \cline{2-10} & PSNR & SSIM & MSE & PSNR & SSIM & MSE & PSNR & SSIM & MSE \\ \hline Zero Padding & 25.17\({}^{\pm 1.80}\) & 80.20\({}^{\pm 6.34}\) & 223.51 & 24.86\({}^{\pm 1.81}\) & 80.36\({}^{\pm 2.27}\) & 240.99 & 24.16\({}^{\pm 1.79}\) & 79.90\({}^{\pm 6.37}\) & 282.28 \\ \hline Unet & 31.73\({}^{\pm 1.98}\) & 95.48\({}^{\pm 2.12}\) & 53.03 & 29.91\({}^{\pm 2.01}\) & 94.32\({}^{\pm 2.46}\) & 80.91 & 26.98\({}^{\pm 2.01}\) & 91.43\({}^{\pm 3.75}\) & 157.41 \\ Dense-Unet & 32.49\({}^{\pm 2.23}\) & 96.47\({}^{\pm 1.79}\) & 46.10 & 30.50\({}^{\pm 2.18}\) & 95.10\({}^{\pm 2.40}\) & 72.08 & 27.70\({}^{\pm 1.97}\) & 92.16\({}^{\pm 3.50}\) & 132.27 \\ OUCR & 32.52\({}^{\pm 2.26}\) & 96.57\({}^{\pm 1.76}\) & 45.90 & 30.59\({}^{\pm 2.19}\) & 95.22\({}^{\pm 2.37}\) & 70.79 & 27.80\({}^{\pm 2.00}\) & 92.33\({}^{\pm 3.44}\) & 129.40 \\ SwinIR & 32.75\({}^{\pm 2.27}\) & 96.69\({}^{\pm 1.71}\) & 43.60 & 30.65\({}^{\pm 2.24}\) & 95.26\({}^{\pm 2.37}\) & 70.11 & 27.78\({}^{\pm 2.08}\) & 92.49\({}^{\pm 4.5}\) & 130.76 \\ DuDoRNet\_J & 32.95\({}^{\pm 2.28}\) & 96.75\({}^{\pm 1.69}\) & 41.82 & 30.88\({}^{\pm 2.22}\) & 95.45\({}^{\pm 2.38}\) & 66.52 & 28.00\({}^{\pm 2.04}\) & 92.76\({}^{\pm 3.32}\) & 123.94 \\ Ours\_J & 33.36\({}^{\pm 2.25}\) & 97.05\({}^{\pm 1.59}\) & 38.47 & 31.13\({}^{\pm 2.29}\) & 95.67\({}^{\pm 2.24}\) & 63.44 & 28.10\({}^{\pm 2.06}\) & 92.82\({}^{\pm 8.33}\) & 121.93 \\ \hline Dual-DenseUnet & 32.76\({}^{\pm 2.22}\) & 96.67\({}^{\pm 1.65}\) & 43.21 & 30.71\({}^{\pm 2.30}\) & 95.21\({}^{\pm 2.32}\) & 68.92 & 27.68\({}^{\pm 1.50}\) & 91.99\({}^{\pm 4.5}\) & 131.75 \\ Dual-SwinIR & 33.37\({}^{\pm 2.44}\) & 97.09\({}^{\pm 1.57}\) & 38.86 & 31.10\({}^{\pm 2.37}\) & 95.60\({}^{\pm 2.29}\) & 64.72 & 27.73\({}^{\pm 2.02}\) & 92.16\({}^{\pm 3.50}\) & 131.70 \\ DuDoRNet & 33.00\({}^{\pm 2.31}\) & 96.81\({}^{\pm 1.63}\) & 41.46 & 30.92\({}^{\pm 2.28}\) & 95.36\({}^{\pm 2.31}\) & 66.60 & 27.84\({}^{\pm 2.05}\) & 92.24\({}^{\pm 4.36}\) & 128.73 \\ Ours (w/o ref) & 33.93\({}^{\pm 2.51}\) & 97.35\({}^{\pm 1.52}\) & 34.99 & 31.60\({}^{\pm 2.48}\) & 95.94\({}^{\pm 2.19}\) & 58.64 & 28.13\({}^{\pm 2.16}\) & 92.68\({}^{\pm 4.61}\) & 122.37 \\ \hline Dual-DenseUnet & 40.23\({}^{\pm 2.57}\) & 99.23\({}^{\pm 0.53}\) & 8.23 & 39.39\({}^{\pm 2.56}\) & 99.09\({}^{\pm 0.62}\) & 10.00 & 38.19\({}^{\pm 2.69}\) & 98.90\({}^{\pm 0.74}\) & 12.99 \\ Dual-SwinIR & 40.63\({}^{\pm 2.65}\) & 99.28\({}^{\pm 0.51}\) & 7.64 & 39.76\({}^{\pm 2.61}\) & 99.15\({}^{\pm 0.59}\) & 9.29 & 38.50\({}^{\pm 2.52}\) & 98.96\({}^{\pm 0.71}\) & 12.19 \\ DuDoRNet & 40.45\({}^{\pm 2.59}\) & 99.26\({}^{\pm 0.51}\) & 7.86 & 39.58\({}^{\pm 2.58}\) & 99.13\({}^{\pm 0.61}\) & 9.62 & 38.38\({}^{\pm 2.52}\) & 98.93\({}^{\pm 0.72}\) & 12.52 \\ Ours & 40.94\({}^{\pm 2.68}\) & 99.33\({}^{\pm 0.49}\) & 7.17 & 40.03\({}^{\pm 2.65}\) & 99.20\({}^{\pm 0.37}\) & 8.81 & 38.70\({}^{\pm 2.56}\) & 98.99\({}^{\pm 0.70}\) & 11.75 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quantitative comparison (PSNR, SSIM (%), MSE (×1e-5)) with baseline methods on the IXI dataset. The first, second and third parts correspond to image-domain reconstruction of PD, dual-domain reconstruction of PD, and dual-domain reconstruction of PD with the reference protocol T2. The best results are marked in red.
**Dual-domain hybrid structure and modules.** Next, we verify our domain-specific designs. We evaluate five main components of our model, namely the hybrid vertical design (HVL), HLFF, K-GLIM, I-PLDE and X-TL. For each component, we apply it to the image domain and then to both domains. If there is a performance drop in either case, we apply it to K-space only to observe the performance. The reason for this design is that we weigh the utility of the image reconstruction network, and the synergy between the image and K-space reconstruction networks, over K-space reconstruction alone. For K-GLIM and I-PLDE, we also try applying them to the other domain, both leading to performance drops, which demonstrates that our design is domain-specific. For X-TL, \(\theta_{x}\) is set to 1 by default, corresponding to the best choice for \(\theta_{i}\); as a result, only \(\theta_{k}\) is modified in the last row. By gradually changing DuDoRNet into ours, our method also provides a possible hybridizing strategy for current CNN models. The effect of the recurrence number is similar to DuDoRNet [34] and results are included in the supplementary materials.
\begin{table}
\begin{tabular}{l|l l l} \hline \hline & PSNR & SSIM & Time \\ \hline DuDoRNet & 33.44 & 96.92 & 9.59 \\ SwinIR & 33.48 & 97.00 & 50.23 \\ Hybrid 1 & 33.48 & 96.97 & 15.19 \\ Hybrid 2 & 33.56 & 97.00 & 20.23 \\ Hybrid 3 & 33.63 & 97.05 & 26.74 \\ Ours w/o DSM & 33.83 & 97.13 & 20.62 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Domain-wise quantitative evaluation (PSNR, SSIM (%)) of the hybrid structure and domain-specific modules. The best results are marked in red.
Figure 3: Hybrid strategy evaluation. Left: Architecture of the naive hybrid strategy. Right: Quantitative comparison (PSNR, SSIM and Inference Time (ms)) of the CNN model, naive hybrid models, our intra-stage hybrid model and the ViT model. “Ours w/o DSM” denotes our intra-stage hybrid model without domain-specific modules. The best results are marked in red.
## 4 Conclusion
We propose DuDoRNeXt, a CNN-ViT hybrid model for MRI reconstruction. By introducing domain-specific, intra-stage hybrid designs, DuDoRNeXt surpasses popular deep learning methods in three common settings of MRI reconstruction. The improvement over DuDoRNet provides a possible direction for improving current CNN or ViT models, since we consider both effectiveness and efficiency. In the future, we will further validate our hybridizing strategy by transforming other baseline models. Future work also includes extending DuDoRNeXt from single-coil to multi-coil reconstruction under different sampling patterns.
|
2302.07044 | Graviton-Photon Conversion in Atoms and the Detection of Gravitons | We study graviton-photon conversion in potential ground-based experiments.
From graviton to photon transition, we calculate the cross section of
graviton-atom interaction in the presence of spherical atomic electric fields;
the obtained results hold for graviton energy around 100 keV to 1 GeV, and
would be enhanced along the coherent length in extremely high frequencies; thus
it gives a chance to catch MeV level gravitons from the universe with current
neutrino facilities. From photon to graviton transition, we propose an
experiment using entangled photon pairs to count missing photons passing
through transverse magnetic tunnel, which could be used to verify the energy
quantization of gravitational field. | Jin Dai, Gui-Rong Liang | 2023-02-14T13:55:25Z | http://arxiv.org/abs/2302.07044v3 | # Graviton-Photon Conversion in Atoms and the Detection of Gravitons
###### Abstract
We study graviton-photon conversion in potential ground-based experiments. For the graviton-to-photon transition, we calculate the cross section of the graviton-atom interaction in the presence of spherical atomic electric fields; the obtained results hold for graviton energies of around 100 keV to 1 GeV, and would be enhanced in two spectral regions: one at frequencies below but close to the internal electron oscillation frequencies, and another at extremely high frequencies. This gives a chance to catch MeV-level gravitons from the universe with current neutrino facilities. For the photon-to-graviton transition, we propose an experiment using entangled photon pairs to count missing photons passing through a transverse magnetic tunnel, which could be used to verify the energy quantization of the gravitational field.
Introduction
The direct detection of gravitational waves (GWs) has led us to the era of gravitational wave astronomy [1]. It signifies the triumph of Einstein's theory of general relativity (GR) -- the geometric description of classical gravity. Yet, the observed frequency band, ranging from \(10\sim 10^{4}\) Hz, is much narrower than that of electromagnetic waves (EMWs), which generally extends from above \(10^{3}\) Hz up to \(10^{26}\) Hz. To observe the quantum aspects of gravity, it is necessary to extend the ceiling of this range to much higher frequencies, preferably to that of visible light. Various methods [2; 3; 4] have been proposed to detect high-frequency GWs, with working mechanisms different from that of interferometry. Graviton-photon conversion (GRAPH [5]), also known as the "gravitational Hertz experiment" [6; 7; 8], is supposed to detect ultra-high-frequency GWs of about \(10^{8}\sim 10^{12}\) Hz. The mechanism works when a background electromagnetic field is provided, with the conversion working in both directions: from graviton to photon (G\(\rightarrow\)P) or from photon to graviton (P\(\rightarrow\)G); thus "mixing" or "oscillation" is sometimes invoked to name it. Since it came to sight, GRAPH has been investigated in a large body of literature. Analytically, the conversion has been solved in a background of simple static electromagnetic (EM) fields and readily generalized to cases with different EM backgrounds [9; 10; 11]. It has since been applied to real astronomical contexts to extract information on the properties of relevant astro-objects [12; 13], the evolution of the universe [14; 15; 16], and even the dark components [17]. Further, it has also been studied in modified theories and models of gravity [18; 19], with higher-order corrections in GR [20; 21; 22; 23], and via new mechanisms such as parametric resonance [24]. On the other side, possible sources generating GWs with such high frequencies have also been proposed [25; 26; 27], with evaporating primordial black holes as one of the important candidates [28; 29; 30; 31], and recently the magnetospheres of a single supermassive black hole as a new origin [13].
In this paper, we mainly focus on GRAPH in ground-based experiments. In particular, we calculate the interaction of gravitons with earth matter through the atomic electric field; the resulting G\(\rightarrow\)P cross section holds for graviton energies of around \(10^{5}\sim 10^{9}\) eV and would be enhanced by diffraction in crystals, thus giving a chance to catch MeV-level gravitons from the universe with current neutrino facilities. Further, we discuss the reverse P\(\rightarrow\)G process in a magnetic tunnel between parallel plates and give hints on how to test the energy quantization of the gravitational field. The paper is organized in a corresponding manner. We provide a general formalism of
GRAPH in this introduction. In Section II we first review G\(\rightarrow\)P conversion in transverse electromagnetic fields, showing the transition probability and its physical implications, and then calculate the process in atoms as an important example, drawing useful inferences for ground-based detection from the results. In Section III, we discuss the possibility of testing the energy quantization of the gravitational field on earth with a beam of entangled photons as a source of the P\(\rightarrow\)G process. Conclusions and prospects are presented in the last section. We work in flat spacetime throughout this paper, and use geometrical units \(c=G=\hbar=1\) in the analytical derivations but revert to SI units when applying the results to phenomenology.
The lowest order GRAPH in a general spacetime is fully described by perturbations of the action
\[S=S_{g}+S_{\rm EM}=\int\,{\rm d}^{4}x\sqrt{-g}\left(\frac{R}{\kappa^{2}}-\frac {1}{4}F_{\mu\nu}F^{\mu\nu}\right) \tag{1}\]
with \(\kappa^{2}\equiv 16\pi\), \(R\) the Ricci scalar, and \(F_{\mu\nu}\) the electromagnetic tensor as
\[F_{\mu\nu}=\nabla_{\mu}A_{\nu}-\nabla_{\nu}A_{\mu}=\partial_{\mu}A_{\nu}- \partial_{\nu}A_{\mu}, \tag{2}\]
and indices are raised by the metric, \(F^{\mu\nu}=g^{\mu\rho}g^{\nu\sigma}F_{\rho\sigma}\). Since we're working in flat spacetime, GW is treated as a perturbation on Minkowski spacetime, the metric is taken to be
\[g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}. \tag{3}\]
Doing the metric expansion on the Einstein-Hilbert action \(S_{g}=\frac{1}{16\pi}\int\,{\rm d}^{4}x\sqrt{-g}R\) of the purely gravitational part to the second order with respect to \(h_{\mu\nu}\), would lead to a Lagrangian
\[{\cal L}_{h}=\frac{1}{2\kappa^{2}}\left(\nabla^{\mu}h^{\lambda\nu}\nabla_{ \lambda}h_{\mu\nu}-\frac{1}{2}\nabla^{\lambda}h^{\mu\nu}\nabla_{\lambda}h_{ \mu\nu}-\nabla^{\rho}h_{\lambda\rho}\nabla^{\lambda}h+\frac{1}{2}\nabla^{ \lambda}h\nabla_{\lambda}h\right) \tag{4}\]
which further gives, after choosing the transverse traceless (TT) gauge, the equation of motion (EOM) of the propagating part of GW
\[\partial_{\lambda}\partial^{\lambda}h_{\mu\nu}=0, \tag{5}\]
generally a wave from solution is given by
\[h_{\mu\nu}=e_{\mu\nu}{\rm e}^{{\rm i}kx}+e^{*}_{\mu\nu}{\rm e}^{-{\rm i}kx}, \tag{6}\]
with \(k_{\mu}=(\omega,0,0,\omega)\), and
\[e_{\mu\nu}=\left(\begin{array}{cccc}0&0&0&0\\ 0&e_{11}&e_{12}&0\\ 0&e_{21}&-e_{11}&0\\ 0&0&0&0\end{array}\right). \tag{7}\]
The propagating part of the EMW and the interactive GRAPH part are encoded in the electromagnetic action \(S_{\rm EM}=-\frac{1}{4}\int\,{\rm d}^{4}x\sqrt{-g}\cdot F_{\mu\nu}F^{\mu\nu}\), and we will see that the interaction part gives a source term to both the free GW and EMW equations.
Now we decompose the full EM fields into a background (with a bar on top) and a free part,
\[A_{\mu}\rightarrow\overline{A}_{\mu}+A_{\mu},\quad F_{\mu\nu}\rightarrow \overline{F}_{\mu\nu}+f_{\mu\nu}. \tag{8}\]
Keeping terms containing both \(f_{\mu\nu}\) and \(h_{\mu\nu}\) up to the 2nd order, we expand the EM action to obtain
\[\delta S_{\rm EM}=\int\,{\rm d}^{4}x\left(-\frac{1}{4}f_{\mu\nu}f^{\mu\nu}+ \overline{F}_{\lambda(\mu}f^{\lambda}_{\ \nu)}h^{\mu\nu}-\frac{1}{4}h^{\lambda}_{\ \lambda}\overline{F}^{\rho\sigma}f_{\rho\sigma}\right), \tag{9}\]
the corresponding Lagrangian is naturally composed of a free term and an interaction term,
\[{\cal L}_{\rm EM}={\cal L}_{f}+{\cal L}_{\rm int}=-\frac{1}{4}f_{\mu\nu}f^{ \mu\nu}+\frac{1}{2}{\cal T}^{\mu\nu}h_{\mu\nu} \tag{10}\]
with the "interactive tensor", we name it, governing the core of GRAPH, written as
\[{\cal T}^{\mu\nu}=\overline{F}^{\mu\lambda}f^{\nu}_{\ \lambda}-\frac{1}{4}\eta^ {\mu\nu}\overline{F}_{\rho\sigma}f^{\rho\sigma}. \tag{11}\]
Piecing \({\cal L}_{\rm EM}\) and \({\cal L}_{h}\) together will give the full description of GRAPH, with the source term also obtained by choosing TT gauge.
For G\(\rightarrow\)P conversion, \({\cal L}_{\rm EM}\) alone is enough, but written in a more explicit form with a current \(J^{\mu}\) extracted as
\[J_{\rho}=-\partial^{\sigma}\left[\left(\overline{F}_{\mu\rho}\eta_{\nu\sigma}-\overline{F}_{\mu\sigma}\eta_{\nu\rho}-\frac{1}{2}\eta_{\mu\nu}\overline{F}_{\sigma\rho}\right)h^{\mu\nu}\right], \tag{12}\]
where the last term vanishes due to the TT gauge. This can be done from an integration by parts to the interactive term, \(\int\,{\rm d}^{4}x\sqrt{-g}\ {\cal T}_{\mu\nu}h^{\mu\nu}=\int\,{\rm d}^{4}x\ J_{\rho}A^{\rho}\). Therefore, the EOM for G\(\rightarrow\)P conversion is
\[\nabla_{\mu}f^{\mu\nu}=-J^{\nu} \tag{13}\]
and the retarded potential is
\[A^{\mu}(r,t)=\frac{1}{4\pi}\int\frac{J^{\mu}(r^{\prime},t-|r-r^{\prime}|)}{|r-r^{ \prime}|}\,\mathrm{d}V^{\prime}. \tag{14}\]
For P\(\rightarrow\)G conversion, \(\mathcal{L}_{\rm int}\) is combined with \(\mathcal{L}_{h}\), giving a source term on the right-hand side of equation (5),
\[\partial_{\lambda}\partial^{\lambda}h_{\mu\nu}=-2\kappa^{2}\mathcal{T}_{\mu \nu}=-2\kappa^{2}\overline{F}_{\lambda(\mu}f^{\lambda}_{\ \nu)} \tag{15}\]
and the retarded solution is thus
\[h_{\mu\nu}(r,t)=\frac{\kappa^{2}}{2\pi}\int\frac{\mathcal{T}_{\mu\nu}(r^{ \prime},t-|r-r^{\prime}|)}{|r-r^{\prime}|}\,\mathrm{d}V^{\prime}. \tag{16}\]
The above formalism applies in a general sense. We will quantitatively study the G\(\rightarrow\)P process in atoms, and qualitatively discuss the P\(\rightarrow\)G process and its physical implications in the following sections.
## II Graviton to photon conversion in transverse electromagnetic fields and atoms
Among the earliest analytical solutions of GRAPH, a static transverse EM field was taken as the background [9]. We reorganize the procedure in our consistent treatment, and review some crucial properties and implications of the transition probability.
### G\(\rightarrow\)P conversion in parallel plates and the transition probability
Consider a pair of electrically charged parallel plates with a static electric field in the \(x\) direction between them. When a gravitational wave of the plane-wave form (6) passes through the space between the plates in the \(z\) direction, space distortion will happen. The \(e_{11}\) mode will effectively cause the plates to oscillate up and down, so one might expect the plates to emit dipole radiation, which means a graviton changes into a photon at the same frequency. Because a single photon with high enough frequency can be observed, this effect may be used to detect gravitons at high frequencies.
From the TT-gauged interactive Lagrangian \(\mathcal{L}_{\rm int}=\frac{1}{2}\mathcal{T}^{\mu\nu}h_{\mu\nu}=\frac{1}{2}\overline{F}^{\mu\lambda}f^{\nu}_{\ \lambda}h_{\mu\nu}\) and its varied form \(J_{\rho}A^{\rho}\), we see that a background electric or magnetic field in the transverse direction can induce graviton-photon switching, while fixed charge on the plates cannot. If the background field is constant, and we replace the classical field with a quantum field, \(\mathcal{L}_{\rm int}\) will give us a term in which a graviton turns into a photon of the same frequency and direction, and another term for the reverse process. Note that we still have spacetime translational symmetry but no rotational symmetry; therefore energy and momentum are conserved but angular momentum is not.
To calculate the probability of a graviton turning into a photon, we go back to classical EM field theory. Let there be an incident gravitational wave in the \(z\) direction, \(h_{\mu\nu}=2{\rm Re}\left[e_{\mu\nu}{\rm e}^{{\rm i}\omega(z-t)}\right]\), and a background electric field \(\overline{E}\) in the \(x\) direction, \(\overline{E}=\overline{F}_{10}=-\overline{F}_{01}\); we then get the Lagrangian
\[\mathcal{L}(h,E)=-2\overline{E}{\rm Re}\left[\left(E_{1}e_{11}+E_{2}e_{12} \right){\rm e}^{{\rm i}\omega(z-t)}\right], \tag{17}\]
the corresponding electric current results as
\[j_{i}=2\omega\overline{E}|e_{1i}|\cos\left[\omega(z-t)+\varphi_{i}-\frac{\pi} {2}\right], \tag{18}\]
where the phase \(\varphi_{i}-\frac{\pi}{2}\) is irrelevant to our problem. We see that only \(j_{x}\) and \(j_{y}\) exist in our case.
The electric field is distributed in the space between the parallel plates; let the height, width and length be H, W and L, and the origin be at the center of the box. We can use the Green's function to get the EM field value at a faraway point \(x=(r,\hat{k})\), where \(\hat{k}\) is a unit vector representing the propagation direction. The \(x\) component of the 4-potential is
\[A_{x}(r,\hat{k},t)\approx\int\,{\rm d}^{3}x^{\prime}\frac{\omega\overline{E}|e_{11}|}{2\pi r}\cos\left[\omega(z^{\prime}-t+r-\hat{k}\cdot\vec{r}^{\prime})\right]\equiv\frac{\omega\overline{E}V|e_{11}|}{2\pi r}\beta(\hat{k})\cos\left[\omega(r-t)\right], \tag{19}\]
Figure 1: G\(\rightarrow\)P process in transverse electric field between parallel plates. The picture depicts that the excitation of photons is due to the space distortion of EM field caused by GW.
with \(\beta(\hat{k})\) as
\[\beta(\hat{k})=\frac{1}{V}\int_{V}\,{\rm d}^{3}x^{\prime}\ \cos\left[\omega(z^{ \prime}-\hat{k}\cdot\vec{r}^{\prime})\right]. \tag{20}\]
The electric field follows straightforwardly as
\[E_{x}(r,\hat{k},t)=-\frac{\partial A_{x}}{\partial t}=\frac{\omega^{2}\overline{E}V|e_{11}|}{2\pi r}\beta(\hat{k})\sin\left[\omega(r-t)\right]\equiv E_{0}\sin\left[\omega(r-t)\right]. \tag{21}\]
The power radiated is calculated by integrating the average energy flux density over the whole spherical region:
\[P_{EM}=\int|\overline{S}|r^{2}\,{\rm d}\Omega=\frac{1}{2}\int E_{0}^{2}r^{2}\,{\rm d}\Omega=\frac{1}{2}\left(\frac{\omega^{2}\overline{E}V|e_{11}|}{2\pi r}\right)^{2}r^{2}\int\beta^{2}(\hat{k})\,{\rm d}\Omega\equiv\frac{\omega^{4}(\overline{E}V|e_{11}|)^{2}}{8\pi^{2}}\alpha, \tag{22}\]
and \(\alpha\) as
\[\alpha(\hat{k})=\int\left[\beta(\hat{k})\right]^{2}\,{\rm d}^{2}\hat{k}=\frac {4\pi^{2}}{\omega^{2}HW}. \tag{23}\]
Going back to common SI units, restoring \(c\) and replacing \(\overline{E}^{2}\rightarrow\epsilon_{0}\overline{E}^{2}\), we have
\[P_{\rm EM}=\frac{1}{2}\omega^{2}\epsilon_{0}\overline{E}^{2}VL|e_{11}|^{2}/c \tag{24}\]
and we know the incoming gravitational radiation power is
\[P_{\rm GW}=\frac{\omega^{2}c^{3}|e_{11}|^{2}}{8\pi G}WH. \tag{25}\]
Therefore, the ratio between the EM radiation power and the incoming power, i.e. the probability that a graviton turns into a photon, is
\[\epsilon_{g\rightarrow\gamma}=P_{EM}/P_{\rm GW}=4\pi G\epsilon_{0}\bar{E}^{2 }L^{2}/c^{4}. \tag{26}\]
For the \(e_{12}\) mode, the conversion probability is the same.
A background magnetic field will also produce EM radiation from gravitational radiation. The coupling strength is the same, except that when \(\overline{B}\) is in the \(x\) direction, \(e_{11}\) produces EM polarization in the \(y\) direction and \(e_{12}\) produces polarization in the \(x\) direction. For a constant magnetic field background,
\[\epsilon_{G-EM}=P_{EM}/P_{h}=4\pi G\bar{B}^{2}L^{2}/\mu_{0}c^{4}. \tag{27}\]
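For orientation, the dimensionless prefactor of this expression can be evaluated numerically; the short sketch below does so with SI constants and an illustrative, assumed lab-scale configuration of 10 T over 10 m, showing how weak the conversion is for realistic field-length products.

```python
import math

G = 6.674e-11                                  # m^3 kg^-1 s^-2
mu0 = 4 * math.pi * 1e-7
c = 2.998e8

prefactor = 4 * math.pi * G / (mu0 * c**4)     # per (T*m)^2, ~8.3e-38
print(f"4*pi*G/(mu0*c^4) = {prefactor:.2e} (T*m)^-2")

B, L = 10.0, 10.0                              # assumed lab-scale example values
print(f"conversion probability for B={B} T, L={L} m: {prefactor * (B * L)**2:.2e}")
```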
The picture becomes clear: when a graviton travels in a background electric or magnetic field, it can be turned into a photon. The switching amplitude grows as it travels, and it does not
depend on the frequency of the graviton, only on the strength of the background field. Given a long enough travel distance, it can oscillate back and forth between the photon and graviton states. When the distance is not that long, the probability of switching grows with the square of the distance. For a non-constant but slowly varying background field, this can be generalized to
\[\epsilon_{G-EM}=\frac{4\pi G}{\mu_{0}c^{4}}\left[\left(\int\,\mathrm{d}l\bar{B} _{x}\right)^{2}+\left(\int\,\mathrm{d}l\bar{B}_{y}\right)^{2}\right], \tag{28}\]
where the graviton still travels in the \(z\) direction; the \(\bar{B}_{x}\) and \(\overline{B}_{y}\) terms correspond to the probabilities of creating photons of different polarizations.
The graviton-photon switching effect benefits from the fact that both particles are massless: the probability amplitudes add up coherently along the path of the particle. It is almost a resonance, but the effect is very sensitive to phase changes. If there were even a tiny difference between the speeds of the two particles, the coherence would break after a short distance and the probability would stop growing with the square of the distance. The QED one-loop effect must then be considered, since the background field changes the speed of light. We leave the QED correction for future work.
In a lab environment and at visible-light frequencies, photons and gravitons can travel light years without losing coherence. However, in strong stellar magnetic fields such as those of neutron stars and magnetars, coherence can break down quickly. Thus it is enough to use the lowest-order results in ground experiments.
### G\(\rightarrow\)P conversion in Atomic electric field and the catch of gravitons
The graviton-photon conversion amplitude is proportional to the strength of the background EM field. Inside an atom, there is a strong electric field, much stronger than any EM field that can be created in a lab. The field strength near the nucleus is particularly strong. The electric field is spherically symmetric around the nucleus and cancels out when the wavelength is long, but when the wavelength is shorter than the atomic radius, the atomic electric field can produce graviton-photon conversion. Here we consider an incoming high-energy graviton from the universe, and we use formula (17) and the Green's function approach to compute its interaction with earth matter through the atomic electric field. Let the incoming graviton travel in the \(z\) direction with polarization \(e_{12}\); ignoring
the atomic magnetic field, equation (17) becomes
\[\mathcal{L}(e_{12})=-2\mathrm{Re}\left[\left(\overline{E}_{x}E_{y}+\overline{E}_ {y}E_{x}\right)e_{12}\;\mathrm{e}^{\mathrm{i}\omega(z-t)}\right], \tag{29}\]
we can compute the effective current spacetime vector to be
\[\begin{cases}j_{0}^{\mathrm{eff}}=-2\mathrm{Re}\left[\left(\partial_{x} \overline{E}_{y}+\partial_{y}\overline{E}_{x}\right)e_{12}\mathrm{e}^{ \mathrm{i}\omega(z-t)}\right]\\ j_{x}^{\mathrm{eff}}=-2\mathrm{Re}\left[\left(\mathrm{i}\omega\overline{E}_{y} \right)e_{12}\mathrm{e}^{\mathrm{i}\omega(z-t)}\right]\\ j_{y}^{\mathrm{eff}}=-2\mathrm{Re}\left[\left(\mathrm{i}\omega\overline{E}_{x} \right)e_{12}\mathrm{e}^{\mathrm{i}\omega(z-t)}\right]\\ j_{z}^{\mathrm{eff}}=0\end{cases} \tag{30}\]
We will see that this effective current produces quadrupole EM radiation. To match realistic experimental situations, we consider light propagating in matter with refractive index \(n\), so that the speed of light is \(c_{m}=1/n\). Using the Green's function, at an infinitely faraway point:
\[A_{\mu}(r,\hat{k},t)=\frac{1}{4\pi r}\int\,\mathrm{d}^{3}x^{\prime}\;j_{\mu}^{ \mathrm{eff}}\left(x^{\prime},t-n(r-\hat{k}\cdot\vec{x}^{\prime})\right) \tag{31}\]
where the integration over \(x^{\prime}\) runs over the whole atom. We will compute the cross section of graviton-photon conversion by a spherically symmetric atom. We use spherical coordinates with the polar axis along \(z\). The atomic electric field is:
\[\vec{E}(x)=\overline{E}(r)\hat{r}\qquad\text{with}\qquad\overline{E}(r)=\frac {Ze}{4\pi r^{2}}\;q\left(\frac{r}{r_{A}}\right). \tag{32}\]
The formula is in natural units where \(\epsilon_{0}=1\); here \(Z\) is the atomic number (the number of protons inside the nucleus), \(e\) is the unit electric charge, \(\hat{r}\) is the unit vector in the radial direction, \(q(r/r_{A})\) is the fraction of the total net charge (that of the nucleus minus the electrons) enclosed within this radius, and \(r_{A}\) is the radius of the atom. In real matter, the electric field is affected by molecular and crystal structures, but near the nucleus it is always spherically symmetric. High-energy gravitons can sense the electric field in the center.
From the above equations, we get
\[\begin{cases}A_{x}(r,\theta,\varphi,t)=\mathrm{Re}\left[\frac{e_{12}\mathrm{e }^{\mathrm{i}\omega(nr-t)}Ze}{4\pi r}f(\theta)\sin\varphi\right]\\ A_{y}(r,\theta,\varphi,t)=\mathrm{Re}\left[\frac{e_{12}\mathrm{e}^{\mathrm{i} \omega(nr-t)}Ze}{4\pi r}f(\theta)\cos\varphi\right]\end{cases} \tag{33}\]
with the "inclination function" \(f(\theta)\) as
\[f(\theta)=\omega r_{A}\int_{0}^{1}\,\mathrm{d}\rho\ q\left(\rho\right)\int_{0}^{ \pi}\,\mathrm{d}\theta^{\prime}\sin^{2}\theta^{\prime}\cos\left[\omega r_{A} \rho(1-n\cos\theta)\cos\theta^{\prime}\right]J_{1}(\omega r_{A}n\rho\sin \theta^{\prime}\sin\theta) \tag{34}\]
with \(\rho\equiv r/r_{A}\) the ratio of the radial coordinate to the atomic radius, and \(J_{1}(x)\) the first-order Bessel function of the first kind. Given the quantum atomic wave function, \(f(\theta)\) and \(\beta_{A}(\omega r_{A})\) can be computed numerically. We will use a simple atom model, in which the negative charges (electron clouds) are uniformly distributed within a sphere of radius \(r_{A}\), and the positive charges are uniformly distributed within a radius \(r_{N}\). Note that \(r_{N}\) is not the radius of the nucleus but somewhat larger: in quantum mechanics, the center of the atom is the center of mass, so the range of the nucleus is determined by the mass ratio of the electrons and the nucleus, which in most atoms is around \(1:4000\). We will roughly take \(r_{A}/r_{N}=10^{4}\), averaging over the inner and outer electrons.
\[q(r)=\begin{cases}0,&r>r_{A}\\ 1-\frac{r^{3}}{r_{A}^{3}},&r_{N}<r<r_{A}\\ \frac{r^{3}}{r_{N}^{3}}-\frac{r^{3}}{r_{A}^{3}},&r<r_{N}.\end{cases} \tag{35}\]
We use the above simple atom model (35) and assume \(n\simeq 1\) for ultra-high frequencies to obtain numerical results. Figure 2 shows \(f(\theta)\), which determines the polar distribution of the outgoing EM wave. The curve in red is for \(\omega r_{A}=1\), for which the outgoing EM wave peaks at \(90^{\circ}\) to the incoming graviton direction; the curves in blue and purple are for \(\omega r_{A}=5,10\), which peak at more forward directions. When the incoming graviton's energy is higher, the outgoing photons peak more and more in line with the incoming graviton. Although \(f(0)=0\), high-energy gravitons will convert into photons emitted almost in the same direction.
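The behavior shown in Figure 2 can be reproduced by a direct numerical quadrature of Eq. (34) with the charge model of Eq. (35); the sketch below (using scipy, with \(n=1\) and \(r_{A}/r_{N}=10^{4}\) as stated in the text) illustrates the procedure but is not the exact code used to produce the figure.

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import j1

def q(rho, ratio=1e-4):
    """Net enclosed charge fraction of Eq. (35), with r_N = ratio * r_A."""
    if rho >= 1.0:
        return 0.0
    if rho > ratio:
        return 1.0 - rho**3
    return (rho / ratio)**3 - rho**3

def f_theta(theta, wr_a, n=1.0):
    """Inclination function f(theta) of Eq. (34) by 2D quadrature."""
    def integrand(tp, rho):          # tp = theta', rho = r'/r_A
        return (q(rho) * np.sin(tp)**2
                * np.cos(wr_a * rho * (1 - n * np.cos(theta)) * np.cos(tp))
                * j1(wr_a * n * rho * np.sin(tp) * np.sin(theta)))
    val, _ = dblquad(integrand, 0.0, 1.0, 0.0, np.pi)
    return wr_a * val

for wr_a in (1, 5, 10):
    thetas = np.linspace(0.05, np.pi - 0.05, 8)
    peak = max(thetas, key=lambda t: f_theta(t, wr_a))
    print(f"omega*r_A = {wr_a:2d}: f(theta) peaks near {np.degrees(peak):.0f} deg")
```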
To calculate the radiation power in direction \(\hat{k}\) at a faraway point, note that \(A_{0}\) and the longitudinal vector potential \(A_{\hat{k}}\) cancel out under a gauge transformation, so the radiation power is determined by the transverse vector potential:
\[\vec{A}_{T}=A_{x}\left[\hat{x}-\hat{k}(\hat{x}\cdot\hat{k})\right]+A_{y}\left[ \hat{y}-\hat{k}(\hat{y}\cdot\hat{k})\right], \tag{36}\]
and hence
\[A_{T}^{2}=A_{x}^{2}\left[1-(\hat{x}\cdot\hat{k})^{2}\right]+A_{y}^{2}\left[1- (\hat{y}\cdot\hat{k})^{2}\right]-2A_{x}A_{y}(\hat{x}\cdot\hat{k})(\hat{y}\cdot \hat{k}). \tag{37}\]
Then we can get the angular EM radiation power distribution as
\[P(\theta,\varphi)=(\omega r)^{2}\overline{A}_{T}^{2}=\frac{1}{32\pi^{2}}|e_{12}| ^{2}(Ze\omega)^{2}f^{2}(\theta)(1-\sin^{2}\theta\sin^{2}2\varphi) \tag{38}\]
Its azimuthal distribution is of quadrupole nature, with maxima in the four directions \(\pm x\) and \(\pm y\), and minima in the four \(45^{\circ}\) directions. If the incoming graviton has polarization \(e_{11}\), the EM radiation distribution is rotated by \(45^{\circ}\).
The total EM radiation power is integrated as
\[P_{\rm EM}=\int_{0}^{\pi}\,{\rm d}\theta\int_{-\pi}^{\pi}\,{\rm d}\varphi\ P( \theta,\varphi)=\frac{|e_{12}|^{2}(Ze\omega)^{2}}{16\pi}\beta_{A}(\omega r_{A}) \tag{39}\]
with
\[\beta_{A}(\omega r_{A})=\int_{0}^{\pi}\,{\rm d}\theta\ f^{2}(\theta)\left(1- \frac{1}{2}\sin^{2}\theta\right). \tag{40}\]
Recovering SI units, we have
\[P_{\rm EM}=\frac{|e_{12}|^{2}(Ze\omega)^{2}}{16\pi\epsilon_{0}c}\beta_{A}( \omega r_{A}/c) \tag{41}\]
Figure 2: The inclination function \(f(\theta)\) in \(0\leqslant\theta\leqslant\pi\) for \(\omega r_{A}=1,5,10\), with the assumption \(n\simeq 1\) for ultra-high frequencies. The horizontal axis is the range of \(\theta\), and the vertical axis is the numerical value of \(f(\theta)\). It is seen that as \(\omega r_{A}\) increases, the peak of the \(f(\theta)\) distribution moves to smaller angles, concentrating the emitted EMW almost in the same direction as the incident GW.
And the cross section is obtained as
\[\sigma=\frac{P_{\rm EM}}{P_{\rm GW}}=\frac{G(Ze)^{2}}{2\epsilon_{0}c^{4}}\beta_{ A}(\omega r_{A})\simeq Z^{2}\beta_{A}(\omega r_{A})\times 1.2\times 10^{-71}\ {\rm m}^{2}. \tag{42}\]
Further, \(\beta_{A}(\omega r_{A})\) is also computed numerically; at \(\omega r_{A}=1\), \(\beta_{A}=0.01\), but it rises very quickly; when \(\omega r_{A}=100\), and at least till \(\omega r_{A}=10^{6}\), \(\beta_{A}(\omega r_{A})\approx\frac{1}{2}\omega r_{A}\). Therefore the cross section (42) becomes
\[\sigma=\omega r_{A}Z^{2}\times 6\times 10^{-72}{\rm m}^{2},\quad{\rm with} \quad 100\leqslant\omega r_{A}\leqslant 10^{6}. \tag{43}\]
This formula holds for graviton energies from around 100 keV to 1 GeV; when the energy is higher, recoil effects, or phonon excitation, have to be taken into account. At the MeV energy level, for a medium-sized atom, this cross section is about 17 orders of magnitude smaller than the neutrino cross section with atoms. It is still much larger than one would naively expect from \(M_{weak}/M_{Planck}=10^{-34}\).
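The numerical prefactor of Eq. (42) follows directly from SI constants; a quick check, shown below for an assumed example with \(Z=8\), \(r_{A}=0.5\) Å and a 1 MeV graviton, reproduces the \(1.2\times 10^{-71}\) m\({}^{2}\) figure and the order of magnitude discussed above.

```python
import math

G, e, eps0, c, hbar = 6.674e-11, 1.602e-19, 8.854e-12, 2.998e8, 1.055e-34

# prefactor of Eq. (42) for Z = 1
prefactor = G * e**2 / (2 * eps0 * c**4)
print(f"G e^2 / (2 eps0 c^4) = {prefactor:.2e} m^2")        # ~1.2e-71 m^2

# assumed example: Z = 8, r_A = 0.5 Angstrom, 1 MeV graviton
Z, r_A, E = 8, 0.5e-10, 1.0e6 * e
omega = E / hbar
wr_a = omega * r_A / c                                      # dimensionless omega*r_A
sigma = 6e-72 * wr_a * Z**2                                 # Eq. (43)
print(f"omega*r_A = {wr_a:.0f}, sigma = {sigma:.1e} m^2")
```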
The refractive index is a function of frequency, \(n=n(\omega)\), which influences the speed of light and hence the coherence behavior. We will analyze the coherence behavior in different spectral regions.
Figure 3: Water refractive index \(n\) as a function of photon energy (eV). In the low-frequency region, \(n>1\) makes the light speed slower than 1; in the high-frequency region, \(n<1\) makes the phase velocity greater than 1. Picture taken from: [https://refractiveindex.info](https://refractiveindex.info).
In the low-frequency region, we keep \(\omega r_{A}\) in the arguments of the functions to linear order, so that the Bessel function is approximated as \(J_{1}(\omega r_{A}n\rho^{\prime}\sin\theta^{\prime}\sin\theta)\simeq\omega r_{A}n\rho^{\prime}\sin\theta^{\prime}\sin\theta/2\), and the inclination function \(f(\theta)\) becomes
\[f(\theta)\simeq\frac{1}{2}(\omega r_{A})^{2}n\sin(\theta)\int_{0}^{1}\rho^{ \prime}(1-\rho^{\prime 3})\,\mathrm{d}\rho^{\prime}\int_{0}^{\pi}\sin^{3}\theta^{ \prime}\,\mathrm{d}\theta^{\prime}=\frac{1}{5}(\omega r_{A})^{2}n\sin(\theta) \tag{44}\]
Since \(\sigma\propto f(\theta)^{2}\), we see that when the wavelength of the incoming graviton is much bigger than the radius of the atom, the cross section goes down quickly, with the fourth power of \(\omega r_{A}\). This is understandable because the atomic electric field averages to zero when the length scale is much larger than the atomic radius. Note that this result only holds qualitatively, as the above calculation is only precise for a spherically symmetric atomic electric field, which is not the case for most molecules and crystals, given that low-frequency waves are sensitive to the outer rim of the atoms; in contrast, the spherical symmetry holds true for high-frequency waves, since most contributions come from the area near the atomic nucleus.
In the low-to-middle frequency region, where the refractive index is still bigger than 1, there will be a coherent enhancement between different atoms, with the deflection angle of the cone \(\theta_{m}\) given by \(\cos\theta_{m}=1/n\). The outgoing EM wave is quadrupole radiation near a conic surface at \(\theta_{m}\). The coherent enhancement will happen regardless of whether the matter is crystalline, amorphous or liquid. It is assumed that the speed of light is uniform inside the matter, which is probably not true for the case of air, where the speed of light inside and outside the atoms should be different. In this case, the total cross section in a piece of matter is enhanced as
\[\sigma_{t}=kN\sigma, \tag{45}\]
with \(N\) the total number of atoms, and \(k\) the coherence enhancement factor, which equals the number of atoms along the mean free path \(l_{\rightarrow}\) of a photon.
Figure 4: The enhancement along the GW propagation in the low-to-middle frequency region. The outgoing EM radiation has a deflection angle \(\theta_{m}\).
In the high-frequency region, the phase speed of light will be faster than in vacuum; for waves with a frequency slightly higher than the internal frequency \(\omega_{0}\) of the electron oscillation, there will not be a coherent enhancement, but for extremely high frequencies \(\omega\gg\omega_{0}\), the speed of light is again very close to that in vacuum, giving a chance for coherence over a limited distance.
We invoke the simple harmonic oscillator model for the electrons to obtain results in the extremely high frequency regime. The medium permittivity is given by
\[\varepsilon=\varepsilon_{0}-\frac{n_{e}e^{2}}{m_{e}}\frac{1}{\omega^{2}- \omega_{0}^{2}}, \tag{46}\]
with \(n_{e},e,m_{e}\) the electron number density, charge and mass, \(\varepsilon_{0}\) the vacuum permittivity. At extremely high frequency, \(\omega_{0}\) is suppressed, thus the refraction index is
\[n(\omega)=\sqrt{\varepsilon/\varepsilon_{0}}\simeq 1-\frac{1}{2}\frac{\omega_{p} ^{2}}{\omega^{2}},\qquad\mbox{with}\qquad\omega_{p}\equiv\sqrt{\frac{n_{e}e^{2 }}{\varepsilon_{0}m_{e}}}, \tag{47}\]
hence the phase velocity is
\[\frac{c_{m}}{c}=\frac{1}{n}\simeq 1+\frac{\omega_{p}^{2}}{2\omega^{2}}. \tag{48}\]
As gravitons and photons travel in the medium, there is an enhancement length \(l_{\uparrow}\) over which their phase difference is less than half a cycle,
\[l_{\uparrow}=\frac{\pi c}{\frac{2\pi(c_{m}-c)}{\lambda}}\simeq\frac{2\pi c \omega}{\omega_{p}^{2}}. \tag{49}\]
The physics is intuitive: as the frequency goes up, the phase velocity gets closer to the speed of light, making the enhancement length longer. We apply the data for H\({}_{2}\)O to estimate numerical values, with electron density \(n_{0}\simeq 3.33\times 10^{29}\) m\({}^{-3}\) and characteristic frequency \(\omega_{p}\simeq 3.25\times 10^{16}\) Hz. For a 100 MeV graviton (\(\sim 1.52\times 10^{23}\) Hz), this gives an enhancement length of about \(l_{\uparrow}\simeq 0.27\) m, very close to the mean free path of a photon in water, \(l_{\rightarrow}\simeq 0.3\) m; the shorter of the two distances gives the true coherent length \(l_{c}=\min\{l_{\uparrow},l_{\rightarrow}\}\). Multiplying by the average number of atoms per unit length, \(n_{0}^{1/3}\), this gives a coherent enhancement of about 9 orders of magnitude with respect to a single atom's cross section.
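These water numbers can be re-derived in a few lines; the sketch below evaluates \(\omega_{p}\) from Eq. (47), the enhancement length of Eq. (49) for a 100 MeV graviton, and the resulting order-of-magnitude coherent enhancement, using standard SI constants and the electron density stated above.

```python
import math

e, m_e, eps0, c, hbar = 1.602e-19, 9.109e-31, 8.854e-12, 2.998e8, 1.055e-34

n_e = 3.33e29                                    # electron density of water, m^-3
omega_p = math.sqrt(n_e * e**2 / (eps0 * m_e))   # Eq. (47): ~3.25e16 rad/s
print(f"omega_p = {omega_p:.2e} rad/s")

omega = 100e6 * e / hbar                         # 100 MeV graviton, ~1.5e23 rad/s
l_up = 2 * math.pi * c * omega / omega_p**2      # Eq. (49): enhancement length ~0.27 m
l_mfp = 0.3                                      # photon mean free path in water (m), as in text
l_c = min(l_up, l_mfp)
print(f"l_up = {l_up:.2f} m, coherent enhancement ~ {l_c * n_e**(1/3):.1e}")
```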
This makes it feasible to try to capture high-energy gravitons from the universe using the current neutrino experiment facilities, or some upgraded version of them. It needs to be done deep underground; when the energy is higher than a few MeV, there is essentially no radioactivity
background, and the only background is neutrinos, which can generate photons through higher-order weak interactions, so proper detections need to rule out neutrinos.
A known source of high-energy gravitons is primordial black holes, in the final moments of their Hawking evaporation [28; 29; 30; 31]. Such black holes have recently been brought up again as candidates for dark matter. If one of them is evaporating away not too far from us, it may be observed as gamma photon events.
## III Photon to graviton conversion and the energy quantization of gravitational fields
On the other side of GRAPH, a consequence of P\(\rightarrow\)G switching is the production of a small number of gravitons in the universe, with a spectrum matching that of photons, because magnetic fields are everywhere. The transition probability is about \(\epsilon\simeq 8.2\times 10^{-38}\,(BL/T\cdot m)^{2}\), which is a very small effect. Neutron stars and magnetars have very strong magnetic fields, but unfortunately the QED effects mentioned above break coherence. Moderate magnetic fields over large spaces can convert photons to gravitons: a typical galaxy has a size of 100000 light years (\(\sim 10^{21}\) m) and an average magnetic field of \(10^{-9}\) T, and most of the field is not turbulent, so the conversion ratio can be on the order of \(10^{-14}\). The graviton spectrum matches that of the photons, except at radio frequencies, whose speed is affected by interstellar dust. Although this ratio is still small, it is much larger than one would naively expect by comparing the strength of the gravitational interaction with that of the EM interaction. The primordial magnetic field of the universe has also been a subject of interest in recent years; reference [15] studied the conversion of gravitons to photons in the early-universe magnetic field. In today's universe, magnetic fields can convert photons to gravitons.
For ground experiments, reference [9] suggested using this effect to generate a gravitational wave from an EM wave and then regenerate an EM wave from the gravitational wave. However, even if we make a graviton detector with a 30 T magnetic field and 10 km length, it gives \(\epsilon=7.2\times 10^{-27}\), which is a very small efficiency; for a 1 W wave of \(\omega=10^{14}\) Hz, we will get about 2 events a month. The EM-GW-EM process would square this efficiency, which is hopelessly small.
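The "2 events a month" estimate can be checked with elementary arithmetic, as in the sketch below: the photon flux of a 1 W beam at \(\omega=10^{14}\) Hz is multiplied by the conversion probability for a 30 T, 10 km setup (interpreting \(\omega\) as an angular frequency here is our assumption).

```python
import math

G, mu0, c, hbar = 6.674e-11, 4 * math.pi * 1e-7, 2.998e8, 1.055e-34

B, L = 30.0, 1.0e4                                  # 30 T field over 10 km
eps = 4 * math.pi * G * (B * L)**2 / (mu0 * c**4)   # conversion probability, ~7e-27

power, omega = 1.0, 1.0e14                          # 1 W beam, omega = 1e14 rad/s
photons_per_s = power / (hbar * omega)
month = 30 * 24 * 3600.0
print(f"eps = {eps:.1e}, gravitons per month ~ {eps * photons_per_s * month:.1f}")
```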
A more feasible experiment is to have a beam of photons, each entangled with a partner photon for comparison, go through a long magnetic tunnel, and to count the missing photons. Recent studies on axions pointed out that there is also photon-axion (\(\gamma-a\)) conversion [32] given a background
EM field, but with different polarizations; the Lagrangian \(\mathcal{L}=\frac{1}{4}g_{\gamma a}\tilde{F}_{\mu\nu}F^{\mu\nu}a=g_{a\gamma}\mathbf{E}\cdot\mathbf{B}\,a\) shows that there will be no axion creation in perpendicular electric and magnetic fields, which can be arranged with a polarizer; thus in a transverse EM field, photons will only be converted to gravitons. We hereby point out that this is an experiment that can test the quantum feature of gravity.
An example experimental setup is shown below: create pairs of entangled \(\gamma\) photons from electron-positron annihilation, capture the events in which one of them goes through a long magnetic channel, and count the photons on the two detectors to find the missing photons.
The electron and positron beams need to be properly cooled to make sure the total transverse momentum is zero, so that the oppositely-going photon can be used to tag the photon going through the long magnetic tunnel that might be converted into a graviton. The photons of interest have a particular direction and energy, which helps to eliminate the background from environmental radioactivity.
Here one may ask: is it possible that the gravitational field remains classical while the other fields are quantized? How can one prove that the gravitational field must be quantized? One can argue that a classical gravitational theory would encounter blackbody radiation trouble similar to that of the EM field. But this argument is weak: in any practical system, the gravitational field will never reach thermal equilibrium. Instead, photons found missing after going through the magnetic tunnel would prove that the energy of the gravitational field is quantized.
Figure 5: Experimental setup to count the missing photons and to test the energy quantization of gravity. Entangled photon pairs are created from positron-electron beams, with one photon going left directly to the \(\gamma\) photon detector, and the other going right through a magnetic tunnel before reaching the detector.
Given that the EM field is quantized and that GR predicts gravitational radiation when an EM wave goes through a magnetic tunnel, if the gravitational field were not quantized, then each photon would have to radiate gravitational waves, lose some energy, and be red-shifted. However, general relativity predicts that the EM wave loses some intensity without changing frequency: the gravitational wave generates an EM wave in the reverse process, causing the original EM wave to lose intensity. It is only consistent to re-interpret the gravitational wave as a quantum probability wave. The photon-graviton conversion thus shows that, given that the EM field is quantized, classical gravity as described by general relativity is not a self-consistent theory.
The photon magnetic-tunnel experiment can demonstrate or rule out the quantum feature of gravity; the reverse process, in which a gravitational wave generates photons, does not prove that gravity is quantized even if detected. If the energy quantization of gravity is indeed verified, the gravitational constant at the given frequency can be measured. Plausibly it is the same as at low frequency, but it would be nice to check.
## IV Conclusions and prospects
In this work, we studied GRAPH in two aspects. For the G\(\rightarrow\)P process, we calculated the transition probability in transverse EM fields with the classical Green's function approach, and obtained the cross section of the graviton-atom interaction in the presence of the atomic electric field. The results hold for graviton energies from around 100 keV to 1 GeV, and would be amplified by the crystal structure, thus making it feasible to capture high-energy gravitons from the universe using the current neutrino experiment facilities underground; the only background that needs to be ruled out is the neutrino background. The relevant sources have been proposed in the related literature. For the P\(\rightarrow\)G process, we illustrated in detail the possibility of testing the quantum feature of gravity, and proposed an experiment using entangled photon pairs to count missing photons passing through a transverse magnetic tunnel. We pointed out that this could be a criterion to judge the energy quantization of the gravitational field, and a positive result (if it is) would suggest that one is able to study quantum gravity without going to the Planck energy.
Although it has been studied for a long time, GRAPH is still far from fully investigated. Analytically, GRAPH in curved spacetime, e.g. at the photon sphere of a charged black hole, or in a wide class of modified gravity theories, would be an interesting topic to explore, and useful
physical insights and implications are expected. Phenomenologically, GRAPH in different EM backgrounds, either in various astronomical environments or in ground-based and man-made facilities, would give us crucial information on the basic properties of the interactions between gravity and the other components of the universe. We will report our research along these parallel lines in the future.
###### Acknowledgements.
The authors thank Prof. Xiangdong Ji and Prof. Yuqing Lou for useful discussions on background effect and Magnetars, and Donglian Xu for discussions on neutrino detections. We thank Dr. Manqi Ruan for discussions on \(\gamma\)-photon experiments, and additionally thank Prof. Miao Li for relevant discussions and suggestions. The work is supported by Natural Science Foundation of China under Grants 12147163 and 12175099.
## Appendix A Some detail computations
### Calculations of the \(\beta\) and \(\alpha\) integrals in (20) and (23)
The \(\beta(\hat{k})\) is integrated as
\[\begin{split}\beta(\hat{k})&=\frac{1}{V}\int_{V} \mathrm{d}^{3}x^{\prime}\ \cos\left[\omega(z^{\prime}-\hat{k}\cdot\vec{r}^{\prime})\right]\\ &=\frac{1}{V}\int_{-L/2}^{L/2}\cos\left[\omega z^{\prime}(1-k_{z} )\right]\,\mathrm{d}z^{\prime}\int_{-H/2}^{H/2}\cos(\omega k_{x}x^{\prime})\, \mathrm{d}x^{\prime}\int_{-W/2}^{W/2}\cos(\omega k_{y}y^{\prime})\,\mathrm{d}y ^{\prime}\\ &=\int_{-1/2}^{1/2}\cos\left[\omega Lz(1-k_{z})\right]\, \mathrm{d}z\int_{-1/2}^{1/2}\cos(\omega Hk_{x}x)\,\mathrm{d}x\int_{-1/2}^{1/2 }\cos(\omega Wk_{y}y)\,\mathrm{d}y\\ &=\frac{2\sin\left[\frac{\omega L(1-k_{z})}{2}\right]}{\omega L( 1-k_{z})}\frac{2\sin\left(\frac{\omega Hk_{x}}{2}\right)}{\omega Hk_{x}} \frac{2\sin\left(\frac{\omega Wk_{y}}{2}\right)}{\omega Wk_{y}},\end{split} \tag{10}\]
since there's a resonance along the \(z\)-axis, we have
\[\begin{split}& k_{z}=\sqrt{1-(k_{x}^{2}+k_{y}^{2})}\approx 1- \frac{1}{2}(k_{x}^{2}+k_{y}^{2})\\ \Longrightarrow&\omega L(1-k_{z})\approx\frac{1}{2} \omega L(k_{x}^{2}+k_{y}^{2})\ll\omega Hk_{x},\quad\omega Wk_{y},\end{split} \tag{11}\]
so we can approximate the \(k_{z}\) term as \(\frac{2\sin\left[\frac{\omega L(1-k_{z})}{2}\right]}{\omega L(1-k_{z})}\approx 1\), while the other two terms keep their original form, thus the integral becomes
\[\beta(\hat{k})\approx\frac{\sin\left(\frac{\omega Hk_{x}}{2}\right)}{\frac{\omega Hk_{x}}{2}}\frac{\sin\left(\frac{\omega Wk_{y}}{2}\right)}{\frac{\omega Wk_{y}}{2}}. \tag{10}\]
Then it is easy to compute the \(\alpha(\hat{k})\) as
\[\begin{split}\alpha(\hat{k})&=\int\left[\beta(\hat{ k})\right]^{2}\,\mathrm{d}^{2}\hat{k}\\ &=\int_{-1}^{1}\left[\frac{\sin\left(\frac{\omega Hk_{x}}{2} \right)}{\frac{\omega Hk_{x}}{2}}\right]^{2}\,\mathrm{d}k_{x}\cdot\int_{-1}^{1 }\left[\frac{\sin\left(\frac{\omega Wk_{y}}{2}\right)}{\frac{\omega Wk_{y}}{2 }}\right]^{2}\,\mathrm{d}k_{y}\\ &=\frac{4}{\omega^{2}HW}\left[\int_{-\infty}^{\infty}\left(\frac {\sin^{2}u}{u^{2}}\right)\,\mathrm{d}u\right]^{2}=\frac{4\pi^{2}}{\omega^{2}HW },\end{split} \tag{11}\]
where we have taken the limit of \(\omega H,\omega W\gg 1\).
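The limiting value used in the last step can be checked numerically; the short sketch below sums \(\sin^{2}u/u^{2}\) on a fine grid over a large but finite window and compares the result with \(\pi\) (the finite cutoff accounts for the small residual).

```python
import numpy as np

du = 1e-3
u = np.arange(du / 2, 500.0, du)             # midpoint grid, avoids u = 0
val = 2 * np.sum(np.sin(u)**2 / u**2) * du   # symmetric integrand: double the half-line
print(f"{val:.4f} vs pi = {np.pi:.4f}")      # close to pi; the |u| > 500 tail supplies the rest
```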
### Derivation of \(A_{x}(r,\hat{k},t)\) in equation (33)
From equation (31) and (32) we get:
\[A_{x}(r,\hat{k},t)=2\mathrm{Re}\left\{\frac{\mathrm{i}\omega e_{12}}{4\pi r} \int\sin\theta^{\prime}r^{\prime 2}\,\mathrm{d}r^{\prime}\,\mathrm{d}\theta^{ \prime}\,\mathrm{d}\varphi^{\prime}\overline{E}(r^{\prime})\sin\theta^{\prime} \sin\varphi^{\prime}\,\,\mathrm{e}^{\mathrm{i}\omega\left[r^{\prime}\cos\theta ^{\prime}-t+n(r-\hat{k}\cdot\vec{x}^{\prime})\right]}\right\} \tag{12}\]
and hence:
\[\begin{split} A_{x}(r,\hat{k},t)=& 2\mathrm{Re}\left\{\frac{\mathrm{i}Ze\omega e_{12}\mathrm{e}^{\mathrm{i}\omega(nr-t)}}{4\pi\cdot 4\pi r}\int_{0}^{r_{A}}\,\mathrm{d}r^{\prime}\int_{0}^{\pi}\,\mathrm{d}\theta^{\prime}\int_{-\pi}^{\pi}\,\mathrm{d}\varphi^{\prime}\right.\\ &\qquad\qquad\left.q\left(\frac{r^{\prime}}{r_{A}}\right)\sin^{2}\theta^{\prime}\sin\varphi^{\prime}\,\,\mathrm{e}^{\mathrm{i}\omega r^{\prime}(\cos\theta^{\prime}-n\cos\theta^{\prime}\cos\theta-n\sin\theta^{\prime}\sin\theta\cos(\varphi^{\prime}-\varphi))}\right\}\\ =&\mathrm{Re}\left\{\frac{\mathrm{i}(Ze)e_{12}\mathrm{e}^{\mathrm{i}\omega(nr-t)}}{8\pi^{2}r}\int_{0}^{\omega r_{A}}\,\mathrm{d}\rho\,\,q\left(\frac{r^{\prime}}{r_{A}}\right)\int_{0}^{\pi}\,\mathrm{d}\theta^{\prime}\sin^{2}\theta^{\prime}\right.\\ &\qquad\qquad\left.\int_{-\pi}^{\pi}\,\mathrm{d}\varphi^{\prime}\sin\varphi^{\prime}\,\,\mathrm{e}^{\mathrm{i}\rho(\cos\theta^{\prime}-n\cos\theta^{\prime}\cos\theta-n\sin\theta^{\prime}\sin\theta\cos(\varphi^{\prime}-\varphi))}\right\},\end{split} \tag{13}\]
where in the second equality we substituted \(\rho=\omega r^{\prime}\). The integration over \(\varphi^{\prime}\) can be done analytically
\[\begin{split}&\int_{-\pi}^{\pi}\,\mathrm{d}\varphi^{\prime}\sin \varphi^{\prime}\,\,\mathrm{e}^{-\mathrm{i}n\rho\sin\theta^{\prime}\sin\theta \cos(\varphi^{\prime}-\varphi)}\\ =&\int_{-\pi}^{\pi}\,\mathrm{d}\varphi^{\prime}\sin( \varphi^{\prime}+\varphi)\,\,\mathrm{e}^{-\mathrm{i}n\rho\sin\theta^{\prime} \sin\theta\cos\varphi^{\prime}}\\ =&\sin\varphi\int_{-\pi}^{\pi}\,\mathrm{d}\varphi^{ \prime}\cos\varphi^{\prime}\,\,\mathrm{e}^{-\mathrm{i}n\rho\sin\theta^{\prime} \sin\theta\cos\varphi^{\prime}}\\ =&-\mathrm{i}\sin\varphi\int_{-\pi}^{\pi}\, \mathrm{d}\varphi^{\prime}\cos\varphi^{\prime}\sin(n\rho\sin\theta^{\prime} \sin\theta\cos\varphi^{\prime})\\ =&-2\pi\mathrm{i}J_{1}(n\rho\sin\theta^{\prime} \sin\theta)\sin\varphi\end{split} \tag{10}\]
with \(J_{1}(x)\) the first-order Bessel function of the first kind. Joining the two results above, we obtain the first line of (33); the derivation of the second line is similar.
|
2306.12800 | HypeRS: Building a Hypergraph-driven ensemble Recommender System | Recommender systems are designed to predict user preferences over collections
of items. These systems process users' previous interactions to decide which
items should be ranked higher to satisfy their desires. An ensemble recommender
system can achieve great recommendation performance by effectively combining
the decisions generated by individual models. In this paper, we propose a novel
ensemble recommender system that combines predictions made by different models
into a unified hypergraph ranking framework. This is the first time that
hypergraph ranking has been employed to model an ensemble of recommender
systems. Hypergraphs are generalizations of graphs where multiple vertices can
be connected via hyperedges, efficiently modeling high-order relations. We
differentiate real and predicted connections between users and items by
assigning different hyperedge weights to individual recommender systems. We
perform experiments using four datasets from the fields of movie, music and
news media recommendation. The obtained results show that the ensemble
hypergraph ranking method generates more accurate recommendations compared to
the individual models and a weighted hybrid approach. The assignment of
different hyperedge weights to the ensemble hypergraph further improves the
performance compared to a setting with identical hyperedge weights. | Alireza Gharahighehi, Celine Vens, Konstantinos Pliakos | 2023-06-22T10:59:58Z | http://arxiv.org/abs/2306.12800v1 | # HypeRS: Building a Hypergraph-driven ensemble Recommender System
###### Abstract
Recommender systems are designed to predict user preferences over collections of items. These systems process users' previous interactions to decide which items should be ranked higher to satisfy their desires. An ensemble recommender system can achieve great recommendation performance by effectively combining the decisions generated by individual models. In this paper, we propose a novel ensemble recommender system that combines predictions made by different models into a unified hypergraph ranking framework. This is the first time that hypergraph ranking has been employed to model an ensemble of recommender systems. Hypergraphs are generalizations of graphs where multiple vertices can be connected via hyperedges, efficiently modeling high-order relations. We differentiate real and predicted connections between users and items by assigning different hyperedge weights to individual recommender systems. We perform experiments using four datasets from the fields of movie, music and news media recommendation. The obtained results show that the ensemble hypergraph ranking method generates more accurate recommendations compared to the individual models and a weighted hybrid approach. The assignment of different hyperedge weights to the ensemble hypergraph further improves the performance compared to a setting with identical hyperedge weights.
Keywords: Recommender systems · Hypergraph learning · Ensemble methods.
## 1 Introduction
Nowadays, people use digital services more and more to fulfill their needs. The owners of these services monitor users' behavior and utilize users' interactions with provided items, such as movies, songs, commercial products, to predict users' preferences. This enables the personalization of digital services and the rise of effective recommender systems (RSs) which learn from users' preferences and provide them with accurate recommendations. Generally, there are two main categories in RSs: content-based filtering and collaborative filtering approaches.
Content-based RSs use the features that describe the items for computing similarities between the items and the user interaction profile. Next, they recommend items that are more similar to this user profile. Upon a recommendation query for a target user, these RSs do not consider the interactions of the other users in generating the recommendation list. In contrast to that, collaborative filtering approaches infer the users' preferences by processing the collaborative information between users or items. In many applications, collaborative filtering RSs generate more accurate [1] and less obvious [17] recommendations compared to content-based approaches.
Each type of RS processes the information based on different assumptions to decide which items should be ranked higher among many available ones. For instance, memory-based collaborative filtering approaches (user-based and item-based) assume that users (items) with similar interactions have similar interests. Therefore, these approaches form neighborhoods to generate recommendations. Model-based collaborative filtering approaches assume that users and items can be represented in a common feature space and they use different learning methods to learn these latent features. While these approaches might vary in prediction power, they convey relevant information from different perspectives, following practically different learning strategies for the same recommendation task. Ensemble methods include multiple learning methods and integrate their predictive power into a single system, achieving superior predictive performance to individual models. Examples of ensembles in machine learning are bagging and boosting. In recommendation tasks a hybrid RS can be applied to exploit several data sources or the prediction power of different RSs to generate more relevant recommendations. An ensemble RS is a hybrid model that employs the ranking lists of multiple RSs to decide which items should be recommended to each user [2].
In this paper we propose an ensemble hypergraph learning framework for recommendation. This way we integrate the predictive power of several models into a unified RS powered by hypergraph ranking. Unlike regular graphs, where edges connect pairs of nodes, in hypergraphs multiple nodes can be connected via hyperedges. These higher order relations in hyperedges empower hypergraphs to cast more reliable information in the model [23]. Furthermore, hypergraph learning can inherently model the complex relations between different types of entities in a unified framework. It is therefore a deliberate choice for the construction of an ensemble of individual RSs driven by different types of information. Moreover, as was shown in [13], hypergraph ranking-based methods can mitigate popularity bias, enhance fairness and coverage as well as act as innate multi-stakeholder RSs. The main contribution of this paper is to construct a hypergraph as an ensemble framework for recommendation tasks. Despite its capability to stack multiple connections in a unified model, to the best of our knowledge hypergraphs have not been employed to form ensembles of RSs.
This article is an extension of our previous conference paper [12] in the following main directions:
* We have extended the hypergraph-based ensemble RS by further incorporating two effective individual RSs, namely Weighted Approximate-Rank Pairwise (WARP) [25] and Multi-variational autoencoder (MVAE) [16]. This way, we managed to substantially enhance the performance of our RS. We also demonstrated the potential of our model, as it has the capacity to integrate more effective individual RSs that might appear in the future, boosting its recommendation performance even further.
* In the previous paper we considered equal weights for all the hyperedges in the ensemble hypergraph. Here, we have differentiated the actual links between users and items from the ones predicted by the individual RSs. We also assigned different weights to hyperedges that relate to different individual RSs based on the performance of these RSs.
* We have extended the experimental study by including more evaluation metrics as well as by comparing our method to two additional baseline models, namely Weighted Approximate-Rank Pairwise (WARP) [25] and Multi-variational autoencoder (MVAE) [16]. In particular, MVAE is a state-of-the-art method that emerged among the winning models in a recent comparison study of RSs [7].
The structure of this paper is as follows: Studies about applications of hypergraph learning in RSs are presented in Section 2. Next, in Section 3, we show how a unified hypergraph can be formed as an RS (Section 3.1) and how it can formulate an ensemble of RSs (Section 3.2). In Section 4, four recommendation datasets are described and the experimental setup in designing and testing the proposed model is described. Next, the obtained results of comparing the proposed ensemble model against other methods on these four datasets are presented and discussed in Section 5. Finally, we draw conclusions and outline some directions for future research in Section 6.
## 2 Related work
Hypergraph learning has been applied to generate recommendation lists in several applications. For instance in the music domain, Bu et al. [3] used hypergraph learning to recommend music tracks where the relations between users, tracks, albums and artists were modeled using a unified hypergraph. Hypergraph ranking has been also used in news recommendation tasks [15, 13]. News usually contains very rich features such as text, tags and named entities. Therefore, hypergraph learning can effectively model the relations between these entities. Moreover, Pliakos et al. [21] used hypergraph ranking for a tag recommendation task. They built a hypergraph ranking model to capture the complex relations between different entities in the system, such as users, images, tags, and geo-tags. Hypergraph-based RSs have been also used in e-commerce applications [18, 24]. For instance in [18], a multipartite hypergraph is used to model the relations between users, restaurants and attributes in a multi-objective setting. In such applications, item attributes and sequences of user-item interactions are effectively modeled in hypergraphs.
Hypergraph learning has been employed to address various issues in RSs. A hypergraph can model the relations between different types of stakeholders and objects and therefore, it can be intrinsically used as a multi-stakeholder RS [13]. Additionally, it can be used to burst the filter bubble around the user by querying a more diverse recommendation list based on the user history [15, 13]. Moreover, hypergraph learning has been used to address fairness [13], the cold-start problem [27] as well as context-awareness [26] in recommendation tasks.
An ensemble RS is a type of hybrid RS that integrates the recommendations of multiple individual RSs. Aggarwal [2] categorized hybrid RSs into monolithic, ensemble, and mixed RSs. Burke et al. [4] provided another categorization where hybrid models are categorized into weighted, switching, cascade, feature augmentation, feature combination, meta-level and mixed RSs. A weighted hybrid RS uses the weighted average of the scores from individual RSs to generate the recommendation list. For instance, Do et al. [6] applied a weighted hybrid RS based on collaborative and content-based filtering approaches on the _Movielens_ dataset and showed that it is more effective compared to the individual collaborative and content-based RSs. Here, we employ a unified hypergraph as an ensemble RS. Although hypergraph learning is very promising and effective in addressing many problems in RSs, to the best of our knowledge, it has never been studied as an ensemble RS.
## 3 Methodology
### Hypergraphs as recommender systems
Hereafter, uppercase bold letters are used for matrices, lowercase bold letters represent vectors, uppercase non-bold letters are used for sets and lowercase non-bold letters represent constants. The element in the \(i^{th}\) row and \(j^{th}\) column of matrix \(\mathbf{X}\) is denoted as \(\mathbf{X}(i,j)\).
A hypergraph consists of a set of nodes (vertices) \(N:\{n_{1},n_{2}\cdots,n_{|N|}\}\) and a set of hyperedges \(E:\{e_{1},e_{2}\cdots,e_{|E|}\}\) that connect the nodes. Each hyperedge can connect multiple nodes in the hypergraph. Based on the application, different types of hyperedges can be defined that capture different forms/sources of information. We define these hyperedge types in Section 3.2. In a typical collaborative filtering setting there are two types of entities in a hypergraph: users \(U:\{u_{1},u_{2}\cdots,u_{|U|}\}\) and items \(I:\{i_{1},i_{2}\cdots,i_{|I|}\}\). Therefore, the set of nodes \(N\) in a hypergraph is formed based on users and items (\(N:\{U\cup I\}\)).
Let \(\mathbf{H}\) of size \(|N|\times|E|\) be the incidence matrix of the hypergraph, where \(\mathbf{H}(n,e)=1\) if node \(n\) belongs to hyperedge \(e\), and _zero_ otherwise. Based on \(\mathbf{H}\), the symmetric matrix \(\mathbf{A}\) can be formed using Eq. 1:
\[\mathbf{A}=\mathbf{D_{n}}^{-1/2}\mathbf{HWD_{e}}^{-1}\mathbf{H}^{T}\mathbf{D_ {n}}^{-1/2} \tag{1}\]
where \(\mathbf{D}_{n}\) and \(\mathbf{D}_{e}\) are the diagonal matrices that contain the node and hyperedge degrees and \(\mathbf{W}\) is the diagonal hyperedge weight matrix. Each element \(\mathbf{A}(i,j)\) reflects the relatedness between nodes \(i\) and \(j\). Higher values indicate stronger
relations between the corresponding nodes. Then, the recommendation problem is formulated as finding a ranking (score) vector \(\mathbf{f}\in\mathrm{I\!R}^{|N|}\) that minimizes the following loss function [3]:
\[Q(\mathbf{f})=\frac{1}{2}\mathbf{f}^{T}\mathbf{L}\mathbf{f}+\vartheta|| \mathbf{f}-\mathbf{y}||_{2}^{2} \tag{2}\]
where \(\mathbf{L}\) is the hypergraph Laplacian matrix (i.e. \(\mathbf{L}=\mathbf{I}-\mathbf{A}\)), \(\vartheta\) is a regularizing parameter and \(\mathbf{y}\in\mathrm{I\!R}^{|N|}\) is the query vector. Every item of the ranking vector \(\mathbf{f}\) or query vector \(\mathbf{y}\) corresponds to a node. Typically, to generate the recommendation list for user \(u\) in a regular recommendation task, one can query the hypergraph for user \(u\) by setting the corresponding value in the query vector to _one_ (\(\mathbf{y}(u)=1\)) and all the other values that correspond to other nodes to _zero_. By solving the optimization problem in Eq.2, the optimal score (ranking) vector can be calculated using Eq.3:
\[\mathbf{f}^{*}=\frac{\vartheta}{1+\vartheta}\big{(}\mathbf{I}-\frac{1}{1+ \vartheta}\mathbf{A}\big{)}^{-1}\mathbf{y}. \tag{3}\]
Finally, the top k items that have the highest scores in \(\mathbf{f}^{*}\) are recommended to the user \(u\).
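The ranking procedure of this subsection can be condensed into a short NumPy sketch. It is a minimal illustration, assuming a dense incidence matrix with no empty hyperedges and nodes ordered users-first-then-items; the function names and the default value of the regularizer are ours, not the paper's.

```python
import numpy as np

def hypergraph_rank(H, w, y, theta=0.5):
    """Score vector f* of Eq. 3 given incidence matrix H, hyperedge weights w and query y."""
    node_deg = H @ w                                  # weighted node degrees
    edge_deg = H.sum(axis=0)                          # hyperedge degrees (assumed non-zero)
    Dn_inv_sqrt = np.diag(1.0 / np.sqrt(node_deg))
    A = Dn_inv_sqrt @ H @ np.diag(w) @ np.diag(1.0 / edge_deg) @ H.T @ Dn_inv_sqrt   # Eq. 1
    n = A.shape[0]
    return (theta / (1 + theta)) * np.linalg.solve(np.eye(n) - A / (1 + theta), y)   # Eq. 3

def recommend(H, w, user_idx, item_indices, k=10):
    """Query the hypergraph for one user and return the k highest-scoring items."""
    y = np.zeros(H.shape[0])
    y[user_idx] = 1.0                                 # y(u) = 1, every other entry is zero
    f = hypergraph_rank(H, w, y)
    return sorted(item_indices, key=lambda i: f[i], reverse=True)[:k]
```

In practice one would also exclude items the user has already interacted with before taking the top \(k\).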
### A hypergraph-based ensemble recommender system, HypeRS
An ensemble RS4 utilizes the decisions of multiple individual RSs to decide which items should be ranked higher in the final recommendation lists. Let \(M:\{m_{1},m_{2}\cdots,m_{|M|}\}\) be the set of individual methods that we want to incorporate in our ensemble RS. Each of these individual methods \(m_{i}\) can generate its own top \(k\) rankings \(\mathbf{R}_{i}\in\mathrm{I\!R}^{|U|\times k}\) where each row in \(\mathbf{R}_{i}\) is the top \(k\) ranked items for the corresponding user. Then, based on the recommendation lists of each RS, hyperedges are formed to connect users to their top \(k\) recommendations.
Footnote 4: The source code is available at [https://github.com/alirezagharahi/ensemble_hypergraph](https://github.com/alirezagharahi/ensemble_hypergraph).
As is mentioned previously, the hypergraph consists of multiple types of hyperedges. We consider three types of hyperedges, which are defined in Table 1. The \(E_{UI}\) hyperedges connect the users with the items that they have interacted with. To make the relations between users with similar tastes more explicit, the \(E_{UU}\) hyperedges connect users to their \(k\) nearest neighbors. To find these neighbors we use the user-item interaction matrix \(\mathbf{Z}\), where \(\mathbf{Z}(i,j)\in\{0,1\}\). The \(k\) nearest neighbors of user \(u\) are users that have the highest cosine similarity with \(u^{th}\) row of matrix \(\mathbf{Z}\). The \(E_{M}\) hyperedges are considered to integrate the recommendations of multiple RSs in the hypergraph. These RSs can be from different families such as collaborative filtering or content-based approaches. The fact that recommendations from any type of RS can be directly modeled as hyperedges in our system is a vital advantage of the proposed method.
We constructed the \(E_{M}\) hyperedge set using four well-established and powerful matrix completion-based recommendation methods, namely BPR [22], WARP
[25], WRMF [14, 20] and MVAE [16]. _BPR_ is a learning-to-rank matrix completion approach which uses user-specific relative pair-wise preferences between observed and unobserved items to learn items' and users' low rank matrices. Similar to _BPR_, _WARP_ is also a pair-wise learning-to-rank approach but with a different objective function. _BPR_ is optimized for approximation of _AUC_ whereas _WARP_ is optimized for _precision_. _WRMF_ is a matrix factorization approach for implicit feedback datasets that uses the alternating-least-squares optimization process to learn items' and users' parameters. MVAE is a CF model for implicit feedback that assumes that user logs are from a multinomial distribution and uses variational autoencoders to learn users' and items' parameters.
The hypergraph and its incidence matrix \(\mathbf{H}\) are constructed using the hyperedge sets of Table 1. Following that, the affinity matrix \(\mathbf{A}\) is computed and the recommendation task is addressed as was described in Section 3.1.
The weight of a hyperedge reflects the relative importance of the hyperedge compared to the other hyperedges in the hypergraph. We propose to assign lower weights for \(E_{M}\) hyperedges (0.5) compared to \(E_{UI}\) ones (1.0), as the \(E_{M}\) are based on predicted links, whereas the \(E_{UI}\) are based on real links between users and items. One should also consider that the individual RSs in the ensemble have different predictive power. Therefore we assign a decay weight based on their performance in the validation set. Hyperedges that are associated with the top ranked RS receive no decay weight while hyperedges that are related to lower ranked RSs receive lower weights (according to their ranks). In this paper we consider a linear decay (10%) per rank.
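A sketch of the construction just described is given below, continuing the NumPy example above. It assumes a binary interaction matrix `Z` of shape \(|U|\times|I|\) and a list `rec_lists` holding, for each individual RS, the top-\(k\) item indices per user, ordered from best- to worst-performing RS on the validation set. The paper does not state a weight for the \(E_{UU}\) hyperedges, so we keep them at 1.0, and we read the 10% linear decay as 10% of the base \(E_{M}\) weight per rank; both are assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def build_ensemble_hypergraph(Z, rec_lists, k=10, decay=0.10):
    n_users, n_items = Z.shape
    n_nodes = n_users + n_items                 # node order: users first, then items
    columns, weights = [], []

    # E_UI: each user is connected to the items it interacted with (weight 1.0)
    for u in range(n_users):
        col = np.zeros(n_nodes)
        col[u] = 1
        col[n_users + np.flatnonzero(Z[u])] = 1
        columns.append(col)
        weights.append(1.0)

    # E_UU: each user is connected to its k nearest neighbours by cosine similarity on Z
    sim = cosine_similarity(Z)
    np.fill_diagonal(sim, -np.inf)              # exclude the user itself
    for u in range(n_users):
        col = np.zeros(n_nodes)
        col[u] = 1
        col[np.argsort(sim[u])[-k:]] = 1
        columns.append(col)
        weights.append(1.0)                     # assumed weight, not stated in the paper

    # E_M: each user is connected to the top-k items of every RS; base weight 0.5,
    # with a 10% linear decay per rank of the RS on the validation set
    for rank, recs in enumerate(rec_lists):
        w_m = 0.5 * (1.0 - decay * rank)
        for u in range(n_users):
            col = np.zeros(n_nodes)
            col[u] = 1
            col[n_users + np.asarray(recs[u])] = 1
            columns.append(col)
            weights.append(w_m)

    return np.column_stack(columns), np.array(weights)
```

The returned incidence matrix and weight vector can be passed directly to the ranking sketch of Section 3.1.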
## 4 Experimental Setup
To evaluate the performance of the proposed approach we use four datasets from news, music and movie application domains. These datasets are described in Table 2. AOTM is a publicly available dataset collected from the Art-of-the-Mix platform that is based on user playlists [19]. Movielens5 is a publicly available movie rating dataset [5]. As we only encode interactions in the hypergraph for
Table 1: Hyperedge definitions

* \(E_{UI}\): each user is connected to the items that the user has interacted with (\(|U|\) hyperedges in total).
* \(E_{UU}\): each user is connected to the \(k\) most similar users (\(|U|\) hyperedges in total).
* \(E_{M}\): each user is connected to the top \(k\) items recommended by an RS (\(|M|\times|U|\) hyperedges in total).
this dataset, we transform ratings to binary feedback. Globo6 and Roularta7 are news datasets that contain readers' interactions with news articles.
Footnote 6: [http://www.globo.com](http://www.globo.com)
Footnote 7: [http://www.roularta.be](http://www.roularta.be)
Footnote 8: For BPR and WRMF we used the implicit library.
Footnote 9: Users with few interactions are omitted from experiments.
In our experiments we consider the following eight approaches[8]:
* **BPR**: Bayesian Personalized Ranking (BPR) [22] is a pair-wise learning-to-rank matrix completion approach as presented in the previous section.
* **WARP**: Weighted Approximate-Rank Pairwise (WARP) [25] is a another pair-wise learning-to-rank method as explained in the previous section.
* **WRMF**: Weighted Regularized Matrix Factorization (WRMF) [14, 20] is a MF approach using the alternating-least-squares optimization process to learn items and users' parameters as presented in the previous section.
* **MVAE**: Multi-variational autoencoder (MVAE) [16] is a CF approach based on variational autoencoders as explained in the previous section.
* **Hybrid**: A weighted hybrid model that uses scores of _BPR_, _WARP_, _MVAE_ and _WRMF_, and then considers the weighted average of these scores to generate the final ranking lists.
* **H**: A hypergraph-based RS explained in Section 3.1 that only contains the hyperedge types of \(E_{UI}\) and \(E_{UU}\) from Table 1.
* **HypeRS**: The proposed hypergraph-based ensemble RS explained in Section 3.2.
* **HypeRS\({}_{\mathbf{W}}\)**: The proposed hypergraph-based ensemble RS with the proposed hyperedge weights.
To validate the performance of the proposed method against the compared methods, we randomly hide _ten_ interactions of each user from training and then measure the ability of the methods to predict these hidden interactions [9]. We use _precision@10_, _recall@10_ and _F1-score@10_ to measure the accuracy of predictions. _Precision_ and _Recall_ are standard information retrieval accuracy measures that reflect, respectively, the proportion of the recommendation list that is relevant and the proportion of relevant items that are recommended. To have a balanced measure, one can calculate the _F1-score_, which combines the precision and
Table 2: Datasets descriptions

| | AOTM | Movielens | Globo | Roularta |
| --- | --- | --- | --- | --- |
| item type | music track | movie | news article | news article |
| # users | 1,605 | 1,573 | 3,903 | 5,082 |
| # items | 2,199 | 2,053 | 1,246 | 2,739 |
| sparsity | 3.8% | 19.9% | 5.7% | 8.5% |
recall into a single measure by taking their harmonic mean. These accuracy measures can be calculated as follows:
\[Precision=\frac{1}{|U|}\sum_{u\in U}\frac{|T_{u}\cap R_{u}|}{|R_{u}|}, \tag{4}\]
\[Recall=\frac{1}{|U|}\sum_{u\in U}\frac{|T_{u}\cap R_{u}|}{|T_{u}|}. \tag{5}\]
\[\text{F1-score}=2\times\frac{\text{precision}\times\text{recall}}{\text{precision}+\text{recall}}. \tag{6}\]
where \(U\) is the set of users, \(T_{u}\) is the set of test items for user \(u\), and \(R_{u}\) is the recommendation list for user \(u\). As the number of relevant items and the length of the recommendation lists are the same (10 items), the reported _precision_, _recall_ and _F1-score_ values are identical.
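A minimal sketch of this evaluation protocol, with illustrative names, is shown below; `test_items[u]` holds the ten hidden interactions of user \(u\) and `rec_lists[u]` the ranked recommendations produced for that user.

```python
def evaluate_at_k(test_items, rec_lists, k=10):
    """Average precision@k, recall@k and F1-score@k over all users (Eqs. 4-6)."""
    precisions, recalls = [], []
    for u in test_items:
        top_k = list(rec_lists[u])[:k]
        hits = len(set(test_items[u]) & set(top_k))
        precisions.append(hits / len(top_k))          # Eq. 4
        recalls.append(hits / len(test_items[u]))     # Eq. 5
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    f1 = 2 * p * r / (p + r) if (p + r) > 0 else 0.0  # Eq. 6
    return p, r, f1
```

Precision and recall are averaged over users before the F1-score is computed, matching Eq. 6.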
The compared methods have several hyperparameters to be tuned. _BPR_ and _WARP_ have the number of latent features, the number of iterations, the regularizing parameter and the learning rate; _WRMF_ has the number of latent features, the number of iterations and the regularizing parameter; _MVAE_ has the batch size, the number of iterations and the number of anneal steps; the _Hybrid_ model has the hybridization weights; and \(H\) as well as \(HypeRS\) have a regularizer as their only hyperparameter. To tune these hyperparameters, we form a validation set for each dataset by randomly drawing _five_ interactions of each user from the training set. The final tuned hyperparameter values are based on _precision@10_ and are reported in Table 3.
## 5 Results and Discussion
The results of the proposed hypergraph-based ensemble RS and the selected approaches on the four datasets are reported in Table 4. The reported values are in terms of average _precision@10_ of the recommendation lists generated by the compared approaches. As is shown in Table 4, the proposed hypergraph-based ensemble RS (\(HypeRS\)) has superior predictive performance compared to all the competitor approaches including the weighted hybrid model in all datasets. The competitor methods have different performance rankings in the four datasets. Each of these methods processes the information based on different assumptions and learning approaches. The effectiveness of these assumptions and learning approaches differs across different applications. An ensemble RS exploits the combined predictive power of the individual methods. It considers all assumptions and decisions of various independent RSs and achieves overall superior performance regardless of the application domain of the recommendation task. The weighted ensemble RS (\(HypeRS_{W}\)) performs better compared to the ensemble RS with identical hyperedge weights (\(HypeRS\)) in all datasets.
In this study we keep the experiments simple by only using the collaborative information, i.e. user-item interactions, to make them applicable to the available
datasets and various application fields (i.e. movies, music, news). Nevertheless, in cases where side information is available for users or items, content-based approaches can be included in the ensemble RS. Hypergraph learning has the natural capability of modeling the complex relations between different types of entities in a unified hypergraph and therefore is a deliberate choice to construct an ensemble of RSs with different types of information.
## 6 Conclusion
We proposed a new ensemble hypergraph learning-based RS. A unified hypergraph can integrate multiple connections between entities (here users and items) and therefore can combine the predictive power of various individual RSs boosting the precision of final recommendation lists. We empirically tested this method on four datasets from different application domains, such as news, music, and movies. The obtained results showed that the hypergraph-based ensemble RS achieves superior performance compared to all the individual models, as well as compared to a weighted hybrid approach that averages individual scores to produce final rankings, in all datasets.
For future work we outline the following directions:
Table 3: Hyperparameters

| Method | Hyperparameter | Range | AOTM | Movielens | Globo | Roularta |
| --- | --- | --- | --- | --- | --- | --- |
| BPR | # iterations | [1000,2000] | 1645 | 1984 | 1598 | 1984 |
| BPR | # latent features | [100,250] | 179 | 129 | 168 | 129 |
| BPR | regularizing parameter | [0.01,0.05] | 0.0194 | 0.0412 | 0.0374 | 0.0412 |
| BPR | learning rate | [0.001,0.07] | 0.0284 | 0.0092 | 0.0174 | 0.0092 |
| WARP | # iterations | [200,850] | 809 | 810 | 810 | 650 |
| WARP | # latent features | [15,40] | 21 | 21 | 21 | 26 |
| WARP | regularizing parameter | [10e-3,10e-6] | 4.7e-06 | 3.4e-6 | 8.1e-6 | 7.4e-6 |
| WARP | learning rate | [0.001,0.1] | 0.0472 | 0.0165 | 0.0191 | 0.0191 |
| WRMF | # iterations | [1000,2000] | 1276 | 1393 | 29 | 1288 |
| WRMF | # latent features | [100,250] | 201 | 107 | 493 | 109 |
| WRMF | regularizing parameter | [0.01,0.05] | 0.0374 | 0.0225 | 0.0432 | 0.0315 |
| MVAE | # iterations | [10,250] | 29 | 18 | 29 | 28 |
| MVAE | batch size | [25,500] | 469 | 34 | 152 | 127 |
| MVAE | # anneal steps | [100000,30000] | 127692 | 100212 | 244065 | 223544 |
| Hybrid | BPR_weight | [0.01,0.99] | 0.181 | 0.152 | 0.443 | 0.285 |
| Hybrid | WARP_weight | [0.01,0.99] | 0.422 | 0.193 | 0.290 | 0.184 |
| Hybrid | WRMF_weight | [0.01,0.99] | 0.0588 | 0.490 | 0.458 | 0.168 |
| Hybrid | MVAE_weight | [0.01,0.99] | 0.329 | 0.255 | 0.469 | 0.427 |
| H | regularizing parameter | [0.01,0.99] | 0.2414 | 0.2414 | 0.0656 | 0.0616 |
| HypeRS | regularizing parameter | [0.01,0.99] | 0.4554 | 0.4554 | 0.8301 | 0.6325 |
* **Beyond accuracy evaluation**: In this paper we only used user-item interactions. Future approaches could include additional information and relevant stakeholders so that fairness [13] and diversity [10] are also taken into account.
* **Consumption level**: We only captured the binary feedback between users and items. In real applications the user feedback is usually graded [8], which shows to what extent the user is interested in the item. This graded feedback could be reflected in the hypergraph to model user preferences more precisely.
* **Long-term vs short-term preferences**: In some applications such as news [9] and music [11] recommendation tasks, users' short-term preferences play important roles. Session-based RSs have been used to model such user short-term preferences. An ensemble RS could include models for both long-term and short-term preferences.
#### Acknowledgments
This work was executed within the imec.icon project NewsButler, a research project bringing together academic researchers (KU Leuven, VUB) and industry partners (Roularta Media Group, Bothrs, ML6). The NewsButler project is co-financed by imec and receives project support from Flanders Innovation & Entrepreneurship (project nr. HBC.2017.0628). The authors also acknowledge support from the Flemish Government (AI Research Program).
|
2302.09069 | The Effect of Information Type on Human Cognitive Augmentation | When performing a task alone, humans achieve a certain level of performance.
When humans are assisted by a tool or automation to perform the same task,
performance is enhanced (augmented). Recently developed cognitive systems are
able to perform cognitive processing at or above the level of a human in some
domains. When humans work collaboratively with such cogs in a human/cog
ensemble, we expect augmentation of cognitive processing to be evident and
measurable. This paper shows the degree of cognitive augmentation depends on
the nature of the information the cog contributes to the ensemble. Results of
an experiment are reported showing conceptual information is the most effective
type of information resulting in increases in cognitive accuracy, cognitive
precision, and cognitive power. | Ron Fulbright, Samuel McGaha | 2023-02-15T20:38:47Z | http://arxiv.org/abs/2302.09069v1 | # The Effect of Information Type on Human Cognitive Augmentation
###### Abstract
When performing a task alone, humans achieve a certain level of performance. When humans are assisted by a tool or automation to perform the same task, performance is enhanced--augmented. Recently developed cognitive systems are able to perform cognitive processing at or above the level of a human in some domains. When humans work collaboratively with such "cogs" in a human/cog ensemble, we expect augmentation of cognitive processing to be evident and measurable. This paper shows the degree of cognitive augmentation depends on the nature of the information the cog contributes to the ensemble. Results of an experiment are reported showing conceptual information is the most effective type of information resulting in increases in cognitive accuracy, cognitive precision, and cognitive power.
## 1 Introduction
Recent developments, most notably in the fields of unsupervised deep learning, have produced systems capable of outperforming human experts in many domains. Computers have outplayed human experts in various games such as card games, Checkers, and Chess for several years and within the last decade have conquered human champions in _Jeopardy!_ and Go (Ferrucci, 2012; Silver, et al., 2016; DeepMind, 2018). Going far beyond gameplaying, systems now diagnose cancer, childhood depression, dementia, heart attacks, achieve reading comprehension, and discover new patterns in mathematics better than human experts (Wehner, 2019; Lavars, 2019; Towers-Clark, 2019; Gregory, 2019). These systems are not artificially intelligent, yet they mimic, perform, or replace parts of human-level thinking. Systems like this are called _cognitive systems_, or "cogs" for short (Wladawsky-Berger, 2015; Gil, 2019; Kelly & Hamm, 2013).
Cogs like these are assistive tools used by humans in a collaborative engagement called a _human/cog ensemble_. Aggregate cognitive processing of a human/cog ensemble is therefore a mixture of artificial and biological thinking and exceeds the cognitive processing of a human acting alone. Using cogs, augmented humans outperform unassisted humans; therefore, we say the human is cognitively augmented. If cognitive performance is enhanced, we should be able to measure it. Doing so requires measuring either the information itself, the cognition, or the results of the cognition. None of these is an easy task yet. However, theoretical and practical work is progressing.
This paper presents the results of an experiment designed to investigate hypothesis H1, shown below, that the degree of cognitive augmentation achieved in a human/cog ensemble is dependent on the nature of information supplied to the human by the cog.
**H1:** The degree of cognitive augmentation achieved by humans working together on a task in collaboration with cognitive systems is dependent on the nature of information contributed by the cognitive system.
To investigate the hypothesis, we performed an experiment asking humans to solve several non-trivial puzzles. To simulate different contributions of a cognitive system, some humans were given no assistive information whereas others were given assistive information of two different types: _conceptual_ and _policy/principle_. Results showed both types of assistive information improved performance, but conceptual information had greater impact on cognitive performance than policy/principle information. Furthermore, we were able to calculate cognitive augmentation by calculating increases in cognitive accuracy and cognitive precision.
Literature and Previous Work
### Measuring Cognitive Augmentation
We can view data, information, knowledge, and wisdom (DIKW) as a hierarchy based on value, as shown in Fig. 1 (Ackoff, 1998). Data is obtained by sensing disturbances in the environment, information is processed data, knowledge is processed information, and wisdom is processed knowledge. Each level is of a higher value than the level below it because of the processing involved and the utility of the information stock at that level.
Processing at each level of the DIKW hierarchy can be modeled as a cognitive process transforming data, information, or knowledge, generically referred to as _information stock_, into a higher-valued form, as depicted in Fig. 2, where the transformation of the information stock is accomplished by the expenditure of a certain amount of _cognitive work_ (W) (Fulbright, 2020).
To illustrate how cognitive processing increases the value of information stock, consider a temperature sensor. The electrical conductivity of two different metals in a thermocouple is affected by the temperature of the environment in which the thermocouple is placed causing a detectable voltage potential. Detecting the voltage represents _data_--a direct sensing of a physical disturbance. To convert this reading to temperature a calculation must be performed. The calculation is a cognitive process combining the data sensed from the environment with information obtained from an engineering units reference table, and the knowledge of how to calculate the formula. The result of this cognitive process is _degrees_ and represents new information of a higher value than the data input into the cognitive process. Similarly, information is processed into knowledge and knowledge is processed into wisdom by additional cognitive processing.
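The thermocouple example can be made concrete with a few lines of Python. The reference table below is entirely made up for illustration; only the shape of the computation (data combined with reference information to yield new, higher-valued information) mirrors the description above.

```python
# Hypothetical reference table mapping thermocouple voltage (mV) to temperature (degrees C).
REFERENCE_TABLE_MV = [(0.0, 0.0), (1.0, 25.0), (2.0, 50.0), (3.0, 75.0)]

def to_degrees(voltage_mv):
    """Turn a raw voltage reading (data) into degrees (information) via interpolation."""
    for (v0, t0), (v1, t1) in zip(REFERENCE_TABLE_MV, REFERENCE_TABLE_MV[1:]):
        if v0 <= voltage_mv <= v1:
            return t0 + (t1 - t0) * (voltage_mv - v0) / (v1 - v0)
    raise ValueError("voltage outside the table range")

print(to_degrees(1.5))   # 37.5: the output is worth more than the raw voltage reading
```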
In a human/cog ensemble (a collaborative team), cognitive processing of the entire ensemble is a mixture of human cognitive processing and artificial cognitive processing (W* = W\({}_{\text{H}}\)+ W\({}_{\text{C}}\)) as depicted in Fig. 3 (Fulbright, 2020; Fulbright & Walters, 2020; Fulbright, 2020).
Figure 1: The DIKW Hierarchy.
Figure 2: A cognitive process as a transformation of information stock.
Previous work has examined how to measure the degree of cognitive augmentation achieved in a human/cog ensemble (Fulbright, 2017; Fulbright, 2018; Fulbright, 2019; Fulbright, 2020). A way of measuring the amount of cognitive work done by a cognitive process is to compare the value of the information stock before and after the processing, as shown in Eq. (1), where the value of the information stock is evaluated by the value function \(\psi\).
\[W=|\psi(S_{out})-\psi(S_{in})| \tag{1}\]
Eq. (1) therefore, focuses on the transformation effected by the cognitive process. One way to measure cognitive augmentation is to calculate a quantity called _cognitive power_ as shown in Eq. (2) where W represents an amount of cognitive work performed by one or more cognitive processes and \(t\) is the time required to perform W.
\[P\,=\,\frac{W}{t} \tag{2}\]
In general, cognitive power increases as the amount of cognitive work increases or the amount of time decreases. In a human/cog ensemble, contributions by either the human or the cog can result in either.
Another way to measure cognitive augmentation is to measure the increase in _cognitive accuracy_ and/or _cognitive precision_ of an augmented human. Cognitive accuracy is a measure of the ability to produce the correct, or preferred, output. Cognitive precision is a measure of the ability to produce _only_ the correct or preferred output as depicted in Fig. 4 where the oval represents the correct or preferred output.
Figure 4: Cognitive Accuracy and Cognitive Precision.
Figure 3: A Human/Cog ensemble performing a cognitive process.
The goal, of course, is to achieve high accuracy _and_ high precision (upper right quadrant of Fig. 4). Using chosen accuracy and precision performance metrics (\(x\) and \(y\)), comparing the performance of a human working alone (\(x\), \(y\)) to that of a human working in partnership with a cog in a human/cog ensemble (\(x^{\prime}\), \(y^{\prime}\)) quantifies the change in cognitive accuracy and cognitive precision, as shown in Eq. (3).
\[\Delta C_{A}=\frac{x-x^{\prime}}{x}\qquad\Delta C_{P}=\frac{y-y^{\prime}}{y} \tag{3}\]
For example, a human working alone might produce the correct result 4 times out of 10. If the same human, working in partnership with a cog, produces the correct result 8 times out of 10, then the cognitive accuracy has increased by two-fold, a 100% increase.
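The three quantities above fit into a few lines of Python; the helper names are ours, and we express the change relative to the unassisted value so that the worked example (4/10 correct alone versus 8/10 with a cog) comes out as the 100% increase quoted in the text.

```python
def cognitive_work(value_in, value_out):
    """Eq. 1: change in the value of the information stock."""
    return abs(value_out - value_in)

def cognitive_power(work, seconds):
    """Eq. 2: cognitive work per unit time."""
    return work / seconds

def relative_change(alone, augmented):
    """Eq. 3: relative change of an accuracy or precision metric."""
    return (augmented - alone) / alone

print(relative_change(alone=0.4, augmented=0.8))   # 1.0, i.e. a 100% increase in accuracy
```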
Human performance is _augmented_ by partnering with cogs and is superior to humans acting alone. However, not all human/cog ensembles result in the same level of cognitive augmentation. Different Levels of Cognitive Augmentation have been defined ranging from no augmentation at all (all human thinking) to fully artificial intelligence (no human thinking) as shown in Fig. 5 (Fulbright, 2020; Fulbright & Walters, 2020; Fulbright, 2020).
### Types of Information
Earlier, we characterized various types of information (data, information, knowledge, and wisdom) based on processing and the utility value of the information at the various levels. However, DIKW is not the only way to characterize information. Hertz and Rubenstein identified six types of information as shown in Fig. 6 (Hertz & Rubenstein, 1953; LISBON, 2014; Indeed, 2021).
Figure 5: Levels of Cognitive Augmentation.
Figure 6: Hertz and Rubenstein’s Six Types of Information.
Robert Horn, the developer of Information Mapping(tm), identified seven types of information as shown in Fig. 7 [16, 17, 18, 19].
Even though these two sources use different names and words, the categories of information types defined are very similar. In our experiment, we chose to use _conceptual_ and _policy_ (also called _principle)_ information. Examples of conceptual information include definitions, examples, and counter examples. Examples of principle information include guidelines, rules, goals, and objectives [10].
## 3 The Experiment
Participants were asked to solve four different puzzles listed below and shown in Fig. 8.
The puzzles were presented to the participants one at a time with the participant allowed to continue to the next puzzle only upon successful completion of the current puzzle. Two of the four puzzles involved basic
Figure 8: Four puzzles participants were asked to solve.
Figure 7: Horne’s Seven Types of Information.
mathematical functions (addition, subtraction, multiplication). One puzzle involved recognizing a pattern in a sequence of numbers. One puzzle involved decoding a simple substitution cipher. Each puzzle involved non-trivial kinds of cognition but was simple enough to be solved by anyone with a grade-school education and knowledge.
To investigate the effect of different types of information, some participants were presented with a hint along with the puzzle. Approximately 1/3 of the participants were given no hint (the "normal" group) and served as the control group. Approximately 1/3 of the participants were given a hint in the form of conceptual information ( the "concept" group). The conceptual hint was an example of a completed puzzle shown to the participants. The remaining 1/3 of the participants were given a hint in the form of principle/policy information (the "policy" group). The policy/principle hint for each puzzle involved a guideline or rule as shown below:
* **Square** "Each row is a different mathematical operation."
* **X puzzle** "The middle box and the empty box combine to equal the third box."
* **4 X 4** "Each row is based on a specific number. One row is a combination of the other three rows."
* **Message** "Each number is tied to a specific letter in the English alphabet."
To take part in the experiment, participants downloaded a computer program presenting each of the four puzzles and the assistive information (if any). Participants were given up to one hour to complete the puzzles. If, after an hour, all puzzles were not solved the attempt was counted as a failure. Participants were allowed to submit an attempted solution to a puzzle and then receive a message whether the solution was correct. If incorrect, the participant was allowed to repeat and submit another solution. Attempted solutions were limited to 25. If after 25 attempts the puzzles were not solved, the attempt was listed as a failure. Performance of the participants was assessed in several ways:
* Failure Percentage (inability to solve a puzzle)
* Total Overall Time (total time taken working on the puzzles)
* Average Attempts Per Puzzle
* Longest Individual Time per Puzzle
* Shortest Individual Time per Puzzle
* Highest Individual Number of Attempts per Puzzle
## 4 The Results
### Failure Percentage
During the testing phase, some participants failed to complete the puzzles within 25 attempts or one hour of time. Participants receiving conceptual information as a hint (the "concept" group) had the least number of failures whereas those receiving no information at all (the "normal" group) had the most failures as seen in Fig. 9. Success of the "concept" group was three times better than the "normal group."
Failure percentage is an inverse measure of cognitive accuracy: a failure percentage of 100% would mean a complete lack of accuracy and a failure percentage of 0% would mean perfect accuracy. Using Eq. (3), the decrease in failure percentage for each type of information can be calculated, showing that conceptual information has the greatest impact on failure percentage:
\[\Delta\ F_{Policy}\ =\ \frac{75\%-37\%}{75\%}=\ \ \ 51\% \tag{4}\]
\[\Delta\ F_{Conceptual}\ =\ \frac{75\%-25\%}{75\%}=\ \ \ 67\% \tag{5}\]
The inverse of failure percentage is also a measure of cognitive accuracy. Participants receiving conceptual information as a hint were successful 75% of the time. Participants receiving policy/principle information were successful 63% of the time. Participants receiving no assistive information were successful only 25% of the time. Therefore, when compared to no information, policy/principle information increased cognitive accuracy by 60% (\(\Delta\ C_{A}=60\%\) ), a 1.7 fold increase, and conceptual information increased cognitive accuracy by 200% (\(\Delta\ C_{A}=\ 200\%\) ), a three-fold increase.
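The arithmetic of Eqs. (4)-(5) can be replayed directly from the failure percentages read off Fig. 9 (75% for the normal group, 37% for the policy group, 25% for the concept group):

```python
failure = {"normal": 0.75, "policy": 0.37, "concept": 0.25}

for group in ("policy", "concept"):
    drop = (failure["normal"] - failure[group]) / failure["normal"]   # Eqs. 4 and 5
    print(f"{group}: failure percentage reduced by {drop:.0%}")
# policy: failure percentage reduced by 51%
# concept: failure percentage reduced by 67%
```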
### Total Overall Time
The total overall time for a group of participants is the sum of all times spent by participants in the group, measured in seconds. Participants receiving conceptual information as a hint had the shortest overall time (the "concept" group) whereas those receiving no information at all (the "normal" group) had the longest overall time as seen in Fig. 10. The "concept" group spent less than half the amount of time the "normal" group did.
Figure 9: Failure Percentage for Different Types of Information.
By calculating the reduction in time, we see conceptual information had the greatest impact on time spent on the puzzles.
\[\Delta\,T_{Policy}=\,\frac{90,000-55,000}{90,000}=\quad 39\% \tag{6}\]
\[\Delta\,T_{Conceptual}=\,\frac{90,000-35,000}{90,000}=\quad 61\% \tag{7}\]
Before any cognitive processing was done, the four puzzles were in the unsolved state with a certain amount of value associated. After successfully solving the four puzzles, they were in the solved state at an increased value. Therefore, according to Eq. (1), a nonzero amount of cognitive work was performed by the participants (W \(>\) 0). Therefore, using Eq. (2), cognitive power can be calculated for each type of information as shown in Eq. (8).
\[P_{Normal}=\,\frac{W}{90,000}<\,\,\,P_{Policy}=\,\frac{W}{55,000}<\,\,P_{Conceptual}=\,\frac{W}{35,000} \tag{8}\]
Cognitive augmentation by virtue of conceptual information yielded a cognitive power more than 2.5 times greater than no information (no augmentation) and more than 1.5 times that of policy/principle information.
### Average Attempts per Puzzle
Participants were allowed to attempt each puzzle multiple times (up to 25 times). The number of attempts for each group is the average of the number of attempts for each participant in a group for each puzzle. Participants receiving conceptual information as a hint (the "concept" group) had the fewest number of attempts for each puzzle whereas those receiving no information at all (the "normal" group) had the greatest number of attempts for each puzzle as seen in Fig. 11. The "normal" group had three times the number of attempts over the "concept" group.
Figure 10: Total Overall Time (_in seconds_).
The number of attempts per puzzle is a measure of _precision_. Correct solution of a puzzle on the first try would represent maximal cognitive precision, with cognitive precision decreasing as the number of incorrect attempts increases. For each puzzle, comparing the impact of policy/principle information and conceptual information against no information shows that conceptual information increased cognitive precision by 63%-65%.
**Message Puzzle:** \[\Delta\;C_{P}(policy)=\tfrac{19-11}{19}=42\%\qquad\Delta\;C_{P}(conceptual)=\tfrac{19-7}{19}=63\%\]
**4x4 Puzzle:** \[\Delta\;C_{P}(policy)=\tfrac{20-12}{20}=40\%\qquad\Delta\;C_{P}(conceptual)=\tfrac{20-7}{20}=65\%\]
**X Puzzle:** \[\Delta\;C_{P}(policy)=\tfrac{20-12}{20}=40\%\qquad\Delta\;C_{P}(conceptual)=\tfrac{20-7}{20}=65\%\]
**Square Puzzle:** \[\Delta\;C_{P}(policy)=\tfrac{22-16}{22}=27\%\qquad\Delta\;C_{P}(conceptual)=\tfrac{22-8}{22}=64\%\]
### Longest and Shortest Individual Time per Puzzle
Participants were allowed to spend as much time as they wished on each puzzle. Since each puzzle required different types and kinds of cognitive effort to complete, time (measured in seconds) spent on each puzzle varied:
* Message: 160s - 1000s
* 4x4: 100s - 1800s
* X: 25s - 800s
* Square: 60s - 2600s
Here, we considered only the times resulting in a completed puzzle. As seen in Fig. 12, the type of information did not significantly affect the shortest times on three out of four of the puzzles but participants receiving conceptual information (the "concept" group) were able to complete the "square" puzzle 3-4 times faster than participants receiving no information or policy information.
Figure 11: Average Attempts Per Puzzle.
### Lowest and Highest Individual Number of Attempts per Puzzle
Participants were allowed to attempt a puzzle multiple times. The number of attempts before achieving a successful completion varied with the "4x4" and the "square" puzzle being the most difficult to solve.
* Message: 1 - 6 attempts
* 4x4: 1 - 14 attempts
* X: 1 - 6 attempts
* Square: 1 - 19 attempts
As seen in Fig. 13, all four puzzles were able to be solved in one or two attempts regardless of the type of information received as a hint. The exception is the "square" puzzle. Without any hint at all (the "normal" group) participants required at least seven attempts to achieve success. However, with some information (the "policy" and "conceptual" groups), participants were able to solve the "square" puzzle in only one or two attempts. The effect of type of information on "square" puzzle performance is also seen when considering the highest number of attempts per puzzle as seen in Fig. 13. Participants receiving no information (the
Figure 12: Shortest and Longest Time Per Puzzle.
"normal" group) required as many as 19 attempts to complete whereas participants receiving policy information (the "policy" group) required fewer attempts and participants receiving conceptual information (the "concept" group) required far fewer attempts. The "concept" group required almost one-half the number of attempts as the "normal" group.
## 5 Conclusion
We have confirmed the hypothesis described earlier:
**H1:** The degree of cognitive augmentation achieved by humans working together on a task in collaboration with cognitive systems is dependent on the nature of information contributed by the cognitive system.
Cognitive performance of the human participants was enhanced to differing degrees when receiving information in the form of a hint. When presented with two different types of information as a hint on how to solve a set of puzzles, _conceptual_ information improved performance more than _policy/principle_ information. Also, _conceptual_ and _policy/principle_ information improved human performance over participants receiving no information at all as a hint.
Based on these results, when humans collaborate with cognitive systems as a team, we expect to see a greater degree of cognitive augmentation when the cog provides conceptual information to the human.
Figure 13: Lowest and Highest Individual Number of Attempts Per Puzzle.
Cognitive accuracy was increased by 200% using conceptual information. Cognitive precision was increased by 63%-65% when using conceptual information. Cognitive power was increased by 2.5 times (a 150% increase) when using conceptual information.
These results should be taken into consideration by cognitive system designers and developers to tailor the way in which the cognitive systems assist their human partners. Careful attention should be given to the nature of information provided to the human by the cog.
## 6 Further Research and Discussion
It is important to note the experimental results reported in this paper use only _conceptual_ and _policy/principle_ types of information. Further studies should include other types of information identified by [21, 22, 23]: _procedure, process, structure, classification, and fact_. Is there a type of information able to achieve even higher levels of cognitive augmentation than _conceptual_?
It is also important to note that the cognitive effort needed to solve the four puzzles in our experiment represents only a fraction of the possible cognitive efforts to be examined. Future studies should utilize a vast array of cognitive efforts and seek to use cognitive effort tested and scored in other studies. Has the phenomenon of "the type of information leading to different levels of cognitive augmentation" already been observed in other studies?
When running similar experiments in the future it would be of value to capture age, gender, and other identifying information. This could lead to discovering if the effects of certain types of information differ for different age groups, gender groups, etc.
We realize the wording and presentation of the information given as hints could have an effect. Future studies could present the same type of information in multiple ways to discover if the way information is presented affects cognitive augmentation.
|
2303.06073 | I Tag, You Tag, Everybody Tags! | Location tags are designed to track personal belongings. Nevertheless, there
has been anecdotal evidence that location tags are also misused to stalk
people. Tracking is achieved locally, e.g., via Bluetooth with a paired phone,
and remotely, by piggybacking on location-reporting devices which come into
proximity of a tag. This paper studies the performance of the two most popular
location tags (Apple's AirTag and Samsung's SmartTag) through controlled
experiments - with a known large distribution of location-reporting devices -
as well as in-the-wild experiments - with no control on the number and kind of
reporting devices encountered, thus emulating real-life use-cases. We find that
both tags achieve similar performance, e.g., they are located 55% of the times
in about 10 minutes within a 100 m radius. It follows that real time stalking
to a precise location via location tags is impractical, even when both tags are
concurrently deployed which achieves comparable accuracy in half the time.
Nevertheless, half of a victim's exact movements can be backtracked accurately
(10m error) with just a one-hour delay, which is still perilous information in
the possession of a stalker. | Hazem Ibrahim, Rohail Asim, Matteo Varvello, Yasir Zaki | 2023-03-09T17:19:19Z | http://arxiv.org/abs/2303.06073v2 | # I Tag, You Tag, Everybody Tags!
###### Abstract.
Location tags enable tracking of personal belongings. This is achieved _locally_, e.g., via Bluetooth with a paired phone, and _remotely_, by piggybacking on the location reported by location-reporting devices which come into proximity of a tag. There has been anecdotal evidence that location tags are also misused to stalk people. This paper studies the performance of the two most popular location tags (Apple's AirTag and Samsung's SmartTag) through _controlled_ experiments - with a known large distribution of location-reporting devices - as well as _in-the-wild_ experiments - with no control on the number and kind of reporting devices encountered, thus emulating real-life use-cases. We find that both tags achieve similar performance, e.g., they are located 60% of the times in about 10 minutes within a 100 meter radius. It follows that real time stalking via location tags is impractical, even when both tags are concurrently deployed which achieves comparable accuracy in half the time. Nevertheless, half of a victim's movements can be backtracked accurately (10 meter error) with just a one-hour delay.
Both vendors leverage their large installed base of devices - Apple devices for AirTags, Samsung devices for SmartTags - to report the location of tags encountered in the wild. Whenever a location-reporting device comes in the proximity of a location tag, _i.e.,_ it receives a Bluetooth beacon, it updates the tag's location in the cloud using its GPS coordinates as an approximation. Tag owners can check a tag's location via its companion application. This process is private, without leaking any information about either the tag's owner or the device which has reported its last location.
Apple and Samsung have implemented measures to deter malicious and unsolicited tracking, yet these measures have been insufficient. The main issue is that each vendor only alerts a user if an unpaired tag from the same vendor has been in their vicinity for an extended period of time. This means that, for example, an AirTag can be used to stalk Samsung users and vice-versa. To address this concern, Apple released "Tracker Detect" [5], an Android application which allows users to manually scan for nearby AirTags. Heinrich et al. [11] improved this design by automatically alerting users if they encounter the same AirTag in three separate locations within a 24-hour period. Similarly, Briges et al. [8] extend this design to generic tags, not just AirTags. These applications are only partially effective due to MAC address randomization [7], which makes tags eventually appear as new devices to a third-party application. Last but not least, Mayberry et al. [14] developed a custom location tag which mimics an AirTag, can be tracked in Apple's FindMy network, and circumvents Apple's detection of malicious AirTags.
To the best of our knowledge, no previous paper has investigated the performance (_i.e.,_ accuracy and responsiveness) of location tags in real-world scenarios. Instead, Givehchian et al. [10] have investigated the privacy of devices using the BLE protocol, such as location tags, showing that physical-layer identification is viable although often unreliable.
## 3. Methodology
Figure 1 shows our methodology to study the two most popular location tags on the market: AirTag (Apple) and SmartTag (Samsung). The methodology is generic and can be applied to other tags. The figure shows two _data collection servers_ (MacOS and Ubuntu) whose task is to continuously monitor the data available in the companion app of each tag. The figure also shows multiple _vantage points_, _i.e.,_ mobile devices mounting the two tags via a custom cover.
### Location Tags Pairing
**Apple AirTag:** This tag must be paired and registered via Bluetooth with an iOS or iPadOS device running version 14.5 or above, _i.e.,_ not MacOS. Once the tag is linked to the Apple ID of the device it is paired with, it is then displayed in the FindMy app across all devices that have signed in with that Apple ID (including MacOS devices).
**Samsung SmartTag:** This tag can only be paired and registered via Bluetooth with a Samsung Galaxy device running Android \(\geq\)8.0. The tag is linked to the Samsung account of the device it is registered with, and it is displayed as a linked device in the Samsung SmartThings app.
### Tag Data Collection
Neither Samsung nor Apple offers public APIs to access a tag's location data (\(<\)timestamp, GPS location\(>\)) as maintained by each tag's companion app: FindMy (Apple) and SmartThings (Samsung). In addition, FindMy does not support location history, and SmartThings only provides low-resolution location history for up to 6 days. Therefore, we developed "crawlers" for both apps which monitor location changes once a minute, and can thus build fine-grained location histories.
**FindMy Crawler:** The FindMy app is available for most Apple devices, e.g., Macbook, iPhone, and iPad. For ease of instrumentation, we write our crawler for MacOS. Note that MacOS version 11 or above is needed, since FindMy on older MacOS versions does not support AirTags. In FindMy, users can find the last reported coordinates of any AirTag paired with their account as follows. First, they click on the targeted tag in the list of devices in FindMy and select the option to open the location in Apple Maps. Once Apple Maps is launched, a pin is placed on the map with the latest reported location of the tag. With a right-click on the pin, the user is given the option to "copy coordinates".
We wrote a FindMy crawler in Python using pyautogui [3] to automate the above operations, and store in a file the last reported coordinates of each available AirTag. Along with a tag's coordinates, we also store a timestamp approximating when the coordinates were reported. This is computed using the crawling epoch time and the time at which a tag was last seen, which is reported by FindMy as "X minutes ago", thus adding a potential error of up to one minute. Since this "last seen" time cannot be extracted from the FindMy app as text, we use Optical Character Recognition (OCR) [15] to convert a screenshot of its value into usable text.
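A minimal sketch of this crawling loop is shown below; the screen coordinates, region boxes, menu positions, and the log file path are illustrative placeholders that depend on the display layout of the MacOS data collection server, and the actual crawler additionally iterates over all paired tags and handles failures.

```python
import time
import pyautogui    # GUI automation: click the tag, open Apple Maps, copy the pin
import pyperclip    # read the copied coordinates from the clipboard
import pytesseract  # OCR for the "last seen X minutes ago" label

# Placeholder screen positions/regions (depend on window layout and resolution).
TAG_ROW = (200, 300)                 # target AirTag in the FindMy device list
OPEN_MAPS = (260, 360)               # option that opens the location in Apple Maps
MAP_PIN = (900, 500)                 # pin position once Apple Maps opens
COPY_MENU = (930, 520)               # "Copy Coordinates" context-menu entry
LAST_SEEN_BOX = (150, 340, 300, 20)  # region showing "Last seen X minutes ago"

def crawl_once(log_path="findmy_log.csv"):
    pyautogui.click(*TAG_ROW); time.sleep(2)

    # The "last seen" label is not selectable text, so OCR a screenshot of it.
    last_seen = pytesseract.image_to_string(
        pyautogui.screenshot(region=LAST_SEEN_BOX)).strip()

    # Open the tag location in Apple Maps and copy the pin coordinates.
    pyautogui.click(*OPEN_MAPS); time.sleep(3)
    pyautogui.rightClick(*MAP_PIN); time.sleep(1)
    pyautogui.click(*COPY_MENU); time.sleep(1)
    coords = pyperclip.paste()

    with open(log_path, "a") as f:
        f.write(f"{time.time()},{coords},{last_seen}\n")

if __name__ == "__main__":
    while True:       # poll once per minute, as in our crawling setup
        crawl_once()
        time.sleep(60)
```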
**SmartThings Crawler:** The SmartThings app is only available for Android. In the app, users select a tag from the list of tags associated with their account, and then click "view location", which opens Google Maps with a pin showing the tag's location. At this point, the tag's coordinates are available in the search bar and can be copied. We automate SmartThings via the Android Debug Bridge (ADB [1]), a rich Android protocol which allows us to automate app operations like launching, scrolling, and GUI interaction. We connect an Android device, previously paired with one or more SmartTags, to a Linux machine via USB. ADB is then
Figure 1. Visualization of our measurement platform. On the left, two data collection servers (MacOS and Ubuntu) run the FindMy and SmartThings crawlers. On the right, several views of our vantage point, a Redmi Go equipped with two tags.
used to launch SmartThings and iterate over the tags. Once a tag's coordinates are available in Google Maps, they are copied and logged to a file. The same OCR-based procedure described for FindMy is used to approximate the time at which the tag location was last updated.
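A simplified sketch of the ADB-driven loop is shown below. The package name, tap coordinates, and the use of a `uiautomator` dump to read the coordinates (instead of copying them from the search bar) are assumptions made for illustration; the real crawler follows the copy-and-OCR procedure described above.

```python
import re
import subprocess
import time

def adb(*args):
    """Run an adb command against the USB-connected device and return stdout."""
    return subprocess.run(["adb", *args], capture_output=True, text=True).stdout

def crawl_tag(tag_tap_xy, log_path="smartthings_log.csv"):
    # Launch SmartThings (package name assumed) and open the tag entry.
    adb("shell", "monkey", "-p", "com.samsung.android.oneconnect", "1")
    time.sleep(5)
    adb("shell", "input", "tap", str(tag_tap_xy[0]), str(tag_tap_xy[1]))
    time.sleep(3)
    adb("shell", "input", "tap", "540", "1600")   # placeholder "view location" button
    time.sleep(5)

    # Dump the UI hierarchy and extract the coordinates shown in the search bar.
    adb("shell", "uiautomator", "dump", "/sdcard/ui.xml")
    ui_xml = adb("shell", "cat", "/sdcard/ui.xml")
    match = re.search(r"(-?\d+\.\d+),\s*(-?\d+\.\d+)", ui_xml)
    if match:
        with open(log_path, "a") as f:
            f.write(f"{time.time()},{match.group(1)},{match.group(2)}\n")

if __name__ == "__main__":
    while True:                    # poll once per minute
        crawl_tag((540, 900))      # placeholder tap position of the first tag
        time.sleep(60)
```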
### Vantage Point
A vantage point consists of an Android device (Xiaomi Redmi Go with a 1.4 GHz quad-core CPU and 1 GB of RAM), an AirTag, and a SmartTag; both tags are mounted on a custom cover for the mobile device which we designed and 3D printed (see Figure 1). The tags are paired with testing Samsung and Apple accounts as described in Section 3.1. Note that the Android device used is not capable of reporting the location of either the AirTag or the SmartTag, and thus does not impact the accuracy of the experiments.
The Android device is equipped with an app we developed which collects GPS data, if available. The app buffers pairs of \(<\)timestamp, GPS location\(>\) at a 5-second frequency for up to five minutes; only GPS variations are recorded, thus avoiding redundant data. After five minutes, the buffered data is POSTed to a server in our lab, if a data connection is available. Otherwise, the data is kept in the buffer until a connection eventually becomes available. The \(<\)timestamp, GPS location\(>\) pairs are used as the ground truth of where the tags were located at a given point in time. This allows us to evaluate the accuracy of a tag's location as shown by its companion app, _i.e.,_ as reported by location-reporting devices opportunistically encountered by location tags.
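The buffering behaviour of the app can be summarised by the sketch below, written in Python only for readability (the actual app runs natively on Android); the server URL and the GPS reader are placeholders.

```python
import time
import requests  # POST buffered samples to the lab server

SERVER_URL = "https://example.org/vantage-point/upload"  # placeholder endpoint
SAMPLE_PERIOD_S = 5      # sample GPS every 5 seconds
FLUSH_PERIOD_S = 300     # attempt an upload every 5 minutes

def read_gps():
    """Placeholder for the platform GPS reader; returns (lat, lon) or None."""
    return None

def run():
    buffer, last_fix, last_flush = [], None, time.time()
    while True:
        fix = read_gps()
        # Record only changes in position to avoid redundant samples.
        if fix is not None and fix != last_fix:
            buffer.append({"ts": time.time(), "lat": fix[0], "lon": fix[1]})
            last_fix = fix
        # Every five minutes, POST the buffer; if there is no data connection,
        # keep the samples and retry at the next flush.
        if time.time() - last_flush >= FLUSH_PERIOD_S and buffer:
            try:
                requests.post(SERVER_URL, json=buffer, timeout=10)
                buffer = []
            except requests.RequestException:
                pass
            last_flush = time.time()
        time.sleep(SAMPLE_PERIOD_S)
```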
## 4. Data Collection
This section describes two data-sets (controlled and in-the-wild) we have collected using the previous methodology. It further details the crawling infrastructure we used.
**Controlled Experiments** - We deployed an AirTag and a SmartTag at a university cafeteria over five days. The cafeteria serves roughly 1,000 students, faculty, and staff, and operates every day between 7:30am and 10pm, with peak hours during lunch (12 to 3pm) and dinner (6 to 9pm). Meanwhile, we ran our crawlers and collaborated with the university's IT infrastructure team to monitor the number of Apple and Samsung devices connected to the WiFi access point in the cafeteria. This is achieved by inspecting the destinations of the traffic generated by each device connected to WiFi. The rationale is that a clear distinction arises between Samsung and Apple devices since they rely on disjoint and proprietary data-centers to run their services. This approach was needed as modern mobile phones hide their vendor information from the MAC address (Bartos et al., 2017). This information was aggregated into a count of the number of Apple and Samsung devices at different time periods, and thus completely anonymized.
One limitation of this experiment is that we miss devices not connected to WiFi. While we cannot quantify this limitation, most phones rely on WiFi due to poor mobile coverage in the cafeteria. Another limitation is that we approximate the number of location-reporting devices by the number of devices connected to WiFi. This can be an overestimate, especially for Samsung devices, whose users are required to opt in to enable this behavior.
We also conduct an experiment in a secluded area - 300 meters away from any building - where only our tags and phones are present. For each tag, we deploy four phones at distances of 0, 10, 20, and 50 meters from the tag and measure both the frequency and strength of the Bluetooth beacons received. SmartTag beacons are easy to detect as they carry the name of the sending tag. AirTag beacons share the first 4 bytes of their header ("1EFF004C12").
**In The Wild Experiments** - We deployed our vantage points via four volunteers between March and August 2022. In total, the tags were carried along 9,378 km across six countries and 20 cities (see Table 1). Study participants were instructed to carry the vantage point as much as possible, and only interact with it to charge the phone, connect to a WiFi network, or insert a SIM card with a mobile data plan.
To avoid biasing results in favor of either tag, participants ensured the location reporting option was disabled on any personal Samsung or Apple device they owned. Other family members were not required to do so. Note that we filter data recorded within a 300 meter radius of each participant's _home_ location, so as not to bias the data-set in the event of a neighbor or family member's phone repeatedly reporting a tag's location. Home locations are defined as our participants' homes, hotels, or any other place in which they slept overnight. Overall, this filter accounted for 65% of all data collected.
**Ethics** - We obtained IRB approval (HRP-2021-185) and informed participants of our data collection practices through a consent form. While we collect GPS data, we do not gather any identifiable or sensitive personal information.
## 5. Results
This section analyzes the location tag data-sets collected in the wild and via controlled experiments (busy university cafeteria and secluded area). We first introduce metrics and methodology we have devised to analyze location tag data-sets, and then dive into the results.
### Methodology
We analyze the performance of AirTag and SmartTag both independently and _combined_, which emulates a scenario where Apple and Samsung devices can report the location of each
| Country | # of Cities | # Reports Samsung | # Reports Apple | Walk/Jog/Transit (km) | Days |
| --- | --- | --- | --- | --- | --- |
| USA | 2 | 145 | 4,821 | 14/22/871 | 30 |
| IT | 10 | 1,361 | 4,520 | 157/68/3,170 | 28 |
| UAE | 2 | 1,442 | 9,572 | 145/151/3,384 | 52 |
| PK | 1 | 129 | 454 | 13/16/165 | 2 |
| CH | 1 | 331 | 489 | 14/16/62 | 3 |
| DE | 4 | 187 | 1,225 | 46/45/1,021 | 5 |
| Tot. | 20 | 3,595 | 21,081 | 388/317/8,673 | 120 |

Table 1. Summary of the data-set collected in the wild. # Reports refers to the number of times that tag locations were reported as "Now" in each companion app.
other's tags, functionally detaching the two tags from their proprietary ecosystems. This scenario is also representative of a victim being stalked by both tags concurrently. We rely mainly on two metrics: _accuracy_ and _responsiveness_.
**Accuracy** - At a high level, assessing the accuracy of a tag consists of comparing its reported location, at a given time, with the location of its associated vantage point. Several factors might impact a tag's accuracy. First and foremost, the tag's location is approximated by the GPS location of the reporting device. Given that Bluetooth has a 100 meter range, this can cause an error of up to 100 meters. Another source of error is the movement of both the tag and the reporting device: as these devices move, the time needed to extract and report the GPS location can introduce some error. For example, when moving on a high speed train (300 km/h), our sampling of the GPS locations every 5 seconds can introduce an error of up to 400 meters.
For a given tag, we group the locations reported within the same X-minutes interval into the same "bucket". For each X-minutes "bucket", we calculate the distance between the location reported by the vantage point and the locations crawled from the tag's companion app. If the distance between the vantage point's location and a tag's location is below a (radius) threshold we count a "hit", otherwise we count a "miss". We compute a tag's accuracy as percentage of hits.
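The hit/miss computation can be sketched as follows, assuming `tag_reports` and `ground_truth` are lists of (timestamp, latitude, longitude) tuples; the haversine distance is used to compare coordinates.

```python
import math
from collections import defaultdict

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS coordinates."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def accuracy(tag_reports, ground_truth, interval_min=10, radius_m=100):
    """Percentage of X-minute buckets in which the tag was located within radius_m."""
    bucket_s = interval_min * 60
    tags, truth = defaultdict(list), defaultdict(list)
    for ts, lat, lon in tag_reports:
        tags[int(ts // bucket_s)].append((lat, lon))
    for ts, lat, lon in ground_truth:
        truth[int(ts // bucket_s)].append((lat, lon))

    hits = misses = 0
    for b, tag_locs in tags.items():
        if b not in truth:
            continue
        # A bucket is a "hit" if any reported location is within the radius
        # of any ground-truth location recorded in the same interval.
        close = any(haversine_m(tlat, tlon, glat, glon) <= radius_m
                    for tlat, tlon in tag_locs for glat, glon in truth[b])
        if close:
            hits += 1
        else:
            misses += 1
    return 100.0 * hits / (hits + misses) if hits + misses else 0.0
```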
To identify the radii of interest, we analyzed the combined accuracy of the location tags as we increased the radius of reporting across different time intervals (see Figure 8 in Appendix B). In the case of short time intervals (1 and 10 minutes), the accuracy increases as the radius increases, before plateauing at roughly 100 meters. For longer time intervals, there is no significant improvement in accuracy beyond 50 meters. Accordingly, we use the following radii in our analysis: 10, 50, and 100 meters.
**Responsiveness** - Having accurate tag locations is important, but the locations also need to be reported in a timely manner. If a tag's location is updated frequently, then the owner will have less area to backtrack upon realizing that the "tagged" object was lost. At the same time, a high update frequency is also an enabler of stalking or unsolicited tracking of a person. We calculate tag responsiveness as the time difference between the timestamp of the first hit - _i.e.,_ when the distance between the vantage point's location and a tag's location is below a radius - and the first time that the vantage point reported that location.
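Under the same assumptions (and reusing `haversine_m` from the sketch above), a simplified responsiveness computation is:

```python
def responsiveness_s(tag_reports, ground_truth, radius_m=100):
    """Seconds between the vantage point first reporting a location and the
    first tag report ("hit") that falls within radius_m of that location."""
    for g_ts, g_lat, g_lon in sorted(ground_truth):
        for t_ts, t_lat, t_lon in sorted(tag_reports):
            if t_ts >= g_ts and haversine_m(t_lat, t_lon, g_lat, g_lon) <= radius_m:
                return t_ts - g_ts
    return None  # the tag was never located within the radius
```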
### Controlled
We start with the analysis of the _signal strength_ of the Bluetooth beacons emitted by each tag. We measure the signal strength in a secluded area with phones located at increasing distances. Figure 2 shows that, regardless of the distance considered, SmartTag beacons are received with about 10dBm higher RSSI (Received Signal Strength Indicator) than AirTag beacons. The overall lower RSSI of AirTag beacons makes them impossible to receive at a distance of 50 meters, unlike SmartTag beacons.
Next, we investigate each tag's _update rate_, computed as the number of location updates reported by location-reporting devices every hour. Accordingly, we focus on the controlled experiments performed in a cafeteria, where the number of Samsung/Apple devices encountered by each tag naturally varies over time. Figure 3 shows the update rate as a function of the surrounding location-reporting devices. The figure shows, for each hour of the day, the average (over 5 days) tag update rate and device count, _i.e.,_ the number of Apple and Samsung devices present in the cafeteria. The shaded areas and error bars in the figure report the standard deviation of each metric. The figure shows an overall similar update rate between tags, peaking at roughly 15 updates per hour during lunch and dinner, and dipping to zero as the cafeteria closes overnight. However, the figure also shows that there were far more Apple than Samsung devices, up to 6 times more during peak hours, e.g., 320 Apple devices versus only 50 Samsung devices at 8pm.
To further understand the previous result, Figure 4 shows the update rate as a function of the likelihood of having N location-reporting devices within one hour, e.g., up to 10 and between 10 and 20. As expected from Figure 3, it is more likely to find few Samsung devices, e.g., less than 20, whereas it is more likely to find many Apple devices, e.g., between 100 and 300. The key result of this analysis is that, while both AirTags and SmartTags converge to a similar maximum update rate (15-20 updates per hour), they do so in very different ways. Samsung implements an _aggressive_ update strategy, which quickly converges to the maximum update rate. In contrast, Apple implements a _conservative_ strategy, e.g., half the update rate of Samsung when fewer than 20 devices are present. Note that the update rate for Samsung was not measured beyond 71-80 encountered devices per hour because there were never more than 80 Samsung phones in the cafeteria at any hour during the experiment.
### In-The-Wild
**Tag Accuracy and Responsiveness** - We begin our analysis by investigating each tag's accuracy within a given radius as a function of its responsiveness. Figure 5 summarizes this analysis as we consider radii of 10, 50, and 100 meters; note that "combined" refers to a unified Apple/Samsung ecosystem. Intuitively, Figure 5 (a,b,c) shows that relaxing the responsiveness, _i.e.,_ allowing more time to locate a tag within a radius, improves tag accuracy, e.g., the combined
Figure 2. Beacon RSSI for each tag at different distances.
tag's accuracy for larger radii (50 and 100 meters) grows from 10% to 80% as the responsiveness grows from one to 120 minutes. Combining tags offers a 15% improvement, on average, over the accuracy of each individual tag.
The previous observations also apply to a small radius (10 meters, see Figure 5a), although with a few important differences. First, one minute is too little time to locate a tag within such a small radius, e.g., an accuracy of 2% versus 8-10% at larger radii. Second, as we relax the responsiveness, the tag's accuracy increases much more slowly than what is observed for larger radii, e.g., 40-45% versus 60-63% assuming a responsiveness of 25 minutes. This happens because, as both tags and reporting users might move, it is more challenging to correctly report the right location with such a small radius and high responsiveness. Finally, the maximum accuracy caps at 72% when considering both tags combined, or 8% less than what is observed for larger radii. Given the slow responsiveness allowed, this reflects errors introduced by approximating a tag's location with the reporting device's location, which is unlikely to be more than 50 meters away as per Figure 2.
Finally, if we focus on each tag independently, Figure 5a shows that SmartTag (orange lines) slightly outperforms AirTag (blue lines) at a radius of 10 meters. However, at larger radii this trend does not hold, with both tags performing similarly at radii of 50 and 100 meters. This result likely depends on Samsung's aggressive strategy (see Figure 4), which allows higher accuracy in more challenging scenarios, e.g., a small radius, as well as SmartTags' stronger signal strength (see Figure 2).
**Mobility and Time of the Day** - We continue our analysis by exploring the effect of different mobility and temporal characteristics on the accuracy of each tag. For this analysis, we assume a responsiveness of 10 minutes and radii of 10, 50, and 100 meters. We also compute the statistical significance between different mobility and temporal scenarios by running t-tests across the average accuracy computed for each scenario. In Figures 5d-f, statistical significance is denoted using the following symbols: ns denotes \(p>0.05\), * denotes \(0.01<p<0.05\), ** denotes \(0.001<p<0.01\), *** denotes \(0.0001<p<0.001\), and **** denotes \(p<0.0001\).
Figure 5d shows the average tag accuracy - with 95% confidence intervals reported as error bars across the different radii considered - as we vary how fast a tag is moving (as per our ground truth). We find that while walking at a pedestrian speed (\(<6.0\) km/h), the accuracy is maximized for both tags, individually and combined. The rationale behind this finding is that walking represents a good equilibrium between the number of devices the tag may be exposed to, e.g., higher than when stationary, and the length of the time window for the Bluetooth signal to be picked up by a location-reporting device. As the speed increases, e.g., when jogging (between 6.0 and 12.0 km/h) or in transit (\(\geq 12.0\) km/h), the accuracy deteriorates due to the limited time available for Bluetooth communication.
Figure 5e shows the average tag accuracy during different times of the day. The figure shows no significant differences between morning (6 A.M. to 10 A.M.), lunch (10 A.M. to 2 P.M.), afternoon (2 P.M. to 6 P.M.), and evening hours (6 P.M. to 10 P.M.), but a statistically significant decrease at night (10 P.M. to 2 A.M.). We next explore the potential impact of weekdays and weekends on the accuracy. Figure 5f shows a significant increase in tag accuracy on weekends as compared to weekdays, likely due to greater outdoor activity by the general public in the locations visited.
Figure 4. AirTag/SmartTag update rates as a function of the likelihood to have N location-reporting devices within one hour.
Figure 3. Update rates of AirTag and SmartTag at different times of day in a busy university cafeteria.
**Population Density** - Intuitively, the accuracy of a tag depends on the number and type of devices in its vicinity. While we cannot collect this information in the wild, we approximate it with the Kontur Hexagon Population density data set [2], which reports population densities within H3 hexagons inferred from satellite images of building density. H3 is Uber's Hexagonal Hierarchical Spatial Index [6], which groups GPS locations as hexagons.
We group GPS locations from our data-set as hexagons using a "resolution" of eight as in the Kontur data set; see Appendix A for more details. We threshold the different population density buckets as the 33rd, 66th, 100th percentiles of the population densities of all hexagons visited in our study. As such, we designate hexagons which hold a population \(<600\) (33rd percentile) as "low density", those with a \(600\leq\) population \(<1,750\) (66th percentile) as "medium density" and those with population \(\geq 1,750\) as "high density".
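A sketch of the hexagon grouping is shown below, assuming the `h3` Python bindings (v3 API; the v4 API renames `geo_to_h3` to `latlng_to_cell`) and a pre-loaded mapping from H3 cell to Kontur population; the helper and variable names are illustrative.

```python
import h3          # Uber's hexagonal spatial index (v3 API assumed)
import numpy as np

RESOLUTION = 8     # same resolution as the Kontur population data set

def bucket_samples(samples, kontur_population):
    """samples: list of (lat, lon); kontur_population: dict {h3_cell: population}.
    Returns the samples grouped into low/medium/high density buckets."""
    cells = [h3.geo_to_h3(lat, lon, RESOLUTION) for lat, lon in samples]
    pops = np.array([kontur_population.get(c, 0) for c in cells])

    # Thresholds: 33rd and 66th percentiles over all visited hexagons
    # (roughly 600 and 1,750 inhabitants per hexagon in our data).
    lo, hi = np.percentile(pops, [33, 66])
    buckets = {"low": [], "medium": [], "high": []}
    for sample, pop in zip(samples, pops):
        if pop < lo:
            buckets["low"].append(sample)
        elif pop < hi:
            buckets["medium"].append(sample)
        else:
            buckets["high"].append(sample)
    return buckets
```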
Figure 6 shows the Cumulative Distribution Function (CDF) of the accuracy as a function of population density (low, medium, and high). For this analysis, we consider a responsiveness of one hour and a radius of 100 meters. The figure shows that the probability of zero accuracy, _i.e.,_ no correct location reported within 100 meters, drops from 20-25% in low density areas down to 10-15% in high density areas. A slight decrease in median accuracy is observed between low and medium density areas (roughly 45% in low-density areas vs. 42% in medium-density areas), while it increases to 63% for high density areas. With respect to the combined accuracy, high density areas see the least improvement: on average 15% versus 20% in low density areas. This happens because the benefit of sharing the same ecosystem decreases when an area is already highly populated with devices from each ecosystem.
## 6. Conclusion
Location tags such as AirTag and SmartTag are useful tools for locating objects, but there is anecdotal evidence of their misuse for tracking and stalking people. This paper has studied the performance of location tags through controlled and in the wild experiments. These experiments showed that AirTag and SmartTag achieve similar performance with respect to how quickly and precisely they can be located, in
Figure 5. Evaluation of AirTag, Smartag and “combined” accuracy in the wild.
Figure 6. CDF of tags accuracy for different population densities; one hour responsiveness and 100 meters radius.
various scenarios. For example, several minutes are needed to locate a tag within 100 meters of its true location, implying that real-time tracking of a victim is impractical. Nonetheless, almost half of a victim's movements can be backtracked with high accuracy with just a one-hour delay.
|
2307.14180 | Tackling Scattering and Reflective Flare in Mobile Camera Systems: A Raw
Image Dataset for Enhanced Flare Removal | The increasing prevalence of mobile devices has led to significant
advancements in mobile camera systems and improved image quality. Nonetheless,
mobile photography still grapples with challenging issues such as scattering
and reflective flare. The absence of a comprehensive real image dataset
tailored for mobile phones hinders the development of effective flare
mitigation techniques. To address this issue, we present a novel raw image
dataset specifically designed for mobile camera systems, focusing on flare
removal. Capitalizing on the distinct properties of raw images, this dataset
serves as a solid foundation for developing advanced flare removal algorithms.
It encompasses a wide variety of real-world scenarios captured with diverse
mobile devices and camera settings. The dataset comprises over 2,000
high-quality full-resolution raw image pairs for scattering flare and 1,100 for
reflective flare, which can be further segmented into up to 30,000 and 2,200
paired patches, respectively, ensuring broad adaptability across various
imaging conditions. Experimental results demonstrate that networks trained with
synthesized data struggle to cope with complex lighting settings present in
this real image dataset. We also show that processing data through a mobile
phone's internal ISP compromises image quality while using raw image data
presents significant advantages for addressing the flare removal problem. Our
dataset is expected to enable an array of new research in flare removal and
contribute to substantial improvements in mobile image quality, benefiting
mobile photographers and end-users alike. | Fengbo Lan, Chang Wen Chen | 2023-07-26T13:12:01Z | http://arxiv.org/abs/2307.14180v1 | Tackling Scattering and Reflective Flare in Mobile Camera Systems: A Raw Image Dataset for Enhanced Flare Removal
###### Abstract.
The increasing prevalence of mobile devices has led to significant advancements in mobile camera systems and improved image quality. Nonetheless, mobile photography still grapples with challenging issues such as scattering and reflective flare. The absence of a comprehensive real image dataset tailored for mobile phones hinders the development of effective flare mitigation techniques. To address this issue, we present a novel raw image dataset specifically designed for mobile camera systems, focusing on flare removal. Capitalizing on the distinct properties of raw images, this dataset serves as a solid foundation for developing advanced flare removal algorithms. It encompasses a wide variety of real-world scenarios captured with diverse mobile devices and camera settings. The dataset comprises over 2,000 high-quality full-resolution raw image pairs for scattering flare and 1,100 for reflective flare, which can be further segmented into up to 30,000 and 2,200 paired patches, respectively, ensuring broad adaptability across various imaging conditions. Experimental results demonstrate that networks trained with synthesized data struggle to cope with complex lighting settings present in this real image dataset. We also show that processing data through a mobile phone's internal ISP compromises image quality, while using raw image data presents significant advantages for addressing the flare removal problem. Our dataset is expected to enable an array of new research in flare removal and contribute to substantial improvements in mobile image quality, benefiting mobile photographers and end-users alike.
flare removal, raw image dataset, reflective flare, scattering flare
## 1. Introduction
Lens flare (Leng et al., 2016; Liu et al., 2017; Liu et al., 2018; Liu et al., 2019) is a common optical artifact that occurs when non-image-forming light enters a camera's lens system and interacts with the imaging sensor. This phenomenon can degrade image quality and adversely affect the visual appeal of photographs, especially in mobile computational imaging. Lens flare is more prevalent in this field due to factors such as the widespread use of plastic lenses in mobile camera systems, resulting in lower lens quality compared to professional cameras, and the lack of costly anti-reflective (AR) coatings (Beng et al., 2019).
There are various causes of lens flare, including light scattering within the lens system, reflections between lens elements, and the influence of dust, contaminants, or scratches on lens surfaces. Lens flare can be broadly classified into two types: scattering flares and reflective flares, each exhibiting distinct characteristics and shapes.
Scattering flares (Liu et al., 2017; Liu et al., 2018; Liu et al., 2019; Liu et al., 2019) arise from the interaction of light with microscopic imperfections and defects within the lens system. As an example shown in Fig. 1, these imperfections cause light to scatter in various directions, resulting in a visible haze such as veiling glare (Liu et al., 2019) or a series of artifacts in the captured image. The shape and appearance of scattering flares depend on the nature and distribution of the defects within the lens. Dust particles on the lens surface can cause small, localized bright spots or streaks, while scratches can produce more elongated, linear artifacts. The presence of multiple defects may lead to a complex pattern of overlapping flares, further degrading image quality.
Reflective flares (Liu et al., 2017; Liu et al., 2018; Liu et al., 2019; Liu et al., 2019), in contrast, are caused by reflections between lens elements, particularly in multi-element lens
systems, as illustrated in Fig. 1. When light enters the lens, it can reflect off the internal surfaces of the lens elements, bouncing between them before eventually reaching the image sensor. These internal reflections can create a series of concentric rings, polygons, or other geometric shapes in the image, depending on the lens design and the relative position of the light source. Reflective flares are often more pronounced when the light source is close to the optical axis or when the lens system comprises a large number of elements.
Moreover, reflective flares can exhibit different shapes and appearances depending on whether they are in-focus or out-of-focus. In-focus reflective flares tend to form sharp, well-defined geometric patterns, such as white spots [3; 6; 26]. Out-of-focus reflective flares can appear more diffuse and irregular, often taking the form of circular or elliptical blobs, known as bokeh. Factors influencing out-of-focus flares include lens design, aperture shape, and the degree of defocus.
The symmetric properties of lens flare, particularly in the case of reflective flares, can be utilized in our proposed method for capturing ground truth data. For a camera lens with rotational symmetry, meaning its elements are centered along the optical axis, the flare chain stretches in a straight line from the light source through the center of the image. Each individual ghost image's shape exhibits symmetry with respect to this axis. This occurs because light rays originating from a specific source point travel symmetrically concerning the tangential plane [5]. By carefully controlling the position and orientation of the light source relative to the lens system, we can exploit the symmetry of flare patterns to obtain accurate and consistent ground truth data for training and evaluating lens flare removal algorithms.
Previous approaches to flare removal have relied on traditional image processing techniques or synthetic datasets [3; 6; 7; 14; 26; 30] for training. However, these methods suffer from various limitations, such as the inability to handle complex real-world examples, limited diversity, and low quality of synthesized flare images. Existing datasets for lens flare removal also have their shortcomings, as they often lack real-world examples, feature limited diversity in flare types, and rely on low-quality synthesized flare images. These limitations make it challenging to train and evaluate robust lens flare removal algorithms capable of handling diverse real-world scenarios.
In this paper, we present the construction of a novel dataset for lens flare removal in mobile computational imaging, addressing the limitations of existing datasets and methods. Our dataset comprises real-world examples of both scattering and reflective flares captured with mobile phone cameras, saved in raw image format and
Figure 2. This figure presents examples that demonstrate the differences between real scattering and reflective flare images and their synthesized counterparts using Flare7K [7]. The first and fourth rows display the ground truth images obtained from the internal processing pipeline of mobile phones and external processing pipelines run on computers, respectively. The second and fifth rows depict the corresponding flare-corrupted pairs. The third and final rows showcase the synthesized results created using the ground truth images with Flare7K.
processed by the internal processing pipeline of the mobile phone. This dataset, the first of its kind, enables supervised training of state-of-the-art deep learning models and provides valuable ground truth data by leveraging the symmetric properties of lens flare. Our findings reveal that networks trained with synthetic data struggle on this real image dataset, and that internal processing pipeline processed data are more challenging to restore compared to raw image data, possibly due to aggressive post-processing algorithms and heavy compression. These insights confirm the importance of real-world data and raw image formats for more accurate and reliable lens flare removal, ultimately leading to improved image quality and enhanced visual experiences for mobile phone camera users.
## 2. Related Works
In this section, we first introduce current synthetic flare image datasets and their limitations, followed by a discussion of how a raw image dataset can be used to enhance the user experience in mobile computational photography.
### Synthetic Flare Image Dataset
The initial flare image dataset, proposed by Wu _et al._[30], consists of 2,000 captured flare-only images and 3,000 flare images simulated using their physics-based model. These flares are superimposed on flare-free base images to create synthesized flare corruption. However, their real image data exhibits similar lens settings, resulting in comparable flare-only images across different scenes and unrealistic simulation outcomes. Qiao _et al._[21] collect unpaired flare-corrupted and flare-free images for training Cycle-GAN-like networks [34], but the lack of paired data precludes its use for training pixel-to-pixel neural networks such as U-Net [23]. Flare7K [7] synthesizes 5,000 scattering flare-only images in various colors and 2,000 reflective flare-only images using the Optical Flares plug-in in Adobe After Effects, primarily for simulating nighttime flare-corrupted images. The dataset also includes 100 real images with a resolution of \(512\times 512\) pixels for evaluation. Similar to [30], flare-only images are added to flare-free base images for simulation. However, the simulated results are constrained by limited flare diversity. Among the 5,000 scattering flares, the majority feature conspicuous colors like green and red, rather than the blue and yellow flares that are more commonly observed due to human perception and artificial light source design, as shown in Fig. 2. Although green and red flares have specific applications, their prevalence in the dataset is less practical for general lighting scenarios.
Despite progress in synthetic flare image datasets, their synthesis quality is hindered by several limitations. For scattering flares, the quality of the base images used in these datasets [33] is suboptimal. This dataset was initially proposed for reflection removal, and only around 10% of the images contain a light source, complicating the simulation of practical corrupted images. Furthermore, scattering flare caused by defects such as dust introduces global artifacts like veiling glare, which may reduce contrast and degrade overall image quality. However, the flares provided in the dataset only affect the local light source, neglecting the flare's impact on other image regions.
Reflective flare, more prevalent in daily life, is insufficiently addressed in these synthetic datasets. Reflective flare properties depend on factors such as light source position and shape, exposure, lens design, and capturing angle. Unfortunately, these datasets fail to capture the diversity of reflective flares. For example, Flare7K [7]
Figure 4. This figure demonstrates the differences in data information between raw images and images processed by the ISP pipeline on a mobile phone. (a) shows an image produced by the mobile phone’s ISP pipeline, while (b) displays an image exported from its corresponding raw image using an external processing pipeline. (c) is an image with highlight area adjustments applied to (a), and (d) is an image with highlight area adjustments applied to the raw image, subsequently exported using the external processing pipeline. The ISP-processed and compressed image (a) contains significantly less information compared to the raw image. As a result, restoring data from the ISP-processed image (c) yields fewer details than recovering and exporting data from the raw image (d).
Figure 3. Comparison of images exported from the image signal processing (ISP) pipeline on a mobile phone (a) and external processing pipeline with raw images (b). Modern ISPs tend to run post-processing algorithms, such as aggressive denoising and image sharpening, and save the processed image in lossy formats, which may degrade image quality. In this example, the ISP-processed image has obvious sharpening artifacts.
and (Srivastava et al., 2017) offer simulated reflective flares at specific angles without considering image content, such as light source shape. Additionally, flares are randomly superimposed on images using synthesized schemes provided in these datasets, disregarding the symmetric property between the light source and reflective flare. Since reflective flare is caused by reflection between camera system lenses, obtaining real ground truth data is often considered difficult, and flare removal has long relied on simulation. However, we have discovered a method for obtaining ground truth images for such flares and provide them as training data.
In this paper, we address these limitations by constructing a real image dataset for both scattering and reflective flares. To the best of our knowledge, this is the first real image dataset providing paired data for supervised training.
### Raw Image Dataset
An Image Signal Processor (ISP) is a specialized processing unit in mobile phones designed to handle complex tasks involved in processing data captured by sensors. The primary function of an ISP is to convert raw data from the camera sensor into a usable image format, such as JPEG (Srivastava et al., 2017), while ensuring fast processing times. The processing pipeline involves a non-linear transform that includes demosaicing (Srivastava et al., 2017), white balance (Beng et al., 2016), color manipulation (Srivastava et al., 2017), tone-mapping (Deng et al., 2016), and JPEG compression (Srivastava et al., 2017). Modern mobile phones also incorporate advanced computational photography (Deng et al., 2016) and post-processing algorithms, such as enhancing underexposed areas in dark environments, aggressive denoising due to smaller sensors, and image sharpening for visually pleasing quality, as demonstrated in Fig. 3. However, constrained by the computational efficiency and capability of mobile phones, the image quality is limited compared to external processing with raw data. Additionally, since the data is heavily compressed to 8 bits from the 12 or 14 bits of raw images to save storage, the image quality is further degraded. Therefore, the internal ISP on mobile phones may result in worse image quality compared to external processing pipelines on raw data.
With advances in mobile phones, more devices are now capable of capturing and storing raw image data. As shown in Fig. 4, compared to internal-ISP processed images, raw images provide better flexibility in post-processing and richer information, resulting in higher image quality. An external processing pipeline can employ more advanced algorithms without computational capability constraints, yielding improved quality. Moreover, raw image data is linearly proportional to light intensity. In contrast to ISP-processed data, raw images have a higher dynamic range, capturing more information in both shadows and highlights, allowing for better detail recovery in post-processing.
For these reasons, a high-quality raw image dataset from mobile phones is versatile and desirable for image restoration studies. Unfortunately, the availability of raw datasets captured by mobile phones is still limited. The SIDD dataset (Beng et al., 2016) presents real noisy images from smartphone cameras with high-quality ground truth. The Fujifilm UltraISP dataset (Srivastava et al., 2017) and the ETH dataset (Srivastava et al., 2017) aim to improve the learning of an ISP for better quality on mobile phones
Figure 5. Pipeline for the capturing scheme. (a) To capture scattering flare image pairs, we fix the camera position on a tripod, add a stain-corrupted camera filter in front of the capturing mobile devices to mimic different levels of corruption. Two-dimensional registration and light source detection are performed for aligning and cropping the paired raw images into paired flare-corrupted and flare-free raw image patches. (b) To capture reflective flare pairs, based on the physical property that the light source and flare are always symmetrically located, we capture a fixed flare image and slightly rotate the sensor plane along the \(z-\)axis when capturing to change the light source location, resulting in a moved flare image. We perform 3-D registration for computing the difference between the images, which is the flare, and merge the two images into a ground truth image.
by providing data captured with mobile phones and professional high-end DSLR cameras. RawNeRF (Krizhevsky et al., 2017) collects a noisy raw dataset for training Neural Radiance Fields (NeRF), demonstrating that rendering raw output images from the resulting NeRF allows for novel high dynamic range (HDR) view synthesis tasks. We acknowledge the current unavailability of raw datasets specifically tailored for the flare removal problem and understand that leveraging the richer information from raw images may prove advantageous. To address this gap, we contribute raw data for this purpose.
## 3. Dataset Construction
In this section, we first discuss our capturing devices and settings, followed by a detailed description of the capturing schemes for scattering flare and reflective flare, respectively.
### Capturing Settings
Since similar lens structures tend to be used across the same series of mobile phone models, potentially resulting in similar lens flare, we adopt mobile phone models from different manufacturers as capturing devices. These devices are representative of mobile photography, with detailed specifications listed in Table 1. The iPhone 13 is popular for mobile photography, while Pixels by Google are known for their advanced computational photography algorithms. The iQoo Neo 7 is a mid-range phone recognized for its camera system, and the Find X6 Pro is a premium model with an advanced camera system. For all listed devices, we use their main cameras with manually controlled exposure and focus, if possible, and with multi-frame fusion turned off to deliver the best raw image quality.
### Scattering Flare
To simulate a lens with defects, we add a stain-corrupted camera filter in front of the capturing mobile devices. Different areas on the filter exhibit varying levels of defects, so changing the filter's location relative to the camera simulates different corruption levels. Since we fix the camera position on a tripod and only change the filter's location, the captured flare-corrupted and flare-free images are naturally paired but may still be misaligned due to minor vibrations during the capturing process. To provide the highest quality image pairs, we perform sub-pixel registration for each captured pair. Registration is performed by extracting and matching SURF (Beng et al., 2019) features. As the movement between the pairs is small and only requires 2D registration, we only compute translations along the vertical (z-axis) and horizontal (\(y\)-axis) directions. Registration is first performed on images processed by the mobile phone's internal ISP, and then the translation is converted for raw image data, where pixels are on an integer grid.
Given a registered pair, we first mark areas with light sources, then apply a light-source detection algorithm to detect the light-source position, as highlighted in Fig. 5(a). We crop the images into patches centered on the light-source position for both raw image data and internally processed ISP images. The cropped raw patches then undergo the external processing pipeline to generate high-quality RGB image pairs. Before saving the cropped patches into the dataset, low-quality pairs are detected using predefined metrics to ensure the quality of the generated pairs.
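A sketch of the translation-only alignment is given below. It uses OpenCV's ORB features rather than SURF (which requires the non-free contrib build) and estimates the shift as the median displacement of matched keypoints; rounding the shift to an even number of pixels keeps raw Bayer data aligned with its CFA grid. It is a simplified illustration, not our exact implementation.

```python
import cv2
import numpy as np

def estimate_translation(ref_rgb, moving_rgb, max_features=2000):
    """Estimate the (dy, dx) shift aligning moving_rgb to ref_rgb."""
    ref = cv2.cvtColor(ref_rgb, cv2.COLOR_BGR2GRAY)
    mov = cv2.cvtColor(moving_rgb, cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create(max_features)
    k1, d1 = orb.detectAndCompute(ref, None)
    k2, d2 = orb.detectAndCompute(mov, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]

    # Median keypoint displacement: robust to outlier matches, and sufficient
    # because the camera is on a tripod and the pairs differ only by small shifts.
    shifts = np.array([np.array(k1[m.queryIdx].pt) - np.array(k2[m.trainIdx].pt)
                       for m in matches])
    dx, dy = np.median(shifts, axis=0)
    return dy, dx

def shift_raw(raw_bayer, dy, dx):
    """Apply the shift to raw Bayer data, rounded to an even number of pixels so
    the CFA pattern is preserved (np.roll wraps; borders are cropped in practice)."""
    dy, dx = 2 * round(dy / 2), 2 * round(dx / 2)
    return np.roll(np.roll(raw_bayer, dy, axis=0), dx, axis=1)
```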
### Reflective Flare
Capturing ground truth pairs for reflective flare is challenging because this flare is caused by internal reflections between lenses within the camera system, which always exist during the capturing process. Consequently, it is difficult to capture flare-corrupted and flare-free image pairs with a single capture, which is the primary reason no real image dataset for reflective flare images currently exists for supervised training.
However, it is still possible to obtain ground truth images using two real images by changing the light source location, leveraging the symmetric property between flare and light source, and performing registration. Specifically, as shown in Fig. 5(b), given the physical property that the light source and flare are always symmetrically located with respect to the image's center point, we slightly rotate the sensor plane along the \(z\)-axis when capturing to change the light source location, resulting in a moved flare image. In the registration step, given moving image A and fixed image B, 3D registration can be performed to align image A with image B. Similar to processing scattering flare images, registration is performed by extracting and matching SURF features. This allows us to compute the difference between the two images to locate flares, followed by merging the images and compensating for the missing information caused by the flare in image B with image A. Similarly, one can fix image A and perform registration with image B. As a result, one image pair produces two flare-corrupted and flare-free pairs, and the resulting images still preserve the symmetric property. Before saving them into our dataset, we also filter out low-quality pairs using predefined metrics to ensure the quality of the generated pairs.
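The symmetry constraint and the difference-based flare localization can be expressed compactly, as in the sketch below; it assumes image A has already been warped onto image B by the 3-D registration step, and the threshold and window size are illustrative choices rather than the exact values we use.

```python
import numpy as np

def expected_flare_position(light_xy, image_shape):
    """Reflective flare lies at the point reflection of the light source
    through the image center (symmetric property)."""
    h, w = image_shape[:2]
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    lx, ly = light_xy
    return 2 * cx - lx, 2 * cy - ly

def merge_flare_free(image_b, warped_a, light_xy_b, roi=200, diff_thresh=0.05):
    """Replace the flare-corrupted region of B with the corresponding pixels of
    the registered image A. Only pixels near B's expected flare position are
    considered, so that A's own flare (located elsewhere) is not copied over."""
    fx, fy = expected_flare_position(light_xy_b, image_b.shape)
    diff = np.abs(image_b.astype(np.float32) - warped_a.astype(np.float32)).mean(axis=-1)
    mask = diff > diff_thresh * float(image_b.max())

    # Restrict the mask to a window around the predicted flare location of B.
    yy, xx = np.mgrid[:image_b.shape[0], :image_b.shape[1]]
    mask &= (np.abs(xx - fx) < roi) & (np.abs(yy - fy) < roi)

    merged = image_b.copy()
    merged[mask] = warped_a[mask]
    return merged, mask
```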
For reflective flare, since the sensor plane is rotated and registration takes place in 3D space, image warping and interpolation are unavoidable to obtain the registered image. While it is possible to perform interpolation on raw image data, inaccurate interpolation may lead to artifacts such as color aliasing, which can significantly degrade image quality. For this reason, we provide the
| Model | Manufacturer | CMOS sensor (Main camera) | Specification | Released year |
| --- | --- | --- | --- | --- |
| iPhone 13 | Apple | IMX603 | 12 MP sensor, 1/1.7-inch sensor, 1.7 μm pixels, 26 mm equivalent f/1.6-aperture lens | |
| Pixel 7 | Google | GN1 | 50 MP sensor, 1/1.31-inch sensor, 1.2 μm pixels, 24 mm equivalent f/1.8-aperture lens | |
| iQoo Neo 7 | Vivo | IMX766 | 50 MP sensor, 1/1.56-inch sensor, 1.0 μm pixels, 23 mm equivalent f/1.88-aperture lens | |
| Find X6 Pro | OPPO | IMX989 | 50 MP sensor, 1-inch sensor, 1.5 μm pixels, 23 mm equivalent f/1.8-aperture lens | |

Table 1. Detailed specifications of the mobile phones used for constructing the dataset.
raw images for the original images, and we only perform registration on RGB images processed by the mobile phone's internal ISP and our external processing pipeline.
## 4. Experiments
We investigate the effect of mobile ISPs on flare removal performance and compare models trained with synthetic and real image data. Specifically, we evaluate these models on images acquired using both the internal ISP of the mobile phone and an external processing pipeline implemented on a computer. Furthermore, we assess the performance of networks trained with synthetic data on real flare-corrupted images for reflective and scattering flares.
### Experiment Settings
We evaluate performance on both raw images and images processed by the mobile phone's internal ISP. For raw images, we convert flare-corrupted images and their ground truth pairs to RGB images using a customized external processing pipeline implemented in MATLAB, denoted as _RAW2RGB_ images. The external pipeline comprises black level correction, white balancing, demosaicing, and color space conversion modules, without employing aggressive denoising or post-processing methods such as sharpening or nighttime enhancement algorithms. We save images in a lossless format instead of the lossy JPEG format. Images processed by the internal ISP are denoted as _ISPRGB_ images. We apply this data processing to both scattering and reflective flare images.
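Our external pipeline is implemented in MATLAB; a minimal Python sketch with equivalent behaviour (using the rawpy bindings to LibRaw, with post-processing options chosen to avoid auto-brightening, denoising, and 8-bit quantization) could look as follows.

```python
import imageio.v3 as iio
import rawpy

def raw_to_rgb(raw_path, out_path):
    """Minimal external pipeline: black level handling, camera white balance,
    demosaicing, and color space conversion, without sharpening or denoising."""
    with rawpy.imread(raw_path) as raw:
        rgb = raw.postprocess(
            use_camera_wb=True,       # white balance from camera metadata
            no_auto_bright=True,      # no automatic brightness adjustment
            output_bps=16,            # keep 16-bit precision instead of 8-bit
            median_filter_passes=0,   # no median filtering / denoising
        )
    iio.imwrite(out_path, rgb)        # save losslessly (e.g., 16-bit PNG)
```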
We collected \(2,200\) raw image pairs for scattering flares and \(1,100\) raw pairs for reflective flares. For scattering flares, we conduct light source detection to locate the position of the light sources and crop the \(2,200\) high-resolution raw image pairs into \(30,000\) flare-corrupted pairs with a resolution of \(512\times 512\), using \(200\) pairs for evaluation. For reflective flares, we use the \(1,100\) raw image pairs to generate \(2,100\) pairs with a resolution of \(1024\times 1024\) for training, with \(50\) pairs used for evaluation. We apply the same division scheme to both RAW2RGB and ISPRGB data, respectively.
Regarding comparison schemes, early works (Beng et al., 2016; Wang et al., 2017; Wang et al., 2017) primarily focused on detecting lens flares based on intensity or location and recovering them using inpainting techniques. However, these solutions are not robust in flare-corrupted area detection, and more importantly, they are limited by the type of flare. They can handle in-focus reflective flares resulting in spots but struggle with out-of-focus flares, which appear transparent, and also scattering flares. Therefore, we assess the performance of state-of-the-art data-driven flare removal methods trained with synthetic datasets due to the current lack of real images.
For neural network training, we follow the network settings in (Wang et al., 2018) and (Chen et al., 2018) and use U-Net as a baseline for training. For subsequent comparisons between training with real image data and networks trained with synthetic data, we use the released code and data from (Wang et al., 2018) to train a model for evaluation, as their model is not available. For (Chen et al., 2018), since only the pre-trained Uformer (Wang et al., 2018) model is available, which has the best reported performance among the models, we use it for comparison. For reflective flare images, we use their model trained with both data types, and for scattering images, we use their model trained only with scattering flare data, reported to be more robust in scattering flare removal. The network is trained with similar settings to previous works, taking images with a resolution of \(512\times 512\) as input and training on an RTX 3090 but without additional techniques such as light source blending used in previous works.
### Qualitative Comparison
We first evaluate the performance of recent flare removal approaches on both ISPRGB and RAW2RGB data for reflective and scattering flare images. We observe that recent models perform better on RAW2RGB images due to their higher quality.
Figure 6. Comparison of reflective flare removal using different schemes. The U-Net from Wu _et al._(Wu et al., 2018) struggles to restore reflective flare-corrupted images due to the lack of sufficient data in its training dataset for such pairs. It also misclassifies the flare-corrupted area in some cases. Flare7K (Chen et al., 2018) can remove part of the in-focus reflective flare in some cases but struggles to remove the transparent out-of-focus area.
Figure 7: Comparison of scattering flare removal using different schemes. The models from Wu _et al._[30] and Flare7K [7] trained with synthetic data struggle to recover images corrupted with large areas of glare, which may degrade global image quality and are more common in real daily life.
For reflective flares, the U-Net from Wu _et al._[30] fails to accurately classify the flare region for restoration, as illustrated in Fig. 6. This issue stems from its training data, which predominantly consists of scattering flares but lacks sufficient reflective flares. We also observe that light-source blending approaches, which segment all light sources, perform flare removal and image blending to merge the results back into the segmented images, may misclassify the flare, leading to visual artifacts. For example, in the first row of Fig. 6, Wu _et al._[30] misclassify a portion of the cloud as a flare, resulting in improper color restoration. The Uformer from Flare7K [7], trained with synthetic reflective and scattering flare data, can identify the in-focus area in certain instances but struggles to resolve the transparent, out-of-focus reflective region, as emphasized in the second row of Fig. 6.
For scattering flares, recent models can restore some local scattering flares, as demonstrated in the third row of Fig.7. However, these models struggle to handle large area glare affecting the global contrast of images, as seen in the first and last rows of Fig.7, which are more common in daily life.
### Quantitative Comparison
We use peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM) [28], and learned perceptual image patch similarity (LPIPS) [32] to compare the different schemes quantitatively. As shown in Table 2, the overall image quality obtained with the internal ISP is lower than that of images obtained using an external pipeline on raw data. More importantly, the lower quality makes flare-free image reconstruction more difficult in subsequent steps. This holds both for models trained with synthetic data and for the model trained on our real image dataset, suggesting that flare removal may be more effective when performed early in the processing pipeline rather than on compressed ISP-processed data.
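For reference, the three metrics can be computed with standard open-source implementations. The following is a minimal evaluation sketch (ours, not the evaluation code of this work), assuming `scikit-image`, `torch`, and the `lpips` package are installed; the file paths and the choice of the AlexNet LPIPS backbone are illustrative assumptions.

```python
# Minimal sketch: PSNR / SSIM / LPIPS between a restored image and its flare-free reference.
# Assumes 8-bit RGB images of equal size; paths are placeholders.
import torch
import lpips
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

restored = io.imread("restored.png")     # H x W x 3, uint8 (hypothetical path)
reference = io.imread("reference.png")   # flare-free ground truth (hypothetical path)

psnr = peak_signal_noise_ratio(reference, restored, data_range=255)
ssim = structural_similarity(reference, restored, channel_axis=-1, data_range=255)

# LPIPS expects float tensors in [-1, 1] with shape (N, 3, H, W).
to_tensor = lambda im: torch.from_numpy(im).permute(2, 0, 1).float().unsqueeze(0) / 127.5 - 1.0
lpips_fn = lpips.LPIPS(net="alex")       # AlexNet backbone is an assumption, not from the paper
lpips_val = lpips_fn(to_tensor(restored), to_tensor(reference)).item()

print(f"PSNR={psnr:.3f}  SSIM={ssim:.3f}  LPIPS={lpips_val:.3f}")
```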
Consistent with our visual comparison observations, in the reflective flare removal test, the U-Net from Wu _et al._[30] struggles to restore reflective flare-corrupted images due to insufficient data for such pairs in its training dataset. The model misclassifies the flare area and performs flare reduction, leading to worse image quality and lower performance metrics. The Uformer from Flare7K [7] encounters a similar problem when handling externally processed data, resulting in lower performance metrics than the input. For scattering flare, the model from Flare7K [7] can remove mild flares similar to those in their training data but struggles to remove large area corruptions, which degrade overall image quality.
## 5. Conclusion
In conclusion, this paper introduces a novel raw image dataset specifically tailored for mobile camera systems, focusing on both scattering and reflective flare removal. This dataset, encompassing a broad range of real-world scenarios captured using various mobile devices and camera settings, lays a solid foundation for developing advanced flare removal algorithms by exploiting the unique properties of raw images. We anticipate that this dataset will catalyze further research in flare removal and contribute to significant enhancements in mobile image quality, benefiting mobile photographers and end-users alike. Experimental results underscore the limitations of networks trained with synthetic data, as they grapple with complex lighting conditions present in our real image dataset. Moreover, we showcase the considerable benefits of utilizing raw image data over processing data through a mobile phone's internal ISP, which adversely affects image quality.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Data} & \multirow{2}{*}{Flare Type} & \multirow{2}{*}{Metric} & \multirow{2}{*}{Input} & \multicolumn{3}{c}{Network trained with synthetic dataset} \\ \cline{5-6} & & & & Wu _et al._ (U-Net) [30] & Flare7K (Uformer) [7] & Ours (U-Net) \\ \hline \multirow{6}{*}{RAW2RGB} & \multirow{2}{*}{Reflective} & PSNR\(\uparrow\) & 34.034 & 25.810 & 33.158 & **37.449** \\ & & SSIM\(\uparrow\) & 0.944 & 0.834 & 0.941 & **0.955** \\ & & LPIPS\(\downarrow\) & 0.023 & 0.090 & 0.025 & **0.015** \\ \cline{2-6} & \multirow{2}{*}{Scattering} & PSNR\(\uparrow\) & 20.776 & 22.673 & 21.310 & **30.289** \\ & & SSIM\(\uparrow\) & 0.688 & 0.722 & 0.690 & **0.780** \\ & & LPIPS\(\downarrow\) & 0.125 & 0.137 & 0.120 & **0.071** \\ \hline \multirow{6}{*}{ISPRGB} & \multirow{2}{*}{Reflective} & PSNR\(\uparrow\) & 31.799 & 31.265 & 31.976 & **34.867** \\ & & SSIM\(\uparrow\) & 0.956 & 0.953 & 0.956 & **0.960** \\ \cline{1-1} & & LPIPS\(\downarrow\) & 0.028 & 0.031 & 0.028 & **0.021** \\ \cline{1-1} \cline{2-6} & \multirow{2}{*}{Scattering} & PSNR\(\uparrow\) & 16.558 & 16.774 & 16.931 & **24.061** \\ \cline{1-1} & & SSIM\(\uparrow\) & 0.557 & 0.554 & 0.559 & **0.736** \\ \cline{1-1} & & LPIPS\(\downarrow\) & 0.217 & 0.215 & 0.212 & **0.133** \\ \hline \hline \end{tabular}
\end{table}
Table 2. Quantitative results on reflective and scattering flare removal for the two types of data, using PSNR, SSIM [28], and LPIPS [32]. \(\uparrow\) denotes higher is better, and \(\downarrow\) denotes lower is better. |
2310.09278 | Disentangled Latent Spaces Facilitate Data-Driven Auxiliary Learning | In deep learning, auxiliary objectives are often used to facilitate learning
in situations where data is scarce, or the principal task is extremely complex.
This idea is primarily inspired by the improved generalization capability
induced by solving multiple tasks simultaneously, which leads to a more robust
shared representation. Nevertheless, finding optimal auxiliary tasks that give
rise to the desired improvement is a crucial problem that often requires
hand-crafted solutions or expensive meta-learning approaches. In this paper, we
propose a novel framework, dubbed Detaux, whereby a weakly supervised
disentanglement procedure is used to discover new unrelated classification
tasks and the associated labels that can be exploited with the principal task
in any Multi-Task Learning (MTL) model. The disentanglement procedure works at
a representation level, isolating a subspace related to the principal task,
plus an arbitrary number of orthogonal subspaces. In the most disentangled
subspaces, through a clustering procedure, we generate the additional
classification tasks, and the associated labels become their representatives.
Subsequently, the original data, the labels associated with the principal task,
and the newly discovered ones can be fed into any MTL framework. Extensive
validation on both synthetic and real data, along with various ablation
studies, demonstrate promising results, revealing the potential in what has
been, so far, an unexplored connection between learning disentangled
representations and MTL. The code will be made publicly available upon
acceptance. | Geri Skenderi, Luigi Capogrosso, Andrea Toaiari, Matteo Denitto, Franco Fummi, Simone Melzi, Marco Cristani | 2023-10-13T17:40:39Z | http://arxiv.org/abs/2310.09278v1 | # Disentangled Latent Spaces Facilitate Data-Driven Auxiliary Learning
###### Abstract
In deep learning, auxiliary objectives are often used to facilitate learning in situations where data is scarce, or the principal task is extremely complex. This idea is primarily inspired by the improved generalization capability induced by solving multiple tasks simultaneously, which leads to a more robust shared representation. Nevertheless, finding optimal auxiliary tasks that give rise to the desired improvement is a crucial problem that often requires hand-crafted solutions or expensive meta-learning approaches. In this paper, we propose a novel framework, dubbed _Detaux_, whereby a weakly supervised disentanglement procedure is used to discover new unrelated classification tasks and the associated labels that can be exploited with the principal task in any Multi-Task Learning (MTL) model. The disentanglement procedure works at a representation level, isolating a subspace related to the principal task, plus an arbitrary number of orthogonal subspaces. In the most disentangled subspaces, through a clustering procedure, we generate the additional classification tasks, and the associated labels become their representatives. Subsequently, the original data, the labels associated with the principal task, and the newly discovered ones can be fed into any MTL framework. Extensive validation on both synthetic and real data, along with various ablation studies, demonstrate promising results, revealing the potential in what has been, so far, an unexplored connection between learning disentangled representations and MTL. The code will be made publicly available upon acceptance.
## 1 Introduction
Human learning is often considered to be a combination of processes (_e.g._, high-level acquired skills and evolutionary encoded physical perception) that are used together and can be transferred from one problem to another. Inspired by this, _Multi-Task Learning (MTL)_[1] represents a machine learning paradigm where multiple tasks are learned together to improve the generalization ability of a model by using shared knowledge that derives from considering different aspects of the input. Specifically, this is achieved by jointly optimizing the model's parameters across different tasks, allowing the model to learn task-specific and task-shared representations simultaneously. As a result, MTL can lead to better generalization, improved efficiency at inference time, and enhanced performance on individual tasks by exploiting their underlying relationships.
A specific form of this learning approach, referred to as _auxiliary learning_, has garnered considerable interest in recent years [17]. In particular, auxiliary learning is a specific type
of MTL, where auxiliary tasks are intentionally crafted to ultimately boost the performance of the principal task. As of now, auxiliary tasks are found by meta-learning (Liu et al., 2019; Navon et al., 2021), but this requires the a priori definition of the hierarchy of the desired auxiliary tasks and is computationally inefficient. Thus, the question is: _can we discover with no prior knowledge one or more additional auxiliary tasks in order to improve the performance of the principal task?_
In this paper, we explore this difficult problem by proposing _Detaux_, a weakly supervised strategy that discovers auxiliary classification tasks that enable solving a single-task classification problem in a multi-task fashion. Specifically, _Detaux_ is capable of identifying unrelated auxiliary tasks: in MTL, unrelatedness means that two or more tasks do not share any features, a property that has proven effective in the MTL literature (Wang et al., 2003; Zhou et al., 2011; Paredes et al., 2012; Jayaraman et al., 2014; Zheng et al., 2019; Liu et al., 2019).
In particular, our method is rooted in the idea from Paredes et al. (2012), which starts with two groups of tasks, the principal task and the auxiliary tasks, that are given and known to be unrelated, and builds on the claim that jointly learning unrelated tasks can improve the performance on the principal task. They propose to generate low-dimensional representations for the principal task and for the unrelated auxiliary tasks, forcing these two representations to be orthogonal.
The procedure from Paredes et al. (2012) exploits a linear classifier and requires knowledge of the labels for both the principal task and the auxiliary tasks. With our method, we aim to follow a similar process while giving up supervision on the auxiliary tasks and allowing non-linear classifiers estimated by neural networks. The proposed method generates auxiliary tasks in such a way that their labels implicitly drive an MTL network to understand the unrelatedness between the tasks. Our idea is to work in a specific representation space, a product manifold, to unveil the auxiliary tasks for a given principal task. We draw inspiration from Fumero et al. (2021), who identified the product manifold as a convenient representation basis for disentanglement. In particular, as depicted in Figure 1, we extract task-specific features using weakly supervised disentanglement; then, we identify the most disentangled factor of variation within a subspace, and finally, we generate new labels via a clustering module to enable seamless integration with the primary task in any MTL model.
This makes our proposed pipeline agnostic to the choice of the MTL model, given that the latter acts directly on the primary and auxiliary labels. In this way, any MTL model can be chosen, depending on several factors besides performance, such as efficiency, scalability, and resource constraints. In the experimental section, we utilize three different MTL models with _Detaux_, revealing its flexibility.
Figure 1: _Detaux_ involves two steps. First, we use weakly supervised disentanglement to isolate the structural features specific to the principal task in one subspace. Next, we identify which is the subspace with the most disentangled factor of variation related to the principal task, and through a clustering module, we obtain new labels. These can be used to create a new classification task that can be combined with the principal task in any MTL model.
Related Work
We organize this section into three different parts, each one providing an overview of a topic related to our work: _i)_ MTL and auxiliary learning; _ii)_ disentanglement; and _iii)_ existing studies on the relationship between MTL and disentanglement.
### MTL and auxiliary learning
MTL, _i.e._, the procedure through which we can solve multiple learning problems at the same time (Caruana, 1997), can help us reduce inference time, reach improved accuracy, and increase data efficiency (Standley et al., 2020). When the adopted dataset contains annotation for multiple tasks, the challenges to face concern which tasks may work well together (Zamir et al., 2018; Standley et al., 2020; Fifty et al., 2021) or how to weigh the losses of different tasks (Kendall et al., 2018) to create a better joint optimization objective. Numerous methods have recently emerged addressing the simultaneous resolution of multiple tasks (Gao et al., 2019; Vandenhende et al., 2020).
A different problem arises when we would like to use one of these methods but only one task is approachable, given the annotations in the considered dataset. Auxiliary task learning aims at maximizing the prediction performance on a principal task by supervising the model to additionally learn other tasks, as shown in (Liu et al., 2019; Navon et al., 2021). Therefore, auxiliary tasks are tasks of minor interest, or even irrelevant compared to the principal task we want to solve, and thus can be seen as regularizers if learned simultaneously with the task of interest (Liebel and Korner, 2018). For example, Paredes et al. (2012) suggest that using two unrelated groups of tasks, one of which hosts the principal task, can lead to better performance, where unrelated means that the two groups of tasks are defined by orthogonal sets of features. Also in Liebel and Korner (2018), the authors make use of seemingly unrelated tasks to help the learning of one principal task, this time without imposing any constraint on the feature structure. In our case, we work in a product manifold, which Fumero et al. (2021) have already shown to be effective for separating embedding subspaces that are orthogonal by design.
Moreover, recent emerging techniques leverage meta-learning to effectively select the most appropriate auxiliary tasks or even autonomously create novel ones. Liu et al. (2019) and Li and Shan (2021) both train two neural networks simultaneously, a label-generation network to predict the auxiliary labels and a multi-task network to train the primary task alongside the auxiliary task. These, in contrast with our approach, require the a priori definition of a hierarchy binding the auxiliary labels to the principal task labels and present conflicting ideas on the possible semantic interpretation of the generated labels. Furthermore, they are computationally inefficient: meta-learning is a resource-intensive technique and requires the retraining of the entire architecture to change the employed multi-task method.
Even more recently, Dery et al. (2022) propose to deconstruct existing objectives for NLP within a unified taxonomy, identifying connections between them, and generating new ones by selecting the best combinations from a cartesian product of the available options. Furthermore, Nam et al. (2023) also used meta-learning, presenting a novel framework for generating new auxiliary objectives to address the niche problem of few-shot semi-supervised tabular learning.
To the best of our knowledge, we are not aware of any other method that proposes a systematic approach for generating new labels from a disentangled latent space, in order to enable MTL classification when only the annotations for one task are given in the considered dataset.
### Learning disentangled representations
Representing data in a space where different components are independent is a long-standing research topic in machine learning. The rise of deep learning in recent years led to proposed learning disentangled representations as an important aspect of unsupervised deep learning (Bengio et al., 2013).
Recent literature has proposed several characterizations of disentanglement, whether that is in terms of group theory (Higgins et al., 2018), metric and product spaces (Fumero et al., 2021), or permutations of element-wise, nonlinear functions (Horan et al., 2021). The seminal paper of Higgins et al. (2017) demonstrated that variational auto-encoders could learn to disentangle by enforcing
the ELBO objective, while Chen et al. (2016) relies on GANs and an information-theoretic view of disentanglement. Later works, such as Eastwood and Williams (2018); Singh et al. (2019); Ojha et al. (2020) extensively explored different directions and use cases. Work by Locatello et al. (2019) showed that completely unsupervised disentanglement was not possible due to the inability of the models to identify factors of variation. Soon after, the authors proposed weak supervision and having access to few labels as a way to bypass this limitation (Locatello et al., 2020, 2020).
In _Detaux_, we place ourselves in the same setting of Fumero et al. (2021), but control and force the disentanglement by supervision only on the known (principal) task. We describe in detail the main differences between the original method and our custom implementation in Section 4.
### Relationship between MTL and disentanglement
Meng et al. (2019) report a connection between disentangled representations and MTL, showing that disentangled features can improve the performance of multi-task networks, especially on data with previously unseen properties. Disentanglement is obtained by adversarial learning, forcing the encoded features to be minimally informative about irrelevant tasks. In this case, the tasks to be disentangled are known a priori, while in our case only the principal task is known.
Yang et al. (2022) propose a novel concept called "Knowledge Factorization". Exploiting the knowledge contained in a pre-trained multi-task network (called teacher), the idea is to train disentangled single-task networks (called students) to reduce the computational effort required by the final single-task network. The factorization of the teacher knowledge is dual: they provide structural factorization and representation factorization. In structural factorization, they split the net into a common-knowledge network and a task-specific network, based on mutual information.
Finally, Maziarka et al. (2023) propose a disentanglement analysis of MTL models by creating a semi-synthetic dataset based on latent information in simple datasets. The authors run the latent information through randomly initialized fully-connected layers to create tasks that are harder than just recovering the simple factors. A CNN is then trained to produce a representation that fits these auxiliary tasks. The reported results may be seen as inconclusive, as they do not provide a clear indication of how disentangled representations directly impact MTL performance.
In our proposal, we show that disentanglement in a representation space can be used as a general prior for MTL, _i.e_., by using disentanglement to mine for auxiliary tasks, an MTL model extracts a model-specific embedding which exploits the combination of the principal and the newly discovered labels, thus improving the performance on the principal task.
## 3 Mathematical Background
To understand the core concepts of our research, let us delve into the mathematical background presented in Fumero et al. (2021). Due to lack of space, we only provide a compact overview of the key concepts of this idea and refer the readers to the original paper for more details regarding the loss formulations and training process.
Given as input a collection of data, such as a set of labeled images, a disentanglement procedure should output a representation of these data, separating the different generative factors that produce all the variations observed in the data. The method proposed by Fumero et al. (2021) is based on the manifold hypothesis: high-dimensional data lies near a lower-dimensional manifold. Following this idea, the authors claim that, if independent factors generate the data, then this manifold is a product manifold: \(\mathcal{M}=\mathcal{M}_{1}\times\mathcal{M}_{2}\times\ldots\times\mathcal{M}_{k}\), where each \(\mathcal{M}_{i},i\in\{1\ldots k\}\) is orthogonal to the others and represents (in the ideal case) at most one generative factor of the data. Given a pair of data \((x_{1},x_{2})\) that differ in the \(h\)-th generative factor only, their learned representations are considered fully disentangled in Fumero et al. (2021) if they are equal for all projections in the submanifolds \(\{\mathcal{M}_{i}\}_{i=1}^{k}\), except for the \(h\)-th.
In practice, the authors consider a finite-dimensional normed vector space \(\mathbf{Z}\) containing the disentangled latent representation, obtained as the output of an encoder network (\(\mathbf{Z}\) is indeed a special case of a manifold). Without loss of generality, considering \(\mathbf{Z}\) over the field of reals, we can state that \(\mathbf{Z}\subseteq\mathcal{R}^{d}\). Under perfect disentanglement, _i.e_., a fully minimized loss term \(\mathcal{L}\) from Equation 2, the latent disentangled representation takes the form of a Cartesian product space \(\mathbf{Z}=\mathcal{S}_{1}\times\mathcal{S}_{2}\times\ldots\times\mathcal{S} _{k}\)
such that for all \(i\neq j\in\{1\ldots k\}\), \(\mathcal{S}_{i}\cap\mathcal{S}_{j}=\{0\}\). Intuitively, each subspace encodes an "axis" of variation. Finally, an aggregation step restores the complete latent information, and a decoder \(g\) maps the resulting vectors back to the input data space.
To learn the global manifold structure, a standard autoencoder architecture is used, with \(f\) being an encoder that receives non-i.i.d data pairs \((x_{1},x_{2})\) and produces the latent representations \((z_{1},z_{2})\), and the decoder \(g\) that approximates the inverse of \(f\). The pair sampling procedure is designed to induce weak supervision, requiring a pair of images to vary only in one (or a few) generative factors. Additionally, a set of \(k\) neural networks \(p_{i},i\in\{1\ldots k\}\) called projectors, are trained simultaneously and guided by an unsupervised oracle \(\mathcal{O}\), to map the latent codes in the subspaces \(\{\mathcal{S}_{i}\}_{i=1}^{k}\), each of which contains the corresponding submanifold \(\{\mathcal{M}_{i}\}_{i=1}^{k}\). To wrap up, the representation framework operates in the following way:
\[x\xrightarrow{f}z\xrightarrow[i=1\ldots k]{\{p_{i}\}}{\{s_{i}\}}\xrightarrow[ i=1\ldots k]{\tilde{z}}\xrightarrow[i=1\ldots k]{g}\tilde{x}\, \tag{1}\]
where \(\tilde{z}\) and \(\tilde{x}\) are the aggregated latent representation and the reconstructed input, respectively. Notably, \(f\) and \(g\) are initially trained only to minimize the reconstruction error, which is needed to generate the global structure of the manifold \(\mathcal{M}\). After a warm-up period, three constraints are added to the optimization problem, each with its own non-learned weight (_i.e._, \(\beta_{1}\), \(\beta_{2}\), and \(\beta_{3}\)) to disentangle the latent code:
\[\mathcal{L}=\mathcal{L}_{rec}+\beta_{1}(\mathcal{L}_{dist}+\mathcal{L}_{spar}) +\beta_{2}\mathcal{L}_{cons}+\beta_{3}\mathcal{L}_{reg}. \tag{2}\]
\(\mathcal{L}_{rec}\) and \(\mathcal{L}_{spar}\) correspond to _L2 reconstruction_ and _L1 sparsity_ loss terms, respectively. \(\mathcal{L}_{cons}\), namely the _consistency loss_, forces each projector \(p_{i}\) to be invariant to changes in subspaces \(\mathcal{S}_{j},j\neq i\). The _distance loss_\(\mathcal{L}_{dist}\) is a contrastive loss term that follows the oracle \(\mathcal{O}\), which calculates the subspace \(\mathcal{S}_{i}\) where the projections of the images in the pair \((x_{1},x_{2})\) differ the most, and forces the projection representation of the two input images onto the subspaces not selected by \(\mathcal{O}\) to be as close as possible, while the projections in the selected \(\mathcal{S}_{i}\) to differ the most. \(\mathcal{L}_{cons}\) and \(\mathcal{L}_{dist}\) thus realize the orthogonality of the discovered subspaces. Finally, the _regularization loss_\(\mathcal{L}_{reg}\) introduces a penalty that ensures the choice of the oracle \(\mathcal{O}\) is evenly distributed among the subspaces.
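To make the pipeline of Equation 1 and the role of the projectors concrete, the following is a minimal PyTorch sketch (ours, not the implementation of Fumero et al. (2021)). The MLP encoder/decoder sizes, the MLP projectors, and summing the projections as the aggregation step are illustrative assumptions, and only the reconstruction term of Equation 2 is shown.

```python
# Minimal sketch of the representation pipeline x -> z -> {s_i} -> z~ -> x~ (Equation 1).
# Encoder/decoder sizes, the MLP projectors, and sum-aggregation are illustrative assumptions.
import torch
import torch.nn as nn

class DisentangledAE(nn.Module):
    def __init__(self, in_dim=3 * 64 * 64, latent_dim=128, k=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, 512), nn.ReLU(),
                                     nn.Linear(512, latent_dim))                        # f
        self.projectors = nn.ModuleList(
            [nn.Sequential(nn.Linear(latent_dim, latent_dim), nn.ReLU(),
                           nn.Linear(latent_dim, latent_dim)) for _ in range(k)])        # p_1 .. p_k
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                     nn.Linear(512, in_dim))                             # g

    def forward(self, x):
        z = self.encoder(x)
        s = [p(z) for p in self.projectors]          # subspace projections {s_i}
        z_tilde = torch.stack(s, dim=0).sum(dim=0)   # aggregation (assumed here to be a sum)
        x_tilde = self.decoder(z_tilde)
        return x_tilde, z, s

model = DisentangledAE()
x = torch.rand(8, 3, 64, 64)
x_tilde, z, s = model(x)
rec_loss = nn.functional.mse_loss(x_tilde, x.flatten(1))   # L_rec; the other terms of Eq. (2) are omitted
```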
## 4 Methodology
In this section, after introducing the setting and the adopted notation, we will describe the fundamental contributions of our work: the principal task-based oracle (Section 4.1) and the auxiliary task discovery procedure (Section 4.2).
Setting and notation. We assume the existence of a labeled image dataset \(D=\{\,(x^{(i)},y^{(i)})\,|\,\forall i\in\{1\ldots N\},\,x^{(i)}\in\mathcal{R}^{w\times h\times c},\,y^{(i)}\in\mathbb{N}\}\), where \(w\) is the width, \(h\) is the height, \(c\) is the number of channels, and \(N\) is the number of (image, label) tuples. We consider the classification task whose fundamental objective is to learn a mapping from the image space \(\{x^{(i)}\,|\,\forall i\in\{1\ldots N\}\}\) to the corresponding labels \(\{y^{(i)}\,|\,\forall i\in\{1\ldots N\}\}\).
### The principal task-based oracle
A major issue of the procedure proposed by Fumero et al. (2021) in our setting is that the oracle will assign the variation given by the principal task label to a random subspace. In order to facilitate the automatic discovery of auxiliary tasks, we must have a way to accommodate the known variation of the principal task in an arbitrary subspace and fix it there, constraining the representation learning. To achieve this, we define a masking procedure that creates the principal task oracle \(\hat{\mathcal{O}}\), such that the \(\alpha\)-th subspace contains all the variation in the data corresponding to pairs \((x^{(1)},x^{(2)})\) whose elements differ in their label. This implies that we do not inject direct knowledge of the known label, but only whether or not it differs between images of the pair. Thus, in contrast to Fumero et al. (2021), where all labels are required, we only need to utilize the labels associated with the principal task. Given an (arbitrary) subspace \(\alpha\in\{1\ldots k\}\) (by default we set \(\alpha=1\)) where we wish to force the variation of the principal task labels, we define \(\hat{\mathcal{O}}\) as:
\[\hat{\mathcal{O}}(z^{(1)},z^{(2)})=\begin{cases}\alpha&\text{if }y^{(1)}\neq y^{(2)}\\ \arg\max_{i}d(s_{i}^{(1)},s_{i}^{(2)}),\forall i\neq\alpha&\text{otherwise} \end{cases}\, \tag{3}\]
where \(d(s_{i}^{(1)},s_{i}^{(2)})\), \(i\in\{1\ldots k\}\), is the distance between the projections of the pair \((z^{(1)},z^{(2)})\) onto the \(i\)-th subspace \(\mathcal{S}_{i}\).
Our new oracle implies that the distance and regularization losses will always force the variation in the data to be encoded in \(\mathcal{S}_{\alpha}\) if \(y^{(1)}\neq y^{(2)}\), and in a different subspace otherwise (if \(y^{(1)}=y^{(2)}\)). Thanks to the consistency loss, the remaining subspaces can encode other variations while remaining invariant to the ones related to the principal task and contained in \(\mathcal{S}_{\alpha}\). This constraint imposed through the consistency loss will lead us to discover a proper representation where unknown tasks will correspond to (possibly) multiple subspaces, orthogonal to the ones of the known task.
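A direct transcription of Equation 3 for a single pair of latent codes might look as follows; Euclidean distances and the zero-based index of the forced subspace are our assumptions rather than implementation details from the paper.

```python
# Sketch of the principal-task-based oracle of Equation 3 for one pair of latent codes.
# s1, s2: lists of k subspace projections (1-D tensors); y1, y2: principal-task labels.
import torch

def principal_task_oracle(s1, s2, y1, y2, alpha=0):
    """Return the subspace index that should absorb the variation of the pair.

    alpha is the (0-based) index of the subspace reserved for the principal task;
    the paper fixes it to the first subspace by default.
    """
    if y1 != y2:
        return alpha                                            # principal-task variation goes to S_alpha
    dists = [float(torch.dist(a, b)) for a, b in zip(s1, s2)]   # d(s_i^(1), s_i^(2)) for each subspace
    dists[alpha] = float("-inf")                                 # the argmax is taken over i != alpha
    return max(range(len(dists)), key=dists.__getitem__)
```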
### Auxiliary task discovery
In the disentangled representation of the input data, where the known principal task variation is encoded into a specific subspace, we look to find new, auxiliary tasks in the remaining subspaces.
Intuitively, we wish to have a disentangled subspace that exhibits a clustering tendency over the projected data. Therefore, we set \(p_{j}\) to be the projector that has maximized the overall distance \(d\) of Equation 3 after training, and we apply a clustering algorithm to the corresponding subspace \(\mathcal{S}_{j}\) to obtain weak labels, which determine the new auxiliary classification task. Under the assumption that tasks living in orthogonal spaces help increase MTL performance (Paredes et al., 2012), we now show why our method regularizes the learning procedure and implicitly guides it towards orthogonal feature spaces for each task.
Let \(\mathcal{S}_{\alpha}\) be the subspace that contains the variation of the principal task, forced by Equation 3, and \(\mathbf{Z}\in\mathcal{R}^{N\times d}\) be the latent representation of the input data \(x\). Then, for any two vectors \(\{z^{(1)},z^{(2)}\}\in\mathbf{Z}\), the functions \(p_{\alpha}\) and \(p_{j}\) will lead to the latent representations \(\{s_{\alpha}^{(1)},s_{j}^{(1)},s_{\alpha}^{(2)},s_{j}^{(2)}\}\), such that the metric \(d\) induced by the norm of the space acts independently on \(d(s_{\alpha}^{(1)},s_{\alpha}^{(2)})\) from \(d(s_{j}^{(1)},s_{j}^{(2)})\), due to the orthogonality of the basis vectors. In simpler terms, the relationship between \(d(s_{\alpha}^{(1)},s_{\alpha}^{(2)})\) will not influence the one between \(d(s_{j}^{(1)},s_{j}^{(2)})\), given that the information encoded in each subspace is different. This can also be verified in terms of covariance, where due to the orthogonality of subspaces \(\langle p_{\alpha}(\mathbf{Z}),p_{j}(\mathbf{Z})\rangle=0\), which would lead to a covariance (given centered data) of \(0\). To summarize, the labels of the new auxiliary tasks contain discriminative information that cannot be reduced to the set of the principal task labels. In this way, the auxiliary task will not provide redundant information.
While it is possible to use an arbitrary clustering algorithm, we would like it to support clusters of arbitrary shapes and not to require the number of clusters to be specified in advance (unlike, _e.g._, K-Means). Therefore, we utilize the HDBSCAN algorithm introduced by Campello et al. (2013), due to its ability to cluster data points based on their proximity and density without the need to specify the number of clusters explicitly. In case HDBSCAN finds just one cluster, no auxiliary task can be found, and the procedure stops. Otherwise, we have discovered a novel task and its corresponding labels \(y^{\prime}\in\mathbb{N}\), which can be used as input to any MTL model. In this work, we limit ourselves to finding one additional task only. Scaling to more tasks, _i.e._, investigating additional subspaces, is the subject of future work.
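A minimal sketch of this clustering step is given below (ours); the minimum cluster size fraction and the decision to map HDBSCAN noise points (label -1) to an extra class are illustrative assumptions, not choices prescribed by the method.

```python
# Sketch: derive auxiliary labels by clustering the most disentangled subspace S_j.
# min_cluster_frac and the handling of HDBSCAN noise points (-1) are illustrative choices.
import numpy as np
import hdbscan

def discover_auxiliary_labels(S_j, min_cluster_frac=0.01):
    """S_j: (N, d) array of projections onto the chosen subspace. Returns labels or None."""
    n = S_j.shape[0]
    clusterer = hdbscan.HDBSCAN(min_cluster_size=max(2, int(min_cluster_frac * n)))
    labels = clusterer.fit_predict(S_j)
    clusters = set(labels) - {-1}
    if len(clusters) <= 1:          # a single cluster: no auxiliary task is found
        return None
    # Assumption: assign noise points to an extra class so every sample gets an auxiliary label.
    labels = np.where(labels == -1, max(clusters) + 1, labels)
    return labels
```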
At this stage, we have access to the new dataset \(D^{\prime}=\{\left(x_{i},y_{i},y^{\prime}_{i}\right)|\forall i\in\{1\dots N\}, \,x_{i}\in\mathcal{R}^{w\times h\times c},\,y_{i},y^{\prime}_{i}\in\mathbb{N}\}\). We are now ready to learn on \(D^{\prime}\) using multi-task classification.
## 5 Experiments
We start by providing a motivating toy example on the 3D Shapes (Burgess and Kim, 2018) synthetic dataset in 5.1. Next, 5.2 explains the experiments on real-world image datasets (_i.e._, FACES (Ebner et al., 2010), CIFAR-10 (Krizhevsky et al., 2009), SVHN (Netzer et al., 2011), and Cars (Krause et al., 2013)), to cope with real and complex use cases. Finally, in 5.3 we discuss some additional experiments and research questions that pinpoint the advantages of our solution.
For all our experiments, we fix the batch size to 32, the learning rate to \(0.0005\), AdamW (Loshchilov and Hutter, 2018) as optimizer, within the PyTorch Lightning framework, on an NVIDIA RTX 3090. We train our disentanglement model for 40 epochs on 3D Shapes and 400 epochs on the other datasets. Instead, all the MTL models were trained for 150 epochs.
### Synthetic data
To showcase the capabilities of our methodology, we begin our experiments with the 3D Shapes dataset, a widely used benchmark in the disentanglement literature (Kim and Mnih, 2018; Locatello et al., 2019; Fumero et al., 2021). 3D Shapes is composed of six generative factors: floor hue, wall hue, object hue, scale, shape, and orientation, resulting in 480,000 images. To adapt it to our specific case, we treat the classification of one generative factor as the principal task and pretend to have no knowledge of the others.
Due to the synthetic nature of the images in 3D Shapes, solving classification tasks with a neural network can be excessively easy, leaving limited possibility for improvement through MTL. Thus, to render this setting slightly more complicated, we add salt-and-pepper noise to 15% of the image pixels. With the presence of noise, the classification of the object scale (4 classes) becomes challenging. Hence, we have chosen it as the primary task for our experiments.
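For concreteness, the corruption step can be reproduced in a few lines of NumPy; the sketch below is illustrative rather than the authors' code, and the equal split between salt and pepper pixels is an assumption.

```python
# Sketch: add salt-and-pepper noise to roughly 15% of the pixels of an image in [0, 1].
import numpy as np

def salt_and_pepper(img, frac=0.15, seed=0):
    """Corrupt a [0, 1] image by setting ~frac of its pixels to 0 (pepper) or 1 (salt)."""
    rng = np.random.default_rng(seed)
    noisy = img.copy()
    h, w = img.shape[:2]
    n = int(frac * h * w)
    ys, xs = rng.integers(0, h, n), rng.integers(0, w, n)
    noisy[ys[: n // 2], xs[: n // 2]] = 0.0   # pepper
    noisy[ys[n // 2:], xs[n // 2:]] = 1.0     # salt
    return noisy
```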
We sample pairs of images with \(0.5\) probability of having the same principal task label and feed these into the disentanglement model, where the encoder \(f\) and the decoder \(g\) are parametrized through a simple LeNet-like architecture. The number of subspaces \(k\) is set to 10, as in Fumero et al. (2021).
As described in 4.2, we cluster the most disentangled subspace (not considering the forced one) according to the disentanglement loss. In our experiment, it coincides with the subspace which contains the information regarding the object hue (10 classes). The minimum cluster size hyperparameter of HDBSCAN is set to 2% of the number of data points \(N\).
We feed the noisy 3D Shapes and the enriched label set into an MTL hard parameter-sharing architecture with a VGG16 (Simonyan and Zisserman, 2015) as the backbone and compare Single-Task Learning (STL) vs MTL (using, respectively, one vs two classification heads). For this comparison, we need to perform a train-test split on 3D Shapes, which is non-trivial since each possible combination of the latent factors in the dataset is present exactly once. Therefore, we split the dataset based on the floor and wall hue labels, allocating the images that contain 5 out of the 10 values for both factors only to the testing set, resulting in a 75-25 train-test split. On the principal task, MTL achieves an accuracy of 0.889, outperforming by a large margin the 0.125 obtained by STL.
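For reference, a hard parameter-sharing model of this kind can be sketched as follows (assuming a recent torchvision); the pooling, head sizes, and the fixed loss weighting are our assumptions and do not reproduce the exact training configuration, which weighs the losses following Kendall et al. (2018).

```python
# Sketch: hard parameter sharing with a VGG16 trunk and two classification heads
# (principal task: 4 scale classes; auxiliary task: n_aux discovered classes).
import torch
import torch.nn as nn
from torchvision.models import vgg16

class TwoHeadVGG(nn.Module):
    def __init__(self, n_principal=4, n_aux=10):
        super().__init__()
        backbone = vgg16(weights=None)                         # randomly initialized VGG16
        self.trunk = nn.Sequential(backbone.features, nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head_principal = nn.Linear(512, n_principal)
        self.head_aux = nn.Linear(512, n_aux)

    def forward(self, x):
        feat = self.trunk(x)                                   # shared representation
        return self.head_principal(feat), self.head_aux(feat)

model = TwoHeadVGG()
x = torch.rand(4, 3, 64, 64)
logits_p, logits_a = model(x)
y_p, y_a = torch.randint(0, 4, (4,)), torch.randint(0, 10, (4,))
# Assumed fixed weighting favouring the principal task (illustrative only).
loss = nn.functional.cross_entropy(logits_p, y_p) + 0.5 * nn.functional.cross_entropy(logits_a, y_a)
```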
### Real data
As in the toy example, during the disentanglement procedure, pairs of images are sampled based only on the principal task labels. In FACES, this corresponds to the person's facial expression; in CIFAR-10, SVHN, and Cars, to the only annotated labels. To obtain a higher-fidelity reconstruction, we utilize a ResNet-18 encoder-decoder architecture. The number of subspaces \(k\) is set to 10.
During the auxiliary task discovery, we set the minimum cluster size hyperparameter of HDBSCAN to 1% of the number of data points \(N\) for all the datasets.
We compare our approach to two different auxiliary learning methods, _i.e._, MAXL (Liu et al., 2019) and AuxiLearn (Navon et al., 2021). Unlike these auxiliary learning architectures, our discovered auxiliary task can be exploited interchangeably with any MTL model. Therefore, to foster this property of our approach, we select three different multi-task models: _i)_ HPS (weighing the losses to give more importance to the main task, as explained in Kendall et al. (2018)), _ii)_ NDDR (Gao et al., 2019), and _iii)_ MTI (Vandenhende et al., 2020). MTI is always pre-trained due to its inherent nature of performing high-resolution operations, which is unfeasible on \(64\times 64\) images.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline \hline
**Learning Paradigm** & **FACES** & **CIFAR-10** & **SVHN** & **Cars** \\ \hline STL & 0.915 & 0.844 & 0.956 & 0.711 \\ Ours MTL-HPS & 0.951 & 0.848 & 0.954 & 0.789 \\ Ours NDDR & 0.932 & 0.872 & 0.952 & 0.712 \\ Ours MTI & **0.978** & **0.910** & **0.961** & **0.807** \\ \hline MAXL & 0.933 & 0.868 & 0.953 & 0.638 \\ AuxiLearn & 0.915 & 0.811 & 0.943 & 0.644* \\ \hline \hline \end{tabular}
\end{table}
Table 1: Classification accuracy on the FACES, CIFAR-10, SVHN, and Cars datasets. (*) indicates that the results are the ones reported in the original paper. In bold the best results, underlined the models that outperform STL.
Table 1 summarizes the results. MTI with our generated labels displays the best performance. Furthermore, even simple CNN-based architectures like HPS and NDDR achieve superior results compared to MAXL and AuxiLearn. Most notably, we outperform STL with at least one of the MTL models on all the datasets, whereas MAXL and AuxiLearn show significant performance discrepancies between the datasets.
We would like to report, for the sake of completeness, that on the CARS dataset, which contains very complex images, we exploit pre-trained MTL models. For the disentanglement phase, we change the structure of the encoder \(f\) such that it does not produce a dense representation in the bottleneck layer but a compressed feature map. The latent space projectors are then learned using \(1\times 1\) convolution, and the disentanglement losses are applied to the flattened feature map. Furthermore, in Table 1, with *, we denote the result we take from the original paper, as we encountered challenges in replicating the performance using the available code and information.
### Research questions
Is disentanglement useful for discovering new auxiliary tasks? With this ablation, we wish to quantitatively and qualitatively show that disentanglement is effective in extracting task labels from the underlying data structure. On the FACES dataset, we compare the auxiliary task generated by _Detaux_ with the auxiliary task resulting from clustering the latent space of an autoencoder that only learns to reconstruct. Without the disentanglement, HPS can only reach 0.9 accuracy, worse than the 0.915 obtained by STL, revealing that performing auxiliary task mining on the entangled autoencoder space does not provide helpful information. In Figure 2, we compare the messy clusters created from the entangled representation _(a)_ and the clear grouping obtained in the disentangled representation space _(b)_.
Is it necessary to return to the image space for MTL? One may ask why we did not work directly in the latent feature space found by the disentanglement procedure. We performed some preliminary experiments in this direction, but they yielded inconclusive results and raised implementation issues that are outside the scope of this paper. One reason is that most MTL frameworks (_e.g._, MTL-HPS, NDDR, and MTI) require convolutions, which are not well defined in the feature space. Another reason is that _Detaux_ works at a representation level, regardless of any classification aim induced by a specific classification framework. Its sole purpose is to reveal, together with the principal subspace determined by the initial labels, other orthogonal complementary subspaces, which can be treated as tasks if they admit clustering. The output of _Detaux_ is an enriched set of labels that can be exploited with any MTL model. In addition, _Detaux_ enables us to visualize and interpret the disentangled subspaces, since it reconstructs the images. This procedure allowed us to understand that, in the toy example, the additional task corresponds to the object's hue. Unfortunately, in the more complex real cases, a clear interpretation becomes more challenging: the FACES benchmark only hints at gender as the additional task, and in the other cases no clear interpretation emerged. It is worth noting that we focused on producing a framework that transforms a single-task classification problem into an MTL one, leaving interpretability in the background as future work.
Figure 2: 3D visualization (via PCA) of the discovered auxiliary task in the entangled autoencoder feature space _(a)_ and the most disentangled subspace _(b)_ on FACES. Learning a disentangled representation leads to a subspace that separates the data into two major groups, which correspond to the labels of the new auxiliary task. Instead, using only a reconstruction loss leads to an entangled representation from which it is not beneficial to extract auxiliary tasks. Different colors mean different clusters found by HDBSCAN, which are subsequently projected by PCA in 3 dimensions. Best viewed in color.
## 6 Conclusion
In this paper, we propose a novel outlook on the utility of disentangled representations, utilizing them as a proxy for auxiliary learning in order to improve the accuracy of a principal task originally solvable only in a single-task fashion. Our proposed pipeline facilitates the unsupervised discovery of new tasks from a factorized representation. These newly discovered tasks can be readily incorporated into any MTL framework. We demonstrate empirically that this approach offers advantageous performance, and we analyze various aspects using ablation studies. Our implementation and analysis shed light on the potential of combining disentanglement and MTL for improved performance and generalizability.
## Acknowledgements
This work was partially supported by the MUR under the grant "Dipartimenti di Eccellenza 2023-2027" of the Department of Informatics, Systems and Communication of the University of Milano-Bicocca, Italy. We gratefully acknowledge the support of NVIDIA Corporation with the RTX A5000 GPUs granted through the Academic Hardware Grant Program to the University of Milano-Bicocca for the project "Learned representations for implicit binary operations on real-world 2D-3D data". Furthermore, this study was also carried out within the PNRR research activities of the consortium iNEST (Interconnected North-Est Innovation Ecosystem) funded by the European Union Next-GenerationEU (Piano Nazionale di Ripresa e Resilienza (PNRR) - Missione 4 Componente 2, Investimento 1.5 - D.D. 1058 23/06/2022, ECS_0000043). This manuscript reflects only the Authors' views and opinions, neither the European Union nor the European Commission can be considered responsible for them.
|
2310.11437 | On Faces and Hilbert Bases of Kostka Cones | Kostka coefficients appear in the representation theory of the general linear
group and enumerate semistandard Young tableaux of fixed shape and content. The
$r$-Kostka cone is the real polyhedral cone generated by pairs of partitions
with at most $r$ parts, written as non-increasing $r$-tuples, such that the
corresponding Kostka coefficient is nonzero. We provide several results showing
that its faces have interesting structural and enumerative properties. We show
that the $d$-faces of the $r$-Kostka cone can be determined from those of the
$(3d+3)$-Kostka cone, allowing us to characterize its $2$-faces and enumerate
its $d$-faces for $d \leq 4$. We provide tight asymptotics for the number of
$d$-faces for arbitrary $d$ and determine the maximum number of extremal rays
contained in a $d$-face for $d < r$. We then make progress towards a
generalization of the Gao-Kiers-Orelowitz-Yong Width Bound on initial entries
of partitions $(\lambda,\mu)$ appearing in the Hilbert basis of the
$\lambda_1$-Kostka cone. We show that at least $93.7\%$ of integer pairs
$\lambda_1 \geq \mu_1 > 0$ appear as the initial entries of partitions
$(\lambda,\mu)$ comprising a Hilbert basis element of the $r$-Kostka cone for
every $r > \lambda_1$. We conclude with a conjecture about a curious $h$-vector
phenomenon. | Amanda Burcroff | 2023-10-17T17:47:11Z | http://arxiv.org/abs/2310.11437v1 | # On faces and Hilbert bases of Kostka cones
###### Abstract.
Kostka coefficients appear in the representation theory of the general linear group and enumerate semistandard Young tableaux of fixed shape and content. The \(r\)-Kostka cone is the real polyhedral cone generated by pairs of partitions with at most \(r\) parts, written as non-increasing \(r\)-tuples, such that the corresponding Kostka coefficient is nonzero. We provide several results showing that its faces have interesting structural and enumerative properties. We show that the \(d\)-faces of the \(r\)-Kostka cone can be determined from those of the \((3d+3)\)-Kostka cone, allowing us to characterize its 2-faces and enumerate its \(d\)-faces for \(d\leq 4\). We provide tight asymptotics for the number of \(d\)-faces for arbitrary \(d\) and determine the maximum number of extremal rays contained in a \(d\)-face for \(d<r\). We then make progress towards a generalization of the Gao-Kiers-Orelowitz-Yong Width Bound on initial entries of partitions \((\lambda,\mu)\) appearing in the Hilbert basis of the \(\lambda_{1}\)-Kostka cone. We show that at least \(93.7\%\) of integer pairs \(\lambda_{1}\geq\mu_{1}>0\) appear as the initial entries of partitions \((\lambda,\mu)\) comprising a Hilbert basis element of the \(r\)-Kostka cone for every \(r>\lambda_{1}\). We conclude with a conjecture about a curious \(h\)-vector phenomenon.
###### Contents
* 1 Introduction
* 2 Preliminaries
* 3 The Maximum Number of Vertices of a Face
* 4 Characterization of Edges
* 5 Enumeration of Faces of a Fixed Dimension
* 6 Initial Partition Entries of Hilbert Basis Elements
* 7 Further Directions
* 8 Appendix: Initial Pair Probability Computation
## 1. Introduction
### Background
The \(r\)-Kostka cone, denoted by \(\mathsf{Kostka}_{r}\), is the real polyhedral cone generated by pairs \((\lambda,\mu)\in\mathbb{R}^{2r}\) of non-increasing \(r\)-tuples of equal sum such that, for all \(1\leq i<r\), the sum of the first \(i\) parts of \(\lambda\) is at least the sum of the first \(i\) parts of \(\mu\). It is directly connected to the well-known _Kostka numbers_, which in turn have connections to Young tableaux [8], representation theory [2], symmetric functions [7], dimer configurations [6], and supergravity theories [18].
The integral points of the \(r\)-Kostka cone are precisely the pairs \((\lambda,\mu)\) of integer partitions with at most \(r\) parts such that the Kostka number \(K_{\lambda,\mu}\) is positive. Carl Kostka introduced Kostka numbers in 1882 while studying symmetric function expansions [7]. Kostka numbers are hard to compute in general, as their computation is \(\mathsf{\#P}\)-complete [10]. Kostka numbers also appear in the representation theory of the general linear group. By Young's Rule, the Kostka number \(K_{\lambda,\mu}\) is the multiplicity with which the weight \(\mu\) appears in the irreducible representation of \(\mathrm{GL}_{r}(\mathbb{C})\) with highest weight \(\lambda\). It is also the coefficient of the monomial symmetric function corresponding to \(\mu\) in the expansion of the Schur polynomial corresponding to \(\lambda\). See [16, Chapter 7] for a more thorough history of Kostka numbers and [2] for details on the representation-theoretic perspective.
Slicing the \(r\)-Kostka cone by the affine hyperplane \(\{x\in\mathbb{R}^{2r}:(1,1,\ldots,1)\cdot x=1\}\) yields a \((2r-2)\)-dimensional polytope, which we call the _Kostka polytope_ and denote by \(\mathrm{P}^{\mathsf{Kostka}}_{r}\). There are numerous other polytopes defined in terms of partitions, the faces of which have previously been shown to have interesting enumerative properties. The _Fibonacci polytopes_, or _ordered partition polytopes_, have vertex sets satisfying a Fibonacci-like recurrence [12] and are related to alternating permutations [17]. For the family of _unordered partition polytopes_, Shlyk gave a description of the dynamic behavior of the vertices and a characterization of the facets [14]. Each unordered partition polytope is combinatorially equivalent to a face of \(\mathrm{P}^{\mathsf{Kostka}}_{r}\), and computational evidence suggests that both polytope families share a curious \(h\)-vector phenomenon [19] (see Section 7).
Several recent works on the Kostka cone have focused on its Hilbert basis and extremal rays. In 2021, Gao, Kiers, Orelowitz, and Yong [4] gave a criterion for Hilbert basis membership, though they show that this decision problem is \(\mathsf{NP}\)-complete in general. They use this criterion to give a simple description of the extremal rays and a "Width Bound" on the integer pairs \((\lambda_{1},\mu_{1})\) that can be the first parts of partitions \(\lambda,\mu\) forming a Hilbert basis element \((\lambda,\mu)\) of the \(r\)-Kostka cone for \(r\leq\lambda_{1}\). Kim has since provided a strengthening of this Width Bound via a study of generalized Dyck paths [5]. Similar studies have also been carried out in other Lie types. Besson, Jeralds, and Kiers [1] took a representation-theoretic approach to enumerate the rays of the _generalized Kostka cones_ of types \(D_{r}\) and \(E_{r}\), where type \(A_{r}\) is the classical case handled in [4].
### Results
Our work studies the faces and Hilbert basis of the \(r\)-Kostka cone \(\mathsf{Kostka}_{r}\), with a focus on enumerative and structural properties. We typically refer to the \(r\)-Kostka polytope \(\mathrm{P}^{\mathsf{Kostka}}_{r}\) instead of the Kostka cone when discussing the face structure, as \(d\)-faces of \(\mathrm{P}^{\mathsf{Kostka}}_{r}\) are naturally identified with \((d+1)\)-faces of \(\mathsf{Kostka}_{r}\). We begin by studying the maximum number of vertices contained in a face of fixed dimension (see Corollary 3.3).
**Theorem 1.1**.: _For \(r>d+1\), the maximum number of vertices contained in a \(d\)-face of the polytope \(\mathrm{P}^{\mathsf{Kostka}}_{r}\) is \(\prod_{i=1}^{3}\left\lfloor\frac{d+2+i}{3}\right\rfloor\), which is the maximum product of three positive integers summing to \(d+3\)._
We then characterize the edges of \(\mathrm{P}^{\mathsf{Kostka}}_{r}\) using a connection to cells of the braid arrangement. As is explained in Section 2, the vertices of \(\mathrm{P}^{\mathsf{Kostka}}_{r}\) can be labeled by integer triples, and the edge characterization is given in terms of certain inequalities on the vertex labels (Theorem 4.6). By reducing the \(d\)-face structure of \(\mathrm{P}^{\mathsf{Kostka}}_{r}\) to that of \(\mathrm{P}^{\mathsf{Kostka}}_{3d+3}\) (Theorem 5.3), we can provide exact formulas for the number of \(d\)-faces of \(\mathrm{P}^{\mathsf{Kostka}}_{r}\) for \(d=1,2,3\).
**Theorem 1.2**.: _The number of edges of \(\mathrm{P}^{\mathsf{Kostka}}_{r}\) is_
\[f_{1}(r)=\binom{r}{6}+2\binom{r}{5}+6\binom{r}{4}+7\binom{r}{3}+3\binom{r}{2}\,,\]
_the number of two-dimensional faces of \(\mathrm{P}^{\mathsf{Kostka}}_{r}\) is_
\[f_{2}(r)=\binom{r}{9}+3\binom{r}{8}+12\binom{r}{7}+23\binom{r}{6}+33\binom{r}{ 5}+31\binom{r}{4}+13\binom{r}{3}+\binom{r}{2}\,,\]
_and the number of three-dimensional faces of \(\mathrm{P}^{\mathsf{Kostka}}_{r}\) is_
\[f_{3}(r)=\binom{r}{12}+4\binom{r}{11}+19\binom{r}{10}+49\binom{r}{9}+105\binom{r}{8}+163\binom{r}{7}+177\binom{r}{6}+131\binom{r}{5}+53\binom{r}{4}+7\binom{r}{3}\,.\]
These face counting functions have positive integer coefficients in terms of the polynomial basis \(\left\{\binom{r}{k}\right\}_{k\geq 0}\), and we show that this property holds in all dimensions. We also determine that the coefficient of the top degree term \(\binom{r}{3d+3}\) is always \(1\), yielding precise asymptotics for the number of \(d\)-faces.
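For convenience, the closed forms of Theorem 1.2 can be evaluated directly; the short script below (ours, purely illustrative) tabulates \(f_{1}\), \(f_{2}\), and \(f_{3}\) for small \(r\) using the stated binomial expansions.

```python
# Evaluate the face-counting polynomials of Theorem 1.2 in the basis {binomial(r, k)}.
from math import comb

def f1(r): return comb(r, 6) + 2*comb(r, 5) + 6*comb(r, 4) + 7*comb(r, 3) + 3*comb(r, 2)
def f2(r): return (comb(r, 9) + 3*comb(r, 8) + 12*comb(r, 7) + 23*comb(r, 6)
                   + 33*comb(r, 5) + 31*comb(r, 4) + 13*comb(r, 3) + comb(r, 2))
def f3(r): return (comb(r, 12) + 4*comb(r, 11) + 19*comb(r, 10) + 49*comb(r, 9) + 105*comb(r, 8)
                   + 163*comb(r, 7) + 177*comb(r, 6) + 131*comb(r, 5) + 53*comb(r, 4) + 7*comb(r, 3))

for r in range(3, 8):
    print(r, f1(r), f2(r), f3(r))
```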
The main result of the last section concerns the Hilbert basis of \(\mathsf{Kostka}_{r}\). We say that an integer pair \((\lambda_{1},\mu_{1})\) is \(r\)_-initial_ if there is an element \((\lambda,\mu)\) in the Hilbert basis of \(\mathsf{Kostka}_{r}\) such that \(\lambda\) has first element \(\lambda_{1}\) and \(\mu\) has first element \(\mu_{1}\). The Width Bound of
Gao-Kiers-Orelowitz-Yong [4, Theorem 1.4] implies that \((\lambda_{1},\mu_{1})\) is \(\lambda_{1}\)-initial if and only if \(\lambda_{1}\) and \(\mu_{1}\) are coprime. We provide several sufficient conditions for a pair \((\lambda_{1},\mu_{1})\) to be \((\lambda_{1}+1)\)-initial, and these conditions hold for over \(93.7\%\) of integer pairs \(\lambda_{1}\geq\mu_{1}\).
**Theorem 1.3**.: _If any of the following conditions hold:_
* \(\lambda_{1}\) _and_ \(\mu_{1}\) _are coprime_ _[_4_, Theorem 1.4]__, or_
* \(\lambda_{1}+1\) _and_ \(\mu_{1}\) _are coprime, or_
* \(\lambda_{1}+1\) _and_ \(\mu_{1}+1\) _are coprime with_ \(2\mu_{1}\geq\lambda_{1}\)_,_
_then the pair \((\lambda_{1},\mu_{1})\) is \((\lambda_{1}+1)\)-initial. Moreover, this holds even if we consider only Hilbert basis elements on the \(2\)-faces of \(\mathsf{Kostka}_{r}\)._
The first criterion follows directly from the work of Gao-Kiers-Orelowitz-Yong, while the latter two conditions are the result of new constructions of Hilbert basis elements. We conclude with a new observation that, for small \(r\), half of the \(h\)-vector entries for \(\mathsf{Kostka}_{r}\) are \(1\), and we conjecture that this holds in general.
### Outline
We begin by providing some preliminaries on the Kostka cone and Kostka polytope in Section 2. We study the maximum number of vertices contained in a face of the Kostka polytope in Section 3. The edge characterization of the Kostka polytope is in Section 4, and the enumerative results on the faces of fixed dimension are in Section 5. The construction of Hilbert basis elements is discussed in Section 6, with some computation relegated to the Appendix. We conclude with a discussion of further directions in Section 7.
## Acknowledgements
This work was completed in part at the 2022 Graduate Research Workshop in Combinatorics, which was supported in part by NSF grant #1953985, and a generous award from the Combinatorics Foundation. This paper is the result of many fruitful discussions with Shiliang Gao and Sheila Sundaram. The author deeply thanks Shiliang Gao for suggesting this topic at the 2022 GRWC and for his helpful contributions throughout the course of the project. She is extremely grateful to Sheila Sundaram, as this project would not have been possible without her insight and generous support. The author also thanks Margaret Bayer, Yibo Gao, Jeremy Martin, and Tyrrell McAllister for their early contributions to this project. She appreciates the comments of Richard Stanley and Charles Wang on the \(h\)-vector phenomenon discussed in the Further Directions section. The author extends her thanks to Niven Achenjang for helping to compute the probability in Corollary6.5 and to Joshua Kiers for sharing the code used to discover Example6.4.
## 2. Preliminaries
### The Kostka Cone
For positive integers \(r\) and \(n\), we denote the set of integer partitions of \(n\) into at most \(r\) parts by \(\mathsf{Par}_{r}(n)\), where such partitions are written as non-increasing \(r\)-tuples. Each partition can be viewed as a Young diagram, where the length of the \(i^{\text{th}}\) row is the \(i^{\text{th}}\) entry of the \(r\)-tuple.
Consider two partitions \(\lambda=(\lambda_{1},\ldots,\lambda_{r})\) and \(\mu=(\mu_{1},\ldots,\mu_{r})\) in \(\mathsf{Par}_{r}(n)\). A _semistandard tableau of shape \(\lambda\) and content \(\mu\)_ is a filling of the Young diagram corresponding to \(\lambda\) with integer entries such that the rows are non-decreasing to the right, the columns strictly increase downward, and there are precisely \(\mu_{i}\) boxes with entry \(i\) for all \(1\leq i\leq r\). These are counted by the _Kostka coefficient_\(K_{\lambda,\mu}\).
**Example 2.1**.: The Kostka coefficient \(K_{(4,2),(2,2,1,1)}\) is equal to \(4\), as shown by the following four tableaux of shape \((4,2)\) and content \((2,2,1,1)\), written row by row: \(1122/34\), \(1123/24\), \(1124/23\), and \(1134/22\).
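For small shapes, such Kostka coefficients can be checked by brute force; the sketch below (ours, exponential in the number of cells and meant only for tiny examples) enumerates all fillings and keeps the semistandard ones, recovering the value \(4\) of Example 2.1.

```python
# Brute-force count of semistandard Young tableaux of shape lam and content mu,
# i.e. the Kostka number K_{lam, mu}; intended only for small examples.
from itertools import product

def kostka(lam, mu):
    cells = [(i, j) for i, row in enumerate(lam) for j in range(row)]
    n_vals = len(mu)
    count = 0
    for filling in product(range(1, n_vals + 1), repeat=len(cells)):
        T = dict(zip(cells, filling))
        ok = all(T[(i, j)] <= T[(i, j + 1)] for (i, j) in cells if (i, j + 1) in T)        # rows weakly increase
        ok = ok and all(T[(i, j)] < T[(i + 1, j)] for (i, j) in cells if (i + 1, j) in T)  # columns strictly increase
        ok = ok and all(filling.count(v + 1) == mu[v] for v in range(n_vals))               # prescribed content
        count += ok
    return count

print(kostka((4, 2), (2, 2, 1, 1)))   # 4, matching Example 2.1
```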
_Observation 2.4_.: The Kostka cone is bounded by the following hyperplanes for \(1\leq i<r\):
\[H_{i} =\left\{(\lambda,\mu)\in\mathbb{R}^{2r}:\lambda_{i}=\lambda_{i+1} \right\},\] \[H_{r} =\left\{(\lambda,\mu)\in\mathbb{R}^{2r}:\lambda_{r}=0\right\},\] \[\widehat{H}_{i} =\left\{(\lambda,\mu)\in\mathbb{R}^{2r}:\mu_{i}=\mu_{i+1}\right\}, \text{ and }\] \[J_{i} =\left\{(\lambda,\mu)\in\mathbb{R}^{2r}:\sum_{j=1}^{i}\lambda_{j} =\sum_{k=1}^{i}\mu_{k}\right\}\,.\]
_Remark 2.5_.: It is straightforward to check that each of these hyperplanes intersects \(\mathsf{Kostka}_{r}\) along a facet, and these facets are distinct when \(r>2\). Thus, \(\mathsf{Kostka}_{r}\) has \(3r-2\) facets for \(r>2\).
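The hyperplane description of Observation 2.4 also gives a direct membership test for \(\mathsf{Kostka}_{r}\); the following sketch (ours) checks exactly these inequalities for a pair of \(r\)-tuples.

```python
# Membership test for the r-Kostka cone, following the hyperplane description of Observation 2.4.
from itertools import accumulate

def in_kostka_cone(lam, mu):
    """Check (lam, mu) against the facet inequalities listed in Observation 2.4."""
    r = len(lam)
    assert len(mu) == r
    non_increasing = (all(lam[i] >= lam[i + 1] for i in range(r - 1)) and lam[-1] >= 0
                      and all(mu[i] >= mu[i + 1] for i in range(r - 1)))
    plam, pmu = list(accumulate(lam)), list(accumulate(mu))
    dominance = all(plam[i] >= pmu[i] for i in range(r - 1)) and plam[-1] == pmu[-1]
    return non_increasing and dominance

print(in_kostka_cone((4, 2, 0), (2, 2, 2)))   # True
print(in_kostka_cone((2, 2, 2), (4, 2, 0)))   # False: the dominance condition fails
```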
#### 2.1.2. The Kostka polytope
Since a large portion of this work concerns the face structure of \(\mathsf{Kostka}_{r}\), it is often more convenient to work with a polytopal slice of this cone.
**Definition 2.6**.: Let \(\mathrm{P}^{\mathsf{Kostka}}_{r}\) be the \((2r-2)\)-dimensional polytope obtained by intersecting \(\mathsf{Kostka}_{r}\) with the affine hyperplane \(\{\sum_{i=1}^{r}(\lambda_{i}+\mu_{i})=1\}\).
In other words, \(\mathrm{P}^{\mathsf{Kostka}}_{r}\) is the set of points \((\lambda,\mu)\) in \(\mathsf{Kostka}_{r}\) such that \(\lambda\) and \(\mu\) each have entries summing to \(\frac{1}{2}\). Since we are interested only in the combinatorial type of \(\mathrm{P}^{\mathsf{Kostka}}_{r}\), we could have equivalently intersected \(\mathsf{Kostka}_{r}\) with any affine hyperplane nontrivially intersecting all faces of \(\mathsf{Kostka}_{r}\) except the origin.
_Observation 2.7_.: The \(d\)-faces of \(\mathrm{P}^{\mathsf{Kostka}}_{r}\) are in bijection with the \((d+1)\)-faces of \(\mathsf{Kostka}_{r}\). In particular, each \((d+1)\)-face of \(\mathsf{Kostka}_{r}\) is obtained by taking all points along any ray emanating from the origin and passing through some fixed \(d\)-face of \(\mathrm{P}^{\mathsf{Kostka}}_{r}\). Thus, the vertices of \(\mathrm{P}^{\mathsf{Kostka}}_{r}\) correspond to the extremal rays of \(\mathsf{Kostka}_{r}\).
#### 2.1.3. Extremal Rays
The extremal rays of \(\mathsf{Kostka}_{r}\) were described in [4]. In particular, we have
**Proposition 2.8**.: _[_4_, Proposition 4.1, Corollary 1.7]_ _Let \(a,b,\ell\) satisfy \(0\leq\ell<b\leq a\leq r\). Then_
\[(\lambda,\mu) =\left(\underbrace{a-\ell,\ldots,a-\ell}_{b},0,\ldots,0;\ \underbrace{a-\ell,\ldots,a-\ell}_{\ell},\underbrace{b-\ell,\ldots,b-\ell}_{a-\ell},0,\ldots,0\right)\] \[=\left(((a-\ell)^{b},0^{r-b});\ ((a-\ell)^{\ell},(b-\ell)^{a-\ell},0^{r-a})\right)\,,\]
_generates an extremal ray of \(\mathsf{Kostka}_{r}\), and all extremal rays are generated by such an element. In particular, the number of extremal rays of \(\mathsf{Kostka}_{r}\) is \(\binom{r}{3}+\binom{r}{2}+\binom{r}{1}\)._
**Example 2.9**.: Let \(r=a=5\), \(b=4\), and \(\ell=2\). Then
\[(\lambda,\mu)=\big((3,3,3,3,0),\ (3,3,2,2,2)\big)\]
generates an extremal ray of \(\mathsf{Kostka}_{5}\).
**Definition 2.10**.: We say that the extremal ray in Proposition 2.8 is _labeled_ by the triple \((a,b,\ell)\) whenever \(a\neq b\). Whenever \(a=b\), the extremal ray in Proposition 2.8 is not dependent on the choice of \(\ell\), and we say it is _labeled_ by the triple \((a,a,a)\). We also say that the corresponding vertex of \(\mathrm{P}_{r}^{\mathsf{Kostka}}\) (using Observation 2.7) is _labeled_ by the same triple.
**Example 2.11**.: The seven extremal rays of \(\mathsf{Kostka}_{3}\) are labeled by the triples \((1,1,1)\), \((2,1,0)\), \((2,2,2)\), \((3,1,0)\), \((3,2,0)\), \((3,2,1)\), and \((3,3,3)\).
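Proposition 2.8 and Definition 2.10 make the extremal rays easy to enumerate by their labels; a small Python sketch of this enumeration confirms the count \(\binom{r}{3}+\binom{r}{2}+\binom{r}{1}\) and reproduces the list in Example 2.11.

```python
from math import comb

def ray_labels(r):
    """Labels (a, b, l) of the extremal rays of Kostka_r, following
    Proposition 2.8 and the convention of Definition 2.10."""
    labels = {(a, a, a) for a in range(1, r + 1)}                     # a = b case
    labels |= {(a, b, l) for a in range(1, r + 1)
                         for b in range(1, a) for l in range(b)}      # l < b < a
    return labels

assert all(len(ray_labels(r)) == comb(r, 3) + comb(r, 2) + comb(r, 1)
           for r in range(1, 10))
print(sorted(ray_labels(3)))
# [(1, 1, 1), (2, 1, 0), (2, 2, 2), (3, 1, 0), (3, 2, 0), (3, 2, 1), (3, 3, 3)]
```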
_Remark 2.12_.: Note that our usage of the parameters \(a,b,\ell\) differs from the convention in [4]; in particular, we relabel their parameter \(a+\ell\) by \(a\) and \(b+\ell\) by \(b\). While the choice of label \((a,a,a)\) may seem arbitrary for the case when \(a=b\), this choice simplifies the statement of Lemma 4.5.
### Hilbert Bases
Let \(C\subseteq\mathbb{R}^{d}\) be a rational convex polyhedral cone. By Gordan's Lemma [13, Theorem 16.4], there exists a finite set \(\mathcal{H}(C)\subseteq C\cap\mathbb{Z}^{d}\), such that
* every integral point of \(C\) can be expressed as a nonnegative integer combination of points in \(\mathcal{H}(C)\), and
* \(\mathcal{H}(C)\) has minimal cardinality with respect to the first property.
In the case that \(C\) is pointed, the set \(\mathcal{H}(C)\) is unique and is known as the _Hilbert basis_ of \(C\). Moreover, an element of \(C\cap\mathbb{Z}^{d}\) is in the Hilbert basis if and only if it is _irreducible_, i.e., cannot be expressed as a nonnegative integer combination of any other integral points of \(C\); otherwise it is called _reducible_. See [13, Section 16.4] for further background.
_Remark 2.13_.: Since \(\mathsf{Kostka}_{r}\) is pointed and has integral points corresponding to pairs in \(\mathsf{Par}_{r}(n)\), we can express Hilbert basis membership in terms of the partitions. Namely, an element \((\lambda,\mu)\in\mathsf{Kostka}_{r}\cap\mathbb{Z}^{2r}\) is a Hilbert basis element if and only if no nontrivial subset of the columns of \(\lambda\) has total size equal to a subset of the columns of \(\mu\).
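Remark 2.13 gives a finite test for Hilbert basis membership. A brute-force Python sketch of the column-subset test (illustrative helper names; practical only for small partitions, and assuming the pair already lies in the cone) is:

```python
from itertools import chain, combinations

def column_sizes(partition):
    """Column lengths of the Young diagram of a partition (zero parts are ignored)."""
    return [sum(1 for part in partition if part >= j)
            for j in range(1, max(partition) + 1)]

def proper_subset_sums(columns):
    """Total sizes of all nonempty proper subsets of the given columns."""
    idx = range(len(columns))
    subsets = chain.from_iterable(combinations(idx, k) for k in range(1, len(columns)))
    return {sum(columns[i] for i in s) for s in subsets}

def passes_column_test(lam, mu):
    """Column-subset criterion of Remark 2.13."""
    return not (proper_subset_sums(column_sizes(lam)) &
                proper_subset_sums(column_sizes(mu)))

print(passes_column_test((3, 3, 3, 3, 0), (3, 3, 2, 2, 2)))   # True  (the pair of Example 2.9)
print(passes_column_test((2, 2), (2, 2)))                     # False: (2,2;2,2) = 2*(1,1;1,1)
```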
## 3. The Maximum Number of Vertices of a Face
In this section, we look at the maximum number of vertices contained in a \(d\)-face of the polytope \(\mathrm{P}_{r}^{\mathsf{Kostka}}\). Equivalently (see Observation2.7), we look at the maximum number of extremal rays contained in a \((d+1)\)-face of the cone \(\mathsf{Kostka}_{r}\). We give a uniform upper bound on this quantity for fixed \(d\), and furthermore show that this upper bound is exact for \(r>d+1\).
**Definition 3.1**.: For integers \(r\geq 1\) and \(0\leq d\leq 2r-2\), let \(m(r,d)\) denote the maximum number of vertices in a \(d\)-dimensional face of the polytope \(\mathrm{P}_{r}^{\mathsf{Kostka}}\). Let \(m(d)\) denote the maximum number of vertices of a \(d\)-face in any polytope \(\mathrm{P}_{j}^{\mathsf{Kostka}}\) over all choices of \(j\geq 1\).
By Observation2.3, we have that \(m(r,d)\) is non-decreasing as a function in \(r\). Moreover, since any proper face can be extended to a face of higher dimension, the function \(m(r,d)\) is strictly increasing in \(d\). Table1 depicts some values of \(m(r,d)\).
_Remark 3.2_.: Note that \(m(d)\) is a priori not guaranteed to exist, but Corollary3.3 shows that it is well-defined.
Our main result is an exact calculation of \(m(d)\), which in turn gives an upper bound on \(m(r,d)\). Using the language of \(m(d)\) and \(m(r,d)\), we restate the result of Theorem 1.1.
**Corollary 3.3**.: _For \(r>d+1\), we have_
\[m(r,d)=m(d)=\prod_{i=1}^{3}\left\lfloor\frac{d+2+i}{3}\right\rfloor\,.\]
_Remark 3.4_.: The values of \(m(d)\) appear as the sequence A006501 in the OEIS [11], with generating function \(\dfrac{1+x^{2}}{(1-x)^{2}(1-x^{3})^{2}}\). The quantity \(m(d)\) can be alternatively characterized as the maximum product of three positive integers summing to \(d+3\).
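Both descriptions of \(m(d)\) agree, as a quick Python check confirms:

```python
def m_floor_product(d):
    # the product formula of Corollary 3.3
    return ((d + 3) // 3) * ((d + 4) // 3) * ((d + 5) // 3)

def m_max_product(d):
    # maximum product of three positive integers summing to d + 3 (Remark 3.4)
    n = d + 3
    return max(a * b * (n - a - b) for a in range(1, n - 1) for b in range(1, n - a))

assert all(m_floor_product(d) == m_max_product(d) for d in range(30))
print([m_floor_product(d) for d in range(8)])   # [1, 2, 4, 8, 12, 18, 27, 36], cf. OEIS A006501
```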
From Observation 2.4 and Proposition 2.8 the following is clear.
**Proposition 3.5**.: _Let \(v\) be a vertex of \(P_{r}^{\mathsf{Kostka}}\) labeled by the triple \((a,b,\ell)\). Then_
* \(v\in H_{i}\) _if and only if_ \(b\neq i\)_,_
* \(v\in\widehat{H}_{k}\) _if and only if_ \(a\neq k\) _and_ \(\ell\neq k\)_._
* \(v\in J_{j}\) _if and only if_ \(j\leq\ell\)_,_ \(j\geq a\)_, or_ \(a=b\)_._
**Theorem 3.6**.: _A \(d\)-dimensional face of \(P_{r}^{\mathsf{Kostka}}\) has at most \(\prod_{i=1}^{3}\left\lfloor\frac{d+2+i}{3}\right\rfloor\) vertices._
Proof.: Let
\[F=\mathrm{P}_{r}^{\mathsf{Kostka}}\cap\left(\bigcap_{i\in I}H_{i}\right)\cap \left(\bigcap_{j\in J}J_{j}\right)\cap\left(\bigcap_{k\in K}\widehat{H}_{k}\right)\]
be a \(d\)-dimensional face of \(\mathrm{P}_{r}^{\mathsf{Kostka}}\), where \(I\subseteq\{1,2,\ldots,r\}\) and \(J,K\subseteq\{1,2,\ldots,r-1\}\) are (possibly empty) index sets. We can furthermore assume that the set of hyperplanes is chosen minimally to have this intersection, i.e., \(|I|+|J|+|K|=2r-1-d\).
We are interested in bounding the possible triples \((a,b,\ell)\in\mathbb{Z}_{\geq 0}\) labeling the vertices of \(F\). According to Proposition 3.5, such a triple must satisfy that \(b\notin I\), \(a,\ell\notin K\), and an element of \(J\) is weakly between \(a\) and \(\ell\) only if \(a=b=\ell\). Let \(F_{1}\) be the set of triples \((a,b,\ell)\) meeting these conditions. The minimality condition implies that, for any elements \(j<j^{\prime}<j^{\prime\prime}\) of \(J\cup\{0,r\}\), there must be some \((a,b,\ell)\in F_{1}\) such that \(j\leq\ell<j^{\prime}<a\leq j^{\prime\prime}\). That is, the sets \(\{j,j+1,\ldots,j^{\prime}-1\}\setminus K\) and \(\{j-1,j,\ldots,j^{\prime}\}\setminus K\) are nonempty for any elements \(j<j^{\prime}\) in \(J\cup\{0,r\}\).
Fix \(b\notin I\). Let \(z_{1}(b)=|\{a:(a,b,\ell)\in F_{1}\text{ for some }\ell\}|\) and \(z_{2}(b)=|\{\ell:(a,b,\ell)\in F_{1}\text{ for some }a\}|\). If \(b\in J\), then we must have \(a=b=\ell\), so \(z_{1}(b)+z_{2}(b)\leq 2\). If \(b\notin J\), since each \(j\in J\) has an element of \(\{0,\ldots,r\}\setminus K\) on either side of it, we have \(z_{1}(b)+z_{2}(b)\leq r+1-|J|-|K|\). Thus, summing over our choices for \(b\), we have
\[|F_{1}| \leq\sum_{b\in[r]\setminus I}z_{1}(b)\cdot z_{2}(b)\] \[\leq\sum_{b\in[r]\setminus I}\left\lfloor\frac{r+1-|J|-|K|}{2} \right\rfloor\cdot\left\lfloor\frac{r+2-|J|-|K|}{2}\right\rfloor\] \[\leq(r-|I|)\cdot\left\lfloor\frac{r+1-|J|-|K|}{2}\right\rfloor \cdot\left\lfloor\frac{r+2-|J|-|K|}{2}\right\rfloor\,,\]
where, in the second step, we replace the summand by the maximum value of the product of two numbers summing to \(r+1-|J|-|K|\). The sum of the three factors in the final expression is
\[2r+1-|I|-|J|-|K|=d+3\,,\]
so their product is at most \(\prod_{i=1}^{3}\left\lfloor\frac{d+2+i}{3}\right\rfloor\). This yields the desired upper bound.
Via a construction, we can prove a lower bound on \(m(r,d)\).
**Theorem 3.7**.: _Suppose \(r>d+1\). Given any positive integers \(z_{1},z_{2},z_{3}\) summing to \(d+3\), the intersection_
\[F=P_{r}^{\mathsf{Kostka}}\cap\left(\left(\bigcap_{i=1}^{z_{1}-1}H_{i}\right) \cap\left(\bigcap_{j=z_{1}+z_{2}}^{r}H_{j}\right)\cap\left(\bigcap_{k=z_{1}}^ {r-z_{3}}\widehat{H}_{k}\right)\right)\]
_is a face of \(P_{r}^{\mathsf{Kostka}}\) of dimension at most \(d\) with \(z_{1}z_{2}z_{3}\) vertices._
Proof.: We begin by determining the set of vertices contained in \(F\). Let \(v\) be a vertex of \(\mathrm{P}_{r}^{\mathsf{Kostka}}\) labeled by \((a,b,\ell)\). We have \(v\in\left(\bigcap_{i=1}^{z_{1}-1}H_{i}\right)\cap\left(\bigcap_{j=z_{1}+z_{2}}^{r}H_{j}\right)\) if and only if \(z_{1}\leq b\leq z_{1}+z_{2}-1\). Similarly, we have \(v\in\bigcap_{k=z_{1}}^{r-z_{3}}\widehat{H}_{k}\) if and only if \(a,\ell\not\in\{z_{1},\ldots,r-z_{3}\}\). By assumption, we have \(r-z_{3}\geq z_{1}+z_{2}-1\) and \(\ell\leq b\), hence \(v\in F\) if and only if
\[0\leq\ell<z_{1}\leq b\leq z_{1}+z_{2}-1\leq r-z_{3}<a\leq r\,.\]
The ranges for \(\ell\), \(b\), and \(a\) are disjoint and of sizes \(z_{1}\), \(z_{2}\), and \(z_{3}\), respectively. Therefore, there are \(z_{1}\cdot z_{2}\cdot z_{3}\) vertices of \(\mathrm{P}_{r}^{\mathsf{Kostka}}\) contained in \(F\), each associated to a triple \((a,b,\ell)\) satisfying the inequalities above.
It remains to show that dimension of \(F\) is at most \(d\). This follows because any element \((\lambda,\mu)\) in \(F\) lies in the affine subspace of \(\mathbb{R}^{2r}\) where
\[\lambda_{1}=\lambda_{2}=\cdots=\lambda_{z_{1}},\;\lambda_{z_{1}+z_{2}}=\cdots=\lambda_{r}=0,\;\mu_{z_{1}}=\cdots=\mu_{r-z_{3}+1},\;\text{and}\;\sum_{i=1}^{r}\lambda_{i}=\sum_{j=1}^{r}\mu_{j}=\frac{1}{2}\,,\]
which has dimension \(2r-(z_{1}-1)-(r-z_{1}-z_{2}+1)-(r-z_{3}-z_{1}+1)-2=d\).
Proof of Corollary 3.3.: The upper bound follows directly from Theorem 3.6. For the lower bound, consider the face constructed in Theorem 3.7 with \(z_{i}=\left\lfloor\frac{d+2+i}{3}\right\rfloor\). Since this face achieves the upper bound on the number of vertices in a \(d\)-face from Theorem 3.6 and \(m(r,d)\) is strictly increasing in \(d\), we can conclude that this face has dimension exactly \(d\), and hence that \(m(r,d)=m(d)\).
## 4. Characterization of Edges
Here we present a procedure for characterizing the faces of a fixed dimension in the Kostka polytope \(\mathrm{P}^{\mathsf{Kostka}}_{r}\), where \(r\) can vary. We carry out this characterization explicitly for dimension \(1\). This characterization yields an enumeration of the faces of these dimensions, which is handled in the following section. It seems very feasible that these methods could be extended to higher dimensions, though the conditions seem to get increasingly complicated.
**Proposition 4.1**.: _The minimal face of \(P^{\mathsf{Kostka}}_{r}\) containing a set of vertices with labels \(\{(a_{i},b_{i},\ell_{i})\}_{1\leq i\leq m}\) is formed by the set of all vertices whose label \((a,b,\ell)\) satisfies that_
1. \(b\) _is an element of_ \(\bigcup_{i=1}^{m}\{b_{i}\}\)_,_
2. \(\ell\) _and_ \(a\) _are both elements of_ \(\bigcup_{i=1}^{m}\{\ell_{i},a_{i}\}\)_,_
3. _the open interval_ \((\ell,a)\) _is contained in_ \(\bigcup_{i=1}^{m}(\ell_{i},a_{i})\)_, and_
4. \(0\leq\ell<b<a\leq r\) _or_ \(a=b=\ell\)_._
Proof.: Comparing these conditions to those in Proposition3.5, we see that these conditions precisely encode that the vertex labeled by \((a,b,\ell)\) is contained in all hyperplanes that contain the vertices with labels \(\{(a_{i},b_{i},\ell_{i})\}_{1\leq i\leq m}\).
_Remark 4.2_.: For convenience, when considering the labels of a list of vertices, we follow the convention that the labels are ordered lexicographically.
We will now show that whether a collection of vertices is the vertex set of some face of the Kostka cone depends only on the cell of the braid arrangement that the vertex label list lies in, i.e., the relative order of the vertex label entries. We say that two tuples \((x_{1},\ldots,x_{n}),(y_{1},\ldots,y_{n})\in\mathbb{Z}^{n}\) are _order-isomorphic_ provided that \(x_{i}>x_{j}\) if and only if \(y_{i}>y_{j}\) for any \(i,j\in\{1,\ldots,n\}\).
**Lemma 4.3**.: _Suppose we have a pair of order-isomorphic tuples \((a_{1},b_{1},\ell_{1},\ldots,a_{m},b_{m},\ell_{m})\) and \((a^{\prime}_{1},b^{\prime}_{1},\ell^{\prime}_{1},\ldots,a^{\prime}_{m},b^{\prime}_{m},\ell^{\prime}_{m})\) in \(\{0,\ldots,r\}^{3m}\) such that the triples \((a_{i},b_{i},\ell_{i})\) and \((a^{\prime}_{i},b^{\prime}_{i},\ell^{\prime}_{i})\) are labels of vertices of \(P^{\mathsf{Kostka}}_{r}\). Then the vertices labeled by \(\{(a_{i},b_{i},\ell_{i})\}_{1\leq i\leq m}\) form the vertex set of a \(d\)-dimensional face of \(P^{\mathsf{Kostka}}_{r}\) if and only if the vertices labeled by \(\{(a^{\prime}_{i},b^{\prime}_{i},\ell^{\prime}_{i})\}_{1\leq i\leq m}\) do._
Proof.: In order to determine if a set \(V\) of vertices in \(\mathrm{P}^{\mathsf{Kostka}}_{r}\) labeled by \(\{(a_{i},b_{i},\ell_{i})\}_{1\leq i\leq m}\) is the vertex set of a \(d\)-face of \(\mathrm{P}^{\mathsf{Kostka}}_{r}\), we test whether any other vertex of \(\mathrm{P}^{\mathsf{Kostka}}_{r}\) lies in the intersection of the hyperplanes containing \(V\). In order to lie in this intersection, the new vertex labeled by \((a,b,\ell)\) must satisfy the conditions of Proposition4.1.
These conditions, and hence the existence of such a tuple, only depend on the order-isomorphism class of the tuple \((a_{1},b_{1},\ell_{1},\ldots,a_{m},b_{m},\ell_{m})\). Moreover, all vertex sets corresponding to a given ordering have convex hulls of the same dimension, since the set
of bounding hyperplanes of \(\mathrm{P}^{\mathsf{Kostka}}_{r}\) containing a vertex is determined entirely by this ordering.
Thus, in order to determine if a set of vertices is the vertex set of some face of \(\mathrm{P}^{\mathsf{Kostka}}_{r}\), it is sufficient to test this for any set of vertices with an order-isomorphic list of labels. That is, a list in \(\{0,\ldots,r\}^{3m}\) being the set of labels of a face of \(\mathrm{P}^{\mathsf{Kostka}}_{r}\) is constant across open cells of the braid arrangement \(\mathcal{B}_{3m}\). We can combine this fact with the well-known Upper Bound Theorem for polytopes, proved by McMullen [9] in 1970 (see [15, Chapter 2, Section 3] for more details). This yields an upper bound on the dimension of open cells in \(\mathcal{B}_{3m}\cap\{0,\ldots,r\}^{3m}\) that correspond to vertex labels of \(d\)-faces of \(\mathrm{P}^{\mathsf{Kostka}}_{r}\). We state the Upper Bound Theorem under the additional assumption that the face dimension is less than half the polytope dimension, which is sufficient for our purposes.
**Theorem 4.4**.: _[Upper Bound Theorem, [9]] For \(0\leq i\leq\left\lfloor\frac{m}{2}\right\rfloor\), the number of \(i\)-faces of an \(m\)-polytope with \(n\) vertices is at most \(\binom{n}{i+1}\). Moreover, this bound is realized by \(\Delta(n,m)\), the \(m\)-dimensional cyclic polytope with \(n\) vertices._
We now prove an upper bound on the number of distinct values of the triples labeling the vertices of a face of fixed dimension in \(\mathrm{P}^{\mathsf{Kostka}}_{r}\). Of course, we already have an upper bound of \(3\prod_{i=1}^{3}\left\lfloor\frac{d+2+i}{3}\right\rfloor\) from Theorem3.6, which bounds the number of vertices. However, we can obtain a tight bound using the upper bound theorem.
**Lemma 4.5**.: _If the vertices of a \(d\)-face of \(\mathrm{P}^{\mathsf{Kostka}}_{r}\) are labeled by \(\{(a_{i},b_{i},\ell_{i})\}_{1\leq i\leq n}\), then there are at most \(3d+3\) distinct values among the parameters \(a_{1},b_{1},\ell_{1},\ldots,a_{n},b_{n},\ell_{n}\)._
Proof.: Let \(t=|\{a_{1},b_{1},\ell_{1},\ldots,a_{n},b_{n},\ell_{n}\}|\). By Lemma4.3, the number of \(d\)-faces of \(\mathrm{P}^{\mathsf{Kostka}}_{r}\) generated by tuples order-isomorphic to \((a_{1},b_{1},\ell_{1},\ldots,a_{n},b_{n},\ell_{n})\) is \(\binom{r+1}{t}=\Theta(r^{t})\).
We now apply the upper bound theorem for polytopes (see Theorem4.4). Since the number of vertices is \(\binom{r}{3}+\binom{r}{2}+\binom{r}{1}\) by Proposition2.8, the number of faces of dimension \(d\) is bounded by the corresponding number of \(d\)-faces of the cyclic \((2r-2)\)-polytope with \(\binom{r}{3}+\binom{r}{2}+\binom{r}{1}\) vertices. This quantity is asymptotically \(\Theta(r^{3d+3})\). Therefore, we must have \(t\leq 3d+3\), as desired.
It follows that in order to characterize the \(d\)-dimensional faces of \(\mathrm{P}^{\mathsf{Kostka}}_{r}\) for arbitrary \(r\), one must merely determine the \(d\)-faces of \(\mathrm{P}^{\mathsf{Kostka}}_{3d+3}\). The faces of \(\mathrm{P}^{\mathsf{Kostka}}_{r}\) are then those whose label sets are order-isomorphic to a label set of a \(d\)-face of \(\mathrm{P}^{\mathsf{Kostka}}_{3d+3}\).
**Theorem 4.6**.: _Let \(u\) and \(v\) be vertices of \(\mathrm{P}^{\mathsf{Kostka}}_{r}\) labeled \((a,b,\ell)\) and \((a^{\prime},b^{\prime},\ell^{\prime})\), where \(a-b\leq a^{\prime}-b^{\prime}\). Then \(\{u,v\}\) is a face of \(\mathrm{P}^{\mathsf{Kostka}}_{r}\) if and only if_
1. \(a=b\) _and at least one of the following holds:_ (i) \(a^{\prime}=b^{\prime}\)_,_ (ii) \(a=b^{\prime}\)_,_ (iii) \(a\geq a^{\prime}\)_, or_ (iv) \(\ell^{\prime}\geq a\)_._
2. \(a\neq b\) _and at least one of the following holds:_ (i) _two of the three equalities_ \(a=a^{\prime}\)_,_ \(b=b^{\prime}\)_, and_ \(\ell=\ell^{\prime}\) _hold,_ (ii) \(\ell\geq a^{\prime}\)_, or_ (iii) \(\ell^{\prime}\geq a\)_._
Proof.: By Lemma4.3 and Lemma4.5, it is enough to check that this is the case for the vertices of \(\mathrm{P}_{6}^{\mathsf{Kostka}}\). This can be readily completed with the aid of a computer.
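The computer check referred to here can be sketched directly from Proposition 4.1: the minimal face containing a pair of vertices is an edge exactly when its vertex set is that pair, since a face with two vertices is one-dimensional. The Python sketch below (illustrative names; interval coverage is tested at half-integer points, and condition (4) holds automatically for genuine ray labels) performs this check, and its counts can be compared against the \(d=1\) row of Table 2.

```python
from itertools import combinations

def ray_labels(r):
    labels = {(a, a, a) for a in range(1, r + 1)}
    labels |= {(a, b, l) for a in range(1, r + 1) for b in range(1, a) for l in range(b)}
    return labels

def closure(S, r):
    """Vertex set of the minimal face of P_r^Kostka containing the vertices labelled
    by S, following conditions (1)-(4) of Proposition 4.1."""
    bs = {b for (_, b, _) in S}
    ends = {x for (a, _, l) in S for x in (a, l)}
    def covered(t2):  # is the rational point t2/2 contained in some open interval (l_i, a_i)?
        return any(2 * l < t2 < 2 * a for (a, _, l) in S)
    return {(a, b, l) for (a, b, l) in ray_labels(r)
            if b in bs and a in ends and l in ends
            and all(covered(t2) for t2 in range(2 * l + 1, 2 * a))}

def is_edge(u, v, r):
    # a face with exactly two vertices is one-dimensional
    return u != v and closure({u, v}, r) == {u, v}

for r in range(2, 7):
    count = sum(1 for u, v in combinations(sorted(ray_labels(r)), 2) if is_edge(u, v, r))
    print(r, count)   # compare with the d = 1 row of Table 2: 3, 16, 52, 132, 288
```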
By similarly examining the \(2\)-faces of \(\mathrm{P}_{9}^{\mathsf{Kostka}}\), one could determine a characterization of the \(2\)-faces of all Kostka polytopes. While the conditions seem rather complex, we shall see in the next section that these methods yield nice enumerative results.
## 5. Enumeration of Faces of a Fixed Dimension
In this section, we derive formulas for the number of faces of a fixed dimension \(d\) in \(\mathrm{P}_{r}^{\mathsf{Kostka}}\) for \(d=1,2,3\). We then asymptotically determine the number of \(d\)-faces of \(\mathrm{P}_{r}^{\mathsf{Kostka}}\) for arbitrary \(d\). As mentioned in Theorem4.4, it is well known that the number of \(d\)-faces of a \(k\)-polytope with \(n\) vertices is maximized by the cyclic polytope \(\Delta(n,k)\) for sufficiently large \(k\). We show that, as \(r\) increases, the number of \(d\)-faces of \(\mathrm{P}_{r}^{\mathsf{Kostka}}\) grows asymptotically at the same rate as the number of \(d\)-faces of \(\Delta\left(\binom{r}{3}+\binom{r}{2}+\binom{r}{1},2r-2\right)\) up to a constant factor depending on \(d\), and we furthermore determine this constant for all \(d\) (see Corollary5.7).
**Definition 5.1**.: Let \(f_{d}(r)\) denote the number of \(d\)-dimensional faces of \(\mathrm{P}_{r}^{\mathsf{Kostka}}\).
In the previous section, we showed that whether a set of \(m\) vertices of \(\mathrm{P}_{r}^{\mathsf{Kostka}}\) forms the vertex set of a face depends only on the order-isomorphism class of the vertex labels (see Lemma4.3). In other words, it depends only on the cell of the braid arrangement \(\mathcal{B}_{3m}\) that the list of \(m\) vertex label triples lies in. By examining the integer points in each cell, we obtain the following lemma.
**Lemma 5.2**.: _The function \(f_{d}(r)\) is a polynomial in \(r\) of degree at most \(3d+3\) and has a positive integer expansion in terms of the basis \(\left\{\binom{r}{k}\right\}_{0\leq k\leq 3d+3}\)._
Proof.: The number of integer points in any collection of cells of \(\mathcal{B}_{3m}\cap\{0,\ldots,r\}^{3m}\) has a positive integer expansion in the basis \(\left\{\binom{r}{k}\right\}_{1\leq k\leq 3m}\). Thus, it follows directly from Lemma4.3 that \(f_{d}(r)\) is a polynomial with a positive integer expansion in terms of the basis \(\left\{\binom{r}{k}\right\}_{k\geq 0}\). The claim about the degree then follows from Lemma4.5.
**Theorem 5.3**.: _Fix \(d\geq 0\). Setting \(d_{\min}=\left\lfloor\frac{d+3}{2}\right\rfloor\), we have_
\[f_{d}(r)=\sum_{k=d_{\min}}^{3d+3}\alpha_{k}\binom{r}{k}\]
_where \(\alpha_{k}=f_{d}(k)-\left(\sum_{j=d_{\min}}^{k-1}\binom{k}{j}\alpha_{j}\right)\) for \(k>d_{\min}\) and_
\[\alpha_{d_{\min}}=f_{d}(d_{\min})=\begin{cases}\frac{3d+5}{2}&\text{if $d$ odd and $d>1$}\,,\\ 1&\text{if $d$ even,}\\ 3&\text{if $d=1$}\,.\end{cases}\]
Proof.: If \(d\) is odd, then the value of \(\alpha_{d_{\min}}\) is the number of facets of \(\mathrm{P}_{d_{\min}}^{\mathsf{Kostka}}\), which we calculate in Remark 2.5. If \(d\) is even, then \(\alpha_{d_{\min}}\) is the number of top-dimensional faces, which is \(1\) since \(\mathrm{P}_{r}^{\mathsf{Kostka}}\) is a polytope. The recursive formula for the other values of \(\alpha_{k}\) follows from Lemma 4.3, Observation 2.3, and Lemma 5.2 by evaluating \(f_{d}(k)\) as a sum of terms of the form \(\alpha_{j}\binom{k}{j}\).
Thus, if one can compute the values \(f_{d}(0),\ldots,f_{d}(3d+3)\), then Theorem 5.3 implies that we can determine the entire function \(f_{d}(r)\). Using SageMath, we were able to compute some initial terms of \(f_{d}(r)\) (see Table 2). The number of vertices, \(f_{0}(r)\), is also shown in Table 2 for \(r\leq 13\), with the general formula given in [4].
These computations allow us to derive formulas for the number of \(d\)-faces of \(\mathrm{P}_{r}^{\mathsf{Kostka}}\) for \(d=1,2,3\), given in Theorem 1.2.
Proof of Theorem 1.2.: Each formula can be obtained by applying Theorem 5.3 to the values in a fixed row of Table 2.
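For instance, the \(d=1\) case of Theorem 5.3 can be carried out directly from the first few entries of the corresponding row of Table 2; a short Python sketch recovers the coefficients \(\alpha_{k}\) and reproduces the remaining entries of the row.

```python
from math import comb

# the d = 1 row of Table 2: f_1(r) for r = 1, ..., 13
f1 = dict(zip(range(1, 14),
              [0, 3, 16, 52, 132, 288, 567, 1036, 1788, 2949, 4686, 7216, 10816]))

d, d_min = 1, 2                      # d_min = floor((d + 3) / 2)
alpha = {d_min: f1[d_min]}           # alpha_2 = f_1(2) = 3
for k in range(d_min + 1, 3 * d + 4):
    alpha[k] = f1[k] - sum(comb(k, j) * alpha[j] for j in range(d_min, k))

print(alpha)                         # {2: 3, 3: 7, 4: 6, 5: 2, 6: 1}
assert all(sum(a * comb(r, k) for k, a in alpha.items()) == f1[r] for r in range(1, 14))
```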
We now shift our focus to determining the asymptotic behavior of the function \(f_{d}(r)\). To achieve this, we determine the degree and leading coefficient of the polynomial \(f_{d}(r)\)
\begin{table}
\begin{tabular}{c|c c c c c c c c c c c c c}
 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 \\ \hline
0 & 1 & 3 & 7 & 14 & 25 & 41 & 63 & 92 & 129 & 175 & 231 & 298 & 377 \\
1 & 0 & 3 & 16 & 52 & 132 & 288 & 567 & 1036 & 1788 & 2949 & 4686 & 7216 & 10816 \\
2 & 0 & 1 & 16 & 89 & 328 & 961 & 2427 & 5517 & 11584 & 22846 & 42812 & 76868 & 133068 \\
3 & 0 & 0 & 7 & 81 & 466 & 1898 & 6253 & 17803 & 45502 & 106946 & 234964 & 488229 & 967863 \\
\end{tabular}
\end{table}
Table 2. The values of \(f_{d}(r)\), the number of \(d\)-faces in \(\mathrm{P}_{r}^{\mathsf{Kostka}}\), are shown for \(0\leq d\leq 3\) and \(1\leq r\leq 13\). Here, \(d\) is given by the row label and \(r\) is given by the column label.
**Lemma 5.4**.: _Fix positive integers \(d,r\) such that \(r\geq 3d+3\). If a set of \(d+1\) vertices in \(\mathrm{P}_{r}^{\mathsf{Kostka}}\) with labels \(\{(a_{i},b_{i},\ell_{i})\}_{1\leq i\leq d+1}\) satisfies that the intervals \([\ell_{i},a_{i}]\) are all disjoint, then it is the vertex set of a \(d\)-face of \(\mathrm{P}_{r}^{\mathsf{Kostka}}\)._
Proof.: We prove this by induction on \(d\), with the base case \(d=1\) following from Theorem4.6. We first show that such a set of vertices is the vertex set of a face of \(\mathrm{P}_{r}^{\mathsf{Kostka}}\), and then determine its dimension. We prove the former by showing there is no other vertex in the minimal face \(F\) containing the vertices labeled by \(\{(a_{i},b_{i},\ell_{i})\}_{1\leq i\leq d+1}\) via the conditions of Proposition4.1. Let \((a,b,\ell)\) be the label of a vertex in \(F\). By Condition (2), the parameters \(a,\ell\) must be chosen from within intervals \([\ell_{i},a_{i}]\). If \(a\) and \(\ell\) are chosen from different intervals, then in order to satisfy Condition (3), we must have \(a=\ell+1\). However, then \(b\) cannot be chosen to satisfy Condition (4). On the other hand, if \(a\) and \(\ell\) are chosen within the same interval \([\ell_{j},a_{j}]\), then Condition (2) implies that \(a=a_{j}\) and \(\ell=\ell_{j}\). But then Condition (1) and the required ordering of \(\ell\), \(b\), and \(a\) imply that we also have \(b=b_{j}\), so \((a,b,\ell)\) was in the original list of vertex labels. Hence, the minimal face containing the vertices labeled by \(\{(a_{i},b_{i},\ell_{i})\}_{1\leq i\leq d+1}\) contains no other vertices, so these form the vertex set of \(F\).
The fact that the dimension of \(F\) is \(d\) follows from the induction. In particular, we know that the vertices with labels \(\{(a_{i},b_{i},\ell_{i})\}_{1\leq i\leq d}\) form the vertex set of a face of dimension \(d-1\). Since we have added one additional vertex and formed another face of \(\mathrm{P}_{r}^{\mathsf{Kostka}}\), the dimension of \(F\) must be \(d\).
**Lemma 5.5**.: _Fix positive integers \(d,r\) such that \(r\geq 3d+3\), and suppose \(L\) is the set of vertex labels of a \(d\)-face \(F\) of \(\mathrm{P}_{r}^{\mathsf{Kostka}}\). Then either_
1. \(F\) _is a simplex whose vertex labels satisfy the conditions of_ Lemma5.4_, or_
2. _there are at most_ \(3d+2\) _distinct values among the vertex label entries._
Proof.: We proceed by induction on \(d\), with the base case \(d=1\) following from Theorem4.6.
Let \(F\) be a \(d\)-face of \(\mathrm{P}_{r}^{\mathsf{Kostka}}\), and let \(L=\{(a_{i},b_{i},\ell_{i})\}_{1\leq i\leq n}\) be the set of labels of the vertices of \(F\). Let \(t\) denote the number of distinct values among the label entries \(a_{i}\), \(b_{i}\), and \(\ell_{i}\). Fix a bounding hyperplane of \(F\) of type \(H_{i}\) or \(\widehat{H}_{i}\) (see Observation 2.4 for hyperplane descriptions), i.e.,
\[H\in\left\{H_{i}:1\leq i\leq r,\;\dim(H_{i}\cap F)=d-1\right\}\cup\left\{ \widehat{H}_{i}:1\leq i\leq r,\;\dim(\widehat{H}_{i}\cap F)=d-1\right\},\]
such that the number of vertices of \(F\) contained in \(H\) is minimal.
Suppose \(H=H_{j}\) (the case for \(\widehat{H}_{j}\) proceeds analogously). By Proposition3.5, the vertices of \(F\) that are contained in \(H\) are precisely those whose label \((a_{i},b_{i},\ell_{i})\) does not have \(b_{i}=j\). We now consider the number of distinct values among the label entries of the vertices in \(F\cap H\). If a label entry \(m\) appears among the vertices of \(F\) but not
\(F\cap H\), then all vertices whose label contains the entry \(m\) must also contain the entry \(j\). Moreover, by the minimality condition, \(F\cap\widehat{H}_{m}\) has at least as many vertices as \(F\cap H\), so \(F\cap\widehat{H}_{m}=F\cap H_{j}\). Since each label has three entries, it is either the case that
1. there are at most two label entries that appear among the vertices of \(F\) but not \(F\cap H\), or
2. there are exactly three label entries that appear among the vertices \(F\) but not \(F\cap H\), and these entries appear in the label of a unique vertex of \(F\).
In Case (a), the face \(F\cap H\) is then a \((d-1)\)-dimensional face of \(\mathrm{P}_{r}^{\mathsf{Kostka}}\) with at least \(t-2\) distinct entries among the labels of its vertices. By the inductive hypothesis, this implies \(t\leq 3d+2\).
In Case (b), the face \(F\cap H\) is a \((d-1)\)-dimensional face of \(\mathrm{P}_{r}^{\mathsf{Kostka}}\) with \(t-3\) distinct entries among the labels of its vertices and one fewer vertex than \(F\). Thus, by the inductive hypothesis, we have \(t\leq 3d+3\).
It remains to show that, if \(t=3d+3\) in Case (b), then \(F\) is a simplex satisfying the conditions of Lemma5.4 (up to reordering of the vertices). In this case, the inductive hypothesis implies that \(F\cap H\) is a simplex whose labels satisfy the conditions of Lemma5.4. So it is enough to show that the label \((a,b,\ell)\) of the unique vertex of \(F\) that is not in \(F\cap H\) satisfies \(a<\ell^{\prime}\) or \(a^{\prime}<\ell\) for any label \((a^{\prime},b^{\prime},\ell^{\prime})\) of a vertex of \(F\cap H\). This must hold because otherwise \((\ell,b^{\prime},a^{\prime})\) or \((\ell^{\prime},b,a)\) is the label of an additional vertex in \(F\), contradicting that there is only one vertex of \(F\) not contained in \(H\). Therefore, \(F\) is indeed a simplex whose labels satisfy the conditions of Lemma5.4.
**Theorem 5.6**.: _The function \(f_{d}(r)\) is a polynomial of degree \(3d+3\) with leading coefficient \(\frac{1}{(3d+3)!}\)._
Proof.: By Lemma5.2, \(f_{d}(r)=\sum_{k=1}^{3d+3}\alpha_{k}\binom{r}{k}\) for nonnegative integers \(\alpha_{k}\). By Lemma5.4, the coefficient \(\alpha_{3d+3}\) is at least \(1\). By Lemma5.5, the coefficient \(\alpha_{3d+3}\) is at most \(1\), and hence we can conclude \(\alpha_{3d+3}=1\). Expanding this out as a polynomial in \(r\), we see that the top degree coefficient is \(\alpha_{3d+3}/(3d+3)!=1/(3d+3)!\).
**Corollary 5.7**.: _For \(r\geq 1\), let \(n_{r}=\binom{r}{3}+\binom{r}{2}+\binom{r}{1}\). We have_
\[\lim_{r\to\infty}\frac{f_{d}(r)}{f_{d}\left(\Delta\left(n_{r},2r-2\right) \right)}=\frac{6^{d+1}(d+1)!}{(3d+3)!}\,,\]
_where \(f_{d}\left(\Delta\left(n_{r},2r-2\right)\right)\) is the number of \(d\)-faces of the cyclic polytope \(\Delta\left(n_{r},2r-2\right)\)._
Proof.: By Theorem4.4, the leading coefficient of the polynomial \(f_{d}\left(\Delta\left(n_{r},2r-2\right)\right)\) is \(\frac{1}{6^{d+1}(d+1)!}\). By Theorem5.6, the leading coefficient of the polynomial \(f_{d}(r)\) is \(\frac{1}{(3d+3)!}\). Since both polynomials have degree \(3d+3\), we can directly compute the limit of their quotient.
## 6. Initial Partition Entries of Hilbert Basis Elements
Lastly, we study some families of Hilbert basis elements of \(\mathsf{Kostka}_{r}^{\mathbb{Z}}\) in the context of their relation to the face structure. This work builds upon the "Width Bound" proved by Gao, Kiers, Orelowitz, and Yong. See, for example, [4, Table 1] for the Hilbert basis elements of \(\mathsf{Kostka}_{4}^{\mathbb{Z}}\).
**Theorem 6.1** ([4, Theorem 1.4], Width Bound).: _Suppose \((\lambda,\mu)\) is a Hilbert basis element of \(\mathsf{Kostka}_{r}^{\mathbb{Z}}\). Then \(\lambda_{1}\leq r\). Moreover, if \(\lambda_{1}=r\) then \(\lambda\) and \(\mu\) are both rectangles._
We now further study the initial entries of Hilbert basis elements of \(\mathsf{Kostka}_{r}^{\mathbb{Z}}\), recalling the following definition.
**Definition 6.2**.: We say that an integer pair \((\lambda_{1},\mu_{1})\) is _\(r\)-initial_ if there is an element \((\lambda,\mu)\) in the Hilbert basis of \(\mathsf{Kostka}_{r}\) such that \(\lambda\) has first element \(\lambda_{1}\) and \(\mu\) has first element \(\mu_{1}\).
By the dominating condition for \(\lambda\) and \(\mu\), an \(r\)-initial pair must satisfy \(\lambda_{1}\geq\mu_{1}\). Moreover, note that if \((\lambda_{1},\mu_{1})\) is \(r\)-initial, then it is also \(r^{\prime}\)-initial for any \(r^{\prime}>r\). This is because any \((\lambda,\mu)\in\mathsf{Kostka}_{r}\) can be embedded in \(\mathsf{Kostka}_{r^{\prime}}\) by appending zeroes to \(\lambda\) and \(\mu\) (see Observation 2.3), and this map preserves the Hilbert basis elements.
_Remark 6.3_.: It follows immediately from Theorem 6.1 that
* if \((\lambda_{1},\mu_{1})\) is \(r\)-initial then \(r\geq\lambda_{1}\), and
* a pair \((r,\mu_{1})\) is \(r\)-initial if and only if \(r\) and \(\mu_{1}\) are coprime.
Thus, it remains to determine when \((\lambda_{1},\mu_{1})\) is \(r\)-initial for \(r>\lambda_{1}\). Proposition 2.8 implies that the pair \((\lambda_{1},\lambda_{1})\) is \(r\)-initial for any \(\lambda_{1}<r\), as realized by the extremal rays. It may seem tempting to expect that any pair \((\lambda_{1},\mu_{1})\) is \((\lambda_{1}+1)\)-initial, but there is a counterexample when \(\lambda_{1}=14\). This is currently the only counterexample known to the author.
**Example 6.4**.: We have checked computationally that \((14,6)\) is not \(15\)-initial. Moreover, \(r=15\) is the smallest value such that there is a pair \((r-1,\mu_{1})\) with \(\mu_{1}<r-1\) that is not \(r\)-initial.
The main result of this section is Theorem 1.3, which states that a pair \((\lambda_{1},\mu_{1})\) is \((\lambda_{1}+1)\)-initial if \(\lambda_{1}\geq\mu_{1}\) and any of the following conditions holds
* \(\lambda_{1}\) and \(\mu_{1}\) are coprime, or
* \(\lambda_{1}+1\) and \(\mu_{1}\) are coprime, or
* \(\lambda_{1}+1\) and \(\mu_{1}+1\) are coprime with \(2\mu_{1}\geq\lambda_{1}\).
**Corollary 6.5**.: _The probability that a pair of positive integers \(\mu_{1}<\lambda_{1}\) satisfies at least one of the conditions of Theorem 1.3 is_
\[\frac{5}{2}\prod_{p\text{ prime}}\left(1-\frac{1}{p^{2}}\right)-2\prod_{p\text{ prime}}\left(1-\frac{2}{p^{2}}\right)+\frac{1}{2}\prod_{p\text{ prime}}\left(1-\frac{3}{p^{2}}\right)>0.937293\,.\]
Proof.: The details of this computation are given in the appendix.
**Example 6.6**.: The pairs \((\lambda_{1},\mu_{1})\) with \(\mu_{1}<\lambda_{1}\leq 30\) for which the conditions of Theorem 1.3 do not hold are \((14,6)\), \((15,6)\), \((20,6)\), \((20,14)\), \((21,6)\), \((24,10)\), \((25,10)\), \((26,6)\), \((26,12)\), \((27,6)\), \((27,12)\), and \((27,21)\).
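The list in Example 6.6 can be reproduced directly from the three conditions; the following Python sketch enumerates the failing pairs.

```python
from math import gcd

def satisfies_conditions(lam1, mu1):
    """The three sufficient conditions of Theorem 1.3, as restated above."""
    return (gcd(lam1, mu1) == 1
            or gcd(lam1 + 1, mu1) == 1
            or (gcd(lam1 + 1, mu1 + 1) == 1 and 2 * mu1 >= lam1))

failures = [(lam1, mu1) for lam1 in range(2, 31) for mu1 in range(1, lam1)
            if not satisfies_conditions(lam1, mu1)]
print(failures)
# [(14, 6), (15, 6), (20, 6), (20, 14), (21, 6), (24, 10), (25, 10),
#  (26, 6), (26, 12), (27, 6), (27, 12), (27, 21)]
```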
**Theorem 6.7**.: _Fix \(\lambda_{1}>\mu_{1}\). Let_
\[r(\lambda_{1},\mu_{1})=\min\{z\in\mathbb{N}:z\geq\lambda_{1},\ \gcd(z,\mu_{1})=1\}\,.\]
_Then \((\lambda_{1},\mu_{1})\) is \(r(\lambda_{1},\mu_{1})\)-initial. In particular, \((\lambda_{1},\mu_{1})\) is \((\lambda_{1}+\mu_{1}-1)\)-initial._
Proof.: Let \(r=r(\lambda_{1},\mu_{1})\). Since some entry among the \(\mu_{1}\) integers \(\lambda_{1},\ldots,\lambda_{1}+\mu_{1}-1\) must be equivalent to \(1\) modulo \(\mu_{1}\), we have \(r\leq\lambda_{1}+\mu_{1}-1\).
Let \(\lambda\) and \(\mu\) be the partitions
\[\lambda=(\underbrace{\lambda_{1},\ldots,\lambda_{1}}_{\mu_{1}})\quad\text{ and }\quad\mu=(\underbrace{\mu_{1},\ldots,\mu_{1}}_{r-\mu_{1}},\underbrace{\mu_{1} -(r-\lambda_{1}),\ldots,\mu_{1}-(r-\lambda_{1})}_{\mu_{1}})\,.\]
If \((\lambda,\mu)\) were reducible, then, by Remark 2.13, we could choose a proper subset of the columns of \(\lambda\) with the same size as a proper subset of the columns of \(\mu\). The \(\mu_{1}\) columns of \(\mu\) are all equivalent to \(r\) modulo \(\mu_{1}\), and \(\gcd(r,\mu_{1})=1\), and hence there is no way to choose a proper subset of the columns of \(\mu\) such that their size is divisible by \(\mu_{1}\). However, any subset of the columns of \(\lambda\) is divisible by \(\mu_{1}\). Therefore, \((\lambda,\mu)\) is irreducible in \(\mathsf{Kostka}_{r}\) and hence is in the Hilbert basis.
**Example 6.8**.: Let \(\lambda_{1}=20\) and \(\mu_{1}=15\). Since \(\gcd(20,15)=5\), \(\gcd(21,15)=3\), and \(\gcd(22,15)=1\), we have \(r(\lambda_{1},\mu_{1})=22\).
The construction in the proof of Theorem 6.7 yields the Hilbert basis element \((\lambda,\mu)\in\mathsf{Kostka}_{22}\), where
\[\lambda=(\underbrace{20,\ldots,20}_{15})\text{ and }\mu=(\underbrace{15, \ldots,15}_{7},\underbrace{13,\ldots,13}_{15})\,.\]
Thus the pair \((20,15)\) is \(22\)-initial.
The second sufficient condition of Theorem 1.3 follows immediately from Theorem 6.7, as in this case we have \(r(\lambda_{1},\mu_{1})\leq\lambda_{1}+1\). We can now construct another family of examples to account for the last case of Theorem 1.3.
**Theorem 6.9**.: _Suppose \(\gcd(\lambda_{1}+1,\mu_{1}+1)=1\) and \(2\mu_{1}>\lambda_{1}+1\). Then the pair \((\lambda_{1},\mu_{1})\) is \((\lambda_{1}+1)\)-initial._
Proof.: Let
\[\lambda=(\underbrace{\lambda_{1},\ldots,\lambda_{1}}_{2\mu_{1}-\lambda_{1}+1}, \underbrace{\lambda_{1}-1,\ldots,\lambda_{1}-1}_{\lambda_{1}-\mu_{1}})\]
and
\[\mu=(\underbrace{\mu_{1},\ldots,\mu_{1}}_{\lambda_{1}+1})\,.\]
It is straightforward to check that \(\lambda\) dominates \(\mu\), so \((\lambda,\mu)\) is in \(\mathsf{Kostka}_{\lambda_{1}+1}\). Observe that all but one of the columns of \(\lambda\) have size \(\mu_{1}+1\), while the last column has size \(2\mu_{1}-\lambda_{1}+1\). The columns of \(\mu\) all have size \(\lambda_{1}+1\).
By Remark 2.13, if \((\lambda,\mu)\) is reducible, then we can choose a proper subset of the columns of \(\lambda\) with the same size as a proper subset of the columns of \(\mu\). If such a choice exists, note that the complement of the chosen columns also satisfies this property. Thus, we can choose a subset of the columns of \(\lambda\) excluding the smallest column of size equal to some subset of columns of \(\mu\). Note that the total size of any collection of columns of \(\mu\) is divisible by \(\lambda_{1}+1\). Since we assume \(\mu_{1}+1\) is coprime to \(\lambda_{1}+1\), a collection of at most \(\lambda_{1}-1\) columns of size \(\mu_{1}+1\) will not have total size divisible by \(\lambda_{1}+1\). Therefore, no such set of columns exists. We can conclude \((\lambda,\mu)\) is irreducible and hence is in the Hilbert basis of \(\mathsf{Kostka}_{\lambda_{1}+1}\).
**Example 6.10**.: As in Example 6.8, we consider \(\lambda_{1}=20\) and \(\mu_{1}=15\). Since \(21\) and \(16\) are coprime, Theorem 6.9 applies to the pair \((\lambda_{1},\mu_{1})\). The construction in the proof yields the Hilbert basis element \((\lambda,\mu)\in\mathsf{Kostka}_{21}\), where
\[\lambda=(\underbrace{20,\ldots,20}_{11},\underbrace{19,\ldots,19}_{5})\text{ and }\mu=(\underbrace{15,\ldots,15}_{21})\,.\]
Thus the pair \((20,15)\) is \(21\)-initial, which is stronger than the statement yielded in Example 6.8.
Lastly, we show that the Hilbert basis elements we constructed lie on the \(2\)-skeleton of the Kostka cone by examining elements consisting of few distinct entries in \(\mathsf{Kostka}_{r}\).
**Lemma 6.11**.: _If \(\lambda,\mu\in\mathsf{Par}_{r}(n)\) are partitions satisfying \(\lambda\geq_{\mathsf{Dom}}\mu\) and that one is rectangular while the other has exactly two part sizes, then the point \((\lambda,\mu)\) lies on a \(2\)-dimensional face of \(\mathsf{Kostka}_{r}\)._
Proof.: By Observation 2.3, we can assume that the length of \(\mu\) is \(r\). Suppose
\[\lambda=(\underbrace{x,\ldots,x}_{s})\text{ and }\mu=(\underbrace{y,\ldots,y}_{t },\underbrace{z,\ldots,z}_{r-t})\,.\]
Hence we have
\[(\lambda,\mu)\in\left(\bigcap_{\begin{subarray}{c}1\leq i\leq r\\ i\neq s\end{subarray}}H_{i}\right)\cap\left(\bigcap_{\begin{subarray}{c}1\leq j <r\\ j\neq t\end{subarray}}\widehat{H}_{j}\right)\,.\]
Thus the point \((\lambda,\mu)\) lies in the \(2\)-dimensional intersection of these \(2r-3\) hyperplanes with the \((2r-1)\)-dimensional cone \(\mathsf{Kostka}_{r}\), and hence is a \(2\)-face of \(\mathsf{Kostka}_{r}\).
An analogous argument shows that if \(\lambda\) is rectangular and \(\mu\) has \(k\) part sizes, then \((\lambda,\mu)\) lies on a \(k\)-dimensional face of \(\mathsf{Kostka}_{r}\).
Since the Hilbert basis elements we constructed satisfy the hypotheses of Lemma6.11, we can conclude the following.
**Corollary 6.12**.: _The \((\lambda,\mu)\) constructed in the proofs of Theorem6.7 and Theorem6.9 lie on a two-dimensional face of their respective Kostka cones._
We can now combine these results to prove the main result.
Proof of Theorem1.3.: The first sufficient condition follows from the Width Bound of Gao-Kiers-Orelowitz-Yong (Theorem6.1) and the fact that if a pair is \(r\)-initial, then it is \(r^{\prime}\)-initial for any \(r^{\prime}>r\). The second and third sufficient conditions follow from Theorem6.7 and Theorem6.9, respectively. The final claim is a result of Corollary6.12 and the fact that the Hilbert basis elements in Theorem6.1 are primitive vectors of extremal rays.
## 7. Further Directions
We start by discussing a curious phenomenon in the \(h\)-vector of the \(r\)-Kostka polytope, namely, that half of the entries appear to take the value \(1\). The _\(h\)-vector_\((h_{0},h_{1},\ldots,h_{d})\) of a \(d\)-polytope is defined from the _\(f\)-vector_\((f_{-1},f_{0},\ldots,f_{d-1})\), where \(f_{k}\) is the number of \(k\)-faces, by
\[h_{k}=\sum_{i=0}^{k}(-1)^{k-i}\binom{d-i}{k-i}f_{i-1}\,.\]
While \(h\)-vectors are usually studied in the case that the polytope is simple (or, dually, simplicial), recent work of Gaetz has shown that they can still have nice positivity properties in certain non-simple cases [3]. Though \(\mathrm{P}_{r}^{\mathsf{Kostka}}\) is not simple and its \(h\)-vector can have negative entries, half of its \(h\)-vector still seems well-behaved.
**Conjecture 7.1**.: _Let \((h_{0},h_{1},\ldots,h_{2r-2})\) be the \(h\)-vector of \(P_{r}^{\mathsf{Kostka}}\). Then \(h_{k}=1\) whenever \(r-1\leq k\leq 2r-2\)._
We have verified that the conjecture holds for all \(r\leq 7\). The only other instance we know of this phenomenon was observed by Charles Wang [19] in studying the unordered partition polytope, which is the convex hull of the points \(x\in\mathbb{Z}_{\geq 0}^{n}\) such that \((1,2,\ldots,n)\cdot x=n\). The facets of these polytopes were previously studied by Shlyk [14]. It turns out that each unordered partition polytope is combinatorially equivalent to a face of some Kostka polytope. It would be interesting to have an explanation for this phenomenon in either family of polytopes. See [15, Chapter 2] or [20, Chapter 8] for more details on \(f\)- and \(h\)-vectors.
**Example 7.2**.: The \(h\)-vectors of \(\mathrm{P}_{r}^{\mathsf{Kostka}}\) for \(2\leq r\leq 7\) are given by \((1,1,1)\), \((1,3,1,1,1)\), \((1,8,-3,1,1,1,1)\), \((1,17,-15,5,1,1,1,1,1)\), \((1,31,-36,13,1,1,1,1,1,1,1)\), and
\((1,51,-60,2,25,-7,1,1,1,1,1,1,1)\).
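These values can be recomputed from the face numbers. For example, a short Python sketch applying the defining formula to the \(f\)-vector \((1,7,16,16,7)\) of the \(4\)-dimensional polytope \(\mathrm{P}_{3}^{\mathsf{Kostka}}\) (read off from Table 2) returns the \(h\)-vector listed above for \(r=3\).

```python
from math import comb

def h_vector(f, d):
    """h-vector of a d-polytope from its f-vector f = (f_{-1}, f_0, ..., f_{d-1})."""
    return [sum((-1) ** (k - i) * comb(d - i, k - i) * f[i] for i in range(k + 1))
            for k in range(d + 1)]

print(h_vector([1, 7, 16, 16, 7], 4))   # [1, 3, 1, 1, 1], the r = 3 entry above
```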
Another avenue for potential progress is furthering the understanding of the face numbers of the Kostka polytope. As we determined in Section4 and Section5, the function \(f_{d}(r)\) counting the number of \(d\)-faces of the \(r\)-Kostka polytope is a polynomial of degree \(3d+3\). A more extensive computer calculation would allow one to determine this function for \(d>3\) via Theorem5.3. We have also shown that \(f_{d}(r)\) has a positive integer expansion in the basis \(\left\{\binom{r}{i}\right\}_{i\geq 1}\). It may be possible to explicitly express some integer coefficients in this expansion for arbitrary \(d\) using an analogue of our methods for calculating the top degree coefficient.
## Appendix: Initial Pair Probability Computation
In this appendix, we calculate the probability that two integers \(\mu_{1}<\lambda_{1}\) satisfy at least one of the conditions of Theorem1.3. Fix \(N\in\mathbb{Z}_{>0}\), \(B\in\mathbb{Z}_{>0}\cup\{\infty\}\), and let \(I\) be a subset of \(\{1,2,3\}\). We then define \(d(N,B,I)\) to be the proportion of integer pairs \((m,n)\) with \(1\leq m<n\leq N\) satisfying the restriction that \(E_{i}\) holds for all \(i\in I\), where the conditions are:
\(E_{1}\): \(m\) and \(n\) have no common prime factors less than \(B\),
\(E_{2}\): \(m\) and \(n+1\) have no common prime factors less than \(B\),
\(E_{3}\): \(m+1\) and \(n+1\) have no common prime factors less than \(B\), and \(2m\geq n\).
Note that the case when \(B=\infty\) is when the respective integers are coprime. By inclusion-exclusion, the desired probability is given by
\[\lim_{N\to\infty}\sum_{\text{nonempty }I\subseteq\{1,2,3\}}(-1)^{|I|+1}d(N, \infty,I)\,.\]
It remains to calculate \(\lim_{N\to\infty}d(N,\infty,I)\) for each nonempty \(I\subseteq\{1,2,3\}\). First, assume \(3\notin I\). The Chinese Remainder Theorem implies that, for fixed \(B\in\mathbb{Z}_{>0}\), we have
\[\lim_{N\to\infty}d(N,B,I)=\prod_{\text{prime }p\leq B}\left(1-\frac{|I|}{p^{2}}\right)\,.\]
We now only need to account for the probability that our pairs of integers of interest are divisible by a large prime \(p>B\). By summing the probabilities for all such \(p\), we see that the error \(d(N,\infty,I)-d(N,B,I)\) vanishes as \(B\) goes to infinity, since
\[\lim_{N\to\infty}d(N,B,I)-d(N,\infty,I)\leq\sum_{p>B}\frac{|I|}{p^{2}}\leq\int_ {B}^{\infty}\frac{|I|}{x^{2}}dx=\frac{|I|}{B}\,.\]
We can then conclude that
\[\lim_{N\to\infty}d(N,\infty,I)=\lim_{B\to\infty}\lim_{N\to\infty}d(N,B,I)= \prod_{\text{prime }p}\left(1-\frac{|I|}{p^{2}}\right)\]
For \(k=1,2,3\), let \(\alpha_{k}=\prod_{\text{prime }p}\left(1-\frac{k}{p^{2}}\right)\). A similar computation can be carried out when \(3\in I\), i.e, when we require \(2m\geq n\) in addition to the divisibility properties, and the resulting probability is then \(\alpha_{|I|}/2\). The quantities \(\alpha_{k}\) for \(k=1,2,3\) have decimal expansions described by the OEIS sequences A059956, A065474, and A206256, respectively [11]. We can then conclude that the desired probability is
\[\frac{5}{2}\alpha_{1}-2\alpha_{2}+\frac{1}{2}\alpha_{3}\approx 0.93729304\,.\]
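Numerically, truncating the Euler products at a large prime bound already reproduces the quoted value; a Python sketch:

```python
from sympy import primerange

def truncated_product(k, bound=10**6):
    """prod_{p <= bound} (1 - k / p^2), an approximation of alpha_k."""
    val = 1.0
    for p in primerange(2, bound):
        val *= 1.0 - k / (p * p)
    return val

a1, a2, a3 = (truncated_product(k) for k in (1, 2, 3))
print(2.5 * a1 - 2.0 * a2 + 0.5 * a3)   # approximately 0.937293
```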
|
2305.12388 | Ren-integrable and ren-symmetric integrable systems | A new type of symmetry, ren-symmetry describing anyon physics and the
corresponding topological physics, is proposed. Ren-symmetry is a
generalization of super-symmetry which is widely applied in super-symmetric
physics such as the super-symmetric quantum mechanics, super-symmetric gravity,
super-symmetric string theory, super-symmetric integrable systems and so on.
The super-symmetry and Grassmann-number are, in some sense, the dual
conceptions, which turns out that these conceptions coincide for the ren
situation, that is, a similar conception of ren-number is devised to
ren-symmetry. In particular, some basic results of the ren-number and
ren-symmetry are exposed which allow one to derive, in principle, some new
types of integrable systems including ren-integrable models and ren-symmetric
integrable systems. Training examples of ren-integrable KdV type systems and
ren-symmetric KdV equations are explicitly given. | S. Y. Lou | 2023-05-21T07:58:23Z | http://arxiv.org/abs/2305.12388v1 | # Ren-integrable and ren-symmetric integrable systems
###### Abstract
A new type of symmetry, ren-symmetry describing anyon physics and the corresponding topological physics, is proposed. Ren-symmetry is a generalization of super-symmetry which is widely applied in super-symmetric physics such as the super-symmetric quantum mechanics, super-symmetric gravity, super-symmetric string theory, super-symmetric integrable systems and so on. The supersymmetry and Grassmann-number are, in some sense, the dual conceptions, which turns out that these conceptions coincide for the ren situation, that is, a similar conception of ren-number is devised to ren-symmetry. In particular, some basic results of the ren-number and ren-symmetry are exposed which allow one to derive, in principle, some new types of integrable systems including ren-integrable models and ren-symmetric integrable systems. Training examples of ren-integrable KdV type systems and ren-symmetric KdV equations are explicitly given.
## I Introduction
The idea of symmetry originates in the natural sciences, and its importance there is well known. Symmetry considerations are among the most universal and powerful methods by which scientists have successfully solved problems such as building new solutions from known ones [2], performing dimensional reductions of nonlinear partial differential equations [3; 4; 5; 6], obtaining new integrable systems [7; 8; 9; 10] and even constructing all solutions for certain nonlinear systems [11].
By using the SU(3)\(\times\)SU(2)\(\times\)U(1) symmetry, three fundamental interactions, the strong, weak and electromagnetic interactions, have been unified into the so-called standard model. However, in order to unify the gravitational interaction with the standard model, one has to introduce a new type of symmetry, namely the super-symmetry between bosons and fermions. New areas of physics, including the super-symmetric gravity [12; 13], super-symmetric quantum mechanics [14], super-symmetric string theory [15] and super-symmetric integrable systems [16; 17; 18; 19; 20; 21], have been developed, highly motivated by super-symmetry and the belief that it possesses a high potential for future development.
In super-symmetry theory, it is essential to introduce the Grassmann variable \(\theta\)[22; 23] and the super-symmetric derivative \(\mathcal{D}\) with the properties
\[\theta^{2}=0,\ \theta_{1}\theta_{2}=-\theta_{2}\theta_{1}, \tag{1}\] \[\mathcal{D}=\partial_{\theta}+\theta\partial_{x},\ \mathcal{D}^{2}= \partial_{x}. \tag{2}\]
The super-symmetric derivative \(\mathcal{D}\) is invariant under the super-symmetric transformation
\[\theta\rightarrow\theta+\eta,\ x\to x-\theta\eta. \tag{3}\]
Recently, anyons, which differ from bosons and fermions and possess fractional charge, spin and statistics in two dimensions, have attracted considerable attention from many scientists [24; 25; 26; 27]. Anyons can be used to describe some kinds of quasi-particles (the low-energy excitations in Hamiltonian systems), including the fractional quantum Hall states [28], vortices in topological superconductors [29] and Majorana zero modes in semiconductors proximitized by superconductors [30]. By analogy with the fermion case, in which fermions can be described by Grassmann fields, we introduce some new fields to describe anyons, which we call anyon-fields and/or ren-fields. We shall use the adjective "ren" to stress the arbitrariness of \(\alpha\) and to avoid confusion with "arbitrary symmetry" or "any symmetry". "Ren" means "arbitrary" in Chinese.
A comparison of the Grassmann number \(\theta=\sqrt{0}\) and the super-symmetric derivative \({\cal D}=\sqrt{\partial_{x}}\), viewed from the ren point of view, suggests using, as the ren-number and the ren-symmetric derivative, respectively, the following radical generalizations of these formulae
\[\theta_{\alpha}\equiv\theta=\sqrt[\alpha]{0},\ \ {\cal R}=\sqrt[\alpha]{\partial_{x}}\]
with \(\alpha\) being arbitrary.
The introduction of the G-number and the super-symmetric derivative yields some significant novel mathematical and physical fields such as the Grassmann algebra [22], the super-symmetric quantum mechanics [14], the super-symmetric string theory [15], the super-symmetric gravity [12], the super/Kuper-integrable systems [31; 32; 33; 34] and super-symmetric integrable theories [16]. Therefore, we hope that the introduction of the ren-number and the ren-symmetric derivative may successfully create some new mathematical and physical fields such as the ren-algebra, ren-calculus, ren-integrable models and ren-symmetric integrable systems. The usual G-number, G-algebra, super-symmetric theory, super-integrable systems and super-symmetric integrable systems just correspond to the ren-case with \(\alpha=2\).
In Sec. II of this paper, the concept of the ren-number \(\theta\) for an arbitrary positive integer \(\alpha\) is defined, with the aim of deriving the ren-algebra, the ren-derivative and the ren-symmetric derivative. Then, in Sec. III, we deal with the problem of finding some types of ren-integrable systems by coupling the usual boson fields and the anyon fields. When \(\alpha\) is fixed to \(\alpha=2\), the ren-integrable systems are just the known super- or Kuper-integrable systems [31]. By means of the ren-symmetric derivative, we study the ren-symmetric integrable systems in Sec. IV, where the ren-symmetric KdV equations for \(\alpha=3,\ 4\) are explicitly given. The last section includes a short summary and some discussions.
## II Ren-Algebra, Ren-Derivative and Ren-Symmetric Derivative
**Definition 1.** A ren-number \(\theta\equiv\theta_{\alpha}\) is defined as a number possessing the properties
\[\theta^{\alpha}=0,\ \theta^{i}\neq 0,\ i=1,\ 2,\ \ldots,\ \alpha-1, \tag{4}\]
where \(\alpha\) is an arbitrary positive integer.
That is, a ren-number can be viewed as a non-zero \(\alpha\)-th root of zero
\[\theta=\sqrt[\alpha]{0}\neq 0. \tag{5}\]
It is clear that there are \(\alpha-1\) solutions of (5), \(\{\theta,\ \theta^{2},\ \ldots,\ \theta^{\alpha-1}\}\). Such a definition includes, as the important special case \(\alpha=2\), the usual G-number \(\theta=\theta_{2}\).
For \(\alpha=2\), we know that if \(a\) and \(b\) are Grassmann numbers then the combination \(a+b\) is still a G-number when the anti-commutation relation
\[ab=-ba \tag{6}\]
holds.
Similarly, for \(\alpha\geq 2\), if \(a\) and \(b\) are ren-numbers with the \(q\)-commutation relation (i \(\equiv\sqrt{-1}\)),
\[ab=q_{j}ba,\quad q_{j}^{\alpha}=1,\ q_{j}=q^{j},\ q=\exp\left( \frac{2\pi\mathrm{i}}{\alpha}\right),\] \[j\in\mathcal{J}_{\alpha}=\{1,\ 2,\ \ldots,\ \alpha-1\}/\{n_{m}p_{m}< \alpha\}, \tag{7}\]
then so is the combination \(a+b\) of \(a\) and \(b\), where \(\{n_{m}p_{m}<\alpha\}\) is a set with \(p_{m},\ m=1,\ \ldots,\ M\) being the prime factors of \(\alpha\) and \(n_{m}=1,\ \ldots,\ N_{m}\) being integers with \(N_{m}p_{m}<\alpha\). For \(\alpha=3,\ 4,\ \ldots,\ 10\), we have
\[\mathcal{J}_{3}=\{1,\ 2\},\ \mathcal{J}_{4}=\{1,\ 3\},\ \mathcal{J}_{5}=\{1,\ 2,\ 3,\ 4\},\ \mathcal{J}_{6}=\{1,\ 5\},\ \mathcal{J}_{7}=\{1,\ 2,\ \ldots,\ 6\},\] \[\mathcal{J}_{8}=\{1,\ 3,\ 5,\ 7\},\ \mathcal{J}_{9}=\{1,\ 2,\ 4,\ 5,\ 7,\ 8\},\ \mathcal{J}_{10}=\{1,\ 3,\ 7,\ 9\}. \tag{8}\]
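Equivalently, \(\mathcal{J}_{\alpha}\) consists of the integers in \(\{1,\ \ldots,\ \alpha-1\}\) that are coprime to \(\alpha\); the sets in (8) can be reproduced with a short Python check.

```python
from math import gcd

def J(alpha):
    # J_alpha = {1, ..., alpha-1} with all multiples of the prime factors of alpha removed,
    # i.e. the integers below alpha that are coprime to alpha
    return [j for j in range(1, alpha) if gcd(j, alpha) == 1]

for alpha in range(3, 11):
    print(alpha, J(alpha))   # reproduces the sets listed in (8)
```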
From the expression of \(q\) given in (7), we know that the usual number (boson number) is related to \(\alpha=\infty\) and the Grassmann number (fermion number) corresponds to \(\alpha=2\).
**Definition 2.** The degree, \(\beta\) (mod\((\alpha)\)), of a ren-number \(\gamma\) is defined as
\[\gamma\theta=q^{\beta}\theta\gamma, \tag{9}\]
where the degree of \(\theta\) is always fixed to be one in this paper. A \(\gamma\) satisfying (9) is also called a \(\beta\) order ren-number.
If the ren-numbers \(\gamma_{1}\) and \(\gamma_{2}\) possess the degrees \(\beta_{1}\) and \(\beta_{2}\), respectively, then we have the commutation relation
\[\gamma_{1}\gamma_{2}=q^{\beta_{1}\beta_{2}}\gamma_{2}\gamma_{1}, \tag{10}\]
which is consistent with (9) when \(\beta_{2}=1\) and \(\beta_{1}\beta_{2}=0\).
**Definition 3.** Ren-derivative, \(\frac{\mathrm{d}}{\mathrm{d}\theta}\), is a derivative with respect to the ren-variable \(\theta\),
\[\frac{\mathrm{d}f(\theta)}{\mathrm{d}\theta}=\left.\frac{f(\theta_{1})-f( \theta)}{\theta_{1}-\theta}\right|_{\theta_{1}\rightarrow\theta}=\frac{f( \theta)-f(q\theta)}{(1-q)\theta}. \tag{11}\]
Similar to the Grassmann case, because of the definition of the ren-number (4), an arbitrary function of the ren-variable, \(f(\theta)\), can be written as
\[f(\theta)=\sum_{i=0}^{\alpha-1}\theta^{i}g_{i}=\sum_{i=0}^{\alpha-1}f_{i} \theta^{i}. \tag{12}\]
If \(f(\theta)\) is a \(\beta\) order ren-number, then \(f_{i}\) and \(g_{i}\) in (12) are \(\beta-i\) order ren-numbers with \(f_{i}=q^{i\beta-i^{2}}g_{i}\).
According to the property (12), it is enough to find all the possible ren-derivatives for an arbitrary ren-function \(f(\theta)\) by calculating
\[\frac{\mathrm{d}\theta^{i}}{\mathrm{d}\theta},\ i=1,\ 2,\ \ldots,\ \alpha-1.\]
Based on the commutation relation (7) and the definition of the ren-derivative (11), it is straightforward to prove that
\[\frac{\mathrm{d}\theta^{i}}{\mathrm{d}\theta}=\sum_{k=0}^{i-1}q^{k}\theta^{i- 1}=\frac{1-q^{i}}{1-q}\theta^{i-1}\equiv i_{q}\theta^{i-1}, \tag{13}\]
where \(i_{q}\) is defined as \(i_{q}=1+q+\cdots+q^{i-1}\), say, \(2_{q}=1+q,\ 3_{q}=1+q+q^{2}\) and so on.
Thus, for the ren-function \(f(\theta)\) with degree \(\beta\), we have
\[\frac{{\rm d}f(\theta)}{{\rm d}\theta}=\sum_{i=0}^{\alpha-1}i_{q}\theta^{i-1}g_{i }=\sum_{i=0}^{\alpha-1}\frac{(q^{\beta-i}-q^{\beta})}{1-q}f_{i}\theta^{i-1}= \sum_{i=0}^{\alpha-1}q^{\beta-i}i_{q}f_{i}\theta^{i-1}. \tag{14}\]
The ren-integration may be defined as an inverse operator of the ren-derivative for \(\theta^{k}\) with \(k<\alpha-1\); however, for \(\theta^{\alpha-1}\) the inverse operator of the ren-derivative is not well defined. A different integration operator can be defined [35]. For \(\alpha=2\), the Berezin integral has been defined [36, 37]. In this paper, we will not discuss this problem, though a similar Berezin-type integral may be introduced under the requirement of translation invariance [35].
**Definition 4.** A ren-symmetric derivative \({\cal R}\equiv{\cal R}_{\alpha}\) is defined as an \(\alpha\)-th root of the usual space derivative \(\partial_{x}\), i.e.,
\[{\cal R}^{\alpha}=\partial_{x},\ {\cal R}=\sqrt[\alpha]{\partial_{x}}. \tag{15}\]
It is interesting that in terms of the ren-number \(\theta\), the ren-symmetric derivative \({\cal R}\) can be explicitly written as
\[{\cal R}=\partial_{\theta}+\frac{1}{[(\alpha-1)!]_{q}}\theta^{\alpha-1} \partial_{x}, \tag{16}\]
where \([n!]_{q}\) is defined as
\[[n!]_{q}\ \equiv\ \prod_{i=1}^{n}\frac{1-q^{i}}{1-q}\equiv\prod_{i=1}^{n}i_{q }\equiv 1_{q}\cdot 2_{q}\cdots(n-1)_{q}\cdot n_{q}, \tag{17}\]
whose particular case \(q=1\) is the usual \(n!\).
As expected, the ren-symmetric derivative (16) reduces back to the known super-symmetric derivative \({\cal R}_{2}\equiv{\cal D}=\partial_{\theta}+\theta\partial_{x}\) when \(\alpha=2\). A straightforward computation gives the ren-symmetric derivatives for \(\alpha=3,\ 4,\ 5,\ 6\) and \(7\),
\[{\cal R}_{3}=\partial_{\theta}+\frac{1}{[2!]_{q}}\theta^{2} \partial_{x}=\partial_{\theta}-q\theta^{2}\partial_{x},\ q={\rm e}^{(2\pi{ \rm i}/3)}=\frac{1}{2}(\sqrt{3}{\rm i}-1),\] \[{\cal R}_{4}=\partial_{\theta}+\frac{1}{[3!]_{q}}\theta^{3} \partial_{x}=\partial_{\theta}-\frac{1}{2}(1+q)\theta^{3}\partial_{x},\ q={ \rm e}^{(2\pi{\rm i}/4)}={\rm i},\] \[{\cal R}_{5}=\partial_{\theta}+\frac{1}{[4!]_{q}}\theta^{4} \partial_{x}=\partial_{\theta}+\frac{q^{3}}{(1+q)^{2}}\theta^{4} \partial_{x},\ q={\rm e}^{(2\pi{\rm i}/5)},\] \[{\cal R}_{6}=\partial_{\theta}+\frac{1}{[5!]_{q}}\theta^{5} \partial_{x}=\partial_{\theta}+\frac{q}{6}\theta^{5}\partial_{x},\ q={\rm e}^{ (2\pi{\rm i}/6)}=\frac{1}{2}(1+\sqrt{3}{\rm i}),\] \[{\cal R}_{7}=\partial_{\theta}+\frac{1}{[6!]_{q}}\theta^{6} \partial_{x}=\partial_{\theta}-\frac{q^{6}}{[3!]_{q}^{2}}\theta^{6} \partial_{x},\ q={\rm e}^{(2\pi{\rm i}/7)}. \tag{18}\]
It is not difficult to prove that the ren-symmetric derivative \(\mathcal{R}\) possesses the following ren-symmetric transformation
\[\theta\rightarrow\theta+\eta,\ x\to x-f(\theta,\ \eta), \tag{19}\]
with
\[f=f(\theta,\ \eta)=\sum_{k=1}^{\alpha-1}\frac{1}{[k!]_{q}[(\alpha-k)!]_{q}} \theta^{k}\eta^{\alpha-k}. \tag{20}\]
With the stress on the first few \(f\), say, \(\alpha=2,\ 3,\ 4,\ 5\) and \(6\), we have
\[f=\theta\eta, \alpha=2,\] \[f=\tfrac{1}{[2!]_{q}}(\theta\eta^{2}+\theta^{2}\eta), \alpha=3,\] \[f=\tfrac{1}{[3!]_{q}}\left(\theta\eta^{3}+\tfrac{3_{q}}{2_{q}} \theta^{2}\eta^{2}+\theta^{3}\eta\right), \alpha=4,\] \[f=\tfrac{1}{[4!]_{q}}\left(\theta\eta^{4}+\tfrac{4_{q}}{2_{q}} \theta^{2}\eta^{3}+\tfrac{4_{q}}{2_{q}}\theta^{3}\eta^{2}+\theta^{4}\eta \right), \alpha=5,\] \[f=\tfrac{1}{[5!]_{q}}\left(\theta\eta^{5}+\tfrac{5_{q}}{2_{q}} \theta^{2}\eta^{4}+\tfrac{5_{q}4_{q}}{3_{q}2_{q}}\theta^{3}\eta^{3}+\tfrac{5_ {q}}{2_{q}}\theta^{4}\eta^{2}+\theta^{5}\eta\right), \alpha=6. \tag{21}\]
## III Ren-integrable systems
For \(\alpha=2\), ren-integrable models are just the known super- or Kuper-integrable models, which were first proposed by Kupershmidt in [31]. It is known that the usual bosonic KdV equation,
\[u_{t}=(-u_{xx}+3u^{2})_{x}, \tag{22}\]
is obtained as the compatibility condition \([L,\ S]=LS-SL=0\) of the Lax pair
\[\psi_{xx}-(u+\lambda)\psi\equiv L\psi=0,\] \[\psi_{t}-3u_{x}\psi-6u\psi_{x}+4\psi_{xxx}\equiv S\psi=0. \tag{23}\]
Usually, the spectral function \(\psi\) is considered as a boson function. In fact, because the Lax pair (23) is linear, the spectral function \(\psi\) may be a fermionic function or even a ren-function.
It is known that if \(\sigma\) is a symmetry of an integrable evolution equation
\[u_{t}=K(u), \tag{24}\]
i.e., a solution of
\[\sigma_{t}=K^{\prime}\sigma\equiv\lim_{\epsilon\rightarrow 0}\partial_{\epsilon}K(u+ \epsilon\sigma), \tag{25}\]
then
\[u_{t}=K(u)+\sigma, \tag{26}\]
is also an integrable model.
Furthermore, if \(\sigma=\sigma(\psi)\), where \(\psi\) is a spectral function of the Lax pair,
\[\hat{L}\psi=0,\qquad\hat{S}\psi=0, \tag{27}\]
of (24), then the first type of source equation
\[u_{t}=K(u)+\sigma(\psi),\] \[\hat{L}\psi=0, \tag{28}\]
and the second type of source equation
\[u_{t}=K(u)+\sigma(\psi),\] \[\hat{S}\psi=0, \tag{29}\]
may all be integrable [38; 39; 40; 41; 42; 43; 44; 45; 46].
Usually, the spectral functions \(\psi\) studied in the integrable models are restricted as bosonic functions. For instance, for the KdV equation (22) the first and second types of integrable bosonic source equations possess the forms
\[u_{t}=(-u_{xx}+3u^{2}+\langle\phi|\phi\rangle)_{x},\qquad\langle \phi|\phi\rangle\equiv\sum_{i=1}^{n}\phi_{i}^{2},\] \[(\partial_{x}^{2}-u-\lambda_{i})\phi_{i}=0,\quad i=1,\ 2,\ \ldots,\ n, \tag{30}\]
and
\[u_{t}=(-u_{xx}+3u^{2}+\langle\phi|\phi\rangle)_{x},\] \[(\partial_{t}+4\partial_{x}^{3}-6u\partial_{x}-3u_{x})\phi_{i}=0,\quad i=1,\ 2,\ \ldots,\ n, \tag{31}\]
respectively.
Now, if we extend the spectral function of the Lax pair (23) to a ren-function, \(\xi\), then we have a trivial symmetry, \(\sigma=\xi^{\alpha-1}\xi_{xx}\). Applying this symmetry to the second type of source equation, we can find some coupled ren systems
\[u_{t}=(-u_{xx}+3u^{2})_{x}+\sum_{i=1}^{k}\langle\xi_{i}^{\alpha_{i }-1}|\xi_{i,xx}\rangle,\qquad\langle\xi_{i}^{\alpha_{i}-1}|\xi_{i,xx}\rangle \equiv\sum_{j=1}^{n_{i}}\xi_{ij}^{\alpha_{i}-1}\xi_{ij,xx},\] \[(\partial_{t}+4\partial_{x}^{3}-6u\partial_{x}-3u_{x})\xi_{ij}=0,\quad j=1,\ 2,\ \ldots,\ n_{i},\ i=1,\ 2,\ \ldots,\ k,\ \xi_{ij}^{\alpha_{i}}=0 \tag{32}\]
with one boson field \(u\) and \(k\times n_{i}\) ren fields \(\xi_{ij}\), where \(k,\ n_{i}\) and \(\alpha_{i}\) are all arbitrary integers.
The integrability of (32) with \(\alpha_{i}=2\) for all \(i=1,\ 2,\ \ldots,\ k\) is known because the models reduce back to the so-called super/Kuper-integrable systems [31; 32; 33; 34]. Before studying the integrability of (32) with \(\alpha_{i}\neq 2\) for some \(2<i\leq k\), we directly write down a more general nontrivial symmetry of the KdV equation (22),
\[\sigma=(\xi_{1}\xi_{2x}-\xi_{1x}\xi_{2})_{x}, \tag{33}\]
where \(\xi_{1}\) and \(\xi_{2}\) are ren-spectral functions of the usual KdV equation with the same spectral parameter \(\lambda=\lambda_{1}=\lambda_{2}\) but with different degrees, \(\beta\) and \(\alpha-\beta\), respectively.
The simplest second type of source equation related to (33) possesses the form
\[u_{t}=[3u^{2}-u_{xx}+12(\xi_{1}\xi_{2x}-\xi_{1x}\xi_{2})]_{x},\] \[(\partial_{t}+4\partial_{x}^{3}-6u\partial_{x}-3u_{x})\xi_{i}=0, \ i=1,\ 2. \tag{34}\]
**Theorem.** The model (34) is Lax integrable with the Lax pair
\[\psi_{xx}-(u+\lambda)\psi+\left(\int\xi_{1}\psi\mathrm{d}x\right) \xi_{2}-\xi_{1}\left(\int\xi_{2}\psi\mathrm{d}x\right)\equiv\hat{L}\psi=0,\] \[(\partial_{t}+4\partial_{x}^{3}-6u\partial_{x}-3u_{x})\psi\equiv \hat{S}\psi=0. \tag{35}\]
**Proof.** To complete the proof of the theorem, it suffices to prove that the compatibility condition
\[[\hat{L},\ \hat{S}]f=(\hat{L}\hat{S}-\hat{S}\hat{L})f \tag{36}\]
is valid for arbitrary \(f\) if (34) is satisfied.
Expanding the expression (36) with the operators \(\hat{L}\) and \(\hat{S}\) defined in (35),
\[\left(\int\xi_{1}f\mathrm{d}x\right)(6u\xi_{2x}+3\xi_{2}u_{x}-4\xi_ {2xxx}-\xi_{2t})+(\xi_{1t}+4\xi_{1xxx}-6u\xi_{1x}-3\xi_{1}u_{x})\int\xi_{2}f \mathrm{d}x\] \[\qquad+\xi_{1}\left[\int(\xi_{2t}f-4\xi_{2}f_{xxx}+6\xi_{2}uf_{x} +3\xi_{2}u_{x}f)\mathrm{d}x-8\xi_{2xx}f-4\xi_{2x}f_{x}\right]\] \[\qquad-\left[\int(\xi_{1t}f-4\xi_{1}f_{xxx}+6\xi_{1}uf_{x}+3\xi_{ 1}u_{x}f)\mathrm{d}x-8\xi_{1xx}f-4\xi_{1x}f_{x}\right]\xi_{2}\] \[\qquad+(u_{t}-6uu_{x}+u_{xxx})f=0, \tag{37}\]
and simplifying the result with the formulae of integration by parts
\[\int\xi_{i}f_{xxx}\mathrm{d}x=\xi_{i}f_{xx}-\xi_{ix}f_{x}+\xi_{ ixx}f-\int\xi_{ixxx}f\mathrm{d}x,\] \[\int\xi_{i}uf_{x}\mathrm{d}x=\xi_{i}uf-\int f(u\xi_{i})_{x} \mathrm{d}x, \tag{38}\]
(37) is changed to
\[\int\xi_{1}f\mathrm{d}x\ (6u\xi_{2x}+3\xi_{2}u_{x}-4\xi_{2xxx}- \xi_{2t})+(\xi_{1t}+4\xi_{1xxx}-6u\xi_{1x}-3\xi_{1}u_{x})\int\xi_{2}f\mathrm{ d}x\] \[+\xi_{1}\int(\xi_{2t}+4\xi_{2xxx}-6u\xi_{2x}-3\xi_{2}u_{x})f \mathrm{d}x-\left[\int(\xi_{1t}+4\xi_{1xxx}-6u\xi_{1}-3\xi_{1}u_{x})f \mathrm{d}x\right]\xi_{2}\] \[+(u_{t}-6uu_{x}+u_{xxx}+12\xi_{1xx}\xi_{2}-12\xi_{1}\xi_{2xx})f=0. \tag{39}\]
Because \(f\) is arbitrary, (39) holds only if equation (34) is satisfied. The theorem is proved.
**Remark.** From the proof procedure of the theorem, it is known that we have not used any commutation relation between \(\xi_{1}\) and \(\xi_{2}\). That means (34) is Lax integrable no matter whether the fields \(\xi_{1}\) and \(\xi_{2}\) are boson fields, fermion fields, or ren-fields with arbitrary \(\alpha\).
## IV Ren-symmetric integrable systems
In Sec. II of this paper, we have defined the ren-symmetric derivative \(\mathcal{R}\). By means of the ren-symmetric derivative, the usual bosonic integrable systems can be extended to ren-symmetric integrable ones. Before discussing ren-symmetric integrable systems, we list some special cases for \(\alpha=2\), i.e., the super-symmetric integrable models.
### Super-symmetric integrable KdV systems
The most general \(N=1\) symmetric form of the KdV equation (22) is generated by the fermionic super-field \(\Phi\) with an arbitrary constant \(a\),
\[\Phi_{t}+\Phi_{xxx}+a(\mathcal{D}\Phi_{x})\Phi+(6-a)(\mathcal{D}\Phi)\Phi_{x}=0. \tag{40}\]
Mathieu proved that the super-symmetric KdV equation (40) is Painlevé integrable only for \(a=0\) and \(a=3\) [47]. Although the super-symmetric KdV system (40) is not Painlevé integrable for arbitrary \(a\), it does possess multiple soliton solutions [48]. In (40), \(\Phi\equiv\xi+\theta u\) is a fermionic super-field with a fermion field \(\xi\) and a boson field \(u\).
For the coupled KdV equation, we have an interacting model between a susy-boson field \(U\) and a susy-fermion field \(\Phi\)
\[\Phi_{t} =(-\Phi_{xx}+3\Phi\mathcal{D}\Phi+6U\Phi)_{x},\] \[U_{t} =(-U_{xx}+3U^{2}+3\Phi\mathcal{D}U)_{x}, \tag{41}\]
which is Lax integrable. Setting \(U=0\), (41) readily reduces back to (40) with \(a=3\). For \(\Phi=0\), (41) becomes a quite trivial extension of the original KdV equation (22) by \(u\to U\).
The component form of (41) reads
\[v_{t} =(-v_{xx}+3v^{2}+3\zeta_{x}\zeta+6uv+6\xi\zeta)_{x},\] \[u_{t} =(-u_{xx}+3u^{2}+3\zeta\xi)_{x},\] \[\xi_{t} =(-\xi_{xx}+6u\xi+3v\xi+3u_{x}\zeta)_{x},\] \[\zeta_{t} =(-\zeta_{xx}+6u\zeta+3v\zeta)_{x}, \tag{42}\]
with \(U=u+\theta\xi\) and \(\Phi=\zeta+\theta v\), where \(u\) and \(v\) are boson components and \(\xi\) and \(\zeta\) are fermion components.
The Lax pair of (41) can be written as
\[\Psi_{xx}=\Phi\mathcal{D}\Psi+(U+\lambda)\Psi,\] \[\Psi_{t}=-4\Psi_{xxx}+6U\Psi_{x}+3U_{x}\Psi+6\Phi\mathcal{D}\Psi_ {x}+3\Phi_{x}\mathcal{D}\Psi. \tag{43}\]
### Ren-symmetric integrable KdV systems
Analogous to (40), the general ren-symmetric KdV equation is expressible in the form
\[\Phi_{t}+\Phi_{xxx}+\sum_{i=0}^{[\beta_{1}]}a_{i}\left(\mathcal{R}^{i}\Phi\right) \left(\mathcal{R}^{\alpha+\beta-i}\Phi\right)=0,\ \beta=0,\ 1,\ 2,\ \ldots,\ \alpha-1,\ \beta_{1}\equiv\frac{\alpha+\beta}{2}, \tag{44}\]
where \([\beta_{1}]\) is the integer part of \(\beta_{1}\), the \(a_{i}\) (\(i=0,\ 1,\ 2,\ \ldots,\ [\beta_{1}]\)) are arbitrary bosonic constants, and \(\beta\) is the degree of the ren-field \(\Phi\equiv\Phi(x,\ t,\ \theta)\).
As in the super-symmetric (\(\alpha=2\)) case, one may find some possible integrable cases by fixing the constants \(a_{i}\) of the ren-symmetric KdV equation (44). For instance, the Lax integrable systems,
\[\Phi_{jt}+\Phi_{jxxx}-3\mathcal{R}^{\alpha-j}\left(\mathcal{R}^{j}\Phi_{j} \right)^{2}+\rho_{j}=0,\ \mathcal{R}^{j}\rho_{j}=0,\ j=0,\ 1,\ \ldots,\ \alpha-1, \tag{45}\]
with \(\rho_{j}=0\) possessing Lax pair of
\[\Psi_{xx}-(\mathcal{R}^{j}\Phi_{j}+\lambda)\Psi=0,\] \[\Psi_{t}+4\Psi_{xxx}-3\left(\mathcal{R}^{j}\Phi_{jx}\right)\Psi-6 \left(\mathcal{R}^{j}\Phi_{j}\right)\Psi_{x}=0, \tag{46}\]
are just the special cases of (44). The degrees of \(\Phi_{j}\) and \(\rho_{j}\) in (45) are \(j\).
For \(\alpha=3\), the ren-symmetric KdV equation (44) becomes
\[\Phi_{0t}+\Phi_{0xxx}+a\Phi_{0}\Phi_{0x}+b(\mathcal{R}\Phi_{0})( \mathcal{R}^{2}\Phi_{0})=0,\ \mathcal{R}=\partial_{\theta}-q\theta^{2}\partial_{x}, \tag{47}\] \[\Phi_{1t}+\Phi_{1xxx}+a\Phi_{1}(\mathcal{R}\Phi_{1x})+b(\mathcal{ R}\Phi_{1})\Phi_{1x}+c(\mathcal{R}^{2}\Phi_{1})^{2}=0,\] (48) \[\Phi_{2t}+\Phi_{2xxx}+a\Phi_{2}(\mathcal{R}^{2}\Phi_{2x})+b( \mathcal{R}\Phi_{2})(\mathcal{R}\Phi_{2x})+c(\mathcal{R}^{2}\Phi_{2})(\Phi_{2 x})=0, \tag{49}\]
where \(a,\ b\) and \(c\) are arbitrary constants and \(\Phi_{0},\ \Phi_{1}\) and \(\Phi_{2}\) are the ren-fields with degrees, \(0,\ 1\) and \(2\), respectively.
The special integrable case (45) for \(\{\alpha=3,j=0\}\) is related to (47) with \(b=0\) up to a re-scaling. (48) with \(\{a=0,\ c=b\}\) is equivalent to the integrable case (45) for \(\{\alpha=3,j=1\}\). Taking \(\{a=b=0\}\) in (49) leads to the equivalent special integrable ren-symmetric KdV equation (45) with \(\{\alpha=3,j=2\}\).
Incorporating the explicit forms for
\[\Phi_{0}=u+\theta\zeta+\theta^{2}\xi,\ \Phi_{1}=\xi+\theta u+\theta^{2}\zeta,\ \Phi_{2}=\zeta+\theta\xi+\theta^{2}u\]
and the consistent commutation relations
\[\xi\theta=q\theta\xi,\ \zeta\theta=q^{2}\theta\zeta,\ \xi\zeta=q^{2}\zeta\xi,\ \zeta\zeta_{x}=q\zeta_{x}\zeta,\ \zeta\zeta_{xx}=q\zeta_{xx}\zeta \tag{50}\]
leads to the coupled component forms of (47), (48) and (49)
\[u_{t}+u_{xxx}+auu_{x}-q^{2}b\zeta\cdot\xi=0,\] \[\zeta_{t}+\zeta_{xxx}+a(\zeta u)_{x}+b\zeta u_{x}+bq\xi^{2}=0,\] \[\xi_{t}+\xi_{xxx}+a(u\xi)_{x}+(a-b)\zeta_{x}\cdot\zeta=0, \tag{51}\]
\[\xi_{t}+\xi_{xxx}+a\xi u_{x}+bu\xi_{x}+cq\zeta^{2}=0,\] \[u_{t}+u_{xxx}+(a+b)uu_{x}-aq^{2}\zeta_{x}\cdot\xi-(2cq^{2}+b) \xi_{x}\cdot\zeta=0,\] \[\zeta_{t}+\zeta_{xxx}+(a-bq-cq^{2})\zeta u_{x}+(b-aq^{2})u\zeta_ {x}-aq\xi_{xx}\cdot\xi+q(c-b)\xi_{x}^{2}=0, \tag{52}\]
and
\[\zeta_{t}+\zeta_{xxx}-aq^{2}\zeta u_{x}+b\xi\cdot\xi_{x}-cq^{2}u \zeta_{x}=0,\] \[u_{t}+u_{xxx}-q(aq+cq-b)uu_{x}+(aq-b)\xi\cdot\zeta_{xx}-(bq^{2}+ cq-c)\xi_{x}\cdot\zeta_{x}-aq\xi_{xx}\cdot\zeta=0,\] \[\xi_{t}+\xi_{xxx}-(aq^{2}+b)\xi u_{x}-(b+c)q^{2}u\xi_{x}+a\zeta_{ xx}\cdot\zeta+c\zeta_{x}^{2}=0, \tag{53}\]
respectively. \(u\) in (51)-(53) is a bosonic field and \(\xi\) and \(\zeta\) are ren-fields with degrees 1 and 2, respectively.
The known special integrable ren-symmetric KdV systems of (51) and (52) read
\[u_{t}+u_{xxx}-6uu_{x}=0,\] \[\xi_{t}+\xi_{xxx}-6(u\xi)_{x}-6\zeta_{x}\cdot\zeta=0,\] \[\zeta_{t}+\zeta_{xxx}-6(\zeta u)_{x}=0, \tag{54}\]
\[u_{t}+u_{xxx}-6uu_{x}-6q(1-q)\xi_{x}\cdot\zeta=0,\] \[\xi_{t}+\xi_{xxx}-6u\xi_{x}-6q\zeta^{2}=0,\] \[\zeta_{t}+\zeta_{xxx}-6(\zeta u)_{x}=0, \tag{55}\]
respectively. The special integrable ren-symmetric KdV system of (53) is equivalent to that of (52) by the transformation \(\zeta_{x}\rightarrow\zeta\).
The choice \(\alpha=4,\ q=\mathrm{i},\ \mathcal{R}=\partial_{\theta}-\frac{1+q}{2}\theta^{3} \partial_{x}\) leads the ren-symmetric KdV equation (44) straightforwardly to
\[\Phi_{0t}+\Phi_{0xxx}+a\Phi_{0}\Phi_{0x}+b(\mathcal{R}\Phi_{0})( \mathcal{R}^{3}\Phi_{0})+c(\mathcal{R}^{2}\Phi_{0})^{2}=0, \tag{56}\] \[\Phi_{1t}+\Phi_{1xxx}+a\Phi_{1}\mathcal{R}\Phi_{1x}+b(\mathcal{R }\Phi_{1})\Phi_{1x}+c(\mathcal{R}^{2}\Phi_{1})(\mathcal{R}^{3}\Phi_{1})=0,\] (57) \[\Phi_{2t}+\Phi_{2xxx}+a\Phi_{2}\mathcal{R}^{2}\Phi_{2x}+b( \mathcal{R}\Phi_{2})(\mathcal{R}\Phi_{2x})+c(\mathcal{R}^{2}\Phi_{2})(\Phi_{2 x})+d(\mathcal{R}^{3}\Phi_{2})^{2}=0,\] (58) \[\Phi_{3t}+\Phi_{3xxx}+a\Phi_{3}\mathcal{R}^{3}\Phi_{3x}+b( \mathcal{R}\Phi_{3})(\mathcal{R}^{2}\Phi_{3x})+c(\mathcal{R}^{2}\Phi_{3})( \mathcal{R}\Phi_{3x})+d\Phi_{3x}\mathcal{R}^{3}\Phi_{3}=0, \tag{59}\]
where \(\Phi_{0},\ \Phi_{1},\ \Phi_{2}\) and \(\Phi_{3}\) are the ren-fields with the degrees, \(0,\ 1,\ 2\) and \(3\), respectively.
## V Summary and Discussions
In retrospect, the usual Grassmann number and the super-symmetric derivative have been extended in a straightforward way to more general forms, the ren-number and the ren-symmetric derivatives, so that they can be applied to describe anyons, physically important quasi-particles. Applying the ren-numbers and ren-symmetric derivatives to integrable theory, we have extended super-integrable and super-symmetric integrable systems to ren-integrable and ren-symmetric integrable systems.
It is interesting that the ren-integrable KdV system (34) possesses exactly the same form for arbitrary \(\alpha\), even for the boson case (\(\alpha=\infty\)) and the fermion case (\(\alpha=2\)). The only difference is that the degrees of the ren-fields \(\xi_{1}\) and \(\xi_{2}\) should be complementary, say \(\beta\) and \(\alpha-\beta\), such that \(\xi_{1}\xi_{2}\) becomes a boson.
The ren-integrable system (34) can be further extended to
\[u_{t}=\left[3u^{2}-u_{xx}+\langle\phi|\phi\rangle+12\sum_{\alpha=2} ^{\infty}\sum_{\beta_{\alpha}=0}^{\alpha-1}(\langle\xi_{\{\alpha,\beta_{\alpha} \}}|\zeta_{\{\alpha,\alpha-\beta_{\alpha}\},x}\rangle-\langle\xi_{\{\alpha, \beta_{\alpha}\},x}|\zeta_{\{\alpha,\alpha-\beta_{\alpha}\}}\rangle)\right]_{x},\] \[(\partial_{t}+4\partial_{x}^{3}-6u\partial_{x}-3u_{x})|\phi\rangle =0,\] \[(\partial_{t}+4\partial_{x}^{3}-6u\partial_{x}-3u_{x})|\xi_{\{ \alpha,\beta_{\alpha}\}}\rangle=0,\] \[(\partial_{t}+4\partial_{x}^{3}-6u\partial_{x}-3u_{x})|\zeta_{\{ \alpha,\alpha-\beta_{\alpha}\}}\rangle=0, \tag{60}\]
where \(|\phi\rangle\) is a boson vector field, \(|\xi_{\{\alpha,\beta_{\alpha}\}}\rangle\) is a \(\beta_{\alpha}\) order ren-vector field and \(|\zeta_{\{\alpha,\alpha-\beta_{\alpha}\}}\rangle\) is an \(\alpha-\beta_{\alpha}\) order ren-vector field. The general ren-integrable KdV type system (60) is still a Lax integrable model.
Although the number of papers produced so far on the construction of solutions is incredibly large, it is still necessary to develop some novel methods, one of which may be the so-called bosonization method [23], to construct special solutions of the ren-integrable KdV system (34) (or, more generally, (60)) and the ren-symmetric KdV system (44).
Ren-numbers may also be used to find other types of integrable models such as the dark equations and integrable couplings [49; 50]. The concept of dark equations was first introduced by Kupershmidt in Ref. [51], where many types of dark KdV systems are given. The modified dark KdV equations are studied in Ref. [52]. The bosonization procedure of the super-symmetric systems has offered some new types of dark integrable systems [23]. The bosonization of ren-symmetric integrable models may yield further dark integrable equations. In fact, applying the bosonization assumptions
\[\xi=\eta p,\ \zeta=\eta^{2}q \tag{61}\]
with the \(\{x,\ t\}\)-independent ren-number \(\eta\) and the \(\{x,\ t\}\)-dependent boson fields \(p\) and \(q\) to the integrable systems (54) and/or (55) yields the same standard dark equation system,
\[u_{t}+u_{xxx}-6uu_{x}=0,\] \[p_{t}+p_{xxx}-6(up)_{x}=0,\] \[q_{t}+q_{xxx}-6(uq)_{x}=0, \tag{62}\]
because of \(\eta^{3}=0\). From (62), we know that the ren-integrable systems (54) and (55) possess a special type of exact solution with \(u\) being an arbitrary solution of the usual KdV equation and \(\xi\) and \(\zeta\) being given by (61), while \(p\) and \(q\) are arbitrary symmetries of the usual KdV equation.
The dark systems can also be considered as a special type of integrable couplings [53; 54]. Further aspects of the ren-integrable, ren-symmetric integrable and dark integrable systems should be studied in future work.
###### Acknowledgements.
The work was sponsored by the National Natural Science Foundation of China (Nos. 12235007, 11975131). The author is indebted to Profs. Q. P. Liu, B. F. Feng, X. B. Hu, R. X. Yao, M. Jia and Drs. K. Tian, X. Z. Hao and D. D. Zhang for their helpful discussions.
|
2302.01435 | Target specific peptide design using latent space approximate trajectory
collector | Despite the prevalence and many successes of deep learning applications in de
novo molecular design, the problem of peptide generation targeting specific
proteins remains unsolved. A main barrier for this is the scarcity of the
high-quality training data. To tackle the issue, we propose a novel machine
learning based peptide design architecture, called Latent Space Approximate
Trajectory Collector (LSATC). It consists of a series of samplers on an
optimization trajectory on a highly non-convex energy landscape that
approximates the distributions of peptides with desired properties in a latent
space. The process involves little human intervention and can be implemented in
an end-to-end manner. We demonstrate the model by the design of peptide
extensions targeting Beta-catenin, a key nuclear effector protein involved in
canonical Wnt signalling. When compared with a random sampler, LSATC can sample
peptides with $36\%$ lower binding scores in a $16$ times smaller interquartile
range (IQR) and $284\%$ less hydrophobicity with a $1.4$ times smaller IQR.
LSATC also largely outperforms other common generative models. Finally, we
utilized a clustering algorithm to select 4 peptides from the 100 LSATC
designed peptides for experimental validation. The result confirms that all the
four peptides extended by LSATC show improved Beta-catenin binding by at least
$20.0\%$, and two of the peptides show a $3$ fold increase in binding affinity
as compared to the base peptide. | Tong Lin, Sijie Chen, Ruchira Basu, Dehu Pei, Xiaolin Cheng, Levent Burak Kara | 2023-02-02T21:56:52Z | http://arxiv.org/abs/2302.01435v1 | # Target specific peptide design using latent space approximate trajectory collector
###### Abstract
Despite the prevalence and many successes of deep learning applications in de novo molecular design, the problem of peptide generation targeting specific proteins remains unsolved. A main barrier for this is the scarcity of the high-quality training data. To tackle the issue, we propose a novel machine learning based peptide design architecture, called Latent Space Approximate Trajectory Collector (LSATC). It consists of a series of samplers on an optimization trajectory on a highly non-convex energy landscape that approximates the distributions of peptides with desired properties in a latent space. The process involves little human intervention and can be implemented in an end-to-end manner. We demonstrate the model by the design of peptide extensions targeting
Beta-catenin, a key nuclear effector protein involved in canonical Wnt signalling. When compared with a random sampler, LSATC can sample peptides with **36%** lower binding scores in a **16** times smaller interquartile range (IQR) and **284%** less hydrophobicity with a **1.4** times smaller IQR. LSATC also largely outperforms other common generative models. Finally, we utilized a clustering algorithm to select 4 peptides from the 100 LSATC designed peptides for experimental validation. The result confirms that all the four peptides extended by LSATC show improved Beta-catenin binding by at least **20.0%**, and two of the peptides show a **3** fold increase in binding affinity as compared to the base peptide.
**Keywords:** Automated protein specific peptide design, Machine learning, Evolutionary optimization
## 1 Introduction
Therapeutic peptides are a class of pharmaceutical agents that are distinct from small molecule drugs due to their unique biochemical and therapeutic characteristics. In recent years, many peptide drugs have been found to have superior potency and safety profiles than small molecule drugs [1]. Peptides can disrupt unwanted protein-protein interactions (PPI) that have often been implicated to play a role in cancer development and progression. One such example is the deregulation of Wnt/beta-catenin/T-cell factor (Tcf) signaling common in many human cancers. Thus, the development of peptide-based PPI inhibitors has become one of the most topical directions in cancer drug research.
Structure-based design of peptide inhibitors for a specific protein target has long been an empirical task. Traditionally, peptide design has been focused on sequence perturbation, including residue mutation, interchain residue exchange, alanine scanning and chemical modification [2; 3], which is guided by structural information obtained for the protein or protein-peptide complex system. The key limitation of this strategy is the negligence of potential secondary structure changes upon sequence perturbation and how the resulting changes may shift the protein-peptide binding structures. High-Throughput-Screening (HTS) has also been widely applied in peptide design. HTS is a brute force method to identify bioactive peptides by rapidly conducting thousands to millions of biochemical, genetic, or pharmacological assays. However, HTS demands highly specialized instrumentation, development and adoption of appropriate bioassays, and high quality peptide libraries. Additionally, actives discovered in HTS are often serendipitous [4]. To complement HTS, virtual screening, such as peptide docking, has been heavily used for peptide inhibitor design. [5]. Peptide docking is usually composed of a sampling technique to explore peptide conformations and a scoring function to evaluate the strength of peptide binding for all sampled peptide binding poses. The
two-step protocol takes at least minutes to evaluate one protein-peptide complex [6], which limits its ability in sequence exploration and thus its use in de novo peptide design.
In recent years, machine learning-based molecular design has witnessed rapid development. A very important area in deep generative models is the efficient representation of molecules. Prior to the deep learning era, fingerprint descriptors such as Morgan fingerprints [7] for small molecules and atom-pair fingerprints for proteins [8] are prevalent. However, these representations that encode the chemical and structural information of individual molecules are not task specific [9]. Deep learning models have been developed to address this limitation by learning a unified representation in a large dataset and then fine tuning this representation for a specific task. During the past few years, the research on molecular representation has been shifted to string transformation, most of which has borrowed the idea from natural language processing. There are two major types of model framework. The first is the recurrent neural network, such as Long-Short Term Memory (LSTM)[10], Gated Recurrent Units [11] and Recurrent Attention [12]. These models have been utilized to predict molecular properties, such as solubility, toxicity [13; 14; 15]. The other type is transformer [16] based models, which incorporate a multi-head attention mechanism to process sequential data more efficiently. The transformer based architectures such as ProteinBert[17] and ProteinTrans[18] have been frequently used in protein representation, and have shown great success in multiple protein downstream classification or regression tasks (e.g. secondary structure classification, fluorescence prediction). Recently, the graph representation of molecules has gained great attention due to the graph's ability to include more detailed structural and spatial information [19]. Graph neural network has been applied to process molecular graphs and perform property prediction [20; 21; 22].
Although the learned molecular representation has been exploited in many property prediction models, few models are about molecular generation. In 2017, Bjerrum used the RNN network to generate valid molecules [23]. Since then, several studies have been published on the generation of valid molecules with optimized general properties such as logP, TPSA and QED [24; 25; 26], while work on protein sequence design has been scarce, and most of it has focused on a single protein's general property design such as the length, the stability and the isoelectronic point[27; 28]. Drug design targeting protein interaction has been less explored. In 2020, Das proposed a method for antimicrobial drug design using rejection sampling to search appropriate molecular SMILES in a latent space[29]. In 2022, Castro proposed a gradient based latent space search method for designing a protein sequence against the third complementarity-determining region of the nanibizumab antibody heavy chain [30]. The two papers are most relevant to our work, albeit the first work is not on peptide drug design and the second limits its usage to one protein that has an existing dataset of 60,000 samples. The main challenge in protein-specific peptide design is the inadequacy of accurate binding affinity data due to the large
computational or experimental cost and the immense peptide space. To our knowledge, deep learning models for protein specific peptide generation don't yet exist.
In this paper, we propose a novel protein specific peptide generation scheme (Figure 1), called Latent Space Approximate Trajectory Collector (LSATC). We implement a GRU based Wasserstein auto encoder (WAE) [31] to obtain 1D continuous latent space representation of peptides, as shown in Figure 1(a). To circumvent the data scarcity problem, we design a feedback loop using CMA-ES [32] to optimize the generator in a peptide latent space and collect the generator's trajectory. The process is illustrated in Figure 1(c). A fast feedback evaluator is crucial to the scheme. Since the current binding evaluator (e.g. docking) is computationally costly, we train an efficient surrogate model to boost the evaluation time from minutes to microseconds (Figure 1(b)). For the surrogate model training, the binding energies and the hydrophobicities of a reasonably small number of randomly sampled peptides are evaluated using Pyrosetta and Biopython, respectively. Finally, we sample peptides on the explored trajectory whose associated losses are lower than a predefined threshold value. We note that the increase in speed offered by the surrogate model enable us to explore the peptide encoder space for peptide inhibitor design and optimization. Additionally, the incorporation of a biophysics-based model in our deep learning can facilitate peptide candidate selection during the post processing stage.
Here, we test our LSATC model in a multi-objective peptide extension task. Specifically, we aim to improve the binding of a base peptide "YPEDILDKHLQRVIL" with beta-catenin by extending its N-terminus, and to reduce the hydrophobicity of peptides to minimize non-specific binding. For simplicity, we only consider peptide extensions of 5 amino acids, which limits the search space to 3.2 million peptides. However, it is worth noting that LSATC is not limited to generation of fixed length peptides. The entire peptide extension process, including dataset generation and model training, takes two days to finish. Our LSATC model is proven to be much more efficient in generating desired peptides than random generation and several other commonly used generative models. The _in vitro_ results show that our generated peptides are not only less hydrophobic but also more potent than the experimental baseline result, with the highest improvement of 3 fold.
Our contributions in this paper are as follows:
* We design a peptide generator that can efficiently discover more novel protein-specific peptide sequences when little or no binding data exist.
* We give insights into how and why our machine learning based peptide generator works when data are scarce. This increases the interpretability of the model.
* Our proposed design pipeline is _in vitro_ test ready and highly automated. We create a peptide filtering strategy to select a desired number of peptides among the sampled high quality peptides for the _in vitro_ test. This removes the need for experimentalists' manual inspection to finalize the selected testing peptides.
## 2 Results
In the _in silico_ evaluation, LSATC generates peptide extensions with low binding energy and low hydrophobicity much more efficiently than random generation, a Gaussian mixture model (GMM) and a conditional Wasserstein autoencoder (cWAE). In the _in vitro_ test, all the sampled peptide extensions largely improve the base peptide binding score. In this section, we will present the analysis and the results of each component of LSATC.
### Dataset preparation
In LSATC, the sequence reconstruction model is used for converting the amino acid representation from discrete letters to continuous numbers. The property surrogate model is used for a fast peptide binding energy and hydrophobicity evaluation. A 500,000 unlabelled peptide extension dataset and a 50,000 labelled peptide extension dataset are prepared for the training of the sequence reconstruction and the surrogate model, respectively. For both datasets, all the peptide extensions are unique and randomly generated.
Figure 1: (a). Peptide extension reconstruction network. A reconstruction loss is used. The distribution of the encoding is regularized to a Gaussian distribution.(b). Property prediction network. The model estimates the hydrophobicity and the binding energy of a peptide extension. (c). Optimization process for generative model. A Gaussian sampler in the latent space is learned using CMA-ES. Reconstruction network is used in decoding. Property prediction network is used in the evaluation as a surrogate model of Pyrosetta. The samplers are recorded in each of the iteration. The total loss includes the desired peptide properties and penalty to sample invalid encodings.
Each peptide in the labelled dataset possesses two properties: hydrophobicity and beta-catenin binding score. The hydrophobicity is a quantity describing the tendency of water to exclude nonpolar molecules. It is a sequence dependent property. We calculate the hydrophobicity using Biopython. The Kyte-Doolittle [33] scale is used for measuring the degree of hydrophobicity of each amino acid. We select a window size of 3 and calculate the moving averages by sliding the window over the peptides. The hydrophobicity is computed by summing all the moving averages. Note that we use peptides of fixed length; thus, the hydrophobicity does not need to be normalized with respect to the peptide lengths. The binding score is calculated according to [34]. It is a weighted sum of the Van Der Waals force energies, solvation energy, residue-residue pair potentials, hydrogen bond energies, electrostatics energy and internal energy of sidechain rotamers. The unit for the energy is \(\frac{kcal}{mol}\). The distance threshold to define interacting atoms is \(10\AA\). The computation of binding scores requires 3D structural information of the peptide-protein complexes. Such information is obtained through mutational substitution of an initial peptide-protein complex structure, for which a detailed description is given in Section 4.1. We have tested 5 different methods for binding energy calculation: MM/GBSA, Rosetta FlexPepDock, flex ddG, flex ddG(gam) and Pyrosetta. In Figure 2, we plot the correlations between the experimental binding data and those estimated from the five methods for 10 assayed peptides. The results show that Pyrosetta has the best linear correlation in the lower energy regions. MM/GBSA and Rosetta FlexPepDock score peptides over the whole region. Thus, we select Pyrosetta to compute the binding energies of the 50,000 peptides. Although Pyrosetta is a lightweight software package, the generation of the labelled dataset still takes around 12 hours to finish.
To properly train the surrogate model, the log transformation is performed to normalize the binding energy to reduce the outliers' effect. The detail of the dataset generation process is shown in Section 4.2.
### Sequences reconstruction
The mapping between the peptides represented by the amino acids and their properties is not smooth because the amino acid space is discrete. This often leads to the ineffectiveness of an optimizer to find better peptides. To alleviate the problem, a sequences reconstruction model is created to represent peptide sequences in continuous number and to decode the number back to the original amino acid space. To achieve the goal, we use a gated recurrent unit (GRU) generative auto encoder framework for sequence reconstruction. The sequence reconstruction process during the training phase and the inference phase are different. We call the first direct reconstruction and the second sampling reconstruction. We show the difference of the two in Figure 3(b). When the decoding is performed, the known encoding and the amino acid at the previous position needs to be used to output the current amino acid. For the direct reconstruction, the previous amino acid is always correctly inputted because it is known during the training; however, for the sampling reconstruction, it has
to use the predicted previous amino acid as an input to infer the current amino acid. This is because the true previous amino acid is not accessible during the inference phase. Sampling reconstruction is important for our model. A deficient reconstruction model with low sampling reconstruction accuracy could result in an encoding shift during our later optimization stage. In Figure 3(c), we show an optimization trajectory of an encoding sampler using such a deficient reconstruction model. The optimization trajectory is the trajectory of the encoding during the search for a good encoding, which we will illustrate in Section 2.4. Due to the discrepancy between the encoder space and the decoder space, the optimizer, which needs feedback from decoded sequences, does not search in or near the encoder's output space represented by the dataset.
In this study, we have compared a commonly used variational AE (VAE) and a Wasserstein AE (WAE) for reconstruction. The VAE uses the KL divergence to regularize the encoding distribution towards the target distribution, while the WAE uses the Wasserstein distance to achieve the same goal. The details of the two models are described in Section 4.3. Figure 3(a) shows the test mean square error loss for the direct reconstruction during training. We find that both models have a low test direct reconstruction loss. In the same figure, we show the sampling reconstruction error, measured as the number of mismatches between the input and the reconstructed output. The sampling reconstruction error of the VAE model stays high, while the WAE manages to reduce the error close to 0 after 150 epochs, even though both models have a low direct reconstruction loss. In fact, the VAE model tends to repeat the previous input as the output during the inference stage. An example is shown in Figure 3(b). Based on the above experiments, we choose a GRU-based WAE model to encode and decode the peptide sequences, as it gives good results for both direct reconstruction and sampling reconstruction.
### Surrogate model prediction
Accurate evaluation of peptide-protein binding free energies is computationally demanding due in a large part to the considerable conformational, translational, and rotational changes underlying the binding process that are difficult to sample. Hence, to be incorporated into an iterative optimization process a
Figure 4: (a). The \(r^{2}\) plot of the hydrophobicity. The \(r^{2}\) value is almost 1. (b). The binding score prediction plot. The \(r^{2}\) value is calculated between the prediction mean in a small interval and the mean of the true values whose associated predictions fall into the interval. The uncertainty represents the range of the ground truth at each predicted value. It is apparent that the smaller the prediction value is, the smaller the uncertainty is.
fast but accurate free energy evaluation method is required. To this end, we train a surrogate model to predict the beta-catenin binding scores and the hydrophobicity of peptides on our labelled dataset.
The prediction network is similar to the reconstruction network except that a convolution neural network (CNN) is added at the end to predict the binding energy and the hydrophobicity. We have tested several different machine learning models, among which the CNN model achieves the lowest test mean square error in predicting the hydrophobicity and the binding energy. All of the test results are shown in Support Information.
The predicted hydrophobicity shows excellent correlation with the experimental data (Figure 4(a)). This suggests that our surrogate model can accurately capture the sequence level property of peptides.
The binding score turns out to be more difficult to predict, as the surrogate model relies on peptide sequences exclusively. It can be considered as a deep learning-based quantitative structure-activity relationship (DL-QSAR) model. To balance speed and accuracy, our goal is not to obtain predictions highly correlated with the true activity values, but to be able to rank the order of peptide binding so that our focus can be put on those peptides with a low binding score. What we need is that when Pyrosetta predicts a low binding score for a peptide, the surrogate model also predicts a low binding score for that peptide. The surrogate model shows a reasonably good correlation between the predicted and Pyrosetta-calculated values, with an \(r^{2}\) of 0.841. Importantly, the uncertainty decreases as the binding score decreases, as shown in Figure 4(b). This indicates that a peptide is more likely to have a truly low score if it is predicted to have a low score by the surrogate model, which is well suited for our optimization need.
### CMA-ES optimization results
CMA-ES is designed to optimize a Gaussian distribution's mean and variance in a gradient-free manner. We parameterize the region of good encodings by a Gaussian distribution, which we call an encoding sampler. The space of the properties w.r.t the encoding is highly non-convex; thus, CMA-ES is a good optimizer for this sampler model. Specifically, we design the sampler as an isotropic Gaussian distribution \(N(\mu,\sigma)\) that can sample good peptides in the latent space, where the dimension of the mean \(\mu\) is the same as that of the encodings. \(\mu\) and \(\sigma\) need to be optimized so that the sampled sequences have the desired properties of both binding energy and hydrophobicity. We implement CMA-ES for this purpose. The loss function of the CMA-ES comprises three components: the binding energy, the hydrophobicity and a penalty for invalid peptides. The penalty is applied to help the sampler quickly escape from an invalid peptide rich region, which we define as a region where more than 80% of the sampled peptides are invalid. The details of the loss design are presented in Section 4.4. Figure 5(a) shows how the losses change during the optimization. The high losses enclosed by a black box indicate that the sampler enters an invalid peptide rich region. When moving into a valid
peptide rich region enclosed by a green box, the sampler has a strong tendency to generate peptides of low binding energy and low hydrophobicity as shown by the pink arrow. The right plot in Figure 5(d) shows a trajectory of \(\mu\) during the optimization along with the encodings of the labelled dataset. \(\mu\) appears to move around the periphery of the labelled dataset. Such behavior balances the novelty and the similarity to the labelled dataset. In the left plot of Figure 5(d), the learned \(\sigma\) converges to small values as the optimization process goes. This behavior of the sampler's exploration process can, thus, be interpreted as the following. When the sampler reaches an invalid peptide rich region, it quickly escapes from the region. However, when the sampler enters a valid peptide rich region, it starts performing local optimization on the binding score and the hydrophobicity via searching in a small region around \(\mu\). Such two processes alternate throughout the optimization. This is illustrated in 5(c). Concentrating on a small region in the latent space is reasonable as it can be seen from the left plot in Figure 6(a) that the total score w.r.t the encoding is highly non-convex. If the search region is large, it is easy to sample low quality peptides.
Figure 5(b) shows the trajectories of the binding score and the hydrophobicity during optimization. The minimum hydrophobicity value is around -15 on the KD scale and the minimum binding score is around -50 \(\frac{kcal}{mol}\), all calculated from generated unseen peptides. Note that in the labelled dataset, the binding energy ranges from -59 \(\frac{kcal}{mol}\) to 4782 \(\frac{kcal}{mol}\) with a standard deviation of 220 \(\frac{kcal}{mol}\), and the hydrophobicity ranges from -19 to 17 with a standard deviation of 6 on the KD scale. Assuming that the surrogate model can approximate the two properties well, both minimum values are close to the lowest values in the labelled dataset. Thus, it is important to collect the sampler information at these local minima. We further explain the sampler collection in Section 2.5.
During the optimization, we find that the optimizer has a preference in tuning \(\mu\) to a certain direction. The left plot in Figure 5(e) shows that the \(\mu\)'s trajectory prefers high negative values on the first and second principal components. The right plot in Figure 5(e) shows that the top 5 encoding dimensions whose values change the most during the optimization. It is obvious that these values change in a preferential direction during the optimization. The interpretability of high dimensional encodings for any reconstruction models is a long-standing research problem, which causes trouble in tuning the encodings to generate objects with specific properties. The CMA-ES optimizer proposed here provides a way to auto-tune the encodings to accomplish property specific generation tasks.
CMA-ES, as a gradient free method, is ideal for peptide generation. The reason is three fold. First, the encodings are regularized to be a Gaussian distribution in the sequence reconstruction model. This aligns with the CMA-ES assumption where the sampler is also a Gaussian. Second, CMA-ES is a gradient-free method. It is not guaranteed that any generated encoding can be decoded into a valid sequence. Thus, an important objective is to make sure that the generated encodings are valid (e.g. the decoded sequence has a
length of 5). There is no gradient information about this objective w.r.t the encodings. Third, the loss w.r.t the encoding is likely to be highly non-convex. Many local minima could exist and we would like to collect the information in these local minima. It is easy for a gradient based method to be stuck in a local minimum while a gradient free method can climb over the barriers between these minima.
### Sequences sampling results
As shown in Figure 5(d), \(\sigma\) tends to be small during the optimization. In our experiment, we find that sampling valid encodings at a single (\(\mu,\sigma\)) becomes harder as the sampling iteration increases. The small \(\sigma\) ensures the quality of the sampled peptides, but suffers from the depletion of valid encodings. We iteratively collect the valid encodings until reaching a desired number. At a single location, the required number of sampling iterations is found to increase rapidly as the requested number of peptides increases. This is evident from Figure 6(d) for the best selection curve, which corresponds to sampling at a single location. Thus, the peptide depletion problem could be alleviated by collecting peptides from multiple regions, whereas selecting the appropriate regions could be tricky. For example, the sampler could linger around the same region for several iterations and produce similar property losses. In this case, some of the collected peptides with the lowest losses are likely from the same region,
Figure 5: (a). The CMA-ES training loss. There is a high loss region caused by the invalid peptide penalty and a low loss region representing a rich region of valid peptides. In the low loss region, the primary goal is to reduce the property losses. (b). The averaged binding scores during the optimization. (left). The averaged hydrophobicity during the optimization(right). (c). An illustration of the sampler’s behavior during the optimization (d). The magnitude of the covariance during the optimization. The magnitude is calculated as the sum of the absolute value of each element (left). A 2d-tuse plot to show the trajectory of the sampler mean. The sampler is searching around the periphery of the labelled dataset (right). (e). A 2d-pca plot to show the preferential direction of the encoding change. It is obvious that most of the search happens in the bottom left corner on the edge of the labelled dataset (left). The value change for 5 of the encoding dimensions.They are tuned either to positive or negative direction, indicating the tuner has a tuning preference (right).
which would still suffer from the encodings depletion problem. This situation is shown as the red dots in Figure 6(e). To solve this problem, we collect the sampler trajectories (\(\mu\),\(\sigma\)) whose losses are in top 500, which are referred to as the candidate trajectories. These regions are shown in orange in Figure 6(e) and their corresponding losses are shown in orange in Figure 5(a). To obtain distant samplers, we implement K mean algorithms to cluster the 500 samplers and select the representative samplers from individual clusters. The selected samplers are shown as green dots in Figure 6(e), which are evidently better separated than the red dots.
Our proposed LSATC is a collection of high quality samplers each corresponding to a distant small region in a local minimum. To see if such a strategy is effective, we compare it with algorithms that directly model known high quality encodings. Since hydrophobic peptides tend to bind proteins non-specifically, we define an overall score to unify the hydrophobicity and the binding score. We rank the binding energy and the hydrophobicity of a peptide separately and then calculate the overall score as the summation of the two. The lower the score, the better the quality. To select peptides for training, we divide peptides in the labelled dataset into 5 classes. The label 0 peptides have an overall score below 10,000. These peptides have the highest quality and account for 456 out of the 50,000 peptides. We implemented two models on this dataset. The first one is a Gaussian mixture model (GMM), which is used to model the encoding distributions of the label 0 peptides. The granularity of the distribution approximated by GMM depends on the cluster number. The more clusters, the finer the distribution is. However, the model is more likely to suffer from the depletion of valid encodings as the cluster number becomes larger. This is because each of the distribution modes can be very concentrated as there are only 456 encodings in the dataset. We show in Support Information that 200 clusters give the best result. The second model is a conditional WAE model, which manually separates the encodings of each class by concatenating the label into the encoding. It models the distribution of each of the labelled encodings separately. The high quality peptides can be directly sampled by concatenating the label 0 and the normally sampled encodings as the input of the decoder. The algorithm detail is given in the Support Information. During the sampling phase, the model only samples the encodings from the desired class region. As shown in Figure 6(a), the sampled encodings from LSATC are concentrated in several preferred regions, but those from the GMM are spread out in low score regions and those from the conditional WAE are sampled in its own cluster space.
We compare the sampling quality of these models where the random sampling (our labelled dataset) is used as a baseline. For the overall score, the binding score and the hydrophobicity of each peptide are ranked against those of the labelled dataset. The two ranks are summed to give the overall score. The lower this score, the better the model is. The overall scores of random peptides can be considered as background ranks. For each machine learning model, 100 sequences are sampled. In the rightmost figure of Figure 6(b), the
overall score distributions of the peptides sampled from all the machine learning models show an obvious shift to the lower score side when compared with that of the random model. The median and the IQR of LSATC are 7975 and 12436, respectively, which are 6.3 times and 2.3 times smaller than those of the random model. The median overall score of LSATC is 2.8 times smaller than GMM and 4.2 times smaller than CWAE. Its IQR is 1.3 times smaller than GMM and 2.1 times smaller than CWAE. As shown in Figure 6(b), LSATC has the lowest median and the narrowest IQR, indicating that it is the most efficient sampler among these models. When compared with the random model, LSATC samples peptides with a 36% lower binding score in a 16 times smaller IQR and 284% lower hydrophobicity with a 1.4 times smaller IQR. Additionally, GMM sampled peptides also have a low median and a small IQR of binding score and hydrophobicity, indicating that high quality peptides also exist in the neighborhood of the known high quality peptides in the latent space. The performance of the CWAE model is the worst among the three models, albeit better than the random model. The median hydrophobicity of CWAE is lower than that of the random model but the IQR is almost the same as the random model, indicating that sampling in the label 0 region shown in Figure 6(a) cannot guarantee low hydrophobicity. This observation is reasonable because the high quality peptides (label 0) are very sparse. Modelling the distribution of the high quality regions is very difficult because such a distribution function should have many local modes while these modes are sparse compared to the whole region. Comparison of the performance of LSATC, the GMM and the conditional WAE suggests that the most efficient way to sample high quality peptides is to sample in each of the high quality yet small regions.
In Figure 6(c), we show the sampled peptides from the LSATC model. Extending amino acid residues will in general improve binding due to increased nonspecific interactions, but our model seems to generate peptide sequences to maximize specific binding. As shown in the logo plot, I, R, E, Y, K are the most frequently generated amino acids at each position and the majority of them is charged/polar residues capable of forming specific polar/ionic interactions. The results indicate that our model can learn the binding environment and then generate corresponding residues. They also corroborate that penalizing hydrophobicity in the loss function benefits the model to generate fewer nonspecific interactions. A full sample table is shown in Support Information.
### Experimental results
From Figure 6(b), it is evident that our LSATC-generated peptide inhibitors show only small differences in binding strength and hydrophobicity. Thus, beyond selecting the peptides with the best scores, we also cluster the peptides in terms of diversity. The strategy first selects the 40 extensions with the best overall scores out of the 100 LSATC sampled extensions. Second, the strategy picks 4 representative extensions from 4 different clusters among these elite extensions using GibbsCluster [35]. The detailed procedure is shown in Section 4.6. Figure 7 shows the log-odds (LO) matrix for each of the four clusters, represented by
Seq2Log [36] plots. The 20 natural amino acids are sized by their log-odds score at the corresponding position in the logo plot. This score can be considered as the probability of the appearance of the amino acid at this position in this cluster. Clusters 1 to 4 contain 16, 2, 11, and 11 sequences, respectively, and the corresponding cluster representatives are IREYK and IREFK, IRCCK, ICEYK, and EREYK. It is worth noting that the first row in each logo plot represents the theoretical amino acids with the highest likelihood of appearance within the cluster, and these are not always present in our sampling pool. However, all of them can be found in our LSATC-generated peptide inhibitor list except the first one. The complete 40 peptides are attached in the Support Information. Overall, we integrate the GibbsCluster peptide clustering method to implement a more systematic way to select testing peptides for the binding assay.
We compare the experimental results with those of the parent peptide. Note that our real base peptide in the experiment is "GGYPEDILDKHLQRVIL". However, it is good practice to remove the "GG" in the computational design, because glycine usually enforces a loop secondary structure, which brings flexibility and uncertainty into our binding energy evaluations. In Table 1, the base peptide is the GG-truncated version of our original base peptide. It is clear that all the LSATC-generated peptides largely improve the binding affinity, and the best peptide "ICEYKYYPEDILDKHLQRVIL" shows an improvement of over 3 fold.
Figure 6: (a). 2d-tsne plot to show how the samples from different generative models positioned in the training high quality peptides. The models are LSATC(left), GMM(mid) and CWAE(right). (b). The comparison of the sampled peptide qualities from random selection, LSATC, CWAE and GMM. Note that it is necessary to have an overall score to describe a peptide as the binding plot and the hydrophobicity plot losses the peptide specificity. (c). A logo plot to show the most sampled amino acids at each extension position. (d). The depletion of the valid encoding. Sampling batch is fixed for each iteration. It is obvious that the required iteration rapidly increases as the number of the request peptides increases except K-mean selection. (e). The optimization trajectory for the sampler and the final selected samplers using different selection strategy.
## 3 Discussion
In this study, we propose LSATC, an automated protein-specific peptide design method based on collecting the search trajectory of a sampler in the peptides' latent space. This method is accessible to a wide spectrum of users because all that is needed is an initial pose of the peptide-protein complex.
In LSATC, CMA-ES is used as an automatic encoding tuner to guide a Gaussian sampler that iteratively explores valid peptide regions with low hydrophobicity and low binding energy in the vast latent space. CMA-ES, as a gradient-free method, is ideal for this peptide generation task for three reasons. First, the encodings are regularized to follow a Gaussian distribution
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Source** & **Peptide sequence** & _In vitro_ IC50 (mM) \\ \hline
**LSATC-1** & IREYKYPEDILDKHLQRVIL & \(51.8\pm 8.2\) \\
**LSATC-2** & IRCCKYPEDILDKHLQRVIL & \(120.3\pm 2.6\) \\
**LSATC-3** & ICEYKYPEDILDKHLQRVIL & \(47.8\pm 1.9\) \\
**LSATC-4** & EREYKYPEDILDKHLQRVIL & \(68.4\pm 10.2\) \\
**base peptide** & YPEDILDKHLQRVIL & \(150\pm 20\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Experimental results of the tested peptides.
Figure 7: The logo plots of the LO matrices for the four clusters. The x-axis is the amino acid position, of which there are five in our case. The y-axis is the information gain of an amino acid at that position relative to its background frequency. Panels (a), (b), (c) and (d) correspond to the logo plots of the LO matrices for clusters 1 to 4, respectively. In a logo plot, the top amino acid in each column represents the most likely amino acid at that position in the cluster.
in the sequence reconstruction model, which aligns with the CMA-ES assumption that the sampler is also Gaussian. Second, CMA-ES is gradient-free: it is not guaranteed that every generated encoding can be decoded into a valid sequence, so an important objective is to ensure that the generated encodings are valid (e.g., that the decoded sequence has a length of 5), and no gradient information is available for this objective with respect to the encodings. Third, the loss with respect to the encoding is likely to be highly non-convex. Many local minima may exist, and we would like to collect the information contained in them. A gradient-based method can easily become stuck in a single local minimum, whereas a gradient-free method can climb over the barriers between minima. To reduce the computational cost, a surrogate model was trained to evaluate the binding and hydrophobic properties, making the iterative search computationally feasible. We have shown that the LSATC sampler can learn a series of individual Gaussian distributions, each approximating a small region around one of the local minima in the peptide latent space, which effectively avoids the encoding depletion problem. Using a peptide extension task, the _in silico_ evaluation showed that LSATC generates \(\beta\)-catenin-specific peptides far more efficiently than random selection and other machine learning models that globally fit the encoding distribution of high-quality peptides.
Moreover, we propose a strategy for selecting a reasonable number of representative peptides from the LSATC-proposed peptides for the final _in vitro_ test. In this test, the selected peptides show improved binding affinity over the base peptide and also outperform all peptide extensions obtained from a library screening method, demonstrating the practical usefulness of our LSATC sampler.
As mentioned in Section 2.1, the LSATC method requires around 50,000 simulated binding data points for the target protein. The quality of the sampled peptides therefore depends on the quality of the mutation model used to estimate the 3D structure of the peptide-protein complex. For short peptides, the mutation model is accurate, and it is possible to perform full de novo design, that is, generating complete peptides rather than peptide extensions. For long peptides, however, the accuracy of the mutation model deteriorates, and using LSATC for de novo design can yield poor peptides in practice. Deep learning for macromolecular drug discovery is a very active field today; some works perform binary classification of whether a peptide ligand and a protein receptor bind by feeding both sequences into a deep learning model. It is possible that in the near future a deep learning model will be able to accurately predict a continuous binding score between a peptide and a protein. The mutation model would then no longer be necessary, and LSATC could perform de novo design of high-quality peptides of arbitrary length.
## 4 Method
### Peptide binding energy calculation
We use the crystal structure of \(\beta\)-catenin bound to a stapled peptide inhibitor (PDB: 4DJS) as the starting structure for the peptide binding energy calculations. The original peptide sequence YPEDILDKHLQRVIL was extended by 5 alanines that were enforced to adopt an alpha-helical structure. The extended peptides were then superimposed onto the stapled peptide in the co-crystal structure. The polyalanine extensions were randomly mutated to any of the 20 natural amino acids, and the resulting structures were minimized with Modeller [37].
To find a robust scoring function for our optimizer, we tested five binding energy calculation tools, namely molecular mechanics generalized Born surface area (MM/GBSA), Rosetta FlexPepDock, flex ddG, flex ddG GAM and PyRosetta, to identify the one whose predicted binding energies best correlate with in vitro IC50 values [6; 37; 38; 39; 40] (Figure 5). MM/GBSA is a popular method for binding free energy estimation, in which a peptide-protein complex is subjected to a 10 ns molecular dynamics simulation and GMX_MMPBSA is then used to approximate the binding energy. Rosetta FlexPepDock is another popular method for estimating peptide-protein interaction energies: structures of the peptide-protein complex are first sampled with a Monte Carlo method, the interface energy is computed with the Rosetta scoring function for every sampled conformation, and the mean value over 50 conformations is reported. PyRosetta is a Python interface to Rosetta; a peptide-protein complex structure is first minimized and then subjected to interface energy calculation. In addition to these three well-known binding energy calculation methods, we also tested the flex ddG protocol, which is designed to predict the binding free energy change upon mutation (interface ddG) of the peptide at the peptide-protein interface; its predictions can be further improved to correlate with experimentally determined interface ddG values using the generalized additive model (GAM) approach. In this protocol, the peptide-protein complex structure is first subjected to conformational sampling using backrub, followed by torsion minimization and side-chain repacking, and the mean interface ddG of the resulting conformation ensemble is reported as the binding energy. Our results show that PyRosetta gives the best correlation between computed binding energies and experimental IC50 values, and PyRosetta was therefore chosen for the subsequent evaluation of the ML-generated peptide sequences.
### Dataset preparation
Two datasets were prepared: an unlabelled dataset for training the sequence reconstruction model, and a labelled dataset for training the surrogate model. The unlabelled dataset contains 500,000 randomly sampled peptides of length 5; note that there are \(20^{5}=3{,}200{,}000\) possible combinations in total. We use 80% of these peptides to train our VAE and 20% for testing. The labelled dataset contains 50,000 randomly sampled peptide extensions. These extensions are concatenated with the base peptide, and the hydrophobicity and binding energies of the extended peptides are evaluated using Biopython and PyRosetta. Evaluating their binding energies takes around 12 hours with multi-processing on a 16-core CPU machine.
The binding score distribution is highly skewed, with many large values caused by steric clashes between the peptide and the protein. We first remove outliers by keeping only values within the 0.1 to 0.9 quantile range, and then use \(\log(x+100)\) to map the binding scores into a small range. Note that without this preprocessing step, the surrogate model cannot make reasonable predictions in any range. The hydrophobicity values follow a normal distribution, so we simply standardize them to a unit Gaussian. The distributions before and after preprocessing are shown in SI Figure xx.
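For illustration, this preprocessing can be sketched in a few lines of NumPy. The quantile bounds, the \(+100\) offset, and the standardization follow the description above; dropping the paired hydrophobicity values of the removed outlier samples is an assumption, since the text only discusses filtering the binding scores.

```python
import numpy as np

def preprocess_targets(binding, hydro):
    # Keep only samples whose binding score lies within the 0.1-0.9 quantile range
    # (the extreme values are mostly steric clashes); the paired hydrophobicity
    # values of removed samples are dropped as well (an assumption).
    lo, hi = np.quantile(binding, [0.1, 0.9])
    mask = (binding >= lo) & (binding <= hi)
    binding, hydro = binding[mask], hydro[mask]
    # Compress the skewed binding distribution with the shifted log transform log(x + 100).
    binding = np.log(binding + 100.0)
    # Hydrophobicity is approximately normal, so standardize it to a unit Gaussian.
    hydro = (hydro - hydro.mean()) / hydro.std()
    return binding, hydro
```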
### Sequence reconstruction model and surrogate model
The sequence reconstruction model consists of an embedding network, a gated recurrent unit (GRU) and a multi-layer perceptron (MLP). The embedding network converts the discrete representation of the individual amino acids into a continuous representation and concatenates them to represent the peptide sequence. This ordered representation is processed by a GRU network and an MLP network to output the mean and standard deviation of the peptide's encoding. The mean of the encoding is then fed into another GRU network followed by an MLP network to reconstruct the peptide.
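A minimal PyTorch sketch of this architecture is shown below. The class name, layer sizes and latent dimension are illustrative assumptions, not the exact values used in our implementation; only the overall embedding-GRU-MLP structure follows the description above.

```python
import torch
import torch.nn as nn

class PeptideWAE(nn.Module):
    """Sketch of the sequence reconstruction model: embedding -> GRU -> MLP encoder,
    then GRU -> MLP decoder (hypothetical layer sizes)."""

    def __init__(self, n_tokens=23, emb_dim=32, hidden=128, latent=16):
        super().__init__()
        self.embed = nn.Embedding(n_tokens, emb_dim)
        self.enc_gru = nn.GRU(emb_dim, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logstd = nn.Linear(hidden, latent)
        self.latent_to_h = nn.Linear(latent, hidden)
        self.dec_gru = nn.GRU(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_tokens)

    def encode(self, tokens):                      # tokens: (B, L) integer amino-acid ids
        h, _ = self.enc_gru(self.embed(tokens))    # (B, L, hidden)
        h_last = h[:, -1]                          # summary state of the sequence
        return self.to_mu(h_last), self.to_logstd(h_last)

    def decode(self, z, tokens):                   # teacher-forced reconstruction
        h0 = self.latent_to_h(z).unsqueeze(0)      # (1, B, hidden) initial decoder state
        h, _ = self.dec_gru(self.embed(tokens), h0)
        return self.out(h)                         # (B, L, n_tokens) logits

    def forward(self, tokens):
        mu, logstd = self.encode(tokens)
        logits = self.decode(mu, tokens)           # decode from the mean encoding, as in the text
        return logits, mu, logstd
```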
Eq. 1 shows the loss of the VAE. The first term is a reconstruction loss: it tries to match the output \(x_{o}\) decoded from the encoding \(z\) with the input \(x_{i}\) that produced \(z\). The second term is an encoding regularizer that uses the KL divergence to force the encoding to be normally distributed. In the WAE, the KL divergence with respect to \(p(z)\) is replaced by the Maximum Mean Discrepancy (MMD) measure, as shown in Eq. 2.
\[\text{vae loss}=\underbrace{-E_{q_{D}(z|x_{i})}\log p(x_{o}|z)}_{\text{reconstruction loss}}+\underbrace{D_{KL}(q_{D}(z|x_{i})\|p(z))}_{\text{encoding regularizer}} \tag{1}\]

\[\text{wae loss}=\underbrace{-E_{q_{D}(z|x_{i})}\log p(x_{o}|z)}_{\text{reconstruction loss}}+\underbrace{\text{MMD}(q_{D}(z|x_{i}),p(z))}_{\text{encoding regularizer}} \tag{2}\]
The definition of MMD is shown in Eq.3. It is a discrepancy measure of two distributions \(q_{D}\) and \(p\) after the values are transformed into a Hilbert space using some function \(\phi\).
\[\begin{split}\text{MMD}^{2}(q_{D},p)&=\|E_{z_{1}\sim q_{D}}[\phi(z_{1})]-E_{z_{2}\sim p}[\phi(z_{2})]\|_{H}^{2}\\ &=E_{z_{1}\sim q_{D}}E_{z^{\prime}_{1}\sim q_{D}}\langle\phi(z_{1}),\phi(z^{\prime}_{1})\rangle-2E_{z_{1}\sim q_{D}}E_{z_{2}\sim p}\langle\phi(z_{1}),\phi(z_{2})\rangle\\ &\quad+E_{z_{2}\sim p}E_{z^{\prime}_{2}\sim p}\langle\phi(z_{2}),\phi(z^{\prime}_{2})\rangle\end{split} \tag{3}\]
In Eq. 3, the terms in the final form of \(\text{MMD}^{2}\) are fully defined by a valid inner product \(\langle\cdot,\cdot\rangle\), known as a kernel function, so it is unnecessary to define the transformation \(\phi\) explicitly once the kernel is known. In this study, we compare the learned encodings with normally sampled \(z\)s through a Gaussian kernel to compute the MMD loss between the two distributions. In addition, we add a weak KL penalty directly on the learned mean and variance of the encoding with respect to a unit Gaussian. For the reconstruction loss, we use the cross-entropy (CE) loss, since the input and the output are discrete. The final loss is shown in Eq. 4.
\[loss=\underbrace{CE(x_{i},x_{o})}_{\text{reconstruction loss}}+\underbrace{ MMD(q_{D}(z|x),N(0,1))+10^{-3}*D_{KL}(q_{D}(z|x)\|N(\mu_{z},1))}_{\text{encoding regularizer}} \tag{4}\]
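A minimal PyTorch sketch of Eq. 4, assuming encoder outputs (mu, logstd) and decoder logits as in the sketch above, is given below. The Gaussian-kernel bandwidth, the use of a biased MMD estimator, and the exact form of the weak KL penalty (here, a standard KL to \(N(0, I)\)) are assumptions, since Eq. 4 leaves these details open.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(a, b, sigma=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 sigma^2)), evaluated pairwise over two batches.
    d2 = torch.cdist(a, b).pow(2)
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd2(z_q, z_p, sigma=1.0):
    # Biased (V-statistic) estimator of Eq. 3 with a Gaussian kernel.
    k_qq = gaussian_kernel(z_q, z_q, sigma).mean()
    k_pp = gaussian_kernel(z_p, z_p, sigma).mean()
    k_qp = gaussian_kernel(z_q, z_p, sigma).mean()
    return k_qq - 2 * k_qp + k_pp

def wae_loss(logits, tokens, mu, logstd, kl_weight=1e-3):
    # Reconstruction term: token-level cross-entropy.
    ce = F.cross_entropy(logits.reshape(-1, logits.size(-1)), tokens.reshape(-1))
    # MMD between the learned encodings and samples from the standard normal prior.
    mmd = mmd2(mu, torch.randn_like(mu))
    # Weak KL penalty on the learned mean and variance (weight 1e-3 as in Eq. 4).
    kl = 0.5 * (mu.pow(2) + torch.exp(2 * logstd) - 2 * logstd - 1).sum(dim=1).mean()
    return ce + mmd + kl_weight * kl
```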
The surrogate model structure is similar to that of the sequence reconstruction model. Instead of performing a reconstruction task, the output of the decoding GRU network is mapped to a \(7\times 100\) image for hydrophobicity and binding energy prediction using a CNN model. The final loss is the sum of the mean squared error (MSE) losses of the two properties, as shown in Eq. 5, where the asterisk denotes the ground truth. In practice, we tested several surrogate model structures, and the CNN model was chosen because of its best performance. The training results of all tested structures are shown in the Supporting Information.
\[\text{surrogate loss}=MSE(BE^{*},BE)+MSE(Hydro^{*},Hydro) \tag{5}\]
### CMA-ES optimization design
CMA-ES is a gradient-free optimization method that assumes a multivariate Gaussian sampler. At each step, the algorithm collects a pool of samples, and the sampler mean is updated according to Eq. 6, where \(\mu^{i}\) is the sampler mean at the \(i^{th}\) iteration, \(b\) is the learning rate, \(x_{n}^{i}\) is the \(n^{th}\) sampled point at the \(i^{th}\) iteration, \(w_{n}\) is the weight of \(x_{n}\) with \(\sum w_{n}=1\), and \(k\) is the number of top selections measured by a fitness function. The mean of the sampler is thus updated based on the \(k\) selected points and the previous mean. \(b\) is usually set to 1, in which case the mean is updated according to the weighted combination of the \(k\) selected points.
\[\mu^{i+1}=\mu^{i}+b\sum_{n=1}^{k}w_{n}(x_{n}^{i+1}-\mu^{i}) \tag{6}\]
The covariance matrix update relies on the previous covariance for a more accurate estimation and has two components, shown below. The first combines the current covariance estimate with the weighted previous covariance; the second refines this estimate.
\[S_{1}^{i+1}=(1-b_{2})S^{i}+b_{2}\underbrace{(\sum_{n=1}^{k}w_{n}(\frac{x_{n}^{i+1 }-\mu^{i}}{\sigma^{i}})(\frac{x_{n}^{i+1}-\mu^{i}}{\sigma^{i}})^{T})}_{\text{ current covariance estimation}} \tag{7}\]
As can be seen in the first term, \(\frac{x_{n}^{i+1}-\mu^{i}}{\sigma^{i}}(\frac{x_{n}^{i+1}-\mu^{i}}{\sigma^{i}})^{T}\) loses the sign information during the update. To reintroduce this information, the covariance is also updated through an evolution path constructed from the means of the previous steps rather than from a single sample. This path is shown in Eq. 8.
\[y^{i+1}=(1-b_{3})y^{i}+\sqrt{b_{3}(2-b_{3})w_{eff}}\frac{\mu^{i+1}-\mu^{i}}{ \sigma^{i}} \tag{8}\]
\[S_{2}^{i+1}=(1-b_{4})S^{i}+b_{4}\underbrace{y^{i+1}(y^{i+1})^{T}}_{\text{ estimation from}\text{ evolution path}} \tag{9}\]
Eq.9 shows the covariance updating rule from the evolutionary path. Such an update has been shown to better couple the two optimization steps[41]. The two covariance updates are summed together to form the final update rule shown in Eq.10.
\[S^{i+1}=S_{1}^{i+1}+S_{2}^{i+1} \tag{10}\]
Although the above equations contain many hyperparameters, most of them have well-established default values [41]. The only parameter to tune in CMA-ES is the initial sampling size; the initial mean and covariance do not strongly affect the result [42]. Despite its simplicity, CMA-ES shows good performance on non-smooth, non-continuous and even noisy objective functions, and is a reliable method for local optimization [43].
In this work, the fitness function of CMA-ES is the sum of the hydrophobicity, the binding energy and a penalty for sampled invalid peptides. During the optimization, N peptides are sampled from the current sampler. However, because of the size of the encoding space, the decoded peptide sequences are not guaranteed to contain only valid tokens: sometimes the peptide length is not 5, and occasionally a padding token appears before the ending token, which violates the token rules. Whenever this happens, the generated encoding is considered invalid. Since CMA-ES is a population-based optimization method, we want the majority of generated samples at each optimization step to be valid. We therefore designed an inner generation loop within each step that repeatedly samples 1000 peptides; the loop exits only when 80% of the samples are valid or after 20 iterations. Eq. 11 shows the fitness function \(f(z_{i})\) for each of the sampled encodings \(z_{i}\).
\[f(z_{i})=w(z)(NN_{b}(z_{i})+0.5*NN_{h}(z_{i}))+w(z)I(z_{i}) \tag{11}\]
where,
\(NN_{b}\) and \(NN_{h}\) are the surrogate model outputs for the binding energy and the hydrophobicity, respectively,
\(w(z)\) equals 1 if less than 80% of the sampled encodings are valid and 0 otherwise, and
\(I(z_{i})=0.1\times\)(number of invalid encodings).
In Eq. 11, the weight of the hydrophobicity is set to 0.5 to prioritize minimization of the binding energy. In our implementation, if less than 20% of the samples are invalid, the invalid encodings are removed from the population so that the optimizer focuses on valid encodings. A detailed diagram of the optimization design is provided in SI section xx.
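The outer optimization loop can be sketched with the ask/tell interface of the pycma package, as shown below. The `decode` and `surrogate` callables are hypothetical placeholders for the trained reconstruction and surrogate models, the latent dimension and step size are illustrative, and the inner loop that re-samples until 80% of a batch is valid is omitted; a large finite penalty stands in for discarding invalid encodings.

```python
import numpy as np
import cma  # pycma package

def fitness(solutions, decode, surrogate):
    # Fitness values in the spirit of Eq. 11 for a list of sampled latent vectors.
    values = []
    for z in solutions:
        seq, valid = decode(z)               # hypothetical decoder: sequence + validity flag
        if not valid:
            values.append(1e6)               # large finite penalty for invalid encodings
            continue
        be, hydro = surrogate(z)             # surrogate predictions NN_b(z), NN_h(z)
        values.append(be + 0.5 * hydro)      # hydrophobicity down-weighted by 0.5
    return values

def run_lsatc_search(decode, surrogate, latent_dim=16, sigma0=0.5, n_iters=200):
    es = cma.CMAEvolutionStrategy(np.zeros(latent_dim), sigma0)
    trajectory = []                          # sampler (mean, covariance) collected at every step
    for _ in range(n_iters):
        solutions = es.ask()                 # sample a population from the current Gaussian
        es.tell(solutions, fitness(solutions, decode, surrogate))
        trajectory.append((es.mean.copy(), es.C.copy()))
    return trajectory
```

The collected trajectory of sampler means and covariances is exactly the set from which diverse samplers are later selected.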
### Sampler selection
Collecting the optimization trajectory is necessary because of the highly non-convex optimization in the encoding space and the depletion of encodings at any single location. To avoid collecting similar samplers, we sort them by the CMA-ES loss function and keep the top 500 samplers. We then cluster these 500 samplers with a K-means algorithm using 10 clusters and select the 10 samplers closest to the cluster centers. K-means is a clustering algorithm that minimizes the distance between samples within each cluster; its objective function is shown in Eq. 12, where we use the Euclidean distance as the distance measure. In the SI, we show the sampler clusters in a 2D t-SNE plot, where it is clear that the selected samplers are distant from one another.
\[loss_{k\text{-means}}=\sum_{i=1}^{N}\sum_{k=1}^{K}I(c_{i}=k)\,f(z_{i},\mu_{k}) \tag{12}\]
where,
N and K are the total number of samplers and the number of clusters (K = 10), respectively,
\(I(\cdot)\) is an indicator function, and
\(z_{i}\) is a sampler mean, \(c_{i}\) is the cluster assigned to \(z_{i}\), and \(\mu_{k}\) is the mean of the \(k^{th}\) cluster.
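For illustration, the selection of diverse samplers from the collected trajectory can be sketched as follows. Using scikit-learn's KMeans, clustering on the sampler means only, and the specific random seed are assumptions; the ranking, the 10 clusters, and choosing the sampler closest to each cluster center follow the description above.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_diverse_samplers(sampler_means, losses, n_keep=500, n_clusters=10):
    # Rank samplers by their CMA-ES loss and keep the best n_keep of them.
    order = np.argsort(losses)[:n_keep]
    elite = sampler_means[order]
    # Cluster the elite sampler means into n_clusters groups (Euclidean distance).
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(elite)
    selected = []
    for k in range(n_clusters):
        members = np.where(km.labels_ == k)[0]
        # Choose the sampler closest to its cluster center as the representative.
        d = np.linalg.norm(elite[members] - km.cluster_centers_[k], axis=1)
        selected.append(order[members[np.argmin(d)]])
    return selected  # indices into the original trajectory
```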
### Peptide selection for the experiments
We first sample 100 peptides using LSATC and rank them by their overall scores to yield the top 40 peptides. We then cluster these peptide extensions with GibbsCluster [44], an unsupervised peptide pattern discovery algorithm that simultaneously samples, clusters and aligns peptide data. The algorithm learns an \(m\times n\) log-odds matrix in which the element at the \(i^{th}\) row and \(j^{th}\) column represents the information gain of the \(i^{th}\) amino acid at the \(j^{th}\) position relative to the background frequency of that amino acid; \(m=20\) is the number of natural amino acids and \(n=5\) because we extend the peptide by 5 residues. This information gain is proportional to the probability of the amino acid occurring at that position in the cluster.
A peptide's representative score in a cluster is calculated by summing the information gains of its amino acids in the cluster's LO matrix. Since the information gain is essentially a log-probability score, this sum is analogous to an unnormalized log-probability of the peptide occurring in that cluster.
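A minimal sketch of this scoring is given below; the amino-acid row ordering of the LO matrix is an assumption and should match whatever ordering GibbsCluster reports.

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"  # 20 natural amino acids (row order is an assumption)

def representative_score(extension, lo_matrix):
    """Sum of per-position log-odds scores of a 5-residue extension in a cluster's
    20 x 5 LO matrix, i.e. an unnormalized log-probability of the peptide in that cluster."""
    return sum(lo_matrix[AA.index(a), j] for j, a in enumerate(extension))

# e.g. representative_score("IREYK", lo_cluster1) ranks IREYK within cluster 1
```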
### Peptide synthesis
Peptides were manually synthesized by SPPS on Rink amide resin using Fmoc chemistry. The in vitro IC50 of the predicted peptides against \(\beta\)-catenin was measured with a fluorescence polarization (FP)-based competition assay. The FAM-labeled probe peptide (10 nM) was incubated with 50 nM GST-\(\beta\)-catenin in 20 mM Tris, 300 mM NaCl, pH 8.8, 0.01% Triton X-100 for 1 h, as reported previously. Serial dilutions of a competitor peptide were prepared in 20 mM Tris, 300 mM NaCl, pH 8.8, and 0.01% Triton X-100. After 1 h, aliquots of the equilibrated probe peptide\(-\beta\)-catenin solution were added to the serially diluted peptide solutions and incubated for 1 h at room temperature. Samples were transferred into black-on-black 384-well non-binding microplates (Greiner), and FP was measured using a Tecan M1000 Infinite plate reader. The data were analyzed using GraphPad Prism v. 8.0 and normalized to the FP values corresponding to the fully bound/unbound probe.
|
2308.14705 | Diversified Ensemble of Independent Sub-Networks for Robust
Self-Supervised Representation Learning | Ensembling a neural network is a widely recognized approach to enhance model
performance, estimate uncertainty, and improve robustness in deep supervised
learning. However, deep ensembles often come with high computational costs and
memory demands. In addition, the efficiency of a deep ensemble is related to
diversity among the ensemble members which is challenging for large,
over-parameterized deep neural networks. Moreover, ensemble learning has not
yet seen such widespread adoption, and it remains a challenging endeavor for
self-supervised or unsupervised representation learning. Motivated by these
challenges, we present a novel self-supervised training regime that leverages
an ensemble of independent sub-networks, complemented by a new loss function
designed to encourage diversity. Our method efficiently builds a sub-model
ensemble with high diversity, leading to well-calibrated estimates of model
uncertainty, all achieved with minimal computational overhead compared to
traditional deep self-supervised ensembles. To evaluate the effectiveness of
our approach, we conducted extensive experiments across various tasks,
including in-distribution generalization, out-of-distribution detection,
dataset corruption, and semi-supervised settings. The results demonstrate that
our method significantly improves prediction reliability. Our approach not only
achieves excellent accuracy but also enhances calibration, surpassing baseline
performance across a wide range of self-supervised architectures in computer
vision, natural language processing, and genomics data. | Amirhossein Vahidi, Lisa Wimmer, Hüseyin Anil Gündüz, Bernd Bischl, Eyke Hüllermeier, Mina Rezaei | 2023-08-28T16:58:44Z | http://arxiv.org/abs/2308.14705v2 | # Diversified Ensemble of Independent Sub-Networks for
###### Abstract
Ensembling a neural network is a widely recognized approach to enhance model performance, estimate uncertainty, and improve robustness in deep supervised learning. However, deep ensembles often come with high computational costs and memory demands. In addition, the efficiency of a deep ensemble is related to diversity among the ensemble members which is challenging for large, over-parameterized deep neural networks. Moreover, ensemble learning has not yet seen such widespread adoption, and it remains a challenging endeavor for self-supervised or unsupervised representation learning. Motivated by these challenges, we present a novel self-supervised training regime that leverages an ensemble of independent sub-networks, complemented by a new loss function designed to encourage diversity. Our method efficiently builds a sub-model ensemble with high diversity, leading to well-calibrated estimates of model uncertainty, all achieved with minimal computational overhead compared to traditional deep self-supervised ensembles. To evaluate the effectiveness of our approach, we conducted extensive experiments across various tasks, including in-distribution generalization, out-of-distribution detection, dataset corruption, and semi-supervised settings. The results demonstrate that our method significantly improves prediction reliability. Our approach not only achieves excellent accuracy but also enhances calibration, surpassing baseline performance across a wide range of self-supervised architectures in computer vision, natural language processing, and genomics data.
## Introduction
Ensemble learning has become a potent strategy for enhancing model performance in deep learning [14, 13, 15]. This method involves combining the outputs of multiple independently-trained neural networks, all using the same architecture and same training dataset but differing in the randomness of their initialization and/or training. Despite its remarkable effectiveness, training deep ensemble models poses several challenges: i) The high performance achieved by deep ensembles comes with a significant increase in computational costs. Running multiple neural networks independently demands more resources and time. ii) Maintaining diversity among ensemble members - a property often critical to success - becomes progressively difficult for large, over-parameterized deep neural networks [16, 17] in which the main source of diversity comes from random weight initialization. iii) Most of the existing literature focuses on deep ensembles for supervised models. Adapting these approaches to unsupervised and self-supervised models requires careful consideration and evaluation to ensure comparable performance.
In recent years, self-supervised learning methods have achieved cutting-edge performance across a wide range of tasks in natural language processing (NLP) [18, 19], computer vision [20, 15, 16, 17, 18, 19, 21, 22], multimodal learning [14, 23, 24], and bioinformatics [15]. In contrast to supervised techniques, these models learn representations of the data without relying on costly human annotation. Despite remarkable progress in recent years, self-supervised models do not allow practitioners to inspect the model's confidence. This problem is non-trivial given the degree to which critical applications rely on self-supervised methods. As recently discussed by LeCun1, representing predictive uncertainty is particularly difficult in self-supervised contrastive learning for computer vision. Therefore, quantifying the predictive uncertainty of self-supervised models is critical for more reliable downstream tasks. Here, we follow the definition of reliability described by Plex [16], which assesses the ability of a model to work consistently across many tasks. In particular, Tran et al. (2022) introduce three general desiderata for reliable machine learning systems: a model should generalize robustly to _new tasks_ as well as _new datasets_, and represent the associated _uncertainty_ in a faithful manner.
Footnote 1: [https://ai.facebook.com/blog/self-supervised-learning-the-dark-matter-of-intelligence/](https://ai.facebook.com/blog/self-supervised-learning-the-dark-matter-of-intelligence/)
In this paper, we introduce a novel, robust, and scalable framework for ensembling _self-supervised learning_ while _preserving performance_ with a negligible increase in computational cost and _encouraging diversity among the ensemble of sub-networks_.
Our contributions can be summarized as follows:
* We propose a novel, scalable ensemble approach for self-supervised learning that is robust and efficient and enhances model performance in various downstream tasks.
* We develop a complementary loss function to enforce diversity among the independent sub-networks.
* We perform extensive empirical analyses to highlight the benefits of our approach. We demonstrate that this inexpensive modification achieves very competitive (in most cases, better) predictive performance: 1) on in-distribution (IND) and out-of-distribution (OOD) tasks; 2) in semi-supervised settings; 3) learns a better predictive performance-uncertainty trade-off than compared baselines (i.e., exhibits high predictive performance and low uncertainty on IND datasets as well as high predictive performance and high uncertainty on OOD datasets).
## Related Work
**Self-supervised learning** For most large-scale modeling problems, learning under full supervision is severely inhibited by the scarcity of annotated samples. Self-supervised learning techniques, which solve _pretext tasks_ [16] to generate labels from (typically abundant) unlabeled data, have proven to be a powerful remedy to this bottleneck. The learned feature maps can serve as a starting point for _downstream_ supervised tasks, such as classification, object detection, or sentiment analysis, with a substantially reduced need for labeled examples [10]. Alternatively, the downstream application may directly use the extracted representation for problems such as anomaly and OOD detection. While there have been attempts to make pretraining more robust by preventing embedding collapse [14, 15] or boosting performance in OOD detection [17, 18, 19], the aspect of _uncertainty-awareness_ has been studied to a lesser extent in the self-supervised context. Motivated by this, we present a simple way to make self-supervised learning robust during pretext-task learning.
**Ensemble learning** Deep Ensembles [10] comprise a set of \(M\) neural networks that are trained independently on the same data with different random initializations. Deep ensembles often outperform other approaches in terms of calibration and predictive accuracy [13, 14, 15, 16], but their naive application incurs high computational complexity, as training, memory, and inference costs multiply with the number of base learners. BatchEnsemble [20] introduces multiple low-rank matrices with little training and storage demand, whose Hadamard products with a shared global weight matrix mimic an ensemble of models. Masksembles [13] builds upon Monte Carlo dropout [12] and proposes a learnable (rather than random) selection of masks used to drop specific network neurons. MIMO [10] uses ensembles of sub-networks diverging only at the beginning and end of the parent architecture, thus sharing the vast majority of weights, in order to obtain multiple predictions with a single forward pass. At test time, several copies of each sample are fed to the enlarged input layer, and the multi-head last layer returns a corresponding number of predictions. Although these methods reduce inference time and the computational resources required for training, their benefits are limited for the larger pretraining models used in self-supervised learning.
**Diversity in ensembles**: Diversity is a crucial component for successful ensembles. Rame and Cord (2021) classify existing approaches for encouraging diversity among ensemble members into three groups: i) methods that force _diversity in gradients_ with adaptive diversity in prediction [12], or using joint gradient phase and magnitude regularization (GPMR) between ensemble members [1], ii) methods focusing on _diversity in logits_, improving diversity with regularization and estimating the uncertainty of out-of-domain samples [10], or by bounding the Lipschitz constant of networks and limiting the variety of predictions against slight input changes [11, 12], iii) methods promoting _diversity in features_ that increase diversity with adversarial loss [13] for conditional redundancy [15], information bottleneck [15, 16], or \(f1\)-divergences [13]. Our method belongs to this last category, where our loss function encourages the diversity of feature maps.
## Method
We propose a simple principle to 1) make self-supervised pretraining robust with an ensemble of diverse sub-networks, 2) improve predictive performance during self-supervised pretraining, and 3) keep the training pipeline efficient.
As depicted in Figure 1, our proposed method can be readily applied to the most recent trends in self-supervised learning [13, 14, 15, 16, 17] and is based on a joint embedding architecture. In the following sections, we first describe our proposed ensemble model, followed by the diversity loss, and then a discussion on diversity, and computational cost.
### Robust Self-Supervised Learning via Independent Sub-Networks
**Setting.** Given a randomly sampled mini-batch of data \(\mathbf{X}=\{\mathbf{x}_{k}\}_{k=1}^{N}\subset\mathcal{X}\subseteq\mathbb{R}^{p}\), two augmented views \(\tilde{\mathbf{x}}=\tau(\mathbf{x}),\tilde{\mathbf{x}}^{\prime}=\tau^{\prime}(\mathbf{x})\) are derived for each sample in \(\mathbf{X}\). The augmented views are obtained by sampling \(\tau,\tau^{\prime}\) from a distribution over suitable data augmentations, such as masking parts of sequences [1, 13], partially masking image patches [14], or applying image augmentation techniques [15].
The two augmented views \(\tilde{\mathbf{x}}\) and \(\tilde{\mathbf{x}}^{\prime}\) are then fed to an encoder network \(f_{\mathbf{\theta}}\) with trainable parameters \(\mathbf{\theta}\subseteq\mathbb{R}^{d}\). The
encoder (e.g., ResNet-50 (He et al., 2016), ViT (Dosovitskiy et al., 2021)) maps the distorted samples to a set of corresponding features. We call the output of the encoder the _representation_. Afterward, the representation features are transformed by \(M\) independent sub-networks \(\{g_{\mathbf{\phi}_{m}}\}_{m=1}^{M}\) with trainable parameters \(\mathbf{\phi}_{m}\) to improve the feature learning of the encoder network. From the representation, the ensemble constructs \(M\) different \(q\)-dimensional _embedding_ vectors \(\{\mathbf{z}_{m}\}_{m=1}^{M}\) and \(\{\mathbf{z}_{m}^{\prime}\}_{m=1}^{M}\) for \(\tilde{\mathbf{x}}\) and \(\tilde{\mathbf{x}}^{\prime}\), respectively. We modify the conventional self-supervised loss and replace the usual \(\mathbf{z}_{m}\) by the mean value \(\bar{\mathbf{z}}=(\mathbf{z}_{1}+\ldots+\mathbf{z}_{M})/M\), and similarly \(\mathbf{z}_{m}^{\prime}\) by \(\bar{\mathbf{z}}^{\prime}\). Averaging over the embeddings generated by the \(M\) sub-networks is likely to increase robustness, which in turn may help to improve predictive performance in downstream tasks.
**Self-supervised loss.** In the case of contrastive learning (Chen et al., 2020), the self-supervised loss \(\ell_{\text{ssl}}\) with temperature \(t>0\) and cosine similarity \(\mathrm{sim}(\cdot,\cdot)\) is computed as:
\[\ell_{\text{ssl}}\left(\tilde{\mathbf{x}}_{k},\tilde{\mathbf{x}}^{\prime}_{k}\right)=-\log\frac{\exp(\mathrm{sim}(\bar{\mathbf{z}}_{k},\bar{\mathbf{z}}^{\prime}_{k})/t)}{\sum_{i=1}^{2N}\mathbb{I}_{[k\neq i]}\exp(\mathrm{sim}(\bar{\mathbf{z}}_{k},\bar{\mathbf{z}}_{i})/t)}. \tag{1}\]
**Diversity loss.** Since diversity is a key component of successful model ensembles (Fort, Hu, and Lakshminarayanan, 2019), we design a new loss function for encouraging diversity during the training of the sub-networks. We define the diversity regularization term \(\ell_{\text{div}}\) as a hinge loss over the difference of the standard deviation across the embedding vectors \(\{\mathbf{z}_{k,m}\}_{m=1}^{M}\), \(\{\mathbf{z}_{k,m}^{\prime}\}_{m=1}^{M}\) to a minimum diversity of \(\alpha>0\). The standard deviation is the square root of the element-wise variance \(\{\sigma_{k,o}^{2}\}_{o=1}^{q}\):
\[\sigma_{k,o}^{2}=\tfrac{1}{M-1}\sum_{m=1}^{M}(z_{k,m,o}-\bar{z}_{k,o})^{2}+ \epsilon\,,\]
where we add a small scalar \(\epsilon>0\) to prevent numerical instabilities. The diversity regularization function is then given by:
\[\ell_{\text{div}}\left(\tilde{\mathbf{x}}_{k},\tilde{\mathbf{x}}^{\prime}_{k}\right)=\sum_{o=1}^{q}\max\left(0,\alpha-\sigma_{k,o}\right)+\max\left(0,\alpha-\sigma^{\prime}_{k,o}\right)\,, \tag{2}\]
where \(\sigma\) and \(\sigma^{\prime}\) indicate standard deviation for the input sample and augmented views, respectively.
**Total loss.** The objective of the diversity loss is to encourage disagreement among sub-networks by enforcing the element-wise standard deviations to be close to \(\alpha>0\) and to thus prevent the embeddings from collapsing to the same vector. Figure 2 underlines the importance of the diversity loss on the total sum of standard deviations between different sub-networks, which increases by adding this loss. The total loss is calculated by combining the self-supervised loss (Eq. 1) and the diversity loss (Eq. 2), where the degree of regularization is controlled by a tunable hyperparameter \(\lambda\geq 0\):
\[\ell\left(\tilde{\mathbf{x}}_{k},\tilde{\mathbf{x}}^{\prime}_{k}\right)=\ell_{\text{ ssl}}\left(\tilde{\mathbf{x}}_{k},\tilde{\mathbf{x}}^{\prime}_{k}\right)+\lambda\cdot \ell_{\text{div}}\left(\tilde{\mathbf{x}}_{k},\tilde{\mathbf{x}}^{\prime}_{k}\right). \tag{3}\]
Figure 1: Illustration of our proposed method. Given a batch \(\mathbf{X}\) of input samples, two different views \(\tilde{\mathbf{x}}\) and \(\tilde{\mathbf{x}}^{\prime}\) are produced for each sample, which are then encoded into representations by the encoder network \(f_{\mathbf{\theta}}\). The representations are passed to the ensemble of independent sub-networks \(g_{m}\), where each sub-network produces embedding vectors \(\mathbf{z}\) and \(\mathbf{z}^{\prime}\). The mean value of these embeddings is passed to the self-supervised loss, while their standard deviation is used for the diversity loss. Finally, the total loss is computed as a combination of the two loss components.
Figure 2: **Total Standard Deviation**: sum of all standard deviations between independent sub-networks during training. Training with diversity loss (Eq. 2) increases the standard deviation and improves the diversity between independent sub-networks.
Finally, the total loss is aggregated over all the pairs in mini-batch \(\mathbf{X}\):
\[\mathcal{L}_{\text{total}}=\tfrac{1}{N}{\sum_{k=1}^{N}\ell\,(\bar{\mathbf{x}}_{k}, \bar{\mathbf{x}}_{k}^{\prime})}. \tag{4}\]
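For concreteness, a minimal PyTorch sketch of Eqs. 2-4 is shown below. It assumes the stacked sub-network embeddings are available as an \((M, N, q)\) tensor and that `ssl_loss_fn` returns a per-sample self-supervised loss; the default values of \(\alpha\), \(\lambda\), and \(\epsilon\) are illustrative.

```python
import torch

def diversity_loss(z_stack, alpha=0.15, eps=1e-4):
    # z_stack: (M, N, q) embeddings from the M sub-networks for one view.
    # Hinge on the element-wise standard deviation across sub-networks (Eq. 2, one view).
    std = torch.sqrt(z_stack.var(dim=0, unbiased=True) + eps)   # (N, q)
    return torch.relu(alpha - std).sum(dim=1)                   # (N,)

def total_loss(z_stack, z_stack_prime, ssl_loss_fn, lam=2.0, alpha=0.15):
    # The mean embedding over sub-networks is fed to the usual self-supervised loss (Eq. 1).
    z_bar, z_bar_prime = z_stack.mean(dim=0), z_stack_prime.mean(dim=0)
    l_ssl = ssl_loss_fn(z_bar, z_bar_prime)                     # (N,) per-sample SSL loss
    # Diversity hinge for both views (Eq. 2), weighted by lambda (Eq. 3).
    l_div = diversity_loss(z_stack, alpha) + diversity_loss(z_stack_prime, alpha)
    return (l_ssl + lam * l_div).mean()                         # mini-batch aggregation (Eq. 4)
```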
**Gradients.** Consider the output of the encoder \(f_{\mathbf{\theta}}(\mathbf{x})=b\) and the output of the \(m\)-th linear sub-network \(\mathbf{z}_{m}=g_{m}(b)=w_{m}\cdot b\). The weight \(w_{m}\) is updated by two components during backpropagation, the first of which depends on the self-supervised loss and is the same for the entire ensemble, while the second term depends on the diversity loss and is different for each sub-network. Given Eq. 2, we simplify the equation by vector-wise multiplication since the sub-networks are linear; furthermore, we omit the numerical stability term since it does not have an effect on the derivative. The element-wise standard deviation can be computed as follows:
\[\sigma_{k,o}=\left(\tfrac{1}{M-1}\sum_{m=1}^{M}(\mathbf{z}_{k,m,o}-\bar{\mathbf{z}}_{k,o})^{2}\right)^{\tfrac{1}{2}}. \tag{5}\]
Consider Eq. 2 for aggregating the element-wise standard deviations for one observation (\(\mathbf{x}\)) and assume \(\sigma_{k}<\alpha\); otherwise, the diversity loss is zero when \(\alpha\leq\sigma_{k}\). The derivative of the loss with respect to \(\mathbf{z}_{k,\hat{m},o}\), \(\hat{m}\in{1,\ldots,M}\), is then given as follows:
\[\frac{\partial\left(\ell_{\text{div}}\right)}{\partial\mathbf{z}_{k,\hat{m},o}}= \frac{-A}{M-1}\cdot(\mathbf{z}_{k,\hat{m},o}-\bar{\mathbf{z}}_{k,o}), \tag{6}\]
where \(A:=\frac{1}{M-1}\sum_{m=1}^{M}(\mathbf{z}_{k,m,o}-\bar{\mathbf{z}}_{k,o})^{2}\). The proof is provided in the appendix (see Theoretical Supplement).
In the optimization step of stochastic gradient descent (SGD), the weight of sub-network \(\hat{m}\) is updated by:
\[\eta\cdot\nabla_{w_{\hat{m},o}}\ell_{\text{div}}=-C\cdot(\mathbf{z}_{k,\hat{m},o} -\bar{\mathbf{z}}_{k,o}), \tag{7}\]
where \(\eta>0\) is the learning rate, and \(C\) is constant with respect to \(w_{\hat{m},o}\), which depends on the learning rate, number of sub-networks, \(A\), and \(b\). The proof is provided in Appendix (see Theoretical Supplement).
Eq. 7 shows the updating step in backpropagation. The hyperparameter \(\alpha\) prevents \(\mathbf{z}_{k,\hat{m},o}\) from collapsing to a single point. Hence, \(w_{\hat{m},o}\) is updated in the opposite direction of \(\bar{\mathbf{z}}_{k,o}\), so the diversity loss prevents the weights of the sub-networks from converging to the same values.
### Empirical Analysis of Diversity
Diversity of ensemble members is an important feature for powerful model ensembles and reflects the degree of independence among its members (Zhang and Ma, 2012; Ortega, Cabanas, and Masegosa, 2022). We follow Fort, Hu, and Lakshminarayanan (2019) to quantify the diversities among the ensemble of sub-networks. Specifically, we report the diversities in terms of _disagreement score_ between the members' predictive distributions and a baseline. Diversity disagreement is defined as _distance disagreement_ divided by \(1-\)_accuracy_, where the distance disagreement between two classification models \(h_{i}\) and \(h_{j}\) is calculated as \(\frac{1}{N}\sum_{k=1}^{N}\left[h_{i}(\mathbf{x}_{k})\neq h_{j}(\mathbf{x}_{k} )\right],\) with \(N\) denoting the number of samples. Figure 3 compares the diversity disagreement between our method with \(10\)-sub-networks, a deep ensemble with \(10\) members, and the single-network baseline. The results clearly indicate that our proposed method achieves comparable results with deep self-supervised ensembles in terms of both accuracy and diversity disagreement.
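A minimal sketch of this diversity measure is given below; which model's accuracy is used for the normalization is not fully specified above, so normalizing by the first model's error rate is an assumption.

```python
import numpy as np

def diversity_disagreement(preds_i, preds_j, labels):
    # Fraction of test points on which the two models' class predictions differ,
    # normalized by the error rate (1 - accuracy) of the first model.
    disagreement = np.mean(preds_i != preds_j)
    accuracy = np.mean(preds_i == labels)
    return disagreement / (1.0 - accuracy)
```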
### Computational Cost and Efficiency Analysis
We analyze the efficiency of our proposed method in Table 1. SSL-Ensemble increases the memory and computational requirements relative to the baseline by 200% and 900% for 3 and 10 members, respectively. For our method, the increase in the number of parameters is 32% and 143%, while the increase in computational requirements is only \(\sim 0-6\%\). A more detailed description of the relative cost, and of why the memory and computational requirements of our method increase by different amounts, is provided in the Appendix (see Computation Cost Analysis).
## Experimental Setup
We perform several experiments with a variety of self-supervised methods to examine our hypothesis for robustness during both pretext-task learning and downstream tasks (fine-tuning).
**Deep self-supervised network architecture** Our proposed approach builds on two recent popular self-supervised models in computer vision: i) **SimCLR**(Chen et al., 2020) is a contrastive learning framework that learns representations by maximizing agreement on two different augmentations of the same image, employing a contrastive loss in the latent embedding space of a convolutional network architecture (e.g., ResNet-50 (He et al., 2016)), and ii) **DINO**(Caron
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Method & Members & Parameters (M) & Memory / GPU & Time / 800-ep. \\ \hline Baseline (SSL) & 1 & 28 & 9 G & 3.6 (h) \\ SSL-Ensemble & 3 & 3\(\times\)28 & 3\(\times\)9 G & 3\(\times\)3.6 (h) \\ SSL-Ensemble & 10 & 10\(\times\)28 & 10\(\times\)9 G & 10\(\times\)3.6 (h) \\ Our method & 3 & 37 & 9.2 G & 3.6 (h) \\ Our method & 10 & 68.1 & 10 G & 3.8 (h) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Computational cost in 4 DGX-A100 40G GPUs (PyTorch) on CIFAR 10.
Figure 3: **Diversity Analysis: prediction diversity disagreement vs. achieved accuracy on test set of CIFAR-10. Diversity analysis encompasses the comparative assessment of two distinct models that have been trained on test datasets, with a focus on quantifying the dissimilarity in their respective predictions. This evaluation entails computing the fraction of test data points for which predictions of models disagree, the diversity, and normalizing it by the model’s error rate. Our method with \(10\) subnetworks is on par with the deep self-supervised ensemble with \(10\) members in terms of both accuracy and diversity disagreement. Models in the top right corner are better.**
et al. 2021) is a self-distillation framework in which a student vision transformer (ViT; Dosovitskiy et al. 2021a) learns to predict global features from local image patches, supervised by the cross-entropy loss from a momentum teacher ViT's embeddings. Furthermore, we study the impact of our approach in NLP and modify **SCD** (Klein and Nabi 2022), which applies the bidirectional training of transformers to language modeling; here, the objective is a self-supervised contrastive divergence loss. Lastly, we examine our approach on **Self-GenomeNet** (Gunduz et al. 2021), a contrastive self-supervised learning algorithm for learning representations of genome sequences. More detailed descriptions of the employed configurations are provided in the Appendix (see Implementation Details).
**Deep independent sub-networks** We implement \(M\) independent sub-networks on top of the encoder, for which many possible architectures are conceivable. For our experiments on computer vision datasets, we consider an ensemble of sub-networks in which each network is a multi-layer perceptron (MLP) with two layers of 2048 and 128 neurons, respectively, with ReLU as the non-linearity followed by batch normalization (Ioffe 2017). Each sub-network has its own independent set of weights and learning parameters. For the NLP dataset, the projector MLP contains three layers of 4096 neurons each, also using ReLU activations as well as batch normalization. For the genomics dataset, our ensemble of sub-networks consists of one fully connected layer with an embedding size of 256.
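A minimal PyTorch sketch of the sub-network ensemble used for the vision experiments is shown below; the layer widths follow the description above, while the exact placement of batch normalization relative to the ReLU and the final linear layer is an assumption.

```python
import torch
import torch.nn as nn

class SubNetworkEnsemble(nn.Module):
    """Ensemble of M independent two-layer MLP projection heads (2048 -> 128);
    returns the stacked embeddings of all sub-networks."""

    def __init__(self, in_dim=2048, hidden=2048, out_dim=128, num_heads=5):
        super().__init__()
        self.heads = nn.ModuleList([
            nn.Sequential(
                nn.Linear(in_dim, hidden),
                nn.ReLU(inplace=True),
                nn.BatchNorm1d(hidden),   # BN placement is an assumption
                nn.Linear(hidden, out_dim),
            )
            for _ in range(num_heads)
        ])

    def forward(self, representation):        # (N, in_dim) encoder output
        # Each head has its own independent parameters; stack embeddings as (M, N, out_dim).
        return torch.stack([g(representation) for g in self.heads], dim=0)
```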
**Optimization** For all experiments on image datasets based on DINO and SimCLR, we follow the hyperparameters and configurations suggested in the respective papers (Caron et al. 2021; Chen et al. 2020b). Implementation details for pretraining with DINO on the 1000-class ImageNet dataset without labels are as follows: the coefficients \(\epsilon\), \(\alpha\), and \(\lambda\) are set to \(0.0001\), \(0.15\), and \(2\), respectively (see Eqs. 2 and 3). We provide more details on the number of sub-networks and the loss coefficients \(\lambda\) and \(\alpha\) in the ablation studies (see the Ablation Study section). The encoder network \(f_{\mathbf{\theta}}\) is either a ResNet-50 (He et al. 2016) with 2048 output units when the baseline is SimCLR (Chen et al. 2020b) or a ViT-S (Dosovitskiy et al. 2021b) with 384 output units when the baseline is DINO (Caron et al. 2021). The best prediction and calibration performance is achieved with 5 sub-networks. We followed the training protocol and settings suggested by Caron et al. (2021).
**Datasets** We use the following datasets in our experiments. **CIFAR-10/100** (Krizhevsky 2009) are subsets of the tiny images dataset; both include 50,000 training images and 10,000 validation images of size \(32\times 32\), with 10 and 100 classes, respectively. **SVHN** (Netzer et al. 2011) is a digit classification benchmark that contains 600,000 \(32\times 32\) RGB images of printed digits (from 0 to 9) cropped from pictures of house number plates. **ImageNet** (Deng et al. 2009) contains 1,000 classes, with 1.28 million training images and 50,000 validation images. For the NLP task, we train on a dataset of 1 million randomly sampled sentences from **Wikipedia articles** (Huggingface 2021) and evaluate our models on 7 different semantic textual similarity datasets from the SentEval benchmark suite (Conneau and Kiela 2018): **MR** (movie reviews), **CR** (product reviews), **SUBJ** (subjectivity status), **MPQA** (opinion polarity), **SST-2** (sentiment analysis), **TREC** (question-type classification), and **MRPC** (paraphrase detection). The **T6SS** effector protein dataset is a public real-world bacteria dataset (SecReT6; Li et al. 2015) with genuine label scarcity. The sequence length of the genome samples is 1000 nt in all experiments.
**Tasks** We examine and benchmark a model's performance on different tasks considering evaluation protocols by self-supervised learning (Chen et al. 2020b) and Plex's benchmarking tasks (Tran et al. 2022). Specifically, we evaluate our model on the basis of **uncertainty-aware IND generalization**, **OOD detection**, **semi-supervised learning**, **corrupted dataset evaluation** (see Section ), and **transfer learning to other datasets and tasks** (see Appendix: Transfer to Other Tasks and Datasets )
**Evaluation metrics** We report prediction/calibration performance with the following metrics, where upward arrows indicate that higher values are desirable, and vice versa. **Top-1 accuracy** \(\uparrow\): share of test observations for which the correct class is predicted. **AUROC** \(\uparrow\): area under the ROC curve arising from different combinations of false-positive and false-negative rates (here, with positive and negative classes referring to being in and out of distribution, respectively) for a gradually increasing classification threshold. **Negative log-likelihood (NLL)** \(\downarrow\): negative log-likelihood of the test observations under the estimated parameters. **Expected calibration error (ECE)** (Naeini, Cooper, and Hauskrecht 2015) \(\downarrow\): mean absolute difference between accuracy and confidence (highest posterior probability among predicted classes) across equally-spaced confidence bins, weighted by the relative number of samples per bin. **Thresholded adaptive calibration error (TACE)** (Nixon et al. 2019) \(\downarrow\): modified ECE with bins of equal sample size rather than equal interval width, omitting predictions with posterior probabilities below a certain threshold (here, 0.01) that often dominate the calibration in tasks with many classes.
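For reference, a minimal NumPy sketch of the ECE computation with equal-width confidence bins is shown below; the number of bins is an assumption, as it is not specified above.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """ECE with equal-width confidence bins: the per-bin |accuracy - confidence| gap,
    weighted by the fraction of samples in each bin.
    probs: (N, C) predicted class probabilities; labels: (N,) integer targets."""
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(correct[in_bin].mean() - conf[in_bin].mean())
    return ece
```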
**Compared methods** We compare our method to the following contenders. **Baseline:** self-supervised architectures (i.e., SimCLR, DINO, SCD, or Self-GenomeNet, depending on the task). **SSL-Ensemble:** deep ensemble comprising a multiple of the aforementioned baseline networks. **Monte Carlo (MC) dropout**: (Gal and Ghahramani 2016) baseline networks with dropout regularization applied during pretraining of baseline encoder. **BatchEnsemble**: baseline encoder with BatchEnsemble applied during pretraining.
## Results and Discussion
**In-distribution generalization** IND generalization (or _prediction calibration_) quantifies how well model confidence aligns with model accuracy. We perform several experiments on small and large image datasets as well as the genomics sequence dataset to evaluate and compare the predictive performance of our proposed model in IND generalization. Here, the base encoder \(f_{\mathbf{\theta}}\) is frozen after unsupervised pretraining, and the model is trained on a supervised linear classifier. The linear classifier is a fully connected
layer followed by softmax, which is placed on top of \(f_{\mathbf{\theta}}\) after removing the ensemble of sub-networks. High predictive scores and low uncertainty scores are desired.
Figure 4 illustrates the predictive probability of correctness for our model on CIFAR-10, CIFAR-100, ImageNet, and T6SS datasets in terms of Top-1 accuracy, ECE, and NLL, respectively. Based on Figure 4, our method achieves better calibration (ECE and NLL) than the deep ensemble of self-supervised models. The discrepancy in performance between our model and the deep ensemble can be explained by various factors, including differences in uncertainty modeling, complexity, and robustness. While the deep ensemble excels in top-1 accuracy, our model's superior ECE and NLL scores indicate better-calibrated and more reliable predictions, which are essential for safety-critical applications and decision-making under uncertainty. More detailed descriptions are provided in Appendix (see Additional Results) (Tables 4, 5, 6, and 7).
**Out-of-distribution detection** OOD detection shows how well a model can recognize test samples from the classes that are unseen during training [12]. We perform several experiments to compare the model generalization from IND to OOD datasets and to predict the uncertainty of the models on OOD datasets. Evaluation is performed directly after unsupervised pretraining without a fine-tuning step. Table 2 shows the AUROC on different OOD sets for our model, baseline, and deep self-supervised ensemble. Our approach improves overall compared to other methods.
**Semi-supervised evaluation** We explore and compare the performance of our proposed method in the low-data regime. Again, the encoder \(f_{\mathbf{\theta}}\) is frozen after self-supervised pretraining, and the model is trained on a supervised linear classifier using 1% and 10% of the dataset. The linear classifier is a fully connected layer followed by softmax. Table 3 shows the result in terms of top-1 accuracy, ECE, and NLL. The results indicate that our method outperforms other methods in the low-data regime - in terms of calibration.
**Corrupted dataset evaluation** Another important component of model robustness is its ability to make accurate predictions when the test data distribution changes. Here, we evaluate model robustness under _covariate shift_. We employ a configuration similar to the one found in [13]. Figure 5 summarizes the improved performance across metrics of interest. The results confirm that our method outperforms the baseline and achieves comparable predictive performance as a deep self-supervised ensemble - both in terms of calibration (TACE) and AUROC.
## Ablation Study
In order to build intuition around the behavior and the observed performance of the proposed method, we further investigate the following aspects of our approach in multiple ablation studies exploring: (1) the number \(M\) of sub-networks, (2) the role of each component of the proposed loss, and (3) analysis of diversity with visualization of the gradients of subnetworks. We also present more results on (4) the impact of our approach during pretraining vs. at the finetuning step, (5) the size of sub-networks, and (6) the impact of model parameters in the Appendix (see Additional Ablation Analysis).
**Number of sub-networks** We train \(M\) individual deep neural networks on top of the representation layer. The networks receive the same inputs but are parameterized with different weights and biases. Here, we provide more details on our IND generalization experiments for varying \(M\). Fig. 7(a) compares the performance in terms of top-1 accuracy, ECE, and NLL for CIFAR-10 and CIFAR-100. Based on the quantitative results in Fig. 7(a), the predictive performance improves on both datasets when increasing the number of sub-networks \(M\), up to a certain point. For example, on CIFAR-10, with \(M=3\) our top-1 accuracy is \(91.9\%\); increasing \(M\) to 10 raises it to \(92.6\%\), while the ECE and NLL decrease from \(0.026\) and \(0.249\) to \(0.023\) and \(0.222\), respectively. These findings underline that a suitable number of sub-network heads can lead to a better representation of the data and better calibration. Recent works [14, 15] provide theoretical statements as well as experimental results showing that projection heads help with faster convergence.
**Analysis of loss** The total loss (Eq. 3) is calculated by the combination of self-supervised loss (Eq. 1) and diversity loss (Eq. 2), where the mean value of the embeddings across the ensemble of sub-networks is fed to the self-supervised loss, and the corresponding standard deviation is used for the diversity loss. First, we note that the use of our diversity regularizer indeed improves calibration and provides better uncertainty prediction. The results in Fig. 4 show the impact of our loss function in relation to the baseline. By comparing the first and fifth rows of Table 4, it can be inferred that our proposed loss function results in a much lower ECE (\(0.016\)) than the network trained by SimCLR (baseline) with \(0.039\) on the CIFAR-10 dataset. Similarly, the first and third rows of Table 6 compare the predictive probability of correctness of DINO (baseline) and our model on ImageNet.
\begin{table}
\begin{tabular}{l l c c c} \hline \hline IND & OOD & Baseline & SSL-Ensemble & Our method \\ \hline \multirow{4}{*}{CIFAR-100} & SVHN & 84.22 & 84.95 & **88.00** \\ & Uniform & 91.65 & 90.53 & **97.57** \\ & Gaussian & 90.00 & 89.42 & **94.10** \\ & CIFAR-10 & 74.71 & 74.80 & **75.18** \\ \hline \multirow{4}{*}{CIFAR-10} & SVHN & 95.03 & 96.68 & **97.07** \\ & Uniform & 96.73 & 91.64 & **99.05** \\ & Gaussian & 96.39 & 93.24 & **99.24** \\ & CIFAR-100 & 91.79 & 91.59 & **91.87** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **OOD detection**. Results reported using AUROC show our method enhances the baseline up to 6%.
\begin{table}
\begin{tabular}{l|c c c c c c c c c c c c} \hline \hline Method & \multicolumn{3}{c}{CIFAR-10 (1\%)} & \multicolumn{3}{c}{CIFAR-10 (10\%)} & \multicolumn{3}{c}{CIFAR-100 (1\%)} & \multicolumn{3}{c}{CIFAR-100 (10\%)} \\ \cline{2-13} & ACC & ECE & NLL & ACC & ECE & NLL & ACC & ECE & NLL & ACC & ECE & NLL \\ \hline Baseline & 89.1 & 0.005 & 0.364 & 91.1 & 0.039 & 0.247 & 86.2 & 0.097 & 2.01 & 89.5 & 0.008 & 1.79 \\ Base-Incomplete & 90.4 & 0.018 & 0.206 & 92.6 & 0.016 & 0.249 & 99.3 & 0.060 & 1.71 & 82.4 & 0.002 & 1.56 \\ \hline Our method & 90.4 & 0.018 & 0.206 & 92.6 & 0.016 & 0.249 & 99.3 & 0.060 & 1.71 & 82.4 & 0.002 & 1.56 \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Semi-supervised evaluation**: Top-1 accuracy (ACC), ECE, and NLL for semi-supervised CIFAR-10/100 classification using 1% and 10% training examples.
Second, we explore different hyperparameter configurations to find the optimal values for \(\alpha\) and \(\lambda\) in Fig. 7(b) and 7(c). Note that, in practice, \(\alpha\) and \(\lambda\) must be optimized jointly. The best top-1 accuracy in our case is achieved when \(\alpha\) and \(\lambda\) are set to 0.08 and 1.5, respectively, on the CIFAR-10 dataset.
**Analysis of diversity** In addition to the quantitative diversity results provided in Figure 3, we visualize the activation map of the last convolutional layer in the encoder for each ensemble member and each sub-network to illustrate the effect of the sub-networks on the encoder. As illustrated in Fig. 6, the different sub-networks show more feature diversity than the deep ensemble members, as expected.
## Conclusion
In this paper, we presented a novel self-supervised framework built on a diversified ensemble of sub-networks. We achieved high predictive performance and good calibration using a simple yet effective idea: an ensemble of independent sub-networks. We introduced a new loss function to encourage diversity among the sub-networks, and our method can be added to many existing self-supervised learning frameworks during pretraining with little effort. Extensive experimental results show that the proposed method outperforms, or is on par with, ensembles of self-supervised baselines across many different experimental settings.
Figure 4: **IND generalization** in terms of (a) **Top-1 Accuracy** (b) **ECE** (c) **NLL** averaged over in-distribution test samples of _CIFAR-10/100_, _ImageNet_, _T6SS_ datasets. Here, we compare our method with the ensemble of deep self-supervised networks (SSL-Ens), as well as the baseline. Detailed descriptions of IND generalization for each dataset and other competitors are presented in the Appendix (see Additional Results, Tables 4, 5, 6, and 7).
Figure 5: Performance under **dataset corruption** (CIFAR-10/100 with five levels of increasing perturbation), evaluation in terms of AUROC and TACE for several types of corruption (vertical spread).
Figure 6: We compare the feature diversity for different sub-networks and ensemble members. The top images are for different sub-networks, and the bottom images are for different ensemble members. We used Grad-CAM (Selvaraju et al., 2017) for visualization.
Figure 7: Ablation study on (a) the number of sub-networks \(M\), and the hyperparameters of our proposed loss: (b) \(\lambda\) and (c) \(\alpha\).
2310.01624 | Experimental techniques for evaluating the performance of high-blockage
cross-flow turbine arrays | In confined flows, such as river or tidal channels, arrays of turbines can
convert both the kinetic and potential energy of the flow into renewable power.
The power conversion and loading characteristics of an array in a confined flow
is a function of the blockage ratio, defined as the ratio of the array's
projected area to the channel cross-sectional area. In this work, we explore
experimental methods for studying the effects of the blockage ratio on turbine
performance while holding other variables constant. Two distinct methods are
considered: one in which the array area is held constant and the channel area
is varied, and another in which the array area is varied and the channel area
is held constant. Using both approaches, the performance of a laboratory
cross-flow turbine array in a water tunnel is evaluated at blockage ratios
ranging from 30% to 60%. As the blockage ratio is increased, the coefficient of
performance increases, eventually exceeding the Betz limit and unity. While
similar trends are observed with both experimental approaches, at high blockage
and high tip-speed ratios, the values of the performance and force coefficients
are found to depend on the experimental approach. The advantages and
disadvantages of each approach are discussed. Ultimately, we recommend
investigating blockage effects using a fixed array area and variable channel
area, as this approach does not convolve blockage effects with interactions
between the turbine blades and support structures. | Aidan Hunt, Brian Polagye | 2023-10-02T20:36:01Z | http://arxiv.org/abs/2310.01624v1 | # Experimental techniques for evaluating the performance of high-blockage cross-flow turbine arrays
###### Abstract
In confined flows, such as river or tidal channels, arrays of turbines can convert both the kinetic and potential energy of the flow into renewable power. The power conversion and loading characteristics of an array in a confined flow is a function of the blockage ratio, defined as the ratio of the array's projected area to the channel cross-sectional area. In this work, we explore experimental methods for studying the effects of the blockage ratio on turbine performance while holding other variables constant. Two distinct methods are considered: one in which the array area is held constant and the channel area is varied, and another in which the array area is varied and the channel area is held constant. Using both approaches, the performance of a laboratory cross-flow turbine array in a water tunnel is evaluated at blockage ratios ranging from 30% to 60%. As the blockage ratio is increased, the coefficient of performance increases, eventually exceeding the Betz limit and unity. While similar trends are observed with both experimental approaches, at high blockage and high tip-speed ratios, the values of the performance and force coefficients are found to depend on the experimental approach. The advantages and disadvantages of each approach are discussed. Ultimately, we recommend investigating blockage effects using a fixed array area and variable channel area, as this approach does not convolve blockage effects with interactions between the turbine blades and support structures.
Cross-flow turbine, blockage, array, experiment
## I Introduction
The efficiency of a turbine operating in a confined flow, such as a river or tidal channel, is influenced by how much of the channel the turbine occupies. The size of the turbine relative to the size of the channel is typically represented by the blockage ratio, defined as the ratio between the turbine's projected area and the channel cross-sectional area:
\[\beta=\frac{A_{turbine}}{A_{\text{channel}}}. \tag{1}\]
As the blockage ratio increases, the turbine presents greater resistance to the incoming flow, and thus experiences greater thrust. For a constant volumetric flow rate, this thrust, combined with confinement from the channel boundaries, yields accelerated flow through the turbine rotor. As a consequence, a turbine operating in a confined flow produces more power relative to the same turbine in an unconfined flow [1, 2, 3].
Lateral arrays or "fences" of turbines deployed in river or tidal channels can harness these blockage effects to enhance power production [3, 4, 5]. However, since blockage-driven increases in power generation are accompanied by increases in the forces on the turbine, understanding how both power and loads scale with the blockage ratio is critical for rotor design [6] and control [7]. Additionally, the blockage ratio in natural channels may vary daily (e.g., with the tides), seasonally (e.g., runoff from snowmelt or storms), or as needed when individual turbines are deactivated for maintenance or to allow the passage of vessels and marine animals. Therefore, understanding how changes in blockage will alter turbine hydrodynamics, and thus power production, is necessary for the management of these systems.
The study of blockage effects is also applicable to turbines that are intended for use in unconfined environments. Many laboratory settings in which model turbines are tested, such as wind tunnels or flumes, are inherently confined flows, and the associated blockage effects will yield augmented performance [8, 9], even at blockages of 10% and below [10, 11]. Further, in experimental design, increasing model scale to achieve Reynolds numbers that are more representative of a full scale turbine generally increases blockage. While analytical corrections have been developed to predict unconfined turbine performance using measurements of confined turbine performance [12, 13, 14, 15], most are simplified models based on linear momentum theory, and their accuracy can vary [16]. Given this, dedicated study of blockage effects, particularly at the upper end of achievable blockage in practical situations, is relevant.
Table I summarizes prior experimental work that has explored how changing the blockage ratio affects the performance of both axial-flow turbines and cross-flow turbines. While we focus this review on experimental studies, we acknowledge that there is a complementary body of numerical work (e.g., [2, 27, 28, 29]). In alignment with theory, an increase in blockage is found to increase the thrust loading on the turbine [9, 11, 16, 26] and the maximum efficiency of the turbine [9, 11, 15, 16, 17, 19, 21, 24, 25, 26], as well as the tip-speed ratio at which this maximum occurs. From a fluid dynamic standpoint, an
increase in blockage is observed to accelerate the flow through and around the rotor, as well as narrow the turbine wake [26, 10, 22]. Although these key trends are common across these studies, multiple approaches for varying blockage have been used across these experiments. Some studies vary \(\beta\) by changing the size of the channel; for example, testing the same turbine in different facilities [24, 25, 16, 26, 15], or altering the water depth in a flume [21, 17]. Others vary \(\beta\) by changing the dimensions of the turbine itself [22, 18, 9, 20].
All of these experimental approaches yield similar trends, yet differences in approach limit a deeper understanding of blockage effects. While differences in rotor geometry [6, 16] and type of test facility [23] can influence observed trends, even blockage effects measured in a single test facility with a single turbine design can be inadvertently convolved with the effects of other variables. For example, McAdam _et al._[17] and Birjandi _et al._[21] both change the blockage ratio by changing water depth in a flume, but in doing so convolve the effects of blockage with those of the Froude number and proximity to the free surface. Overall, the variety of approaches employed by prior work motivate a thorough consideration of experimental methods for studying blockage effects on turbines and the robustness of these techniques.
In this work, we discuss two approaches by which the blockage ratio of a turbine array can be varied in a single test facility while holding constant or minimizing the effects of other dimensionless parameters. Using both approaches, we evaluate the performance of a cross-flow turbine array at blockage ratios between 30% and 60%, and compare the results. These blockages are of importance for associated research employing control co-design to understand the potential for high-blockage arrays of cross-flow turbines to significantly reduce cost of energy.
## II Background
Consider a row of identical, straight-bladed, vertical-axis cross-flow turbines operating in a rectangular water channel. Neglecting the area of any support structures or fixturing, the array blockage ratio for such a system is given by
\[\beta=\frac{A_{\mathrm{turbines}}}{A_{\mathrm{channel}}}=\frac{NHD}{hw}\ \, \tag{2}\]
where \(N\) is the number of turbines, \(D\) is the rotor diameter, \(H\) is the blade span, \(h\) is the time-varying channel depth measured at the turbine's axis of rotation, and \(w\) is the channel width (Fig. 1). To characterize the array's performance as a function of \(\beta\), there are two distinct approaches by which \(\beta\) can be varied.
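As a concrete illustration, the array blockage ratio in (2) follows directly from the geometry. The helper below is a sketch for illustration only; the function and variable names are ours, and the example values correspond to the \(\beta=60\%\) configuration described later (Fig. 2).

```python
def array_blockage_ratio(n_turbines: int, blade_span: float, diameter: float,
                         depth: float, width: float) -> float:
    """beta = N*H*D / (h*w) for a row of straight-bladed cross-flow turbines
    in a rectangular channel; support-structure area is neglected."""
    a_turbines = n_turbines * blade_span * diameter
    a_channel = depth * width
    return a_turbines / a_channel

# Two rotors with H = 0.322 m, D = 0.315 m in a 0.76 m wide channel at h = 0.445 m
beta = array_blockage_ratio(2, 0.322, 0.315, 0.445, 0.76)  # ~0.60
```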
### Approach 1: Fixed \(A_{\mathrm{turbines}}\), variable \(A_{\mathrm{channel}}\)
For a fixed array geometry, \(\beta\) can be varied by changing the cross-sectional area of the channel. For both flumes and wind tunnels, it is possible, though logistically challenging, to reduce \(A_{\mathrm{channel}}\) by changing channel width through the installation of intermediate partitions [11, 10, 17]. For flumes, \(A_{\mathrm{channel}}\) is more easily altered by varying the water depth (\(h\)). However, to experimentally isolate the effects of blockage on array performance while doing so, other non-dimensional flow parameters must be carefully controlled.
For example, consider an increase in \(\beta\) achieved via a decrease in the water depth. This decrease in \(h\) will simultaneously increase the depth-based Froude number,
Fig. 1: Dimensions and fluid properties for a single straight-bladed cross-flow turbine in a water channel, as viewed looking downstream (left) and looking cross-stream (right).
\[Fr_{h}=\frac{U_{\infty}}{\sqrt{gh}}\ \, \tag{3}\]
where \(U_{\infty}\) is the freestream velocity and \(g\) is the acceleration due to gravity. The depth-based Froude number represents the balance between inertial forces in the flow and gravitational forces in the flow. The array's proximity to the free surface will also change as \(h\) is decreased, which can be represented by the normalized submergence, \(s/h\), where \(s\) is the distance between the free surface and the top of the turbine rotors (Fig. 1). Both \(Fr_{h}\) and \(s/h\) have been shown to impact turbine performance [9, 30, 31, 21]. Therefore, if \(h\) decreases, \(U_{\infty}\) must be decreased to hold \(Fr_{h}\) constant. Similarly, \(s\) must be decreased (i.e., the rotors positioned dimensionally nearer to the free surface) to hold \(s/h\) constant at this new water depth.
However, a decrease in \(U_{\infty}\) will also decrease the Reynolds number, which here is defined with respect to the turbine diameter as
\[Re_{D}=\frac{U_{\infty}D}{\nu}\ \, \tag{4}\]
where \(\nu\) is the kinematic viscosity. The Reynolds number represents the balance between inertial forces and viscous forces in the flow. The dependence of turbine performance on the Reynolds number is well documented [32, 33, 9, 34]. Although turbine performance becomes independent of the Reynolds number above a certain threshold, for cross-flow turbines this is difficult to achieve at laboratory scale without the use of compressed-air wind tunnels [32], so in most facilities, the Reynolds number must be held constant to isolate blockage effects. To compensate for the decrease in \(U_{\infty}\) necessitated by holding \(Fr_{h}\) constant, the kinematic viscosity \(\nu\) can be decreased by increasing the water temperature (\(T\)). In this way, \(\beta\) may be varied in water channels, while holding \(Fr_{h}\), \(s/h\), and \(Re_{D}\) constant, without the complications of intermediate partitions to adjust width.
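The bookkeeping implied by this approach, choosing a new depth, rescaling the freestream velocity to preserve \(Fr_{h}\), and then rescaling the kinematic viscosity (via temperature) to preserve \(Re_{D}\), can be written out explicitly. The sketch below is illustrative only; mapping the required viscosity to a water temperature would need a \(\nu(T)\) relation for water, which is not shown.

```python
import math

G = 9.81  # gravitational acceleration [m/s^2]

def froude_depth(u_inf: float, depth: float) -> float:
    """Fr_h = U_inf / sqrt(g*h)."""
    return u_inf / math.sqrt(G * depth)

def reynolds_diameter(u_inf: float, diameter: float, nu: float) -> float:
    """Re_D = U_inf * D / nu."""
    return u_inf * diameter / nu

def rescale_for_new_depth(u_inf: float, nu: float, depth_old: float, depth_new: float):
    """Return (U_new, nu_new) that keep Fr_h and Re_D unchanged when the water
    depth changes from depth_old to depth_new (rotor diameter fixed)."""
    u_new = u_inf * math.sqrt(depth_new / depth_old)  # preserves Fr_h
    nu_new = nu * (u_new / u_inf)                     # preserves Re_D
    return u_new, nu_new
```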
### Approach 2: Variable \(A_{\rm turbines}\), fixed \(A_{\rm channel}\)
Alternatively, \(\beta\) can be varied by changing \(A_{\rm turbine}\). For an array of straight-bladed cross-flow turbines, this can be achieved by changing \(N\), \(H\), and/or \(D\). However, as for the previous approach, experimental isolation of blockage effects can be complicated by unintended effects introduced by changing the turbine geometry or the number of turbines.
For example, if \(A_{\rm turbines}\) is increased by increasing the rotor diameter, several geometric and flow parameters are simultaneously varied. First, increasing \(D\) decreases the chord-to-radius ratio, \(c/R\), which influences the flow curvature effects (e.g., virtual camber and virtual incidence) experienced by the blades [35]. Second, increasing \(D\) also increases the diameter-based Reynolds number \(Re_{D}\) (4), and necessitates corresponding increases to \(U_{\infty}\) and/or \(\nu\) to hold \(Re_{D}\) constant across blockage cases. Finally, for a fixed-width channel, changing \(D\) changes both the spacing between adjacent turbines as well as the proximity of the turbines at the ends of the array to the channel side walls. For proximity on the order of the blade chord length, this alters the hydrodynamic interactions between adjacent rotors [36] as well as the lateral boundary effects on array performance [28].
One could also increase \(A_{\rm turbines}\) by increasing the number of turbines on a cross-sectional transect. However, as for a change in \(D\), for a fixed-width channel the proximity between turbines and the channel side-walls decreases, producing the same changes to boundary effects as increasing diameter. Further, this introduces the potential for new turbine-turbine interactions, which are similar to turbine-wall interactions, but have an additional degree of freedom in the rotational phase of adjacent turbines [36, 29]. As a result, if the number of turbines in the array is changed, it can be difficult to separate the effects of blockage from those of intra-array interactions and boundary effects.
Conversely, if \(A_{\rm turbines}\) is increased by increasing only \(H\) (and each rotor's position in the water column is adjusted to hold the submergence depth \(s\) constant), then only the rotor aspect ratio, \(H/D\), is simultaneously varied with \(\beta\). Given that this method varies fewer secondary parameters than changing either \(D\) or \(N\), changing \(A_{\rm turbines}\) via \(H\) alone is a conceptually attractive means of varying array blockage for fixed \(A_{\rm channel}\). However, this approach would only be effective at experimentally isolating blockage effects if the effects of changing aspect ratio on turbine performance are minor by comparison. Prior work by Hunt _et al._[37] has shown that the efficiency of a single turbine with blade-end struts at \(\beta=11\%\) is invariant for \(H/D=0.95{-}1.63\), although the range of invariance likely depends on the type of support structure used (e.g., endplates, midspan struts) [38, 39] and may be different for high-blockage arrays. As \(H/D\) is further decreased via a decrease in \(H\), it is hypothesized that hydrodynamic interactions between the blades and the support structures could become more prominent and alter performance.
In summary, for fixed \(A_{\rm turbines}\), \(\beta\) is best varied by changing the water depth in the channel, and for fixed \(A_{\rm channel}\), \(\beta\) is best varied by changing the height of the rotors via the blade span. In this work, both methods are evaluated experimentally and compared.
## III Experimental Methods
### _Test Setup_
Experiments were conducted in the Alice C. Tyler recirculating water flume at the University of Washington. The flume has a test section that is 0.76 m wide and 4.88 m long, and can accommodate water depths up to 0.60 m and flow speeds up to \(\sim\)\(1.1\) m/s. The flume is equipped with both a heater and chiller for temperature control, and can maintain water temperatures between \(10^{\circ}\)C and \(40^{\circ}\)C during operation.
The laboratory-scale array consists of two identical straight-bladed cross-flow turbines. The rotors are each two-bladed and have a diameter of 0.315 m, defined as the outermost circle swept by the turbine blades. Each blade has a NACA 0018 profile, a 0.0742 m chord length, and is mounted at a \(6^{\circ}\) preset pitch angle as
referenced from the quarter chord. The blades are attached to the central driveshaft of each rotor using thin, hydrodynamic blade-end struts (NACA 0008 profile, 0.0742 m chord length). The chord-to-radius ratio is 0.47 and the solidity is 0.15.
The two rotors are integrated into the experimental set-up shown in Fig. 2, which consists of two identical test-rigs. The top of each turbine's central shaft is connected by a flexible shaft coupling (Zero-Max SC040R) to a servomotor (Yaskawa SGMCS-05BC341) which regulates the rotation rate of the turbine. The angular position of each turbine is measured via the servomotor encoder, from which the angular velocity is estimated. The bottom of each turbine's central shaft sits in a bearing. The net forces and torques on each turbine are measured by a pair of 6-axis load cells: an upper load cell (ATI Mini45-IP65) mounted to the servomotor and fixed to a crossbeam, and a lower load cell (ATI Mini45-IP68) mounted to the bottom bearing and fixed to the bottom of the flume via a suction plate. Measurements from the load cells and servomotor encoders for both turbines are acquired synchronously at 1000 Hz in MATLAB using a pair of National Instruments PCIe-6353 DAQs.
The freestream velocity is measured using an acoustic Doppler velocimeter (Nortek Vectrino Profiler) sampling at 16 Hz. The velocimeter sampled a single cell positioned laterally in the center of the flume, vertically at the array midplane, and 5 turbine diameters upstream of the array centerline. Velocity measurements are despiked using the method of Goring and Nikora [40]. The water depth upstream of the array is measured at the center of the flume \(\sim\)\(5.8\) turbine diameters upstream of the array centerline by an ultrasonic free-surface transducer (Omega LVU 32) sampling at 1 Hz. The water temperature is measured using a temperature probe (Omega Ultra-Precise RTD) and maintained within \(\pm 0.1^{\circ}\mathrm{C}\) of the target value during each experiment.
### _Test Matrix_
Array performance is characterized at blockage ratios ranging from \(30\%\) to \(60\%\) using combinations of \(A_{\mathrm{turbines}}\) and \(A_{\mathrm{channel}}\). Table II summarizes the turbine geometries and flume conditions used to achieve each \(\beta\). Using two different blade spans, two values of \(A_{\mathrm{turbines}}\) are tested. For each value of \(A_{\mathrm{turbines}}\), \(\beta\) is varied by changing the water depth, with corresponding variations in \(U_{\infty}\) and \(\nu\) to maintain constant \(Fr_{h}\) and \(Re_{D}\) across all experiments. To test whether the same characteristic performance is measured for arrays with identical \(\beta\), but different \(A_{\mathrm{turbines}}\) and \(A_{\mathrm{channel}}\), both the \(A_{\mathrm{turbines}}=0.135~{}\mathrm{m^{2}}\) rotors and the \(A_{\mathrm{turbines}}=0.203~{}\mathrm{m^{2}}\) rotors are tested at \(45.0\%\), \(50.0\%\), and \(55.0\%\) blockage.
The values of \(Fr_{h}\) and \(Re_{D}\), which are held constant across all experiments, are constrained by the maximum \(U_{\infty}\) at which the highest blockage arrays can be tested. This velocity, in turn, is constrained by rotor ventilation (i.e., air entrainment), the onset of which becomes more likely with decreasing \(s/h\) and results in significant performance degradation due to an increase in form drag on the blades [41, 21]. As turbines at the highest blockages (\(\beta\geq 55.0\%\)) necessarily operate close to the free surface, the maximum allowable \(U_{\infty}\) for these cases is set such that any ventilation occurs well beyond the optimal performance point. While \(s/h\) could be held constant across all experiments by adjusting the array submergence depth, preliminary experiments showed that varying \(s/h\) has minimal effect on array performance until ventilation begins to
Fig. 2: A rendering of the experimental test-rig, as viewed from upstream. The array shown is at \(\beta=60\%\) with \(H=0.322~{}\mathrm{m}\), \(h=0.445~{}\mathrm{m}\), and \(s=0.062~{}\mathrm{m}\).
occur. Therefore, we instead choose to maximize \(s/h\) at each blockage so as to limit the risk of ventilation as much as possible.
### _Array Layout and Control_
An overhead view of the array layout is shown in Fig. 3. The center-to-center spacing between the turbines is \(\sim\)\(1.2D\), and the array is positioned laterally such that the blade-to-blade spacing between adjacent turbines is twice the wall-to-blade spacing (i.e., the walls notionally correspond to symmetry planes in a larger array). The turbines in the array were operated under a counter-rotating, phase-locked scheme, wherein both turbines rotate at the same, constant speed, but in opposite directions, with a constant angular phase offset, \(\Delta\theta\), between them. This control strategy was achieved by specifying the angular velocities of the rotors, which yields similar time-average performance to an array with a constant control torque [42]. The turbines were counter-rotated such that the blades of adjacent rotors pass nearest each other while moving downstream, which has been shown to augment performance relative to other rotation schemes [29, 43]. We limit the present experiments to \(\Delta\theta=0^{\circ}\), an operating case in which the lateral forces and reaction torques for a pair of counter-rotating turbines are equal and opposite. A closed-loop controller maintained \(\Delta\theta\) to within \(1^{\circ}\) of the target value at all rotation rates across all experiments.
### _Performance Metrics_
Performance metrics are calculated for individual turbines from the measured quantities shown in Fig. 3. The rotation rate, which is the same for both turbines, is non-dimensionalized as the ratio of the blade tangential velocity to the freestream velocity, or the tip-speed ratio
\[\lambda=\frac{\omega R}{U_{\infty}}\ \, \tag{5}\]
where \(\omega\) is the angular velocity of the turbine and \(R\) is the turbine radius. Data are collected at each tip-speed ratio for 60 seconds, and the time series is cropped to an integer number of turbine rotations before performance metrics are calculated.
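As an implementation detail, computing the tip-speed ratio in (5) and cropping each 60 s record to an integer number of rotations requires only a few lines; the snippet below is a simplified sketch with our own variable names, assuming synchronously and uniformly sampled signals.

```python
import numpy as np

def tip_speed_ratio(omega: np.ndarray, u_inf: np.ndarray, radius: float) -> float:
    """lambda = omega * R / U_inf, using record-averaged quantities."""
    return float(np.mean(omega)) * radius / float(np.mean(u_inf))

def crop_to_integer_rotations(theta: np.ndarray, signals: list) -> list:
    """Crop synchronously sampled signals to a whole number of rotations,
    based on the unwrapped angular position theta (radians)."""
    rotations = (theta - theta[0]) / (2.0 * np.pi)
    n_whole = int(np.floor(rotations[-1]))
    last = int(np.searchsorted(rotations, n_whole))  # first sample at/after n_whole turns
    return [s[:last] for s in signals]
```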
The efficiency (formally, the coefficient of performance) of each turbine is the mechanical power produced normalized by the kinetic power in the freestream flow that passes through the turbine's projected area
\[C_{P,i}=\frac{\tau_{i}\omega_{i}}{\frac{1}{2}\rho U_{\infty}^{3}DH}\ \, \tag{6}\]
where \(\omega_{i}\) and \(\tau_{i}\) are the angular velocity and hydrodynamic torque on turbine \(i\), and \(\rho\) is the density of the working fluid. The efficiency of each turbine is a function of the power produced by its blades and power losses due to parasitic torque on its blade support structures; in this case, blade-end struts. Because a constant set of blade-end struts is used for all experiments (i.e., the thickness of the struts is not scaled as the blade span is changed), the relative impact of these parasitic torques on turbine efficiency is larger for turbines with shorter blades (i.e., smaller \(A_{\rm turbines}\) as in Table II). To account for this, we utilize the approach of Strom _et al._[38] and Bachant _et al._[44] to estimate a blade-level \(C_{P}\) for each turbine (i.e., the efficiency of the turbine blades in the absence of the support structures) via superposition as
\[\begin{split} C_{P,i,\rm blade}(\beta,\lambda)& \approx C_{P,i,\rm turbine}(\beta,\lambda)\\ &-C_{P,i,\rm supports}(\beta,\lambda)\end{split} \tag{7}\]
where \(C_{P,i,\rm turbine}\) is the measured efficiency of the full turbine \(i\), and \(C_{P,i,\rm supports}\) is the measured efficiency of turbine \(i\) with no blades attached.
Structural loads on each turbine are characterized via the thrust and lateral force coefficients, respectively given as
\[C_{F_{X,i}}=\frac{F_{X,i}}{\frac{1}{2}\rho U_{\infty}^{2}DH}\ \, \tag{8}\]
\[C_{F_{Y,i}}=\frac{F_{Y,i}}{\frac{1}{2}\rho U_{\infty}^{2}DH}\ \, \tag{9}\]
where \(F_{X,i}\) and \(F_{Y,i}\) are the streamwise force and lateral force, respectively, on turbine \(i\). To estimate blade-level loading for each turbine in the array, we apply superposition equations for \(C_{F_{X,i}}\) and \(C_{F_{Y,i}}\) analogous to that given for \(C_{P,i}\) in (7). However, we note that, unlike for \(C_{P,i}\), the validity of this approach for estimating blade-level \(C_{F_{X,i}}\) and \(C_{F_{Y,i}}\) has not been examined in the existing literature.
Since the turbines in this array are identical, the array-average performance metrics are obtained simply as the average of the individual turbine performance metrics. For example, \(C_{P,array}\) is simply the average of \(C_{P,1}\) and \(C_{P,2}\) for the full turbine, and the average of \(C_{P,1,\rm blade}\) and \(C_{P,2,\rm blade}\) at the blade-level. As noted in Section III-C, the net lateral force on this array of two identical counter-rotating turbines is zero since when \(\Delta\theta=0^{\circ}\), the array is symmetric about its centerline. However, by defining the directions of \(F_{Y,1}\) and \(F_{Y,2}\) with this counter-rotation in mind as in Fig. 3, the array-average of \(C_{F_{Y}}\) is nonzero and represents
Fig. 3: Overhead view of the array layout in the Tyler flume, with key measured quantities annotated.
the average lateral force coefficient experienced by an individual rotor.
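Putting (6)-(9) together, the time-averaged coefficients, the blade-level estimates via superposition as in (7), and the array averages reduce to simple normalizations. The helper below is a schematic sketch; the function names and the exact averaging convention are ours, not the authors'.

```python
import numpy as np

def power_coefficient(torque, omega, u_inf, rho, diameter, span):
    """C_P = tau*omega / (0.5*rho*U^3*D*H), time-averaged over the cropped record."""
    return np.mean(torque * omega) / (0.5 * rho * np.mean(u_inf) ** 3 * diameter * span)

def force_coefficient(force, u_inf, rho, diameter, span):
    """C_F = F / (0.5*rho*U^2*D*H), for the thrust (F_X) or lateral (F_Y) force."""
    return np.mean(force) / (0.5 * rho * np.mean(u_inf) ** 2 * diameter * span)

def blade_level(coeff_full_turbine, coeff_supports_only):
    """Superposition estimate of the blade-only coefficient, cf. Eq. (7);
    the supports-only value is measured at the same beta and lambda."""
    return coeff_full_turbine - coeff_supports_only

def array_average(coeff_turbine_1, coeff_turbine_2):
    """Array-average metric for two identical turbines."""
    return 0.5 * (coeff_turbine_1 + coeff_turbine_2)
```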
## IV Results and Discussion
### _Non-dimensional Parameters_
The time-averaged measured values of \(\beta\), \(Fr_{h}\), \(Re_{D}\), and \(s/h\) at each tip-speed ratio are shown in Fig. 4. Across all experiments, the measured \(\beta\) are within \(1.5\%\) of the target values in Table II, and the measured \(Fr_{h}\) and \(Re_{D}\) do not deviate more than \(5\%\) from the nominal values in Table II. However, all of these parameters vary slightly with \(\lambda\) due to turbine-channel interactions. As \(\lambda\) increases, the array presents greater resistance to the flow, causing a reduction of the upstream freestream velocity and a rise in the upstream free surface, followed by a drop in the free surface across the turbines as the flow accelerates through the rotors. This causes \(\beta\), \(Fr_{h}\) and \(Re_{D}\) as measured upstream of the turbine to decrease slightly with \(\lambda\), and \(s/h\) as measured upstream of the turbine to increase slightly with \(\lambda\). Most of the variation in these parameters associated with turbine-channel interaction occurs for \(\lambda<2\) (Fig. 4). Rotor ventilation occurred only for the highest tip-speed ratios at the highest \(\beta\) for each \(A_{\rm turbines}\), during which the turbine blades pierced the free surface during their downstream sweep. The turbine-channel interaction did not affect the turbulence intensity, which was \(\sim\)\(2\%\) for all test conditions.
During operation, the actual \(h\) and \(U_{\infty}\) in the flume are a function of the volume of water in the flume, the pump drive frequency, and the resistance to flow imposed by the turbines (primarily a function of their rotation rate). It is theoretically possible to hold \(Fr_{h}\) and \(Re_{D}\) truly constant across all cases by adjusting the static water depth or pump drive frequency at each \(\beta\)-\(\lambda\) set-point to compensate for the array's effect on the flow. However, the trial-and-error iteration required to achieve this would be experimentally intractable. Therefore, for each target blockage, we choose to set the flume fill and pump frequency to that which achieves the nominal \(h\) and \(U_{\infty}\) when no turbines are present, and report the variation in these non-dimensional parameters during experiments as in Fig. 4. However, for the case of \(\beta=50.0\%\) and \(A_{\rm turbines}=0.135~{}\rm{m}^{2}\) a step-change in \(U_{\infty}\) was observed near \(\lambda=3\), resulting in corresponding sharp decreases in \(Fr_{h}\) and \(Re_{D}\) (indicated by the dashed black boxes in Fig. 4). As this step-change was repeatable and only observed for this configuration, we attribute this to a unique interaction between the flume and the turbines under these conditions. Consequently, for this case only, the pump drive frequency was increased halfway through the test to counteract the velocity reduction and maintain similar \(Fr_{h}\) and \(Re_{D}\) to that measured in the rest of the experiments (Fig. 4).
### _Array Performance_
Fig. (a)a shows the time-averaged array-average efficiencies as a function of \(\beta\) and \(\lambda\). In agreement with prior work, \(C_{P,\rm array}\) tends to increase as the array blockage ratio is increased. This trend is primarily observed for \(\lambda>1.5\); at lower \(\lambda\), \(C_{P,\rm array}\) does not vary significantly with blockage. Additionally, as \(\beta\) increases the array produces power over a broader range of tip-speed ratios, and the \(\lambda\) at which maximum \(C_{P,\rm array}\) occurs increases. Beginning at \(\beta=33.4\%\), \(C_{P,\rm array}\) exceeds the Betz limit [7] and, at \(\beta=55.0\%\), values of \(C_{P,\rm array}\) begin to exceed unity. Such efficiencies are not violations of energy conservation since the definition of \(C_{P}\) given in (6) considers only the kinetic power that passes through the rotor plane. This definition neglects the available power associated with the fluid's potential energy, which is appreciably drawn down as \(\beta\) and thrust increase. Given the relevance of both
Fig. 4: Time-averaged non-dimensional flow parameters as measured during the experiments. The marker of each line indicates the \(A_{\rm turbines}\) used to achieve a particular \(\beta\). Test points at which ventilation occurred are marked with \(\times\). The dashed box indicates the step-change in \(Fr_{h}\) and \(Re_{D}\) observed for the \(\beta=50.0\%\), \(A_{\rm turbines}=0.135~{}\rm{m}^{2}\) case without pump frequency adjustment.
the freestream kinetic and potential energy for high-confinement arrays, a more representative efficiency metric may resemble the hydraulic efficiency of a hydropower turbine, in which the available power is a function of volumetric flow rate and net head. However, for comparison with prior studies, the conventional definition of \(C_{P}\) is used here.
Time-averaged \(C_{F_{X},\mathrm{array}}\) and \(C_{F_{Y},\mathrm{rotor}}\) are shown in Fig. 5b and Fig. 5c, respectively. As expected from theory and supported by prior work, the array-average thrust coefficient increases as the blockage ratio is increased. Similarly, the magnitude of the lateral force coefficient (which is seldom reported for cross-flow turbines in the literature) also tends to increase with \(\beta\). As for efficiency, \(C_{F_{X},\mathrm{array}}\) and \(C_{F_{Y},\mathrm{rotor}}\) do not vary significantly with blockage at low tip-speed ratios (\(\lambda\leq 1\)).
As mentioned in Section IV-A, ventilation of the turbine rotors occurred only at high \(\lambda\) for the highest \(\beta\) tested for each \(A_{\mathrm{turbines}}\). At these test points, which are well beyond the maximum efficiency point, foil drag due to ventilation amplifies the decreases in \(C_{P,\mathrm{array}}\) and increases in \(C_{F_{X},\mathrm{array}}\) and \(C_{F_{Y},\mathrm{rotor}}\).
### _Blade-level performance_
For \(\beta=45.0\%\), \(50.0\%\), and \(55.0\%\), the results in Fig. 5a-c show that even when \(Fr_{h}\) and \(Re_{D}\) are held constant, achieving the same \(\beta\) via different values of
Fig. 5: Time-averaged \(C_{P,\mathrm{array}},C_{F_{X},\mathrm{array}}\), and \(C_{F_{Y},\mathrm{rotor}}\) as a function of \(\lambda\) for the full-turbines (left column) and blades only (right column). The marker of each line indicates the \(A_{\mathrm{turbines}}\) used to achieve a particular \(\beta\). The shaded regions indicate the interquartile range of the array- and cycle-averaged performance at each \(\beta\) and \(\lambda\) (the vertical span of the shaded region at each point is similar to the size of the plot markers). Test points at which ventilation occurred are marked with \({}^{\chi}\).
\(A_{\rm turbines}\) and \(A_{\rm channel}\) yields similar, but not identical, performance. We attribute this to the relative difference in support structure losses and forces between the two cases. The \(A_{\rm turbines}=0.135\)\(\rm m^{2}\) configuration uses shorter blades than the \(A_{\rm turbines}=0.203\)\(\rm m^{2}\) configuration, but identically sized blade support structures and fixturing. Therefore, any parasitic torque or drag forces associated with these supports are normalized by a smaller projected area when power and force coefficients are calculated. This results in lower power coefficients and higher force coefficients for the \(A_{\rm turbines}=0.135\)\(\rm m^{2}\) configuration than for the \(A_{\rm turbines}=0.203\)\(\rm m^{2}\) configuration.
To account for disparities in \(C_{P,\rm array}\) between the different configurations tested at \(\beta=45.0\%\), \(50.0\%\), and \(55.0\%\), we subtract support structure losses (\(C_{P,\rm supports}\)) via (7) to estimate the \(C_{P,\rm array}\) associated with the blades only, the results of which are given in Fig. 5d. The agreement in full turbine and blade-only \(C_{P,\rm array}\) between the two \(A_{\rm turbines}\) configurations at each blockage is shown in Fig. 6a. For \(\beta=45.0\%\), subtracting \(C_{P,\rm supports}\) improves \(C_{P,\rm array}\) agreement between the two \(A_{\rm turbines}\) configurations at all \(\lambda\). For \(\beta=50.0\%\), subtracting \(C_{P,\rm supports}\) improves agreement in \(C_{P,\rm array}\) between the two \(A_{\rm turbines}\) configurations up to \(\lambda\approx 3\), but agreement worsens for \(\lambda>3\). Similarly, for \(\beta=55.0\%\), subtracting \(C_{P,\rm supports}\) improves agreement in \(C_{P,\rm array}\) up to \(\lambda\approx 3.5\), but agreement worsens for \(\lambda>3.5\).
We attribute the poorer agreement in blade-only \(C_{P,\rm array}\) at high \(\beta\) and \(\lambda\) to the difficulty of estimating representative \(C_{P,\rm supports}\) in the absence of the turbine blades. As described in Section III-D, \(C_{P,\rm supports}\) is estimated by testing an array of bladeless turbines at the same nominal \(Fr_{h}\), \(s/h\), and \(Re_{D}\) as the array of full turbines. However, the array of bladeless turbines does not influence the flow field in the same way that the array of full turbines does. Specifically, as \(\beta\) and \(\lambda\) are increased, the thrust on the array increases, resulting in a free surface drop across the rotors and acceleration of the flow bypassing the array. These changes to the flow field are absent for an array of bladeless turbines. Consequently, we hypothesize that the superposition technique in (7) breaks down at higher \(\beta\) and \(\lambda\), where turbine-channel interactions are most significant. Despite these limitations, it is remarkable that (7) improves \(C_{P,\rm array}\) agreement at _any_ \(\lambda\) for \(\beta=45.0\%\), \(50.0\%\), and \(55.0\%\), given that appreciable turbine-channel interactions are observed at these blockage ratios for lower values of \(\lambda\).
Blade-only \(C_{F_{X,\rm array}}\) and \(C_{F_{Y,\rm rotor}}\) are similarly estimated via analogous equations to (7), the results of which are shown in Fig. 5e and Fig. 5f. Unlike for \(C_{P,\rm array}\), subtracting \(C_{F_{X,\rm supports}}\) and \(C_{F_{Y,\rm supports}}\) does not meaningfully change the agreement in \(C_{F_{X,\rm array}}\) or \(C_{F_{Y,\rm rotor}}\) between the two \(A_{\rm turbines}\) configurations at \(\beta=45.0\%\), \(50.0\%\), or \(55.0\%\) for any \(\lambda\), implying that the forces on the blades dominate the forces on the support structures. Additionally, as highlighted in Fig. 6b and Fig. 6c, the difference in the force coefficients between the two \(A_{\rm turbines}\) configurations does not change significantly at high \(\lambda\) (unlike \(C_{P,\rm array}\)). As before, we hypothesize that the lack of force coefficient agreement between turbines with different aspect ratios is due to differences between the flow fields experienced by the support structures when blades are present versus when blades are absent. Further investigation into techniques for estimating blade-only force coefficients is warranted, but is outside the scope of the present study.
### _Evaluation of approaches for varying blockage_
As both of the blockage-varying approaches from Section II were utilized in the present experiments, we now consider the advantages and disadvantages of implementing each approach.
Variation of \(A_{\rm channel}\) via changing the water depth was used to achieve \(\beta=30.0\%-55.0\%\) with \(A_{\rm turbines}=0.135\)\(\rm m^{2}\) and \(\beta=45.0\%-60.0\%\) with \(A_{\rm turbines}=0.203\)\(\rm m^{2}\). The blockages testable with each \(A_{\rm turbines}\) were constrained by the relative sizes of the flume and the turbines, as well as the physical limitations of the test facility. In the Tyler flume, the minimum
Fig. 6: Percent difference in performance and force coefficients (relative to the mean values in Fig. 5) between the \(A_{\rm turbines}=0.135\)\(\rm m^{2}\) case and the \(A_{\rm turbines}=0.203\)\(\rm m^{2}\) case at 45.0%, 50.0%, and 55.0% blockage. A positive percent difference implies that the \(A_{\rm turbines}=0.203\)\(\rm m^{2}\) case performed better than the \(A_{\rm turbines}=0.135\)\(\rm m^{2}\) case. While the percent difference in \(C_{P,\rm array}\) does tend to increase with \(\lambda\), the sharp increase near the end of each full turbine curve is a result of division by small \(C_{P,\rm array}\) values when computing the percent difference.
testable blockage for each \(A_{\rm turbines}\) was set by the maximum dynamic channel depth of 0.60 m, above which overtopping of the flume walls occurs. The maximum testable blockage was constrained by the ventilation risk associated with low \(s/h\): the lower the water depth, the higher the \(\beta\) that can be achieved, but the closer the turbines are to the surface the greater the risk of ventilation. However, as described in Section II, the range of testable blockages was further constrained to the water depths at which \(Fr_{h}\) and \(Re_{D}\) could be matched across all tests via corresponding adjustments to the freestream velocity and water temperature. Critically, temperature control is required to avoid convolving blockage effects with variations in the Reynolds number; as many flumes do not have this capability, this is a general limitation of the variable-\(A_{\rm channel}\) approach unless Reynolds-independent performance can be achieved. In the present study, the relatively wide range of temperatures achievable in the Tyler flume enabled careful control of non-dimensional flow parameters and an effective isolation of blockage effects. Even so, the duration of each experiment was extended by the need to adjust the flume fill and water temperature for each test.
Given the facility requirements of the variable-\(A_{\rm channel}\) approach, the present experiments also explored how \(\beta\) could be varied at fixed \(A_{\rm channel}\) through variation in \(A_{\rm turbines}\). For example, as shown in Table II, array blockage ratios of 40.1% and 60.0% were achieved simply by testing arrays with \(A_{\rm turbines}=0.135~{}\rm m^{2}\) and \(A_{\rm turbines}=0.203~{}\rm m^{2}\) at the same nominal water depth (and thus same nominal \(A_{\rm channel}\)). Since the water depth was unchanged, testing different blockages was convenient and fast, as no adjustments to the freestream velocity or temperature were necessary. In the general case, if no secondary effects are introduced by changing \(A_{\rm turbines}\), then the range of testable blockages at a given facility using this approach would be constrained only by 1) the minimum and maximum blade spans available, and 2) ventilation risk at the highest blockages.
However, as shown in Fig. 5, turbines tested at similar \(\beta\), \(Fr_{h}\), and \(Re_{D}\), but different \(A_{\rm turbines}\) and \(A_{\rm channel}\), exhibit small but appreciable differences in power and force coefficients. The superposition-based support structure subtraction techniques for reconciling these differences begin to break down for \(C_{P,\rm array}\) at high \(\beta\) and \(\lambda\), and do not have any effect on disagreements in \(C_{F_{X},\rm array}\) or \(C_{F_{Y},\rm rotor}\) across \(A_{\rm turbines}\). While, for these experiments, the disparities in \(C_{P,\rm array}\), \(C_{F_{X},\rm array}\) and \(C_{F_{Y},\rm rotor}\) between the different \(A_{\rm turbines}\) are small relative to the overall blockage effects, the scale of these disparities likely depends on the specific blade spans and support structures used [37, 38, 39]. Consequently, interpreting trends in blockage effects obtained via a variable-\(A_{\rm turbines}\) approach requires better models for support structure effects than simple experimental superposition. The uncertainty associated with the variable-\(A_{\rm turbines}\) approach can only be quantified by changing \(A_{\rm turbines}\) at constant \(\beta\), as was performed in this study at \(\beta=45.0\%\), \(50.0\%\), and \(55.0\%\). To do so while holding \(Fr_{h}\) and \(Re_{D}\) constant requires either a facility with temperature control or a facility capable of velocities of \(\sim\)10 m/s to achieve Reynolds independence. Consequently, the facility requirements for fully interpreting results obtained via the variable-\(A_{\rm turbines}\) method negate the principal advantage of this method. Therefore, the variable-\(A_{\rm channel}\) fixed-\(A_{\rm turbines}\) approach is most robust.
## V Conclusion
In this work, we explored two experimental methods for characterizing the effects of blockage ratio on the performance of an array of two straight-bladed cross-flow turbines operating in a water channel. For fixed \(A_{\rm turbines}\) and variable \(A_{\rm channel}\), the blockage ratio is most easily varied by changing the water depth, with corresponding changes in the freestream velocity and temperature to hold the Reynolds and Froude numbers constant. For fixed \(A_{\rm channel}\) and variable \(A_{\rm turbines}\), the blockage ratio is most appropriately varied by changing the blade span, as several secondary effects are introduced if the turbine diameter or number of turbines is changed. A laboratory-scale array operating at blockages between 30% and 60% is tested using both approaches. While similar trends in efficiency and force are observed regardless of approach, the values of the array-average performance and force coefficients vary with the method used to achieve a particular \(\beta\). For future experimental studies focusing on blockage effects, we recommend that the blockage ratio be varied by changing \(A_{\rm channel}\) with fixed \(A_{\rm turbines}\) while holding the Reynolds and Froude numbers constant. Although this method requires the use of a flume with temperature control or the ability to achieve Reynolds-invariant turbine performance, we found it to be the most robust approach, since changes in blade-support structure interactions associated with changes in \(A_{\rm turbines}\) can be difficult to quantify.
We recommend that future studies investigate more robust methods for estimating the parasitic losses and drag forces of turbine blade support structures such that blade-only efficiency and force coefficients can be better estimated at high blockage and tip-speed ratio, allowing more accurate comparisons of blockage effects to be drawn across turbines. The development of such a method would improve the reliability of the fixed-\(A_{\rm channel}\) variable-\(A_{\rm turbines}\) experimental approach, and facilitate studies of blockage effects at a wider range of test facilities. Additionally, as ventilation was a constraint on experimental design for both experimental methods, future work should evaluate how the onset and effects of ventilation are influenced by the blockage ratio, the Froude number, and the normalized submergence depth.
## Acknowledgement
The authors would like to thank Gregory Talpey and Gemma Calandra for their assistance in commissioning the high-blockage test-rig, as well as help with data collection. The authors would also like to thank Abigale Snorrland for several insightful discussions regarding support structure torque and force subtraction techniques. |
2308.00770 | DYMOND: DYnamic MOtif-NoDes Network Generative Model | Motifs, which have been established as building blocks for network structure,
move beyond pair-wise connections to capture longer-range correlations in
connections and activity. In spite of this, there are few generative graph
models that consider higher-order network structures and even fewer that focus
on using motifs in models of dynamic graphs. Most existing generative models
for temporal graphs strictly grow the networks via edge addition, and the
models are evaluated using static graph structure metrics -- which do not
adequately capture the temporal behavior of the network. To address these
issues, in this work we propose DYnamic MOtif-NoDes (DYMOND) -- a generative
model that considers (i) the dynamic changes in overall graph structure using
temporal motif activity and (ii) the roles nodes play in motifs (e.g., one node
plays the hub role in a wedge, while the remaining two act as spokes). We
compare DYMOND to three dynamic graph generative model baselines on real-world
networks and show that DYMOND performs better at generating graph structure and
node behavior similar to the observed network. We also propose a new
methodology to adapt graph structure metrics to better evaluate the temporal
aspect of the network. These metrics take into account the changes in overall
graph structure and the individual nodes' behavior over time. | Giselle Zeno, Timothy La Fond, Jennifer Neville | 2023-08-01T18:20:05Z | http://arxiv.org/abs/2308.00770v1 | # DYMOND: DYnamic MOtif-NoDes Network Generative Model
###### Abstract.
Motifs, which have been established as building blocks for network structure, move beyond pair-wise connections to capture longer-range correlations in connections and activity. In spite of this, there are few generative graph models that consider higher-order network structures and even fewer that focus on using motifs in models of dynamic graphs. Most existing generative models for temporal graphs strictly grow the networks via edge addition, and the models are evaluated using static graph structure metrics--which do not adequately capture the temporal behavior of the network. To address these issues, in this work we propose DYnamic MOtif-NoDes (DYMOND)--a generative model that considers (i) the dynamic changes in overall graph structure using temporal motif activity and (ii) the roles nodes play in motifs (e.g., one node plays the hub role in a wedge, while the remaining two act as spokes). We compare DYMOND to three dynamic graph generative model baselines on real-world networks and show that DYMOND performs better at generating graph structure and node behavior similar to the observed network. We also propose a new methodology to adapt graph structure metrics to better evaluate the temporal aspect of the network. These metrics take into account the changes in overall graph structure and the individual nodes' behavior over time.
Dynamic Networks, Temporal Graphs, Motifs, Network Evolution
To summarize, we make the following contributions:
* We conduct an empirical study of motif behavior in dynamic networks, which shows that motifs do not change/evolve from one timestep to another, rather they keep re-appearing in the same configuration
* Motivated by the above observation, we develop a novel statistical dynamic-graph generative model that samples graphs with realistic structure and temporal node behavior using motifs
* We outline a new methodology for comparing dynamic-graph generative models and measuring how well they capture the underlying graph structure distribution and temporal node behavior of a real graph
The rest of the paper is organized as follows. First, we go over related work and discuss where our model fits in Section 2. Then in Section 3, we present our empirical study of the evolution of motifs in temporal graphs. In Section 4, we propose our generative model DYMOND. In Section 5, we present the datasets and baseline models used in our evaluation. We also describe our evaluation metrics and how to adapt them to dynamic networks. Finally, we discuss the results of our evaluation (Section 6) and present our conclusions (Section 7).
## 2. Related Work
Most models for temporal or dynamic networks have focused on modeling the edges over time (Beng et al., 2015; Chen et al., 2016; Li et al., 2017). A straightforward approach to generating temporal networks is to generate first a static graph from some model, and for each link generate a sequence of contacts (Chen et al., 2016). Holme (Beng et al., 2015) uses an approach where they draw degrees from a probability distribution and match the nodes in random pairs for placing links. Then, for every link, they generate an active interval duration from a truncated power-law distribution and uniform random starting time within that time frame. Rocha and Blondel (2015) use a similar method where the active interval of a node starts when another node's interval ends. Another approach is to start with an empty graph. Then, every node is made active according to a probability and connected to \(m\) random nodes. Perra et al. (Perra et al., 2017) use this approach with a truncated power-law distribution for each node's probability of being active. Laurent et al. (Laurent et al., 2017) extend this model to include memory driven interactions and cyclic closure. Other extensions include aging effects (Lavent et al., 2017) and lifetimes of links (Lavent et al., 2017). Vestergard et al. (Vestergard et al., 2017) model nodes and links as being active or inactive using temporal memory effects. All of these node and edge-based models do not consider higher-order structures and fail to create enough clustering in the networks generated.
Motivated by the work that established motifs as building blocks for the structure of networks (Lavent et al., 2017), the definition of motifs was extended to temporal networks by having all the edges in a given motif occur inside a time period (Zhou et al., 2017; Li et al., 2017; Zhang et al., 2017). Zhang et al. (Zhang et al., 2017) study the evolution of motifs in temporal networks by looking at changes in bipartite motifs in subsequent timesteps. Benson et al. (Benson et al., 2015) study higher-order networks and how 3-node motifs evolve from being empty to becoming a triangle in aggregated temporal graphs. Purohit et al. (Purohit et al., 2017) propose a generative model that creates synthetic temporal networks where links are aggregated over time (i.e., no link deletions). Zhou et al. (Zhou et al., 2017) propose a dynamic graph neural network model that takes into account higher-order structure by using node-biased temporal random walks to learn the network topology and temporal dependencies. The models that use temporal motifs are not designed for dynamic networks. We propose the first motif-based dynamic network generative model.
## 3. Motif Evolution
Related work on modeling temporal networks showed the evolution of motifs using a triadic closure mechanism (e.g., wedges becoming triangles) (Benson et al., 2015). However, these works make the assumption that edges will remain in the network once they are added (Benson et al., 2015; Li et al., 2017; Li et al., 2017). This assumption would hold on growing networks, but does not apply to dynamic networks where links can also be removed.
We make the distinction that we are interested in the evolution of motifs in dynamic networks, where edges can appear and disappear. In our initial study below, we investigate if similar motif behavior occurs in dynamic networks across subsequent time windows (e.g., if the motifs appear, merge, split and/or disappear over time). Specifically, we investigate 3-node motifs and look for changes from one motif type to another (for example, wedges becoming triangles and vice versa).
### Definitions
Here we introduce our main definitions used in this paper. The rest of notations and symbols are summarized in Table 1.
Definition 3.1 (Graph Snapshot).: A graph snapshot is a time-slice of a network at time \(t\), defined as \(G_{t}=(V_{t},E_{t},S_{t})\), where \(V_{t}\subseteq V\) is the set of active nodes, \(E_{t}\subseteq E\) is the set of edges at time \(t\), and \(S_{t}\subseteq S\) are the edge timestamps.
Definition 3.2 (Dynamic Network).: A dynamic network (or graph) \(\mathbf{G}=\{G_{1},\ldots,G_{T}\}\) is a sequence of graph time-slices where \(T\) is the number of timesteps.
Definition 3.3 (Motif).: We define a motif as a 3-node subgraph \(\{u,v,w\}\) and its _motif type_ is determined by the number of edges between the nodes (i.e., _empty_ has 0, _1-edge_ has 1, _wedge_ has 2, _triangle_ has 3 edges).
### Empirical Study
We test the hypothesis that changes in motif configuration are driven by a time-homogeneous Markov process, where the graph structure at the next timestep \(t+1\) depends on the current timestep \(t\). Each timestep corresponds to a time window of the temporal graph. Then, we consider all 3-node motifs at each timestep to either transform from one motif type to another or remain the same. Note that isomorphisms are combined into the same configuration.
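For concreteness, the tallying behind this hypothesis test can be sketched as follows; this is a minimal Python illustration (with hypothetical helper names, not taken from the released DYMOND code), assuming each snapshot is given as a set of undirected edges.

```python
from itertools import combinations

def motif_type(triplet, edges):
    """Number of edges among a 3-node set: 0 = empty, 1 = 1-edge, 2 = wedge, 3 = triangle."""
    u, v, w = triplet
    return sum(((a, b) in edges) or ((b, a) in edges) for a, b in [(u, v), (u, w), (v, w)])

def transition_counts(snapshots, nodes):
    """Tally how often a triplet of type i at timestep t has type j at timestep t+1.
    Enumerating all triplets is O(n^3) and only meant for small illustrative graphs."""
    counts = [[0] * 4 for _ in range(4)]
    for E_t, E_next in zip(snapshots, snapshots[1:]):
        for triplet in combinations(sorted(nodes), 3):
            counts[motif_type(triplet, E_t)][motif_type(triplet, E_next)] += 1
    return counts

# toy example: a wedge that disappears in the next time window
snapshots = [{(1, 2), (2, 3)}, set()]
print(transition_counts(snapshots, {1, 2, 3}))  # counts[2][0] == 1
```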
We study the effectiveness of this approach on the Enron Emails and EU Emails datasets, described in Subsubsections 5.2.1 and 5.2.2 respectively. Additionally, we use a Wikipedia Links dataset, which shows the evolution of hyperlinks between Wikipedia articles (Benson et al., 2015; Li et al., 2017). The nodes represent articles. Edges include timestamps and indicate that a hyperlink was added or removed depending on the edge weight (-1 for removal and +1 for addition). The transition probability matrices for both email datasets (Enron Emails and EU Emails) show that the motifs with edges (i.e., 1-edge, wedge, and triangle) will either keep their current motif type, or become
empty with almost equal probability (Figures 1(a) and 1(b)). For each motif type with edges, the count of times it stayed is very close to that of becoming empty at the next time period. In contrast, the Wikipedia Links dataset is a growing network, with more links between articles being added and very few removed. This makes it unlikely to see any motif with edges becoming empty (Figure 1(c)).
In the dynamic network datasets we investigated: (1) we do not observe motifs with edges changing from one motif type to another (e.g., wedges becoming triangles and vice versa), even when selecting different time windows to create the timesteps, and (2) motifs stay as the same type or disappear in the next time window. This motivates our use of motifs and inter-arrival rates in our proposed generative model for dynamic networks, which we outline next.
## 4. Dynamic Motif-Nodes Model
We formally define the problem of dynamic network generation as follows:
Problem 1. **Dynamic Network Generation**
Input: _A dynamic network_\(\mathbf{G}=\{G_{1},\ldots,G_{T}\}\)
Output: _A dynamic network_\(\mathbf{G}^{\prime}=\{G^{\prime}_{1},\ldots,G^{\prime}_{T^{\prime}}\}\), where the distribution of graph structure for \(\mathbf{G}^{\prime}\) matches \(\mathbf{G}\) and node behavior is aligned across \(\mathbf{G}^{\prime}\) and \(\mathbf{G}\) (i.e., the node behavior of a specific node \(v_{i^{\prime}}\) in \(\mathbf{G}^{\prime}\) should be similar to a specific node \(v_{i}\) in \(\mathbf{G}\)).
Specifically, consider an arbitrary graph statistic \(s(G)\) (e.g., average path length). Then the distribution of statistic values observed in the input dynamic network \(s_{in}=\{s(G_{1}),\ldots,s(G_{T})\}\) should match the distribution of statistic values observed in the output dynamic network \(\mathbf{s}_{out}=\{s(G^{\prime}_{1}),\ldots,s(G^{\prime}_{T^{\prime}})\}\). Similarly, take any node statistic \(\mathbf{s}(v_{i}|\mathbf{G})\) (e.g., node degree). Then, using the temporal distribution of values for a node \(\mathbf{s}(v_{i}|\mathbf{G})=\{s(v_{i}|G_{1}),\ldots,s(v_{i}|G_{T})\}\), the distribution of values for all nodes in the input dynamic network \(\{\mathbf{s}(v_{j}|\mathbf{G})\}_{j\in\mathbf{G}}\) should match the distribution of values for all nodes in the output dynamic network \(\{\mathbf{s}(v_{j^{\prime}}|\mathbf{G}^{\prime})\}_{j^{\prime}\in\mathbf{G}^{ \prime}}\).
To generate dynamic networks as specified above, we propose the DYnamic MOtif-NoDes (DYMOND) model1. Our model makes the following assumptions about the graph generative process:
Footnote 1: Code is available at [https://github.com/zeno129/DYMOND](https://github.com/zeno129/DYMOND)
1. nodes in the graph become active and remain that way,
2. nodes have a probability distribution over role types that they play in different motifs,
3. node triplets have a single motif type over time,
4. there is a distribution of motif types over the set of graphs,
5. motif occurrences over time are distributed exponentially with varying rate.
First we describe DYMOND's generative process below. Then we describe our approach to estimate model parameters from an observed dynamic network. We model the time until nodes become active as Exponential random variables with the same rate \(\lambda_{V}\). Since all possible 3-node motifs are considered, there will be edges shared among them. Therefore, to estimate the inter-arrival rate for each motif, we weigh the count of times a motif appeared by the number of edges shared with other motifs in a timestep. For each motif type with edges (i.e., triangle, wedge, and 1-edge), the model fits an Exponential distribution with the motif inter-arrival rates of that type. Motivated by our findings in Section 3, when a motif is first sampled it will keep the same configuration in the future.
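As an illustration of assumption 5 and of how a motif's re-appearances are drawn once its inter-arrival rate is known, the following minimal sketch (the function name is hypothetical and not from the released code) samples the discrete timesteps at which a single motif occurs.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_motif_timesteps(rate, t_start, T):
    """Draw the timesteps at which one motif appears, assuming exponentially
    distributed inter-arrival times with the given rate, starting once the
    motif's nodes are active and stopping at the last timestep T."""
    timesteps, t = set(), float(t_start)
    while True:
        t += rng.exponential(1.0 / rate)      # next inter-arrival time
        if t > T:
            break
        timesteps.add(int(np.ceil(t)))        # map the continuous arrival to a timestep
    return sorted(timesteps)

print(sample_motif_timesteps(rate=0.8, t_start=0, T=10))
```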
In the generation process, the motifs are sampled from a probability distribution based on the probability of the nodes in a triplet participating in a particular motif type, while also ensuring the motif type proportions in the graph are maintained. The motif type probability for a triplet considers the roles each node would play in a motif. For example, in a wedge one node would be a hub and the other two would be spokes (Figure 2). The node role probabilities are learned from the input graph's structure and the motifs that the node participates in.
The motivation for this modeling approach is based on the following conjectures: (1) by modeling higher-order structures (i.e., motifs), the model will capture the underlying distribution of graph structure, and (2) by using the motif roles that nodes take in the dynamic network, the model will also capture correlations in node connections and activity.

Figure 1. Observed Motif Transition Probabilities

Figure 2. Motif Types and Node Roles
### Generative Process
The overall generative process is described in Alg. 1: In line 2, we first get the nodes that are active at each timestep using the node arrival rate \(\lambda_{V}\) (see Alg. 4, Appendix A). Whenever new nodes become active, we calculate the new triplets of active nodes that are now eligible to be sampled as a motif in line 5. In line 6, we proceed to sample the motifs, based on the node role probabilities \(p_{R}\) for each motif type, and the timesteps the motifs will appear using the motif inter-arrival rates \(\lambda_{M}\) (see Alg. 2). In line 8, we place the motifs' edges (Alg. 6, Appendix A) and in line 12 we construct the graph (Alg. 7, Appendix A).
```
input : T, N, λ_V, p_M, λ_M, p_R, c_R
output: G' = {G'_1, ..., G'_T}
1  begin
2      V ← GetActiveNodes(T, N, λ_V)
3      M ← ∅,  M^S ← ∅,  M^E ← ∅
4      for t ∈ [1, ..., T] do
5          U_t ← { m = {u, v, w} ⊆ V_t | u < v < w,  m ∉ U_{t-1} }     // new active triplets at timestep t
6          M_t, M^T_t, M^S_t, p_R, c_R ← SampleMotifs(U_t, p_M, λ_M, p_R, c_R)
7          M ← M ∪ M_t                                                 // save new motifs
8          M^E_t ← PlaceMotifEdges(M_t, M^T, M^R_t)                    // place the motifs' edges
9          M^T ← M^T ∪ M^T_t                                           // store their types
10         M^E ← M^E ∪ M^E_t                                           // store their edges
11         M^S ← M^S ∪ M^S_t                                           // store their timestamps
12     G' ← ConstructGraph(M, M^E, M^S)
```
**Algorithm 1** GenerateDynamicGraph
In Alg. 2 line 4, the model calculates the expected count of motifs \(n^{(i)}\) to be sampled for each motif type \(i\) using the motif type proportions \(p_{M}\). Then in line 5, the expected number of motifs for each type is sampled, given the probability \(p_{T}\) that the nodes in the triplet take on the roles needed (Eq. 6). Each node has an expected count \(c_{R}\) of times it will appear in each role over the total number of timesteps \(T\) to be generated. For this reason, in line 8 we sample the timesteps each motif will appear in (Alg. 5, Appendix A), and in line 10 we use the timestep counts to sample the node roles (Alg. 3).
### Learning
Given an observed dynamic graph \(\mathbf{G}\), we estimate the input parameters for our generative process as outlined in Alg. 8, Appendix A.
#### 4.2.1. Node Arrivals
We begin by estimating the node arrival rate \(\widehat{\lambda}_{V}\), which will determine when nodes become active in the dynamic network, by using the first timestep in which each node becomes active (i.e., has its first edge).
\[\widehat{\lambda}_{V}=\frac{\sum_{v\in V}\big{(}\arg\min_{t}\mathbbm{1}(v \in V_{t})\big{)}}{|V|} \tag{1}\]
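A direct transcription of Eq. (1) might look as follows (assuming the snapshots are given as a list of active-node sets \(V_{1},\ldots,V_{T}\); the function name is illustrative).

```python
def estimate_node_arrival(active_node_sets):
    """Eq. (1): average over all nodes of the first timestep at which each node is active."""
    first_seen = {}
    for t, V_t in enumerate(active_node_sets, start=1):
        for v in V_t:
            first_seen.setdefault(v, t)
    return sum(first_seen.values()) / len(first_seen)

print(estimate_node_arrival([{1}, {1, 2}, {1, 2, 3}]))  # (1 + 2 + 3) / 3 = 2.0
```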
find its motif type \(i\) at timestep \(t\) (line 8). If we have previously seen \(\{u,v,w\}\) and the motif type \(i\) is of higher order at the current timestep \(t\), then we update the type stored to be \(i\) (line 13). For example, if we observe the triplet \(\{u,v,w\}\) is a triangle at timestep \(t\) and we previously saw it as a wedge, we update \(M^{T}\{(u,v,w)\}\) as a triangle.
Then, we calculate the motif proportions \(\widehat{p}_{M}^{(i)}\) of each type in the graph, where \(i\) corresponds to the number of edges in the motif (i.e., \(i=1\) for a 1-edge, \(i=2\) for a wedge, and \(i=3\) for a triangle motif).
\[\widehat{p}_{M}^{(i)}=\frac{\left|\left\{\{u,v,w\}\in M\,\middle|\,M^{T}\{(u,v,w)\}=i\right\}\right|}{\widehat{N}},\quad\text{for }i\in\{1,2,3\},\qquad\widehat{p}_{M}^{(0)}=1-\sum_{i=1}^{3}\widehat{p}_{M}^{(i)} \tag{2}\]
where \(M\) is the set of motifs, and \(\{u,v,w\}\) is a motif consisting of the nodes \(u,v,w\).
#### 4.2.3. Motif Inter-Arrivals
We estimate the inter-arrival rates of each observed motif \(\{u,v,w\}\) using weighted edge counts (Eq. 3a). Their rates are then used to learn a rate of inter-arrival rates \(\widehat{\lambda}_{M}^{(i)}\) from the motifs of each type \(i\) (Eq. 3b). Note that we do not need to estimate rates for the empty motif type (\(i=0\)).
\[\widehat{\lambda}_{M}(\{u,v,w\})=\frac{\sum_{t=1}^{T}C_{t}^{M}\{(u,v,w)\}}{T} \tag{3a}\] \[\widehat{\lambda}_{M}^{(i)}=\frac{\sum_{\{u,v,w\}\in M^{(i)}}\widehat{\lambda}_{M}(\{u,v,w\})}{|M^{(i)}|} \tag{3b}\]
where \(M^{(i)}\) is the set of all motifs of type \(i\).
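Taking the edge-weighted counts \(C_{t}^{M}\) (defined just below) as given, Eqs. (3a) and (3b) amount to two nested averages, as in the following sketch (illustrative names only).

```python
import numpy as np

def per_motif_rate(weighted_counts):
    """Eq. (3a): average the edge-weighted occurrence counts of one motif over the T timesteps."""
    return sum(weighted_counts) / len(weighted_counts)

def per_type_rate(counts_per_motif):
    """Eq. (3b): average the per-motif rates over all motifs of one type."""
    return float(np.mean([per_motif_rate(c) for c in counts_per_motif]))

# toy numbers: two wedges observed over T = 4 timesteps
print(per_type_rate([[1.0, 0.0, 0.5, 0.0], [0.0, 1.0, 0.0, 1.0]]))
```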
Since edges might be shared by more than one motif, we use edge-weighted Poisson counts \(C_{t}^{M}\), per timestep \(t\), to estimate the inter-arrival rate for each motif \(\{u,v,w\}\) (Eq. 4). The weights \(W_{t}^{(i)}\) will depend on the motif type \(i\) of \(\{u,v,w\}\) and are calculated for each edge of the motif (Eq. 5).
\[C_{t}^{M}\{(u,v,w)\}=\frac{\sum_{(u^{\prime},v^{\prime})\in E_{t}\left(\{u,v, w\}\right)}W_{t}^{(i)}\big{(}(u^{\prime},v^{\prime})\big{)}}{|E_{t}(\{u,v,w\})|} \tag{4}\]
For a motif \(\{u,v,w\}\), we calculate the weight of its edge \((u^{\prime},v^{\prime})\) using the count for the edge in the timestep window and considering its motif type \(i\) (Eq. 5). We give larger edge-weight to motif types with more edges, since they are more likely to produce the observed edges. This also ensures that motif types with smaller proportion \(p_{M}^{(i)}\) (Eq. 2) have a high enough inter-arrival rate to show up (i.e., triangles).
\[W_{t}^{(i)}(u^{\prime},v^{\prime})=\frac{r_{t}^{(i)}(u^{\prime},v^{\prime})}{|N^{(i)}(u^{\prime},v^{\prime})|} \tag{5a}\] \[r_{t}^{(i-1)}(u^{\prime},v^{\prime})=\min\left(r_{t}^{(i)}(u^{\prime},v^{\prime}),\ |N^{(i-1)}(u^{\prime},v^{\prime})|\right) \tag{5b}\]
where \(|N^{(i)}(u^{\prime},v^{\prime})|\) is the number of motifs of type \(i\), the number of times \((u^{\prime},v^{\prime})\) appears in \(E_{t}\) is \(c_{t}(u^{\prime},v^{\prime})\), the remaining edge count is \(r_{t}^{(i)}(u^{\prime},v^{\prime})\), and for triangles \(r_{t}^{(i+1)}(u^{\prime},v^{\prime})=c_{t}(u^{\prime},v^{\prime})\).
#### 4.2.4. Motif Types
The probability of a node triplet becoming a triangle, wedge, or 1-edge motif is based on the probability that each node takes on the roles needed to form that motif type. The roles for each motif type are shown in Figure 2. Specifically, a triangle requires all three nodes to have the equal3 role, a wedge requires one node to be a hub and the rest to have the spoke role, a 1-edge requires two nodes to have the equal2 role and the remaining one the outlier role (Eq. 6).
\[P_{T}^{(i)}\{(u,v,w)\}=P[u\text{ is }r_{1}\wedge v\text{ is }r_{2}\wedge w\text{ is }r_{3}], \tag{6}\]
where \((r_{1},r_{2},r_{3})\) are the roles required by motif type \(i\), and
\[P[u\text{ is }r]=\frac{count(u,r)}{\sum_{r^{\prime}\in R}count(u,r^{\prime})} \tag{7}\]
where \(R=\{\text{equal3, hub, spoke, equal2, outlier}\}\) is the set of possible roles, and \(count(u,r)\) is the weighted count of times that node \(u\) had role \(r\) (see Alg. 10, Appendix A). The weights are used to avoid over-counting the roles for motifs of the same type with a shared edge.
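Reading the joint probability in Eq. (6) as a product of the per-node role probabilities of Eq. (7) (an independence assumption made here purely for illustration), a minimal sketch for the wedge type is:

```python
def role_probability(role_counts, u, r):
    """Eq. (7): weighted fraction of times node u was observed playing role r."""
    total = sum(role_counts[u].values())
    return role_counts[u].get(r, 0.0) / total if total > 0 else 0.0

def wedge_probability(role_counts, u, v, w):
    """Eq. (6) for the wedge type: u as the hub, v and w as spokes (cf. Figure 2)."""
    return (role_probability(role_counts, u, "hub")
            * role_probability(role_counts, v, "spoke")
            * role_probability(role_counts, w, "spoke"))

counts = {1: {"hub": 2.0, "spoke": 1.0}, 2: {"spoke": 3.0}, 3: {"spoke": 1.0, "outlier": 1.0}}
print(wedge_probability(counts, 1, 2, 3))  # (2/3) * 1 * (1/2) = 1/3
```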
## 5. Methodology
We first describe the baseline models (Subsection 5.1) and datasets (Subsection 5.2) used in our evaluation. Then, we introduce the metrics for evaluating graph structure, our novel approach for
evaluating node behavior, and implementation details of all models (Subsection 5.3).
### Baselines
The related work using motif-based models for temporal graphs focuses on the aggregated temporal graph and not its dynamic changes over time (Zhou et al., 2017). With that in mind, we picked baselines that aim to model the changes in dynamic graphs. We compare our model with three baselines: a temporal edge-based model (SNLD), a model based on node-activity (ADN), and a graph neural network (GNN) model based on temporal random walks (TagGen).
#### 5.1.1. Static Networks with Link Dynamics Model (SNLD)
We used an approach based on (Krishnaman et al., 2017), where they begin by generating a static graph and then generate a series of events. Their procedure begins by sampling degrees from a probability distribution. They refer to these degrees as "stubs" and they create links by connecting these "stubs" randomly. Finally, for each link, they assign a time-series from an inter-event distribution.
In our implementation of the SNLD model, we start by sampling the degrees from a Truncated Power-law distribution. Since our starting point is a static graph, we assume all the nodes to be active already. Then, we sample inter-event times for every edge. We found that we could best model the edge inter-event times in the real data using an Exponential distribution. To learn the Truncated Power-law parameters, we aggregated and simplified the real graph.
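A compact sketch of this SNLD procedure is given below, using networkx's configuration model as a stand-in for the random stub matching; the parameter names and toy values are illustrative and not taken from our actual implementation.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

def snld_generate(n_nodes, exponent, k_max, rate, T):
    """Sample truncated power-law degrees, wire the stubs randomly into a static
    simple graph, then give every edge exponential inter-event times on [0, T]."""
    support = np.arange(1, k_max + 1)
    probs = support ** (-float(exponent))
    degrees = rng.choice(support, size=n_nodes, p=probs / probs.sum())
    if degrees.sum() % 2 == 1:                    # stub matching needs an even total
        degrees[0] += 1
    G = nx.Graph(nx.configuration_model(degrees.tolist(), seed=0))
    G.remove_edges_from(list(nx.selfloop_edges(G)))
    events = {}
    for e in G.edges():
        t, times = 0.0, []
        while True:
            t += rng.exponential(1.0 / rate)
            if t >= T:
                break
            times.append(t)
        events[e] = times
    return G, events

G, events = snld_generate(n_nodes=50, exponent=2.5, k_max=10, rate=0.5, T=20)
print(G.number_of_edges(), sum(len(ts) for ts in events.values()))
```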
#### 5.1.2. Activity-Driven Network Model (ADN)
We use the approach in (Krishnaman et al., 2017), which extends the model in (Krishnaman et al., 2017) by adding memory effects and triadic closure. The triadic closure takes place when node \(i\) connects to node \(k\) forming a triangle with its current neighbor \(j\). Adding a triadic closure mechanism helps to create clustering (communities) (Berg et al., 2016). The memory effect is added by counting the number of times that the nodes have connected up to the current time \(t\). The procedure starts by creating an empty graph \(G_{t}\) at each timestep. Then, for each node \(i\): delete it with probability \(p_{d}\) or mark it as active with probability \(a_{i}\). If the node is "deleted", then the edges in the current timestep are removed, the counts of connections set to zero, and another degree is sampled to estimate a new \(a_{i}\). If a node \(i\) is sampled as active, we connect it to either: (1) a neighbor \(j\), (2) a neighbor of \(j\), or (3) a random node.
In our implementation of the ADN model, we base the probability of creating new edges \(a_{i}\) on the degree of node \(i\), which we sample from a Truncated Power-law distribution. We estimate the parameters using the average degree across timesteps for the nodes in the real graph. There is a fixed probability \(p_{d}\) for any node being "deleted" (losing its memory of previous connections and sampling a new \(a_{i}\)). We estimate this probability using the average ratio of nodes becoming disconnected in the next timestep. To estimate the probability for triadic closure (forming a triangle), we use the average global clustering coefficient across timesteps.
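The following simplified sketch of one ADN timestep shows the three attachment options described above; the resampling of the activity after a "deletion" is omitted, and the 50/50 split between repeating a past contact and drawing a random node is an illustrative choice rather than part of the model.

```python
import numpy as np

def adn_step(neighbors, activity, p_delete, p_triadic, n_nodes, rng):
    """One timestep: a node is 'deleted' (its memory reset) with prob p_delete;
    otherwise it fires with its activity and attaches to a past neighbour, a
    neighbour-of-neighbour (triadic closure), or a uniformly random node."""
    edges = set()
    for i in range(n_nodes):
        if rng.random() < p_delete:
            neighbors[i].clear()                      # forget previous connections
            continue
        if rng.random() >= activity[i]:
            continue
        k = int(rng.integers(n_nodes))                # option (3): a random node
        if neighbors[i]:
            j = int(rng.choice(sorted(neighbors[i])))
            two_hop = sorted(neighbors[j] - {i})
            if two_hop and rng.random() < p_triadic:
                k = int(rng.choice(two_hop))          # option (2): close a triangle
            elif rng.random() < 0.5:
                k = j                                 # option (1): repeat a past contact
        if k != i:
            edges.add((min(i, k), max(i, k)))
            neighbors[i].add(k); neighbors[k].add(i)
    return edges

rng = np.random.default_rng(1)
neighbors = [set() for _ in range(20)]
activity = rng.uniform(0.1, 0.9, size=20)
print(adn_step(neighbors, activity, 0.05, 0.4, 20, rng))
```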
#### 5.1.3. TagGen
TagGen is a deep graph generative model for dynamic networks (Zhou et al., 2017). In their learning process they treat the data as a temporal interaction network, where the network is represented as a collection of temporal edges and each node is associated with multiple timestamped edges at different timestamps. It trains a bi-level self-attention mechanism with local operations (addition and deletions of temporal edges), to model and generate synthetic temporal random walks for assembling temporal interaction networks. Lastly, a discriminator selects generated temporal random walks that are plausible in the input data, and feeds them into an assembling module. We used the available implementation of TagGen2 to learn the parameters from the input graph and assemble the dynamic network using the generated temporal walks.
Footnote 2: [https://github.com/davidchouzdw/TagGen](https://github.com/davidchouzdw/TagGen)
### Datasets
We use the datasets described below, with more detailed statistics shown in Table 4 and Figure 4 of Appendix A.
#### 5.2.1. Enron Emails
The Enron dataset is a network of emails sent between employees of Enron Corporation (Email et al., 2017; Email et al., 2018). Nodes in the network are individual employees and edges are individual emails. Since it is possible to send an email to oneself, loops were removed.
#### 5.2.2. EU Emails
The EU dataset is an email communication network of a large, undisclosed European institution (Email et al., 2018; Email et al., 2018). Nodes represent individual persons and edges indicate at least one email has been sent from one person to the other. All edges are simple and spam emails have been removed from the dataset.
#### 5.2.3. DNC Emails
The DNC dataset is the network of emails of the Democratic National Committee that were leaked in 2016 (Email et al., 2018; Zhou et al., 2017). The Democratic National Committee (DNC) is the formal governing body for the United States Democratic Party. Nodes in the network correspond to persons and an edge denotes an email between two people. Since an email can have any number of recipients, a single email is mapped to multiple edges in this dataset.
#### 5.2.4. Facebook Wall-Posts
The Facebook dataset is a network of a small subset of posts to other users' walls on Facebook (Email et al., 2018; Email et al., 2018). The nodes of the network are Facebook users, and each directed edge represents one post, linking the users writing a post to the users whose wall the post is written on. Since users may write multiple posts on a wall, the network allows multiple edges connecting a single node pair. Since users may write on their own wall, loops were removed.
#### 5.2.5. CollegeMsg
The CollegeMsg dataset is comprised of private messages sent on an online social network at the University of California, Irvine (Levine, 2016; Livine, 2016). Users could search for other users in the network, based on profile information, and then begin conversation. An edge \((j,k,t)\) means that user \(j\) sent a private message to user \(k\) at time \(t\).
### Evaluation
We use two sets of metrics in our evaluation for graph structure and node behavior. The majority of graph structure metrics we selected are widely used to characterize graphs. With these first set of metrics we aim to measure if the overall graph structure of the generated graph \(\mathbf{G}^{\prime}\) is similar to the dataset graph \(\mathbf{G}\). For the second set, we propose to use node-aligned metrics to capture node behavior.
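Before the individual metrics are defined in the next subsubsection, a minimal networkx sketch (the function name is illustrative) shows how the first set can be evaluated on one snapshot; the temporal distributions are then obtained by applying it to every timestep.

```python
import networkx as nx

def structure_metrics(G):
    """Graph-structure metrics of one snapshot; G is an undirected networkx graph."""
    lcc = G.subgraph(max(nx.connected_components(G), key=len))
    return {
        "density": nx.density(G),
        "avg_local_clustering": nx.average_clustering(G),
        "global_clustering": nx.transitivity(G),
        "avg_path_length_lcc": nx.average_shortest_path_length(lcc),
        "s_metric": sum(G.degree(u) * G.degree(v) for u, v in G.edges()),
    }

print(structure_metrics(nx.karate_club_graph()))
```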
#### 5.3.1. Graph Structure Metrics
We use the following graph metrics: density, average local clustering coefficient, global clustering coefficient, average path length of largest connected component (LCC), and s-metric. _Density_ measures ratio of edges in the graph versus the number of edges if it were a complete graph. The _local clustering coefficient_ quantifies the tendency of the nodes of a graph to cluster together, and the _global clustering coefficient_ measures the ratio of closed triplets (triangles) to open and closed triplets (wedges and triangles). The _average (shortest) path length_, for all possible pairs of nodes, measures the efficiency of information transport. The _s-metric_, which is less well-known, measures the extent to which a graph has hub-like structure (K
Emails and Facebook). SNLD performs better on the EU Emails dataset, with our model being a close second, and the CollegeMsg dataset. Finally, ADN performs best on the DNC Emails dataset, but DYMOND significantly outperforms the other two baselines.
### Discussion
SNLD creates a static graph with a degree distribution learned from the input graph and models the edge inter-event times independently. This fails to create graph structure similar to the datasets due to little clustering. The CollegeMsg dataset has low clustering (local and global) but a high s-metric, which indicates large star structure in the graph (i.e., high degree nodes), as seen in Figure 4. In this case, SNLD is able to better match the clustering than the other datasets (Figure 5(d)).
endorsement, recommendation, or favoring by the United States government or Lawrence Livermore National Security, LLC. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States government or Lawrence Livermore National Security, LLC, and shall not be used for advertising or product endorsement purposes. LLNL-CONF-819670
|
2310.15651 | Towards chemical accuracy using a multi-mesh adaptive finite element
method in all-electron density functional theory | Chemical accuracy serves as an important metric for assessing the
effectiveness of the numerical method in Kohn--Sham density functional theory.
It is found that to achieve chemical accuracy, not only the Kohn--Sham
wavefunctions but also the Hartree potential, should be approximated
accurately. Under the adaptive finite element framework, this can be
implemented by constructing the \emph{a posteriori} error indicator based on
approximations of the aforementioned two quantities. However, this way results
in a large amount of computational cost. To reduce the computational cost, we
propose a novel multi-mesh adaptive method, in which the Kohn--Sham equation
and the Poisson equation are solved in two different meshes on the same
computational domain, respectively. With the proposed method, chemical accuracy
can be achieved with less computational consumption compared with the adaptive
method on a single mesh, as demonstrated in a number of numerical experiments. | Yang Kuang, Yedan Shen, Guanghui Hu | 2023-10-24T09:09:07Z | http://arxiv.org/abs/2310.15651v1 | Towards chemical accuracy using a multi-mesh adaptive finite element method in all-electron density functional theory
###### Abstract
Chemical accuracy serves as an important metric for assessing the effectiveness of the numerical method in Kohn-Sham density functional theory. It is found that to achieve chemical accuracy, not only the Kohn-Sham wavefunctions but also the Hartree potential, should be approximated accurately. Under the adaptive finite element framework, this can be implemented by constructing the _a posteriori_ error indicator based on approximations of the aforementioned two quantities. However, this way results in a large amount of computational cost. To reduce the computational cost, we propose a novel multi-mesh adaptive method, in which the Kohn-Sham equation and the Poisson equation are solved in two different meshes on the same computational domain, respectively. With the proposed method, chemical accuracy can be achieved with less computational consumption compared with the adaptive method on a single mesh, as demonstrated in a number of numerical experiments.
keywords: Chemical accuracy; Kohn-Sham equation; Adaptive finite element method; Multi-mesh method
## 1 Introduction
The Kohn-Sham density functional theory plays an important role in the area of quantum physics, condensed matter, and computational chemistry. For a quantum system containing \(N_{nuc}\) nuclei and \(N_{ele}\) electrons, the ground state can be obtained by solving the lowest \(N_{occ}\) eigenpairs \((\varepsilon_{i},\psi_{i})_{i=1,\ldots,N_{occ}}\) from the Kohn-Sham equation
\[\left(-\frac{1}{2}\nabla^{2}+V_{\mathrm{ext}}(\mathbf{r})+V_{\mathrm{Har}}( \mathbf{r})+V_{\mathrm{xc}}(\mathbf{r})\right)\psi_{i}(\mathbf{r})=\varepsilon _{i}\psi_{i}(\mathbf{r}),\ i=1,\ldots,N_{occ}, \tag{1}\]
where \(N_{occ}\), the number of occupied orbitals, equals \(N_{ele}/2\) if \(N_{ele}\) is even and \((N_{ele}+1)/2\) if \(N_{ele}\) is odd. \(V_{\mathrm{ext}}(\mathbf{r})=-\sum_{I}Z_{I}/|\mathbf{R}_{I}-\mathbf{r}|\) represents the external potential due to the nuclei located at \(\{\mathbf{R}_{I}\}_{I=1,\ldots,N_{nuc}}\) with charge \(\{Z_{I}\}_{I=1,\ldots,N_{nuc}}\). \(V_{\mathrm{Har}}(\mathbf{r})\) stands for the Hartree potential which describes the interaction among electrons. \(V_{\mathrm{xc}}(\mathbf{r})\) is the exchange-correlation potential used to absorb the difference between the Kohn-Sham one-body non-interactive system and the real many-body interactive system.
Various numerical methods have been presented to solve the Kohn-Sham density functional theory [27; 1; 29; 2; 4; 18; 5]. To verify the effectiveness of these methods, the ability to achieve chemical accuracy has been a commonly accepted criterion. The chemical accuracy is generally considered as 1 kcal/mole accuracy (\(\approx 1.59\times 10^{-3}\) Hartree/particle) with respect to the total energy [20]. In this work, we consider the chemical accuracy as the total energy to achieve the accuracy within \(1\times 10^{-3}\) Hartree/atom compared with the energy obtained from reference software.
Due to the rapid variations of the wavefunctions around the nuclear positions, a uniform discretization of the Kohn-Sham equation in the computational domain would result in an unaffordable implementation,
especially in all-electron calculations. The adaptive finite element (AFE) methods which enable different mesh sizes are favored in solving the Kohn-Sham equation. Furthermore, the ability of handling the non-periodic boundary condition and complex computational domain makes AFE methods more and more attractive in electronic structure calculation in recent decades [23; 2; 18; 3; 15; 5; 17].
There have been many efforts to achieve chemical accuracy efficiently with adaptive finite element methods, such as using high-order finite elements [13; 18; 21; 15; 5; 9], high-order numerical quadrature [21; 5], etc. In addition, an accurate approximation to the Hartree potential is shown to be key in achieving chemical accuracy [23; 18; 21; 5]. Under the finite element discretization framework, the Hartree potential is generally obtained from solving a Poisson equation on the same computational domain as the Kohn-Sham equation. It is noted that the Hartree potential decays as \(1/r\) at infinity, which is much slower than the exponential decay of the wavefunctions; hence the homogeneous boundary condition on the finite computational domain for the Hartree potential may introduce large errors in the ground state energy [5]. To efficiently reduce such errors, the multipole expansion of the Hartree potential [12] can be used to generate an inhomogeneous boundary condition for the Poisson equation. To keep using the homogeneous boundary condition, several other approaches have been proposed in the literature. Neutralization techniques have been studied [13; 19], where an additional source term is introduced in the Poisson equation, leading to an \(r^{-2}\) or even faster decay of the potential such that the homogeneous boundary condition makes sense. Another approach is adopting a sufficiently large computational domain for the Poisson equation as in [18], wherein the diameter of the computational domain of the Poisson equation is around 100 times larger than that of the Kohn-Sham equation. In such a way, the zero boundary condition is adequate for the Hartree potential.
As stated in [21; 5], one key issue in accurately approximating the Hartree potential is to choose an appropriate finite element space for the Poisson equation. Notably, the electron density, which represents the sum of squared wavefunctions and constitutes the right hand side of the Poisson equation, belongs to a finite element space twice the polynomial degree of that used for the wavefunctions. This suggests a preference for a larger finite element space in the quest to find the Hartree potential. Various approaches, including using a finer mesh with the same polynomial degree as the Kohn-Sham equation [23] or doubling the polynomial degree [21; 5], have shown improvements in the total energy calculations. However, it should be pointed out that these approaches are far from perfect, due to the reason that the approximate space for the Hartree potential in each approach is inherited from the one for wavefunctions. This space may not ideally suit the Hartree potential's distinct behaviors in comparison to wavefunctions. A potential remedy for creating a suitable space accommodating both the Hartree potential and the wavefunctions is the use of an _a posteriori_ error indicator [25], considering contributions from both the Poisson and Kohn-Sham equations. While this approach can yield accurate results, it also introduces significant computational complexity, resulting from the substantial number of degrees of freedom defined on a single mesh, driven by differences not only in decay behavior but also in the magnitude of the error indicator between the two quantities. In light of these observations, a practical solution, that is, the _multi-mesh adaptive method_, to simultaneously address accuracy and complexity concerns emerges through a combination of the aforementioned approaches. This involves the adaptive design of separate approximate spaces for wavefunctions and the Hartree potential. With such an idea, the numerical accuracy issue can be handled well since the approximate space for each variable is tailored according to its own feature, while the complexity is effectively reduced by replacing a large system of linear equations with two smaller ones.
Distinct from the method that solves all the equations on the same mesh (hereafter we call such method _single-mesh method_), the multi-mesh adaptive method (abbreviated as _multi-mesh method_) is to solve different variables with different regularity on different meshes in the same computational domain. The quality of mesh associated with a certain variable can be improved from the mesh adaption process similar to the mesh adaption in the single-mesh method. In this study, the implementation of the multi-mesh method is based on the framework proposed in [14]. Within this framework, mesh adaptation is achieved by locally refining or coarsening the element, a process known as \(h\)-adaptation. A similar idea can be found in [8; 22] where \(p\)-adaption of the finite element space is also considered. The multi-mesh method has been successfully applied to various fields and models, such as in simulating the two- and three-dimensional dendritic growth [11; 6], the wetting and spreading problems [7], the photonic band structure optimization [28], the eigenvalue problems [10], etc. With the multi-mesh method, chemical accuracy is able to be achieved since both wavefunctions and the Hartree potential are solved on different but qualified spaces.
An anticipated advantage of the multi-mesh method, when compared to the single-mesh method, is
the potential to reduce computational costs in achieving chemical accuracy. This can be explained by the example displayed in Figure 1. The first mesh can be viewed as the mesh for solving the Kohn-Sham equation, with a dense grid distribution near the singularity and a sparse distribution for regions far from it. The middle mesh is used to solve the Hartree potential, and due to its smooth behavior and slow decay, the mesh grid distribution remains uniform. Finally, the last mesh, formed without employing the multi-mesh strategy, should be able to capture both the singularity and the slow decay of the Hartree potential. The numbers of mesh grids for these three meshes are 37, 49, and 57, respectively. Assume that the linear finite element method is adopted. In the multi-mesh method, a generalized eigenvalue problem \(Ax=\lambda Bx\) with \(A,B\in\mathbb{R}^{37\times 37}\) and a linear system \(Sx=b\) with \(S\in\mathbb{R}^{49\times 49}\) need to be solved. For the finite element method using a single mesh, by contrast, the eigenvalue problem and the linear system are of the same size, \(A,B,S\in\mathbb{R}^{57\times 57}\). Apparently, the computational cost for solving these two problems in the multi-mesh method is much lower than that of the single-mesh method in this example.
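The size argument can be made concrete with a few lines of SciPy; random symmetric positive-definite matrices stand in for the assembled finite element matrices, and the snippet only illustrates which problems are solved at which size.

```python
import numpy as np
from scipy.linalg import eigh, solve

rng = np.random.default_rng(0)

def spd(n):
    """Random symmetric positive-definite stand-in for an assembled FEM matrix."""
    B = rng.standard_normal((n, n))
    return B @ B.T + n * np.eye(n)

# multi-mesh: a 37x37 generalized eigenvalue problem plus a separate 49x49 linear solve
eigh(spd(37), spd(37)); solve(spd(49), rng.standard_normal(49))
# single mesh: both problems are assembled and solved at the larger size 57
eigh(spd(57), spd(57)); solve(spd(57), rng.standard_normal(57))
```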
Nevertheless, the application of the multi-mesh method is accompanied by several challenges. The first of these challenges is the effective management of mesh grids, requiring the capacity to perform local grid refinements or coarsening in response to specific problem demands. This management process must seamlessly facilitate the update of solutions from the previous mesh to the new one. The second challenge centers on the critical necessity for efficient and accurate multi-mesh communication, particularly when computing integrals. The establishment of smooth communication is indispensable for maintaining consistency and accuracy across different meshes. To address these challenges, we introduce the hierarchical geometry tree data structure. This tree-based approach facilitates the natural and efficient manner in the management of the mesh grids. Furthermore, in order to prevent any loss of accuracy during communications across various meshes, we adopt a strategy that maximizes the utilization of quadrature points in numerical integrals, relying solely on interpolation without the need for a projection process. This approach ensures the preservation of numerical accuracy throughout the communication process.
By employing the multi-mesh approach, we establish a multi-mesh adaptive finite element framework for all-electron density functional theory. With the additional treatment of the Hartree potential, systematic convergence of the total energy to chemical accuracy is observed, and the computational cost is lower than that of the single-mesh method at the same accuracy. The mesh adaption is based on the residual-based _a posteriori_ error estimation of the Kohn-Sham equation and the Poisson equation whose solution is the Hartree potential. The ability to achieve chemical accuracy is verified by a series of numerical examples of atoms and molecules. By a detailed comparison of computational time between the single-mesh method and the multi-mesh method, the efficiency of the multi-mesh adaptive method is justified.
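The cross-mesh integration strategy mentioned above (interpolating at quadrature points rather than projecting between spaces) can be illustrated with a one-dimensional piecewise-linear sketch; the helper names are illustrative, and the actual implementation works on hierarchical three-dimensional meshes.

```python
import numpy as np

def interp_p1(x_nodes, values, x):
    """Evaluate a piecewise-linear finite element function at a point x."""
    return np.interp(x, x_nodes, values)

def cross_mesh_integral(mesh_a, f_a, mesh_b, g_b):
    """Integrate f (living on mesh_a) times g (living on mesh_b) with a two-point
    Gauss rule on the elements of mesh_b: f is only interpolated at the quadrature
    points, so no projection of f onto mesh_b is required."""
    integral = 0.0
    for xl, xr in zip(mesh_b[:-1], mesh_b[1:]):
        mid, half = 0.5 * (xl + xr), 0.5 * (xr - xl)
        for xi in (-1.0 / np.sqrt(3.0), 1.0 / np.sqrt(3.0)):
            xq = mid + half * xi
            integral += half * interp_p1(mesh_a, f_a, xq) * interp_p1(mesh_b, g_b, xq)
    return integral

mesh_a, mesh_b = np.linspace(0, 1, 5), np.linspace(0, 1, 9)
# integral of the piecewise-linear interpolant of x^2 against 1 (exact value 0.34375)
print(cross_mesh_integral(mesh_a, mesh_a ** 2, mesh_b, np.ones_like(mesh_b)))
```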
The organization of this paper is as follows. In Section 2, we introduce the Kohn-Sham equation, the Poisson equation, and the finite element discretizations of these two equations. Then in Section 3, the single-mesh adaptive finite element framework for Kohn-Sham density functional theory is reviewed and tested on the Helium atom example. This is followed by Section 4, in which the multi-mesh adaptive method is presented and also examined on the Helium example. In Section 5, the efficiency and accuracy of the presented method are discussed in detail through numerical examples. Finally, this paper ends with the conclusion in Section 6.
Figure 1: An example for the multi-mesh method. The left two images show the meshes used in the multi-mesh method. The right one shows the mesh used in the single-mesh method. The numbers of mesh grids for the three meshes are 37, 49, and 57, respectively.
## 2 The Kohn-Sham equation and finite element discretization
### Kohn-Sham density functional theory
The Kohn-Sham equation for a \(p\)-electron system is read as the following eigenvalue problem
\[\left\{\begin{array}{ll}\hat{H}\psi_{l}(\mathbf{r})=\varepsilon_{l}\psi_{l}( \mathbf{r}),&l=1,2,\ldots,p,\\ \int_{\mathbb{R}^{3}}\psi_{l}\psi_{l^{\prime}}\,\mathrm{d}\mathbf{r}=\delta_{ ll^{\prime}},&l,l^{\prime}=1,2,\ldots,p,\end{array}\right. \tag{2}\]
where \((\varepsilon_{l},\psi_{l})\) is the \(l\)-th eigenpair, \(\delta_{ll^{\prime}}\) is the Kronecker delta, and \(\hat{H}\) stands for the Hamiltonian operator. We denote \(\rho(\mathbf{r})=\sum_{l=1}^{p}|\psi_{l}|^{2}(\mathbf{r})\) as the electron density. Specifically, \(\hat{H}\) consists of the following four terms
\[\hat{H}([\rho];\mathbf{r})=-\frac{1}{2}\nabla_{\mathbf{r}}^{2}+V_{\mathrm{ext }}(\mathbf{r})+V_{\mathrm{Har}}([\rho];\mathbf{r})+V_{\mathrm{xc}}([\rho]; \mathbf{r}), \tag{3}\]
where the notation \(V([\rho];\mathbf{r})\) implies that \(V\) is a functional of the electron density \(\rho\). The first term \(-\nabla^{2}/2\) in \(\hat{H}\) is the kinetic operator. The second term in \(\hat{H}\) describes the Coulomb external potential due to the nuclei which takes the form
\[V_{\mathrm{ext}}(\mathbf{r})=-\sum_{j=1}^{M}\frac{Z_{j}}{|\mathbf{r}- \mathbf{R}_{j}|},\]
where \(M\) is the number of nuclei. The third term is the Hartree potential describing the Coulomb repulsion among the electrons
\[V_{\mathrm{Har}}([\rho];\mathbf{r})=\int_{\mathbb{R}^{3}}\frac{\rho(\mathbf{r }^{\prime})}{|\mathbf{r}-\mathbf{r}^{\prime}|}\,\mathrm{d}\mathbf{r}^{\prime}. \tag{4}\]
The last term \(V_{\mathrm{xc}}\) stands for the exchange-correlation potential, which is caused by the Pauli exclusion principle and other non-classical Coulomb interactions. Note that the analytical expression for the exchange-correlation term is unknown and therefore an approximation is needed. Specifically, the local density approximation (LDA) from the library Libxc [16] with the slater exchange potential and the Vosko-Wilk-Nusair (VWN4) [26] is adopted in this work.
Note that direct evaluation of the Hartree potential (4) requires computational cost \(\mathcal{O}(N^{2})\) with \(N\) being the number of grid points on the computational domain \(\Omega\). For simplicity, we denote \(\phi=V_{\mathrm{Har}}(\mathbf{r})\) hereafter. In this paper, the Hartree potential is obtained by solving the Poisson equation
\[\begin{cases}-\nabla^{2}\phi(\mathbf{r})=4\pi\rho(\mathbf{r}),\,\text{in}\, \,\Omega,\\ \phi(\mathbf{r})=\phi_{\partial\Omega}(\mathbf{r}),\,\text{on}\,\,\partial \Omega.\end{cases} \tag{5}\]
The boundary value \(\phi_{\partial\Omega}(\mathbf{r})\) is evaluated by the multipole expansion method. Specifically, the following approximation is used
\[\phi(\mathbf{r})|_{\mathbf{r}\in\partial\Omega}\approx \frac{1}{|\mathbf{r}-\mathbf{r}^{\prime\prime}|}\int_{\Omega}\rho \left(\mathbf{r}^{\prime}\right)\,\mathrm{d}\mathbf{r}^{\prime}+\sum_{i=1,2,3 }p_{i}\cdot\frac{r^{i}-r^{\prime\prime,i}}{\left|\mathbf{r}-\mathbf{r}^{ \prime\prime}\right|^{3}}+\sum_{i,j=1,2,3}q_{ij}\cdot\frac{3\left(r^{i}-r^{ \prime\prime,i}\right)\left(r^{j}-r^{\prime\prime,j}\right)-\delta_{ij}\left| \mathbf{r}-\mathbf{r}^{\prime\prime}\right|^{2}}{\left|\mathbf{r}-\mathbf{r}^ {\prime\prime}\right|^{5}},\]
where
\[p_{i}=\int_{\Omega}\rho\left(\mathbf{r}^{\prime}\right)\left(r^{\prime,i}-r^{ \prime\prime,i}\right)\,\mathrm{d}\mathbf{r}^{\prime},\quad q_{ij}=\int_{ \Omega}\frac{1}{2}\rho\left(\mathbf{r}^{\prime}\right)\left(r^{\prime,i}-r^{ \prime\prime,i}\right)\left(r^{\prime,j}-r^{\prime\prime,j}\right)\,\mathrm{d} \mathbf{r}^{\prime}.\]
In the above expressions, \(\mathbf{r}^{\prime\prime}\) stands for an arbitrary point in \(\Omega\). In the simulations, we choose it to be
\[\mathbf{r}^{\prime\prime}=\frac{\int\mathbf{r}\rho(\mathbf{r})\,d\mathbf{r}}{ \int\rho(\mathbf{r})\,d\mathbf{r}}.\]
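A small NumPy sketch of how these boundary values could be assembled from the multipole moments is given below; `points`/`weights` stand for any quadrature rule over \(\Omega\), and all names are illustrative rather than taken from our implementation.

```python
import numpy as np

def multipole_boundary_value(points, weights, rho, r_bnd):
    """Approximate Hartree potential at a boundary point from the monopole, dipole
    and quadrupole moments of the density, expanded about the charge center r''."""
    w_rho = weights * rho
    Q = w_rho.sum()                                    # total charge, int rho dr
    center = (points * w_rho[:, None]).sum(0) / Q      # expansion center r''
    d = points - center
    p = (d * w_rho[:, None]).sum(0)                    # dipole moments p_i
    q = 0.5 * np.einsum("k,ki,kj->ij", w_rho, d, d)    # quadrupole moments q_ij
    s = np.asarray(r_bnd, dtype=float) - center
    r = np.linalg.norm(s)
    quad = (3.0 * np.outer(s, s) - np.dot(s, s) * np.eye(3)) / r**5
    return Q / r + np.dot(p, s) / r**3 + np.einsum("ij,ij->", q, quad)

# toy check: a narrow Gaussian blob of unit charge looks like a point charge far away
pts = np.random.default_rng(0).normal(scale=0.2, size=(20000, 3))
w = np.full(len(pts), 1.0 / len(pts))
print(multipole_boundary_value(pts, w, np.ones(len(pts)), [10.0, 0.0, 0.0]))  # ~0.1
```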
### Finite element discretization
In practical simulations, a bounded polyhedral domain \(\Omega\subset\mathbb{R}^{3}\) is served as the computational domain. Assume that the finite element space \(V_{h}\) is constructed on \(\Omega\) and the finite element basis of \(V_{h}\) is denoted as \(\{\varphi_{1},\ldots,\varphi_{n}\}\) with \(n\) being the dimension of the space. Then the wavefunction \(\psi_{i}\) can be approximated as \(\psi_{i}^{h}\) on \(V_{h}\) via
\[\psi_{i}^{h}(\mathbf{r})=\sum_{k=1}^{n}X_{k,i}\varphi_{k},\quad X_{k,i}=\psi_{i }\left(\mathbf{r}_{k}\right),\quad X\in\mathbb{R}^{n\times p}, \tag{6}\]
where \(\mathbf{r}_{k}\) denotes the node corresponding to the \(k\)-th basis function. As a result, finding the approximation of the wavefunctions \(\{\psi_{i}\}\) on \(V_{h}\) amounts to finding \(X\), i.e., the values of \(\psi_{i}\) at the nodes \(\mathbf{r}_{k}\).
In the finite element space \(V_{h}\), the variation form for the KS equation (2) becomes: find \((\psi_{i}^{h},\varepsilon_{i}^{h})_{i=1,\ldots,p}\in V_{h}\times\mathbb{R}\), such that
\[\int_{\Omega}\varphi\hat{H}\psi_{i}^{h}\,\mathrm{d}\mathbf{r}= \varepsilon_{i}^{h}\int_{\Omega}\varphi\psi_{i}^{h}\,\mathrm{d}\mathbf{r}, \quad\forall\varphi\in V_{h}.\]
By letting \(\varphi\) be the finite element basis function and inserting (6) to the above variational form, we have the following discrete eigenvalue problem
\[AX=\varepsilon MX. \tag{7}\]
Here \(A\) and \(M\) are symmetric matrices with the entries
\[A_{i,j} =\int_{\Omega}\frac{1}{2}\nabla\varphi_{j}\cdot\nabla\varphi_{i} +\big{(}V_{\mathrm{ext}}+\phi+V_{\mathrm{xc}}\big{)}\varphi_{j}\varphi_{i}\, \mathrm{d}\mathbf{r}, \tag{8}\] \[M_{i,j} =\int_{\Omega}\varphi_{j}\varphi_{i}\,\mathrm{d}\mathbf{r}. \tag{9}\]
Similarly, we can obtain the linear system for the Poisson equation (5) on \(V_{h}\):
\[S\Phi=\mathbf{b}, \tag{10}\]
where \(S\) is the stiff matrix with the entry
\[S_{i,j}=\int_{\Omega}\nabla\varphi_{j}\cdot\nabla\varphi_{i}\,\mathrm{d}\mathbf{r}, \tag{11}\]
and the right hand side \(\mathbf{b}\) and the discretized \(\phi^{h}\)
\[\mathbf{b}_{i}=\int_{\Omega}4\pi\rho(\mathbf{r})\varphi_{i}\,\mathrm{d} \mathbf{r},\quad\phi^{h}(\mathbf{r})=\sum_{k=1}^{n}\Phi_{k}\varphi_{k},\quad \Phi\in\mathbb{R}^{n}.\]
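Once the matrices above are assembled, one self-consistent step reduces to a generalized eigenvalue problem and a sparse linear solve. The sketch below shows these two solves with SciPy on a crude one-dimensional P1 surrogate (zero boundary conditions on \((0,1)\)); it only illustrates the linear algebra of Eqs. (7) and (10) and is not an actual Kohn-Sham discretization.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh, spsolve

def solve_discrete_systems(A, M, S, b, n_occ):
    """Lowest n_occ eigenpairs of A X = eps M X (Eq. 7) and the Poisson system S Phi = b (Eq. 10)."""
    eps, X = eigsh(A, k=n_occ, M=M, sigma=0.0, which="LM")   # shift-invert about 0
    return eps, X, spsolve(S, b)

n = 200; h = 1.0 / (n + 1)
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc") / h        # P1 stiffness
M = sp.diags([1.0, 4.0, 1.0], [-1, 0, 1], shape=(n, n), format="csc") * (h / 6.0)  # P1 mass
A = 0.5 * K                          # "kinetic-only" Hamiltonian -1/2 d^2/dx^2
b = 4.0 * np.pi * (M @ np.ones(n))   # weak right-hand side for a constant density rho = 1
eps, X, Phi = solve_discrete_systems(A, M, K, b, n_occ=2)
print(eps)                           # close to 0.5 * (k * pi)^2 for k = 1, 2
```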
Owing to the singularity in the Hamiltonian (3), a uniform finite element discretization would lead to a large number of mesh grids to achieve chemical accuracy. Hence the adaptive mesh method is necessary to efficiently solve the Kohn-Sham equation, which will be discussed in the next section.
## 3 The adaptive finite element method
In this section, we will begin by introducing the adaptive finite element method that relies on the _a posteriori_ error estimates [25]. Subsequently, we will present and compare two error indicators used to facilitate mesh adaptation for solving the Kohn-Sham equation (2). The sole distinction between these two error estimators is their utilization of the error involved in solving the Hartree potential.
### The adaptive method based on the residual _a posteriori_ error estimation
The adaptive mesh techniques offer enhanced numerical accuracy while requiring fewer mesh grids compared to uniform mesh methods. In this study, our primary focus lies on the \(h\)-adaptive methods, which involve local refinement and/or coarsening of mesh grids. An important aspect of \(h\)-adaptive methods is the determination of an error indicator. Generally speaking, error indicators identify regions within the domain that necessitate local refinement or coarsening, and they are typically derived from _a posteriori_ error estimations [25]. When there is only one orbital in the system, it is natural to generate the indicator based on information from that specific orbital. However, when there are multiple orbitals in the system, generating the indicator solely from an individual orbital is no longer advisable. This is because every orbital in the system is expected to be well-resolved using the mesh grids after mesh adaptation. To address this, we adopt the strategy proposed in [2] for indicator generation. First, indicators are individually generated for each orbital using a specific method. Then, normalization is applied to each indicator. The final indicator is obtained by combining these normalized indicators.
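The exact normalization used in [2] is not repeated here; the following sketch merely illustrates the idea with an \(\ell^{2}\)-type combination of the normalized per-orbital indicators (the array shapes and the combination rule are illustrative assumptions).

```python
import numpy as np

def combine_orbital_indicators(eta):
    """Combine per-orbital, per-element error indicators into one indicator per element.
    `eta` has shape (n_orbitals, n_elements); each orbital's indicator is normalized
    first so that no single orbital dominates the mesh adaptation."""
    eta = np.asarray(eta, dtype=float)
    norms = np.linalg.norm(eta, axis=1, keepdims=True)
    normalized = eta / np.where(norms > 0.0, norms, 1.0)
    return np.sqrt((normalized ** 2).sum(axis=0))

eta = np.array([[0.50, 0.10, 0.01],
                [0.02, 0.30, 0.20]])
print(combine_orbital_indicators(eta))   # one indicator value per element
```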
Specifically, based on the _a posteriori_ error estimates [25], the residual-based _a posteriori_ error indicator for the KS equation (2) in the element \(\mathcal{T}_{K}\) can be defined as
\[\eta_{K,\mathrm{KS}}=\left(h_{K}^{2}\sum_{l=1}^{p}\left\|\mathbb{R}_{K,\mathrm{KS}}(\psi_{l})\right\|_{K}^{2}+\sum_{e\subset\partial\mathcal{T}_{K}}\frac{1}{2}h_{e}\sum_{l=1}^{p}\left\|\mathbb{J}_{e}(\psi_{l})\right\|_{e}^{2}\right)^{\frac{1}{2}}, \tag{12}\]
where \(h_{K}\) represents the largest length of the edges of the element \(\mathcal{T}_{K}\), \(h_{e}\) stands for the largest length of the common face \(e\) of \(\mathcal{T}_{K}\) and \(\mathcal{T}_{J}\), \(\mathbb{R}_{K,\mathrm{KS}}(\psi)\) and \(\mathbb{J}_{e}(\psi)\) are the residual and jump term on the element \(\mathcal{T}_{K}\), whose formulations are written as
\[\begin{cases}\mathbb{R}_{K,\mathrm{KS}}(\psi_{l})=\hat{H}\psi_{l}-\varepsilon_{l}\psi_{l},\\ \mathbb{J}_{e}(\psi_{l})=\left(\nabla\psi_{l}\big{|}_{\mathcal{T}_{K}}-\nabla\psi_{l}\big{|}_{\mathcal{T}_{J}}\right)\cdot\mathbf{n}_{e},\end{cases}\]
where \(\mathbf{n}_{e}\) is the outward normal vector on the face \(e\) w.r.t. the element \(\mathcal{T}_{K}\). The definitions of the norms are
\[\left\|f(x)\right\|_{K}=\left(\int_{K}(f(x))^{2}dx\right)^{\frac{1}{2}},\ \ \left\|f(x)\right\|_{e}=\left(\int_{e}(f(x))^{2}dx\right)^{\frac{1}{2}}.\]
The indicator (12) involves the error arising from the Kohn-Sham equation (2) and is usually directly adopted to guide the mesh adaption.
An aspect that the indicator (12) may overlook is the error associated with the Hartree potential. It is important to emphasize the accurate approximation of the Hartree potential, as it is a crucial component of the Hamiltonian (3) and contributes to the Hartree potential energy. As the Hartree potential is obtained by solving the Poisson equation (5), a similar approach can be employed to generate an error indicator specifically for the Hartree potential, just as done for the Kohn-Sham equation. Specifically, the error indicator for the Hartree potential can be defined as
\[\eta_{K,\mathrm{Har}}=\left(h_{K}^{2}\left\|\mathbb{R}_{K,\mathrm{Har}}(\phi)\right\|_{K}^{2}+\sum_{e\subset\partial\mathcal{T}_{K}}\frac{1}{2}h_{e}\left\|\mathbb{J}_{e}(\phi)\right\|_{e}^{2}\right)^{\frac{1}{2}}, \tag{13}\]
where the residual part is defined as \(\mathbb{R}_{K,\mathrm{Har}}(\phi)=\nabla^{2}\phi+4\pi\rho\). Based on the analysis above, the second indicator for solving the Kohn-Sham equation can be designed as
\[\eta_{K,\mathrm{KS+Har}}=\sqrt{\eta_{K,\mathrm{Har}}^{2}+\eta_{K,\mathrm{KS}}^ {2}}. \tag{14}\]
The aim of using this error indicator is to generate a mesh on which both the wavefunctions and the Hartree potential are approximated well.
The mesh adaptation process, along with the solution of the Kohn-Sham equation, can be carried out using either the first (12) or the second error indicator (14). It is important to note that the adaptive algorithms utilizing these two different indicators are essentially identical, except for the choice of the indicator. Moreover, all simulations are performed on a single mesh simultaneously, i.e., the Poisson equation (5) for the Hartree potential is discretized and solved on the same finite element space for the Kohn-Sham equation. Consequently, we present the adaptive finite element method for solving the
Kohn-Sham equation, outlined in Algorithm 1, and refer to this approach as the _single-mesh adaptive method_. Briefly, the adaptive algorithm consists of an outer iteration and an inner iteration. In the inner iteration, it is an SCF method for solving the nonlinear Kohn-Sham equation. Meanwhile, it is the mesh adaption procedure in the outer iteration. In the next subsection, we would like to show an example using the single mesh adaptive method with the first indicator (12).
```
Input : initial mesh T^(0), initial electron density rho, initial energy E = 0,
        energy tolerance tol_1, density tolerance tol_2
repeat
    E_old = E
    repeat                                       // SCF iteration on the current mesh
        rho_old = rho
        Calculate the Hartree potential phi
        Generate the Hamiltonian matrix A
        Solve the eigenvalue problem A X = eps M X
        Update the electron density rho
    until ||rho - rho_old|| < tol_2
    Calculate the ground state energy E
    Mesh adaption based on the error indicator
until |E - E_old| < tol_1
Output: total ground state energy E
```
**Algorithm 1** Single-mesh adaptive method.
### An issue for numerical solutions towards chemical accuracy
#### 3.2.1 Single-mesh adaptive method using indicator (12)
To assess the convergence and behavior of the single-mesh adaptive finite element method with the error indicator (12), we conduct a series of eight experiments for the helium atom, each involving a different number of mesh grids, controlled by varying the adaption tolerance. In this example, the reference value is obtained from the state-of-the-art software NWChem using the _aug-cc-pv6z_ basis set [24]. The hardware and software configurations of the experiments are described at the beginning of Section 5.
The results are displayed in Figure 2. As the tolerance for the error indicator decreases, the number of degrees of freedom increases. The convergence of the total energy would serve as evidence of the superiority of the adaptive method compared to the method employing a uniform mesh partition. However, as observed in the left panel of Figure 2, the total energy does not converge to the reference result. Even on the mesh generated from the smallest tolerance, which consists of 1,737,569 mesh grids, a discrepancy of 0.01 Hartree in the energy is still observed, indicating that the adaptive method does not achieve the desired convergence.
As discussed earlier, the error in the total energy primarily stems from the inaccurate approximation of the Hartree energy. This is confirmed by comparing the Hartree energy with the reference value \(E_{\text{Har,ref}}=1.9961\) Hartree, as depicted on the right side of Figure 2. It is observed that although the error in the Hartree energy diminishes with respect to the number of Dofs, on the finest mesh, the Hartree
Figure 2: Convergence history of the total energy (left), absolute errors of energies and eigenvalue (right) for the adaptive method using error indicator (12).
energy is found to be 0.01 Hartree lower than the reference value, resulting in a 0.01 Hartree error in the total energy. Furthermore, an imprecise approximation of the Hartree potential can also introduce a 0.01 Hartree error in the eigenvalue, as depicted on the right side of Figure 2. As a comparison, it can be observed that the errors in the one-electron energy and the exchange-correlation energy systematically decrease as the number of Dofs increases.
The reason behind the inaccurate approximation of the Hartree potential can be elucidated by referring to Figure 3 (upper row), which displays the results obtained from the finest mesh consisting of 1,765,990 mesh grids. In the left two columns, the global and local mesh distributions are depicted. It is evident that the mesh grid density is considerably high around the singularity located at the origin, with the smallest mesh size being approximately 0.001 Bohr. Conversely, in regions distant from the singularity, the mesh grid distribution is sparse. Specifically, near the boundary, the largest mesh size can exceed 2 Bohr. Such a mesh distribution proves to be suitable for representing wavefunctions and electron density, as illustrated in the third column of Figure 3 (upper row), since both quantities exhibit exponential decay, and the region where the density exceeds 0.1 is confined to a small interval of \([-1,1]^{3}\) for the helium atom. However, in the fourth column of Figure 3 (upper row), it is apparent that the Hartree potential exhibits a much slower decay behavior, with potential values surpassing 0.1 throughout the entire computational domain of \([-10,10]^{3}\). Furthermore, the contours near the boundary oscillate and are not smooth. As a result, a mesh size as large as 2 Bohr is evidently insufficient for capturing the variations in the Hartree potential accurately. In order to enhance the numerical accuracy of the Kohn-Sham equation, it is essential to obtain a more precise approximation of the Hartree potential.
#### 3.2.2 Single mesh adaptive method using indicator (14)
To achieve a more accurate Hartree potential, a direct approach is to incorporate the corresponding error indicator (13) with the indicator of the Kohn-Sham equation (12). This combination results in the second indicator (14). Consequently, we apply the single mesh adaptive method again to the Helium example, but this time utilizing the second indicator (14).
The convergence of energies and eigenvalues is depicted in Figure 4. Notably, there is a systematic convergence observed in both the total energy and the Hartree potential energy. Moreover, the eigenvalue demonstrates convergence towards the reference value. On the finest mesh, which consists of 2,874,807 grid points, chemical accuracy is achieved. Sliced mesh representations and profiles of the electron density and Hartree potential are shown in Figure 3 (lower row). A noticeable difference from the mesh in Figure 3 (upper row) is the presence of smaller mesh sizes in regions far away from the origin. As a result, the Hartree potential is better captured, as illustrated in the fourth column of Figure 3 (lower row), from which we can find smoother contour lines than those in Figure 3 (upper row) using the first indicator (12).
Figure 3: Top row: results from the single-mesh adaptive method using the indicator (12), with 1,765,990 mesh grids in total, i.e., global mesh and zoomed-in mesh on the sliced \(X\)-\(Y\) plane (left two), density profile (third one), and Hartree potential profile (fourth one). Bottom row: corresponding results using the indicator (14), with 2,874,807 mesh grids in total.
While the Hartree potential is approximated accurately and chemical accuracy is obtained, the computational cost increases significantly. The number of mesh grids on the finest mesh reaches 2,874,807, surpassing the number of grids used in the previous section. This increment in the number of mesh grids is primarily caused by the need to capture the variations of the Hartree potential. Such an increase in the computational grid introduces additional complexity and computational overhead, leading to longer computation time and higher memory requirements. This becomes especially problematic when dealing with larger system sizes or when conducting complex simulations. To address this issue and reduce the computational cost, an alternative approach, known as the multi-mesh adaptive method, will be introduced in the next section.
## 4 The multi-mesh adaptive method
In the preceding section, we explored the solution of the discretized Kohn-Sham equation on a single mesh, employing either the residual-based _a posteriori_ error indicator (12) or (14). The results indicated that employing the first indicator (12) alone may fall short of achieving chemical accuracy. On the other hand, the second indicator (14) demonstrated the ability to attain chemical accuracy; however, it came at the cost of significantly increased computational requirements.
To tackle this challenge, we introduce an approach known as the multi-mesh adaptive method [14]. The primary objective of this method is to strike a balance between achieving chemical accuracy and managing computational costs effectively. By utilizing multiple meshes instead of a single mesh, the multi-mesh adaptive method offers greater flexibility in adapting the mesh resolution to capture variations for different quantities of interest.
Specifically, in this method we utilize two distinct meshes, both of which are adapted during the simulation. The first mesh is specifically designed for solving the Kohn-Sham equation, with the primary objective of accurately capturing the variations in the wavefunctions. This mesh is tailored to ensure high resolution in regions where the wavefunctions exhibit significant changes. Conversely, the second mesh is dedicated to solving the Poisson equation (5). Its primary purpose is to capture the variations in the Hartree potential effectively. This second mesh is strategically designed to provide optimal resolution and precision in regions where the Hartree potential exhibits substantial changes. By employing two separate meshes with specific focus areas, we can ensure that each equation is solved with the appropriate level of accuracy and capture the variations unique to the wavefunctions and the Hartree potential, respectively.
To implement the multi-mesh method, careful handling of two components is crucial. The first component involves effectively managing the mesh grids, allowing for flexible local refinement or coarsening as needed. This management mechanism should also facilitate efficient solution updates from the old mesh to the new mesh. The second component focuses on ensuring efficient and accurate communication between different meshes, particularly in the calculation of integrals. Efficient communication protocols play a vital role in maintaining consistency and accuracy across the various meshes. These requirements can be fulfilled by the hierarchical geometry tree data structure, which will be introduced in detail in the following.
Figure 4: Convergence history of the total energy (left), absolute errors of energies and eigenvalue (right) for the adaptive method using error indicator (14).
### The hierarchical geometry tree
A well-designed data structure for the mesh grids is needed for an effective management mechanism. In the presented algorithm, the hierarchical geometry tree (HGT) [14; 2] is utilized.
Firstly, the mesh structure is described hierarchically, which means that the mesh information is given from the lowest dimension (0-D, the points) to the highest dimension (3-D, the tetrahedra). An element such as a point in 0-D, an edge in 1-D, a triangle in 2-D, or a tetrahedron in 3-D is called a geometry. In the hierarchical description of a tetrahedron, all geometries have a belonging-to relationship. For example, if an edge is one of the edges of a triangle, this edge belongs to this triangle. With this hierarchical structure, the geometry information of a tetrahedron can be accessed flexibly, and the refinement and coarsening of a mesh can also be implemented efficiently.
Secondly, the mesh is stored and managed by a tree data structure. The validity of using the tree data structure is due to the strategy of element refinement and coarsening. Specifically, for a tetrahedron element (the left of Figure 5), refinement divides it into eight equally sized smaller tetrahedra by connecting the midpoints of its edges (the right of Figure 5). As a result, a belonging-to relationship can be established: any small tetrahedron, called a "child", belongs to the large "parent" tetrahedron. Meanwhile, coarsening any child tetrahedron in the right of Figure 5 releases all the children to recover the parent tetrahedron. With this refinement and coarsening strategy, the tree data structure can be established.
For a better understanding of the tree data structure, a two-dimensional illustration is presented. Similar to the 3D case, the refinement of a 2D element, i.e. the triangle, divides the triangle into four equal triangles, as displayed in the left two columns of Figure 6. By refining a triangle \(\mathcal{T}_{0}\), four sub-triangles \(\{\mathcal{T}_{0,0},\mathcal{T}_{0,1},\mathcal{T}_{0,2},\mathcal{T}_{0,3}\}\) are generated. Furthermore, by refining \(\mathcal{T}_{0,0}\) the sub-triangles \(\{\mathcal{T}_{0,0,0},\mathcal{T}_{0,0,1},\mathcal{T}_{0,0,2},\mathcal{T}_{0, 0,3}\}\) are obtained, as demonstrated in Figure 6. To manage this procedure, the quadtree data structure, in which each internal node has exactly four children, is utilized, as described in Figure 7. When the local refinement and coarsening techniques are adopted, only some triangles are to be refined, as indicated in the right of Figure 6 and the bottom of Figure 7, where only the triangle \(\mathcal{T}_{0,0}\) is refined. In the quadtree, we call \(\mathcal{T}_{0}\) the root node, and those nodes without further descendants, like \(\mathcal{T}_{0,1}\) and \(\mathcal{T}_{0,0,1}\), the leaf nodes. Suppose that there is a set of root nodes \(\{\mathcal{T}_{i}\},i=0,1,2,\dots\), which forms a mesh for a domain \(\Omega\); then from the above definition we know that the set of all leaf nodes also forms a mesh. For example, the set \(\{\mathcal{T}_{0}\}\) forms the mesh in the left of Figure 6, the set \(\{\mathcal{T}_{0,0},\mathcal{T}_{0,1},\mathcal{T}_{0,2},\mathcal{T}_{0,3}\}\) forms the middle mesh in Figure 6, and the set \(\{\mathcal{T}_{0,0,0},\mathcal{T}_{0,0,1},\mathcal{T}_{0,0,2},\mathcal{T}_{0,0,3},\mathcal{T}_{0,1},\mathcal{T}_{0,2},\mathcal{T}_{0,3}\}\) forms the right mesh in Figure 6. A minimal code sketch of this tree-based mesh management is given below.
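The following minimal Python sketch, which is an illustration rather than the paper's C++ implementation, mimics this quadtree bookkeeping: each node stores a hierarchical index, `refine` creates four children, `coarsen` releases them, and the current mesh is recovered as the set of leaf nodes.

```python
class TreeElement:
    """A node of the hierarchy tree: a triangle that may own four children (2-D case)."""

    def __init__(self, index, parent=None):
        self.index = index            # e.g. (0,), (0, 0), (0, 0, 1), ...
        self.parent = parent
        self.children = []            # empty for leaf nodes

    def refine(self):
        """Split a leaf into four children (regular refinement of a triangle)."""
        if not self.children:
            self.children = [TreeElement(self.index + (k,), parent=self) for k in range(4)]

    def coarsen(self):
        """Release all children, so the element becomes a leaf (the parent is recovered)."""
        self.children = []

    def leaves(self):
        """The set of all leaf nodes below (and including) this node forms a mesh."""
        if not self.children:
            return [self]
        out = []
        for child in self.children:
            out.extend(child.leaves())
        return out

# reproduce the hierarchy of Figures 6 and 7: refine T_0, then refine its child T_{0,0}
root = TreeElement((0,))
root.refine()
root.children[0].refine()
print([leaf.index for leaf in root.leaves()])   # seven leaves: the mesh in the right of Figure 6
```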
With the hierarchical description of the geometry and the tree data structure, the mesh is effectively managed by the HGT. However, building the finite element space directly on such a mesh results in a non-conforming finite element space because of the hanging points in the direct neighbors of the refined triangles. These hanging points can be handled mainly in two ways. If there is more than one hanging point on a triangle, this triangle will be refined. This process is called the semi-regularization procedure [14]. If there exists only one hanging point on a triangle, for example the triangle \(\triangle DEF\) in the right of Figure 6, which has a hanging point \(I\) arising from the refinement of \(\mathcal{T}_{0,0}\), the twin-triangle geometry is introduced to deal with it, as demonstrated in Figure 8. The twin-triangle actually consists of two triangles \(\triangle FDI\) and \(\triangle FIE\). Meanwhile, the degrees of freedom and the interpolation information of the twin-triangle \(\triangle DEF\) are inherited from the two triangles \(\triangle FDI\) and \(\triangle FIE\). To make the finite element space conforming, the following strategy is adopted to construct the basis functions in the
Figure 5: The parent tetrahedron (left) and the eight child tetrahedrons (right).
twin-triangle geometry: For each basis function, its value is 1 at its corresponding interpolation point and 0 at other interpolation points. The support of the basis function whose interpolation point is a common point of two sub-triangles in a twin-triangle geometry such as \(F\) and \(I\) in Figure 8, is the whole twin-triangle geometry. For the non-common points like \(D\), the support of its basis function is only the triangle \(\triangle FDI\). With such a strategy, a conforming finite element space can be built in a mesh where the local refinement is implemented.
By virtue of the tree data structure, multiple meshes can be described by the same HGT. Naturally, a question arises: is it possible to exchange information among these meshes without any loss? The answer is affirmative, and the reason lies in the belonging-to relationship between any two nodes in the HGT. Briefly speaking, there exist only three kinds of relationship between any two nodes: equal, belonging-to, and no-overlap (a toy classification is sketched below). The communication among equal or no-overlap elements is trivial. Consequently, we only need to take care of the second kind of relationship. We introduce the implementation details in the following.
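As a toy illustration of this classification, and assuming the hierarchical index convention of the sketch above (a child appends one entry to its parent's index), the relationship between two HGT nodes can be decided from prefix containment; the function below is illustrative only.

```python
def relation(idx_a, idx_b):
    """Classify the relationship between two HGT nodes from their hierarchical indices.

    Indices are tuples such as (0,), (0, 2) or (0, 2, 1); since a child appends one
    entry to its parent's index, prefix containment encodes the belonging-to relation.
    """
    if idx_a == idx_b:
        return "equal"
    shorter, longer = sorted((idx_a, idx_b), key=len)
    if longer[:len(shorter)] == shorter:
        return "belonging-to"                    # one element is contained in the other
    return "no-overlap"

print(relation((0, 0), (0, 0, 3)))               # belonging-to: T_{0,0,3} lies inside T_{0,0}
print(relation((0, 1), (0, 2)))                  # no-overlap
```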
In the presented method, the Hamiltonian matrix and the energy require information from both the KS mesh and the Hartree mesh, which mainly involves numerical integration. We take the evaluation of the Hartree potential energy as an example, which can be written as
\[E_{\text{Har}}=\frac{1}{2}\int_{\Omega}\phi(\mathbf{r})\left(\sum_{l=1}^{p} \psi_{l}(\mathbf{r})^{2}\right)\mathrm{d}\mathbf{r}. \tag{15}\]
The integration should be carefully treated since the Hartree potential \(\phi(\mathbf{r})\) and the wavefunction \(\psi_{l}(\mathbf{r})\) belong to different finite element spaces built on different meshes. An intuitive illustration is given to show how the presented method calculates the integral (15). Assume the finite element space \(V_{\mathcal{T}_{\text{KS}}}\) for the wavefunction is built on the mesh \(\mathcal{T}_{\text{KS}}\), and the space \(V_{\mathcal{T}_{\text{Har}}}\) is built on the mesh \(\mathcal{T}_{\text{Har}}\), see Figure 9.
Figure 8: Twin-triangle.
Figure 6: The refinement of a triangle \(\mathcal{T}_{0}\) and its sub-triangle \(\mathcal{T}_{0,0}\).
Figure 7: The quadtree data structure for the mesh in the right of Figure 6.
For the common elements such as \(\triangle CDF\), which belong to both \(\mathcal{T}_{\mathrm{KS}}\) and \(\mathcal{T}_{\mathrm{Har}}\), the numerical integration can be calculated directly. For the remaining elements, special treatment is needed to avoid the loss of accuracy: for example, the triangle \(\triangle DAE\) is refined in \(\mathcal{T}_{\mathrm{KS}}\) but kept in \(\mathcal{T}_{\mathrm{Har}}\), and a similar case holds for the triangle \(\triangle FEB\).
In order to prevent the loss of accuracy, we employ a strategy that maximizes the utilization of quadrature points in numerical integrals. Specifically, the numerical integration on the element \(\triangle ADE\) is divided into the integrations on its four refined sub-triangles,
\[\frac{1}{2}\int_{\triangle ADE}\phi(\mathbf{r})\left(\sum_{l=1}^{p}\psi_{l}( \mathbf{r})^{2}\right)\mathrm{d}\mathbf{r}\approx\sum_{K=1}^{4}\sum_{j=1}^{q} \mathrm{area}(K)J_{j}^{K}w_{j}^{K}\phi(\mathbf{r}_{j}^{K})\left(\sum_{l=1}^{p} \psi_{l}(\mathbf{r}_{j}^{K})^{2}\right),\]
where the element \(K\) represents one of the four sub-triangles of \(\triangle ADE\), \(\mathbf{r}_{j}^{K}\) is the \(j\)-th quadrature point of \(K\), \(J_{j}^{K}\) is the Jacobian at \(\mathbf{r}_{j}^{K}\), and \(w_{j}^{K}\) is the associated weight in the numerical quadrature. The values of the Hartree potential at these quadrature points can be obtained by numerical interpolation. Similarly, the integral on the triangle \(\triangle FEB\) is evaluated by performing the numerical integration on its sub-triangles. In this way, the accuracy of the integral is not affected by the communication between different meshes.
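A minimal numerical sketch of this sub-triangle quadrature strategy is given below; it is an illustration under simplifying assumptions (the quadrature weights are taken to already include the Jacobian scaling, and `phi_interp`/`psi_vals` stand in for interpolation from the Hartree mesh and evaluation on the KS mesh), not the actual implementation.

```python
import numpy as np

def integrate_on_subtriangles(sub_tris, quad_wts, phi_interp, psi_vals):
    """Accumulate (1/2) * int phi * sum_l psi_l^2 over one coarse Hartree-mesh element.

    sub_tris   : list of (area, points) pairs, one per refined KS-mesh sub-triangle,
                 where `points` are the physical quadrature points of that sub-triangle
    quad_wts   : quadrature weights (assumed here to already absorb the Jacobian factor)
    phi_interp : callable returning the Hartree potential interpolated at given points
    psi_vals   : callable returning an array (n_orbitals, n_points) of wavefunction values
    """
    total = 0.0
    for area, pts in sub_tris:
        phi = phi_interp(pts)                      # values taken from the Hartree mesh
        rho = np.sum(psi_vals(pts)**2, axis=0)     # sum_l psi_l(r)^2 from the KS mesh
        total += area * np.sum(quad_wts * phi * rho)
    return 0.5 * total

# toy usage: two sub-triangles, a one-point rule, and constant fields
wts = np.array([1.0])
tris = [(0.5, np.array([[0.25, 0.25]])), (0.5, np.array([[0.75, 0.25]]))]
phi = lambda pts: np.ones(len(pts))
psi = lambda pts: np.ones((2, len(pts)))           # two orbitals, psi_l = 1 everywhere
print(integrate_on_subtriangles(tris, wts, phi, psi))   # 0.5 * (0.5 + 0.5) * 1 * 2 = 1.0
```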
With the HGT, the solution update from the old mesh to the new mesh after the local refinement and coarsening can also be implemented efficiently. As we mentioned before, a set of all leaf nodes forms a mesh. Although different meshes correspond to different sets of leaf nodes, all meshes are from the same set of root nodes. Then the belonging-to relationship of the geometries between two meshes can be analyzed easily. As a result, the solution update can be implemented efficiently according to the relationship.
### The multi-mesh adaptive algorithm
By virtue of the HGT data structure, the multi-mesh method can be implemented efficiently. To solve the Kohn-Sham equation, we propose the multi-mesh adaptive algorithm illustrated in the flowchart presented in Figure 10. The Kohn-Sham equation (2) is discretized and solved on the finite element space built on the mesh \(\mathcal{T}_{\mathrm{KS}}\), and the Hartree potential (5) is solved on the mesh \(\mathcal{T}_{\mathrm{Har}}\). The KS mesh \(\mathcal{T}_{\mathrm{KS}}\) is adapted using the error indicator (12), and the Har mesh \(\mathcal{T}_{\mathrm{Har}}\) is adapted using the error indicator (13). Both mesh adaptations occur after the SCF iteration has converged. The mesh adaptations continue until the final total energy reaches convergence, at which point the mesh adaptation ceases and the results are output.
In order to assess the efficiency and make a comparison between the multi-mesh algorithm (Figure 10) and the single-mesh algorithm (Algorithm 1), we categorize the calculations into three distinct types: the calculations on the Kohn-Sham mesh \(\mathcal{T}_{\mathrm{KS}}\) with a white background in Figure 10, the calculations conducted on the Hartree mesh \(\mathcal{T}_{\mathrm{Har}}\) with a blue background, and the calculations that require information from both \(\mathcal{T}_{\mathrm{KS}}\) and \(\mathcal{T}_{\mathrm{Har}}\) with a red background. When comparing the multi-mesh algorithm to the single-mesh algorithm, it becomes evident that the additional calculations fall into the third category. These calculations necessitate communication between the Kohn-Sham mesh and the Hartree mesh. Specifically, such communication is required during the generation of the Hamiltonian, the updating of the electron density (which forms the right-hand side of the Poisson equation (5) for the Hartree potential), and the computation of the total energy. We will conduct a detailed comparison of the time consumed by these individual parts to evaluate the overall performance of the algorithms in the next section.
By employing the multi-mesh method and incorporating adaptive algorithms guided by appropriate error indicators, we can achieve accurate and efficient solutions for the Kohn-Sham equation. The flexibility of mesh adaptation ensures that the mesh resolution aligns with the specific requirements of each equation, ultimately leading to improved convergence and reliable results. This can be verified by the Helium example.
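As a compact, hedged illustration of the control flow in Figure 10, the sketch below arranges the inner SCF loop and the two per-mesh adaptations; every helper accessed through `ops` (Poisson solve, Hamiltonian assembly, eigen-solve, density update, energy evaluation, and the two mesh adaptations) is a placeholder that would be supplied by the finite element machinery described above.

```python
def multi_mesh_scf(ops, rho, tol_E=1e-4, tol_rho=1e-6, max_outer=20):
    """Sketch of the multi-mesh adaptive loop of Figure 10 (all `ops.*` calls are stubs).

    `ops` is assumed to provide: solve_hartree, assemble_hamiltonian, solve_eigenproblem,
    update_density, density_change, total_energy, adapt_KS_mesh, adapt_Har_mesh.
    """
    E_old = None
    for _ in range(max_outer):
        while True:                                   # inner SCF iteration
            phi = ops.solve_hartree(rho)              # Poisson solve on T_Har
            H, M = ops.assemble_hamiltonian(phi)      # needs phi interpolated onto T_KS
            eps, psi = ops.solve_eigenproblem(H, M)
            rho_new = ops.update_density(psi)
            converged = ops.density_change(rho_new, rho) < tol_rho
            rho = rho_new
            if converged:
                break
        E = ops.total_energy(psi, phi, rho)           # uses cross-mesh integrals
        if E_old is not None and abs(E - E_old) < tol_E:
            return E                                  # outer (energy) convergence reached
        E_old = E
        ops.adapt_KS_mesh(psi, eps)                   # indicator (12) drives T_KS
        ops.adapt_Har_mesh(phi, rho)                  # indicator (13) drives T_Har
    return E
```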
## 5 Numerical Experiments
In this section, we examine the convergence and efficiency of the multi-mesh adaptive method through a series of numerical examples. All the simulations are performed on a workstation "Moss" with two AMD EPYC 7713 64-Core Processors (at 2.0GHz\(\times\)64, 512M cache) and 900GB of RAM, and the total number of cores is 128. The software is the C++ library AFEABIC[2] under Ubuntu 20.04.
### Case study: for chemical accuracy and computational efficiency
The multi-mesh adaptive method (Figure 10) is first tested for three cases: the helium atom, the LiH molecule, and the \(\text{BeH}_{2}\) molecule. Our examination unfolds in three phases: firstly, we assess the method's systematic convergence in the context of the helium atom example; next, we determine whether it can attain chemical accuracy across all three cases; and lastly, we undertake a comparative analysis of the computational costs required to achieve chemical accuracy between the multi-mesh method and the single-mesh method.
#### 5.1.1 Helium atom
Similar to the adaptive method using error indicator (14), systematic convergence of the energies and the eigenvalue is observed in Figure 11, and chemical accuracy is obtained when the number of degrees of freedom in \(\mathcal{T}_{\text{KS}}\) reaches 1,262,599, which is far less than that required in the single-mesh algorithm. Furthermore, there are 2,185,920 degrees of freedom in the Hartree mesh \(\mathcal{T}_{\text{Har}}\), which is also less than the number of mesh grids used to achieve chemical accuracy with the single-mesh adaptive method.
Figure 10: Flowchart of the multi-mesh adaptive algorithm for the KS equation.
The final meshes are displayed in Figure 12. The top of Figure 12 shows the sliced mesh of \(\mathcal{T}_{\mathrm{KS}}\), which is quite similar to Figure 3 (upper row). The meshes of \(\mathcal{T}_{\mathrm{Har}}\) are shown in the bottom of Figure 12; globally, the mesh grid distribution is similar to that in Figure 3 (lower row), while it is sparser since the Hartree potential is less singular than the Kohn-Sham wavefunctions. Although there are more mesh grids in \(\mathcal{T}_{\mathrm{Har}}\) than in \(\mathcal{T}_{\mathrm{KS}}\), it should be mentioned that on \(\mathcal{T}_{\mathrm{Har}}\) we only need to solve a linear system, while on \(\mathcal{T}_{\mathrm{KS}}\) a nonlinear eigenvalue problem must be solved. Consequently, the major computational cost comes from the calculations on \(\mathcal{T}_{\mathrm{KS}}\), and the increase in computational cost due to the large number of mesh grids on \(\mathcal{T}_{\mathrm{Har}}\) is slight. As a result, the computational cost of the multi-mesh method is of the same order of magnitude as that of the single-mesh adaptive method, while it achieves chemical accuracy.
As discussed in previous sections, both the single-mesh and multi-mesh methods are able to achieve chemical accuracy when the Hartree potential is considered in constructing the error indicator. To demonstrate the efficiency of the multi-mesh method, we compare the serial time for these two methods to achieve chemical accuracy. For the sake of fairness, we start the two algorithms from the same mesh
Figure 11: Convergence history of the total energy (left), absolute errors of energies, and eigenvalue (right) for the multi-mesh adaptive method.
Figure 12: Global and zoomed-in meshes on the sliced \(X\)-\(Y\) plane (top: \(\mathcal{T}_{\mathrm{KS}}\) (1,262,599 mesh grids); bottom: \(\mathcal{T}_{\mathrm{Har}}\) (2,185,920 mesh grids)), density profile (top right), and Hartree potential profile (bottom right) for the multi-mesh adaptive method.
and initial guess, and the tolerance guiding the mesh adaption is also the same. With these settings, it is found that the SCF convergence is quite similar and the number of mesh adaptions is the same, as displayed in Figure 13. The figure suggests that the introduction of the Hartree mesh does not affect the convergence of the SCF procedure on the KS mesh. Therefore, the comparison of the CPU time is fair and effective.
A summary of the comparison is presented in Table 1. Both methods achieved chemical accuracy, and the number of mesh grids in the single-mesh method is the largest among all meshes. The CPU time is compared with respect to two parts: the SCF part and the mesh adaption part. From Table 1, the multi-mesh method proves to be more efficient than the single-mesh method in achieving chemical accuracy for the helium atom.
A more detailed comparison is presented in Figure 14. The method can be divided into six parts, as indicated in Figure 14. The four images represent the CPU time on four meshes during the mesh adaption process. Obviously, the cost for solving the Hartree potential and the eigenvalue problem is less in the multi-mesh method, as expected. Furthermore, the cost of constructing the matrix in the multi-mesh method is also less than that in the single-mesh method on the finest mesh. It is noted that this part contributes the largest portion of the total cost since in the helium example the eigensolver only needs to solve one eigenpair. In larger systems, solving the eigenvalue problem will be the most time-consuming part.
#### 5.1.2 LiH molecule
A similar comparison is carried out for the LiH molecule. The reference total energy is \(E_{\text{ref}}=-7.9195\) Hartree. The results are displayed in Table 2, revealing the attainment of chemical accuracy. From this
\begin{table}
\begin{tabular}{l r r r r r r r} \hline & \(E_{tot}\) & \(\Delta E\) & \(N_{KS}\) & \(N_{Har}\) & \(t_{all}\) & \(t_{SCF}\) & \(t_{MA}\) \\ \hline single-mesh & -2.8341 & 0.0007 & 2,874,807 & 2,874,807 & 10290.45 & 7585.62 & 2704.83 \\ \hline multi-mesh & -2.8340 & 0.0008 & 1,262,599 & 2,185,920 & 7063.97 & 5135.69 & 1928.28 \\ \hline \end{tabular}
\end{table}
Table 1: Comparison on He with respect to results and serial computational time. \(t_{all}\) represents the total CPU time. \(t_{SCF}\) stands for the time for SCF and \(t_{MA}\) represents the time for mesh adaption.
Figure 14: CPU time for single-mesh method and multi-mesh method on six parts: **MA** (mesh adaption), **SH** (solve Hartree potential), **CM** (construct matrix), **SE** (solve eigenvalue problem), **UD** (update electron density) and **CE** (calculate energy). Four mesh adaptations are needed to achieve chemical accuracy in both methods starting from the same initial setup.
Figure 13: SCF convergence history (left) and mesh adaption history (right) for Helium.
table, the multi-mesh approach demonstrates its efficiency compared to the single-mesh method.
The detailed comparison regarding CPU time is illustrated in Figure 15. As expected, in the LiH example, the solution of the eigenvalue problem significantly contributes to the CPU time due to the increased number of required eigenvalues. Notably, the cost of the third mesh is higher than that of the final mesh. This is because, during the third mesh adaptation step, the mesh tends to stabilize with minimal changes, leading to fewer iteration steps in the final mesh, as shown in Figure 13. Consequently, the CPU time on the third mesh becomes the most significant part contributing to the total CPU time. Moreover, since the number of Dofs in the multi-mesh method is smaller than that in the single-mesh method, the SCF time for the multi-mesh approach is also reduced. Overall, the multi-mesh method proves to be more efficient in terms of computational time than the single-mesh method in this case.
#### 5.1.3 BeH\({}_{2}\) molecule
The comparison between the multi-mesh method and the single-mesh method is also presented for the BeH\({}_{2}\) example, as displayed in Table 3 and Figure 16. A conclusion similar to the previous examples regarding accuracy and efficiency can be drawn from the results. In this case, given the same mesh adaption tolerance, the multi-mesh method succeeds in achieving chemical accuracy, while the single-mesh method fails. This observation illustrates the superior accuracy of the multi-mesh method compared to the single-mesh method.
### Collection of results for more general electronic structures
In this subsection, we list the results for atoms and molecules obtained with the multi-mesh adaptive method. The reference values are generated from the state-of-the-art software NWChem using _aug-cc-pv5z_ basis sets, except for the helium atom, for which the _aug-cc-pv6z_ basis set is adopted. According to Table 4, we observe that chemical accuracy is achieved in the first four examples. However, as the number of atoms increases, the accuracy diminishes. This degradation can be attributed to the limitations of computational resources. In such scenarios, we try to fully utilize our computational resources for simulations using both the single-mesh and multi-mesh adaptive methods, which is implemented by dynamically adjusting the mesh adaption tolerance. Hence a comparative assessment between the two approaches can be made.
The results are summarized in Table 4 and we display several meshes and isosurfaces for these examples in Figure 17. From the table and the figure, we can see that
* For the last four molecules, besides an observation on the CPU time similar to those in Figures 14, 15, and 16, another observation can be made: with comparable computational resources, our multi-mesh method generally produces a much better result. This means that the multi-mesh method offers an answer to the question of how to generate the most accurate result by fully using the given computational resources.
* When maximizing our computational resources, the multi-mesh approach yields more precise results. To be specific, the multi-mesh method attains chemical accuracy for the first four examples, as indicated in the last row of Table 4. In contrast, the single-mesh method only achieves chemical accuracy in the first two examples.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline molecule & He & LiH & BeH\({}_{2}\) & CH\({}_{4}\) & CH\({}_{3}\)OH & CH\({}_{3}\)CH\({}_{2}\)OH & C\({}_{6}\)H\({}_{6}\) \\ \hline \(N_{s,\text{KS}}\) & 2,874,807 & 6,029,109 & 7,338,638 & 8,121,252 & 7,552,976 & 7,720,132 & 7,210,851 \\ \(N_{\text{KS}}\) & 1,262,599 & 3,786,064 & 4,852,528 & 5,440,384 & 6,226,903 & 6,864,714 & 6,036,918 \\ \(N_{\text{Har}}\) & 2,185,920 & 3,665,015 & 4,276,863 & 5,322,827 & 4,063,275 & 3,593,622 & 4,798,747 \\ \hline \(E_{\text{ref}}\) & -2.8348 & -7.9196 & -15.6606 & -40.1198 & -114.8503 & -153.8158 & -230.1916 \\ \(E_{s,\text{KS}}\) & -2.8341 & -7.9178 & -15.6573 & -40.1105 & -114.8033 & -153.7418 & -229.9824 \\ \(E_{\text{KS}}\) & -2.8340 & -7.9181 & -15.6580 & -40.1147 & -114.8205 & -153.7734 & -230.0663 \\ \hline \(|\Delta E_{s,\text{KS}}/E_{\text{ref}}|\) & 0.0002 & 0.0002 & 0.0002 & 0.0002 & 0.0004 & 0.0005 & 0.0009 \\ \(|\Delta E_{\text{KS}}/E_{\text{ref}}|\) & 0.0003 & 0.0002 & 0.0002 & 0.0001 & 0.0003 & 0.0003 & 0.0005 \\ \(|\Delta E_{s,\text{KS}}/n_{atom}|\) & **0.0007** & **0.0009** & 0.0011 & 0.0019 & 0.0078 & 0.0082 & 0.0174 \\ \(|\Delta E_{\text{KS}}/n_{atom}|\) & **0.0008** & **0.0008** & **0.0009** & **0.0010** & 0.0050 & 0.0047 & 0.0104 \\ \hline \end{tabular}
\end{table}
Table 4: List of examples. The referenced value is obtained from NWChem. \(N_{s,\text{KS}}\) represents the number of mesh grids for the single mesh method. \(E_{s,\text{KS}}\) stands for the energy obtained from the single-mesh method. The last row refers to the comparison with chemical accuracy.
Figure 17: Sliced KS meshes (top) and isosurfaces (bottom) for the following molecules: LiH, BeH\({}_{2}\), CH\({}_{4}\), CH\({}_{3}\)OH, and CH\({}_{3}\)CH\({}_{2}\)OH. Contours are depicted for the two-dimensional structures (LiH and BeH\({}_{2}\)). The computational domain is set as \([-10,10]^{3}\), with the exception of the CH\({}_{4}\) example, for which it is extended to \([-20,20]^{3}\).
* In the multi-mesh method, the number of mesh grids for the Hartree mesh exceeds the number for the KS mesh only in the case of helium. As the system size scales up, the number of Hartree mesh grids becomes progressively less than the number of KS mesh grids. Consequently, the computational time needed for the Hartree potential becomes less significant as the system size increases. There is an exception in the cases of CH\({}_{4}\) and C\({}_{6}\)H\({}_{6}\), which arises because we opted for a larger computational domain, specifically, \([-20,20]^{3}\).
## 6 Conclusion
In this paper, we introduce a multi-mesh adaptive finite element framework for Kohn-Sham density functional theory, aiming to achieve chemical accuracy. We investigate the impact of the Hartree potential approximation on total energy, finding that chemical accuracy cannot be attained without a well-approximated Hartree potential. While the single-mesh adaptive method, considering the Hartree potential, achieves chemical accuracy, it comes with significant computational costs. To address this, we propose the multi-mesh adaptive method, which offers a more efficient route to achieving chemical accuracy.
We demonstrate the effectiveness and accuracy of the multi-mesh adaptive method through various numerical examples. However, we observe that even for the helium atom, achieving chemical accuracy demands a substantial number of mesh grids, mainly due to the use of linear Lagrange finite elements. One strategy is to adopt higher-order elements under the multi-mesh framework. Additionally, we consider future work on conducting numerical analysis for the multi-mesh adaptive method. Moreover, we recognize the need for accelerating the multi-mesh adaptive approach, particularly through parallel implementation and the development of more effective error indicators for mesh adaptation methods. These enhancements will further refine the efficiency and accuracy of the multi-mesh adaptive framework.
## Acknowledgement
The work of Y. Kuang was supported in part by the National Natural Science Foundation of China (Grant No.12201130) and the Guangzhou Municipal Science and Technology Bureau (Grant No.2023A04J1321). The work of Y. Shen was supported by Hunan Key Laboratory for Computation and Simulation in Science and Engineering (Grant No. LCSSE202307). The work of G. Hu was supported by the National Natural Science Foundation of China (Grant Nos.11922120 and 11871489), the FDCT of Macao SAR (Grant No.0082/2020/A2), the MYRG of the University of Macau (Grant No. MYRG2020-00265-FST), and the Guangdong-Hong Kong-Macao Joint Laboratory for Data Driven Fluid Mechanics and Engineering Applications (Grant No. 2020B1212030001).
|
2309.00459 | Faint [CI](1-0) emission in z $\sim$ 3.5 radio galaxies | We present Atacama Large Millimeter/sub-millimeter Array (ALMA) neutral
carbon, [C I](1-0), line observations that probe molecular hydrogen gas (H$_2$)
within seven radio galaxies at $z = 2.9 - 4.5$ surrounded by extended
($\gtrsim100$ kpc) Ly-$\alpha$ nebulae. We extract [C I](1-0) emission from the
radio-active galactic nuclei (AGN) host galaxies whose positions are set by
near-infrared detections and radio detections of the cores. Additionally, we
place constraints on the galaxies' systemic redshifts via He II $\lambda$1640
lines seen with the Multi-Unit Spectroscopic Explorer (MUSE). We detect faint
[C I] emission in four out of seven sources. In two of these galaxies, we
discover narrow line emission of full width at half maximum $\lesssim100$ km
s$^{-1}$ which may trace emission from bright kpc-scale gas clouds within the
ISM. In the other two [C I]-detected galaxies, line dispersions range from
$\sim100 - 600$ km s$^{-1}$ and may be tracing the rotational component of the
cold gas. Overall, the [C I] line luminosities correspond to H$_2$ masses of
M$_{\rm H_2,[C I]} \simeq (0.5 - 3) \times 10^{10} M_\odot$ for the detections
and M$_{H_2,[C I]} < 0.65 \times 10^{10} M_\odot$ for the [C I] non-detections
in three out of seven galaxies within the sample. The molecular gas masses in
our sample are relatively low in comparison to previously reported measures for
similar galaxies which are M$_{H_2,[C I]} \simeq (3 - 4) \times 10^{10}.$ Our
results imply that the observed faintness in carbon emission is representative
of a decline in molecular gas supply from previous star-formation epochs and/or
a displacement of molecular gas from the ISM due to jet-powered outflows. | S. Kolwa, C. De Breuck, J. Vernet, D. Wylezalek, W. Wang, G. Popping, A. W. S. Man, C. M. Harrison, P. Andreani | 2023-09-01T13:45:35Z | http://arxiv.org/abs/2309.00459v1 | # Faint [C i](1-0) emission in \(z\sim 3.5\) radio galaxies
###### Abstract
We present Atacama Large Millimeter/sub-millimeter Array (ALMA) neutral carbon, [C i](1-0), line observations that probe molecular hydrogen gas (H\({}_{2}\)) within seven radio galaxies at \(z=2.9-4.5\) surrounded by extended (\(\gtrsim\)100 kpc) Ly\(\alpha\) nebulae. We extract [C i](1-0) emission from the radio-active galactic nuclei (AGN) host galaxies whose positions are set by near-infrared detections and radio detections of the cores. Additionally, we place constraints on the galaxies' systemic redshifts via He ii \(\lambda\)1640 lines seen with the Multi-Unit Spectroscopic Explorer (MUSE). We detect faint [C i] emission in four out of seven sources. In two of these galaxies, we discover narrow line emission of full width at half maximum \(\lesssim 100\) km s\({}^{-1}\) which may trace emission from bright kpc-scale gas clouds within the ISM. In the other two [C i]-detected galaxies, line dispersions range from \(\sim 100-600\) km s\({}^{-1}\) and may be tracing the rotational component of the cold gas. Overall, the [C i] line luminosities correspond to H\({}_{2}\) masses of \(M_{\rm H_{2},[C\textsc{i}]}\simeq(0.5-3)\times 10^{10}\) M\({}_{\odot}\) for the detections and \(M_{\rm H_{2},[C\textsc{i}]}<0.65\times 10^{10}\) M\({}_{\odot}\) for the [C i] non-detections in three out of seven galaxies within the sample. The molecular gas masses in our sample are relatively low in comparison to previously reported measures for similar galaxies which are \(M_{\rm H_{2},[C\textsc{i}]}\simeq(3-4)\times 10^{10}\) M\({}_{\odot}\). Our results imply that the observed faintness in carbon emission is representative of a decline in molecular gas supply from previous star-formation epochs and/or a displacement of molecular gas from the ISM due to jet-powered outflows.
keywords: galaxies: high-redshift - galaxies: active - galaxies: star formation - ISM: molecules - galaxies: kinematics and dynamics
## 1 Introduction
Observations of cold gas in high-redshift radio galaxies (HzRGs), which host high-power jetted active galactic nuclei (AGN), are an excellent tool for studying the impact of radio-mode feedback on baryonic matter within the interstellar medium and extended halo of a high-mass galaxy. The distinguishing features of radio galaxies at high redshift (\(z>1\)) are their high-luminosity nuclear emission, prominent radio jets, and high stellar masses which typically range from \(M_{\star}\simeq 10^{11}-10^{12}\) M\({}_{\odot}\)(Seymour et al., 2007; De Breuck et al., 2010). Such a combination of physical features makes HzRGs particularly important sites for examining the co-evolution of galaxies and their central supermassive black holes as well as feedback mechanisms regulated by the AGN.
Within the past three decades, it has become increasingly apparent that HzRGs are also commonly associated with enormous Ly\(\alpha\) nebulae (ELANe) that can extend out to distances of \(\gtrsim 100\) pkpc from their nuclei. The combination of powerful radio jets and enormous haloes of ionised gas surrounding the galaxy means that HzRGs are also very convenient probes for investigating the interplay between ionised gas and radio emission (McCarthy, 1993; Reuland et al., 2003; Villar-Martin et al., 2003; Villar-Martin, 2007; Humphrey et al., 2007; Swinbank et al., 2015; Morais et al., 2017; Silva et al., 2018). ELANe surrounding HzRGs represent the warm ionised gas component (\(T\sim 10^{4}-10^{5}\) K) of the circumgalactic medium (CGM), where turbulent gas motion is observed up to tens of kpc from the AGN, while in the outermost envelopes of the halo (\(\gtrsim 10\) pkpc) the warm gas tends to be more kinematically quiet (Villar-Martin et al., 2003).
Generally, the CGM of a galaxy is defined as the gas region through which baryonic matter is transported (via accretion, and outflows) between the intergalactic medium (IGM) and the interstellar medium (ISM). With the CGM being multi-phase, consisting of cold molecular, neutral as well as ionised gas, it has become an important subject for investigation in galaxy evolution, allowing us to determine how multi-phase gas components interact physically with one another throughout the extent of a galaxy's halo (Steidel et al., 2010; Tumlinson et al., 2017).
The ionised component of the CGM surrounding galaxies that host AGN has been examined in numerous studies involving observations, photoionisation modelling and theoretical simulations. These studies
have made significant use of integral field unit (IFU) spectrographs to observe the ionised gas, providing a more detailed look into the kinematics of the Ly\(\alpha\) nebulae surrounding galaxies which had been discovered a decade or two prior to the advent of IFUs. The 3D imaging capability of IFUs provide a crucial in-depth view of the gaseous environments of galaxies up to 100s of kpc from their nuclei. Ly\(\alpha\) line emission traced directly from the target galaxy with an IFU is equally useful in studying the warm gas component of the CGM (Borisova et al., 2016; Ginolfi et al., 2018; Arrigoni Battaia et al., 2018, 2019; Fossati et al., 2021). When the CGM of a galaxy coincides with a quasar sight-line, absorption-line measures in the quasar spectrum become a very useful diagnostic for the chemical composition and kinematics of the foreground galaxy's CGM (Bielby et al., 2017; Peroux et al., 2017; Dutta et al., 2020).
For HzRGs in particular, instruments such as the Spectrograph for INtegral Field Observations in the INfrared (SINFONI; e.g. Nesvadba et al., 2006, 2017) and the Multi-Unit Spectroscopic Explorer (MUSE; e.g. Swinbank et al., 2015; Gullberg et al., 2016; Vernet et al., 2017; Kolwa et al., 2019; Falkendal et al., 2021; Wang et al., 2021) have illustrated the impact of the powerful radio jets on the kinematics of emitting gas, in terms of producing outflows and increasing gas turbulence, showing that jets can drive out molecular gas and effectively shut down the fuel supply for star-formation.
While all of the above observations have been beneficial in providing information on the properties of ionised gas in radio-loud AGN hosts, a systematic CO survey to trace H\({}_{2}\) in a wide sample of HzRGs has yet to be performed. This is the case despite the fact that low-\(J\) transitions of CO have minimal excitation requirements (i.e., low critical densities and \(E/k_{\rm B}\) values). This fact implies that CO lines can easily trace H\({}_{2}\) clouds irrespective of their thermal states at low gas densities \(n\sim 10^{2}\) cm\({}^{-3}\) which are typically seen in optically thick CO(1-0) gas. While CO has clear advantages as a molecular gas tracer, solely relying on it comes with drawbacks. For instance, in jetted AGN, CO within the ISM can be photodissociated by cosmic rays, effectively destroying CO molecules and reducing the CO/H\({}_{2}\) abundance in giant molecular clouds (GMCs) (Bisbas et al., 2015, 2017).
Within this framework, neutral carbon fine-structure line transitions, [C i] \({}^{3}P_{1}\rightarrow{}^{3}P_{0}\) and [C i] \({}^{3}P_{2}\rightarrow{}^{3}P_{1}\), hereinafter [C i](1-0) and [C i](2-1), have been introduced as alternative tracers for H\({}_{2}\) in GMCs. Selecting [C i] as a tracer may be especially wise given that radio galaxies have jetted AGN that produce high-energy cosmic rays which are capable of photodissociating CO in molecular gas. As a result, [C i] lines may retain their ability to trace H\({}_{2}\) in clouds irradiated by cosmic rays (CRs). This situation may change as one shifts away from the nuclear region and towards the interstellar medium (ISM) where the number density of ionising photons is relatively lower than it is in a galaxy's nuclear region (Papadopoulos et al., 2004)
Previously, a pilot study of molecular gas in ultra-luminous infrared galaxies (ULIRGs) found reasonable agreement between the H\({}_{2}\) masses inferred from [C i] and low-\(J\) CO lines (e.g. Papadopoulos & Greve, 2004). More recently, observations in Papadopoulos et al. (2018) have shown that high-energy cosmic rays (CRs) produced by star-formation and active galactic nuclei (AGN) can provide the right conditions for creating a [C i]-rich/CO-poor molecular gas phase. Another advantage to using [C i] line tracers is that they provide a straightforward interpretation of the molecular clouds, allowing us to infer the H\({}_{2}\) mass. This is due to [C i] lines having typically low optical depths in comparison to CO lines. Hence, the [C i] line surface brightness correlates more directly with the mass of neutral carbon and also the H\({}_{2}\) mass of a molecular cloud (Nesvadba et al., 2019). Furthermore, both [C i](1-0) and [C i](2-1) transitions have sufficiently low critical densities of \(n_{10}=500\) cm\({}^{-3}\) and \(n_{21}=10^{3}\) cm\({}^{-3}\), respectively.
One of the first major high-redshift (\(z>2\)) [C i] line surveys was carried out by Walter et al. (2011) and showed emission in sub-millimetre galaxies and quasar hosts. Since then, several [C i] surveys have been performed with the goal of tracing H\({}_{2}\) and inferring molecular gas masses and the dynamics thereof for both lensed and unlensed galaxies at high-\(z\)(Alaghband-Zadeh et al., 2013; Bothwell et al., 2017; Valentino et al., 2018; Nesvadba et al., 2019). Adding to these surveys are several single-source studies which reveal the presence of [C i] emission in star-forming galaxies (Popping et al., 2017; Andreani et al., 2018). For HzRGs, [C i](2-1) has traced H\({}_{2}\) in PKS 0529-549 at \(z=2.57\)(Lelli et al., 2018; Man et al., 2019). Within the halo of the Spiderweb Galaxy (MRC 1138-262 at \(z\simeq 2.16\)), Gullberg et al. (2016) have reported broad [C i](2-1) emission at a projected distance of \(d\simeq 4\) kpc from the main galaxy's radio core. In addition to this, [C i](1-0) emission, within the Spiderweb's gas halo, has been detected between \(d\simeq 17\) and 70 kpc from its core (Emonts et al., 2018). Other recent studies of [C i](1-0) line emission in HzRGs have been carried out for two sources: 4C+14.17 at \(z=3.8\)(Nesvadba et al., 2020) and 4C+19.71 at \(z\simeq 3.6\)(Falkendal et al., 2021). With all these results, it is clear that the drive to use [C i] as a molecular gas tracer has certainly caught on, thus strengthening its case for being a similarly, and perhaps even more, reliable H\({}_{2}\) tracer than CO (e.g. Dunne et al., 2022).
For tracing [C i](1-0) with sub-arcsecond resolution, the Atacama Large Millimeter-submillimeter Array (ALMA) has frequency bands 3 and 4 which are best suited for probing emission from this atomic transition at a redshift range of \(z=2.9-4.5\) where rest-UV lines are detectable within the observational window of the Multi-Unit Spectroscopic Explorer (MUSE). Hence, with a combined ALMA and MUSE dataset, we can perform a detailed kinematic analysis of both the ionised and molecular gas. In doing so, we are capable of assessing the probable effects of shock-induced turbulence from radio jets as well as photon-heating on the molecular gas within the ISM (Mahony et al., 2013; Morganti & Oosterloo, 2018) and possibly also the CGM (McNamara et al., 2016).
We have therefore constructed a sample of seven HzRGs with redshifts spanning \(z=2.9-4.5\). This galaxy sample has been observed with both ALMA in [C i](1-0) and MUSE which probes the Ly\(\alpha\) emission. In tandem, the ALMA and MUSE observations provide us with a multi-wavelength view of the cold (\(\sim 10\) K) and warm (\(\sim 10^{4}-10^{5}\) K) gas phases of baryonic haloes surrounding radio galaxies. With such an imaging set, we are capable of gauging the impact of high-powered radio jets on the multi-phase gas within both a galaxies' ISM and its surrounding CGM.
This paper is structured in the following way: Section 2 describes the sample selection; Section 3 provides an overview of the data acquisition and analysis procedures; Section 4 explains the method for the analysis; and in Section 5, we describe the overall results of this study and explore the implications of our findings. A summary is provided in Section 6. Throughout, we have made use of \(\Lambda\)CDM cosmology as defined by Planck Collaboration et al. (2016) where H\({}_{0}=67.8\) km s\({}^{-1}\) Mpc\({}^{-1}\) and \(\Omega_{\rm m}=0.308\).
## 2 Galaxy Sample Selection
The radio galaxies covered in this work are drawn from legacy radio surveys. Our sample, in particular, comprises seven targets which are listed sources in the Molonglo Reference Catalogue (MRC; Large et al., 1981) and the Fourth Cambridge Catalogue (4C; Pilkington & Scott
1965). The targets are radio sources with optical counterparts i.e. radio galaxies with rest-frame 5 GHz luminosities of \(\geq 10^{25}\) W Hz\({}^{-1}\) which imply the presence of radio-loud AGN (Miller et al., 1990). Currently, radio galaxies are known out to redshifts of \(z\)=5.72 (Saxena et al., 2018). Generally speaking, those above a redshift of 2 (i.e., \(z>2\)) are referred to as high-\(z\) radio galaxies (HzRGs) which are rare and require a significant amount of optical and infrared spectroscopic observing time for reliable constraints on their redshifts to be made. Despite this, several radio selection techniques have already been applied to pre-select high-\(z\) radio galaxy candidates (e.g. Drouart et al., 2020). As a result of the stringency of pre-selections, however, the space density of HzRGs is not sufficiently constrained, at the present time.
Our ALMA+MUSE sample has been selected from a large survey program designed to study stellar mass build-up and star-formation in distant radio galaxies at \(1.0<z<5.2\) with the _Spitzer_ Space Telescope (Seymour et al., 2007; De Breuck et al., 2010). This dedicated _Spitzer_ HzRG program consists of 71 HzRGs with stellar masses ranging from \(M_{\star}\approx 10^{11}-10^{11.5}\) M\({}_{\odot}\) (details on stellar mass inferences are given in Section 3.3). The \(3.6-850\)\(\mu\)m _Spitzer_ and _Herschel_ photometry first reported by Drouart et al. (2014) has been combined with ALMA continuum observations to obtain SED fits that provide the star-formation rate measures for the galaxies in our sample (Falkendal et al., 2019).
With the goal of simultaneously tracing the ionised and molecular gas components of the extended halos of HzRGs, we have defined a sub-sample of seven HzRGs that are observable both in Ly\(\alpha\) with MUSE and in [C i] with ALMA. For ALMA, we targeted the neutral carbon ground-state transition line [C i] \({}^{3}P_{1}\rightarrow{}^{3}P_{0}\) (hereinafter [C i](1-0), at \(\nu_{\rm rest}=492.161\) GHz), with simultaneous coverage of \({}^{13}\)CO(4-3) at \(\nu_{\rm rest}=440.765\) GHz. This was done with the aim of tracing the cold gas component within the host galaxies and their extended haloes. At the \(2.9\lesssim z\lesssim 4.5\) redshift coverage of our sub-sample, the [C i](1-0) line can be observed in ALMA bands 3 and 4.
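As a quick consistency check of this statement, and purely for illustration, the snippet below redshifts the [C i](1-0) rest frequency across the sample's redshift range and compares it with approximate ALMA band 3 and band 4 frequency ranges (the band edges used here are rough values, not taken from this paper).

```python
NU_CI_REST = 492.161                                   # GHz, [C I](1-0) rest frequency
BANDS = {"band 3": (84.0, 116.0), "band 4": (125.0, 163.0)}  # approximate coverage in GHz

for z in (2.9, 3.5, 4.5):
    nu_obs = NU_CI_REST / (1.0 + z)
    covered = [name for name, (lo, hi) in BANDS.items() if lo <= nu_obs <= hi]
    print(f"z = {z}: observed frequency {nu_obs:.1f} GHz -> {covered or 'between bands'}")
# z = 2.9 -> ~126 GHz (band 4); z = 3.5 -> ~109 GHz (band 3); z = 4.5 -> ~89 GHz (band 3)
```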
On the other hand, the MUSE survey targeted rest-frame UV emission lines that are observable within the spectral window of MUSE (in the wide-field mode) which covers a wavelength range of \(\lambda_{\rm obs}=4800-9300\) A. The MUSE observations are ideal for tracing warm ionised gas at redshifts of \(z=2.9-6.6\) where the Ly\(\alpha\)\(\lambda\)1216 A line falls within the MUSE spectral window. The blue edge of MUSE sets the lower redshift range of our subsample.
To place our sample into the general context of high-\(z\) galaxies, we construct a SFR-\(M_{\star}\) plane (Fig. 1) where our targets are shown against HzRGs from previous studies and star-forming galaxies (SFGs) at \(z>1.0\). The SFRs for the HzRGs represented in the figure have been adapted from Falkendal et al. (2019) who make use of _Herschel_, _Spitzer_ and ALMA photometry to perform SED-template fitting and derive the infrared (IR) luminosities required to estimate SFRs for the galaxies. Additionally, Seymour et al. (2007) use _Spitzer_ photometry to perform SED-fits that yield \(M_{\star}\) as upper limits for the 26 out of 29 HzRGs from our study as well as the literature in Fig. 1. \(M_{\star}\) constraints are made for 3 out of the 29 HzRGs shown.
## 3 Observations and Data Reduction
### ALMA
The radio galaxies in our sample were observed in ALMA bands 3 (Kerr et al., 2014) and 4 (Asayama et al., 2014) during ALMA Cycle 3 under the project ID 2015.1.00530.S (PI: De Breuck) on the dates provided in Table 1. The correlator configuration was set to observe two contiguous spectral windows covering the [C i](1-0) line and continuum in one side-band, and the remaining two spectral windows simultaneously covering the \({}^{13}\)CO(4-3) line and continuum in the other side-band. All four spectral windows were recorded in Frequency Domain Mode with a bandwidth of 1875 MHz and a spectral resolution of 3.904 MHz.
We generated the calibrated measurement sets using the calibration scripts provided by the ALMA Observatory using casa (_Common Astronomy Software Applications_) versions 4.5.1, 4.5.3, 4.6.0, and 4.7.2 (McMullin et al., 2007). For the imaging, we used the data reduction pipeline (casa-6.2.1.7) and a natural _uv_-weighting (robust parameter 2) to optimise the signal-to-noise ratio (S/N) at the expense of spatial resolution. As the CGM emission can be quite spatially extended (e.g. Emonts et al., 2018), we also made an attempt at tapering the 12m _uv_-data with 2\({}^{\prime\prime}\), 3\({}^{\prime\prime}\), and 4\({}^{\prime\prime}\) Gaussian width options. For all sources, with the exception of MRC 0943-242, the increase in noise due to the reduced amount of data decreased the S/N overall. Hence, we opted to continue with the untapered images, for which the spatial resolution is provided in Table 3. Continuum subtraction was performed using the _uvcontsub()_ task, and spectral-line cubes were generated with a range of velocity resolutions from 9 to 64 km s\({}^{-1}\), where we selected the binning that provides the most optimal line S/N for the observed line width. The images were primary-beam corrected with the casa task _impbcor()_. For the 1D spectra, pixels were averaged over 2\({}^{\prime\prime}\)-diameter apertures. The moment-0 maps (i.e. narrow-band images) were created with casa _immoments()_, in which the images were integrated over the frequency ranges shown in Fig. 3. The 1\(\sigma\) rms levels in the [C i] spectra are two orders of magnitude smaller than the [C i] contour levels shown in the corresponding images due to the velocity integration performed during the moment-0 map creation.
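For orientation, the following is a schematic sketch of the main casa steps named above (continuum subtraction, cube imaging, primary-beam correction, and moment-0 creation). The file names, channel ranges, and cell/image sizes are placeholders, the imaging call is shown with the standard tclean task even though its exact invocation is not specified above, and task parameters differ between casa versions, so this should be read as an outline rather than a reproduction of the actual reduction scripts.

```python
# Schematic casa (Python) calls, intended to be run inside a casa session.
# 'target.ms' and all numeric values below are placeholders.

uvcontsub(vis='target.ms', fitorder=1)                 # uv-plane continuum subtraction

tclean(vis='target.ms.contsub', imagename='target_ci10',
       specmode='cube', restfreq='492.161GHz', width='30km/s',
       weighting='natural', niter=0,                   # no deconvolution in this sketch
       cell='0.3arcsec', imsize=256)

impbcor(imagename='target_ci10.image', pbimage='target_ci10.pb',
        outfile='target_ci10.image.pbcor')             # primary-beam correction

immoments(imagename='target_ci10.image.pbcor', moments=[0],
          chans='10~20', outfile='target_ci10.mom0')   # moment-0 (narrow-band) map
```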
### MUSE
From the MUSE IFU spectrograph, we acquired rest-UV line and continuum observations for the sample of radio galaxies. Mounted on VLT Yepun (UT4), MUSE covers a spectral range of 4800\(-\)9300 A where the spectral resolution ranges from \(2.82-2.74\) A between the blue and red wavelength extremities of the MUSE spectral window. All of the galaxies, excluding 4C+03.24 and MRC 0943-242, were observed in WFM-NOAO-N (nominal wide-field mode with no adaptive optics system in place) mostly under the program IDs 096.B-0752 and 097.B-0323 (PI: Vernet) on several nights between 2015 October and 2016 September (see Table 1).
The galaxy 4C+03.24 was observed under program ID 060.A-9100(G) during the GALACSI/WFM commissioning run in 2017 June. TN J1338-1942 was observed under 60.A-9318(A) and 060.A-9100(B) during the science verification and second MUSE commissioning run, from 2014 April to June. For MRC 0943-242, which was observed in WFM-NOAO-E (extended wide-field mode, no adaptive optics), we also include the data obtained during the first commissioning run in 2014 February under 60.A-9100(A).
We processed the raw observations using a standard data reduction procedure with the MUSE pipeline v2.8.4 (Weilbacher et al., 2020), which produced the final data-cube for each galaxy. We then ran a principal component analysis based procedure called _Zurich Atmosphere Purge_ (zap 2.0) (Soto et al., 2016) on the reduced data-cubes in order to subtract telluric sky line emission, which is most prominent at the red end of the MUSE spectral window.
To create the Ly\(\alpha\) narrow-band images, the MUSE data-cubes
were integrated along the spectral axis over 15 Å around the peak of the Ly\(\alpha\) emission in each source. The images have a sampling of \(0.2\times 0.2\) arcsec\({}^{2}\) per pixel and are seeing-limited to an average of \(1.0^{\prime\prime}\) in our observations. We note that some of the MUSE data on individual sources have already been published in previous papers (Swinbank et al., 2015; Gullberg et al., 2016; Vernet et al., 2017; Falkendal et al., 2019; Kolwa et al., 2019; Wang et al., 2021). Additionally, detailed tomography of the Ly\(\alpha\) nebulae for eight radio-loud AGN (including the seven galaxies from our sample) is reported in Wang et al. (2023).
\begin{table}
\begin{tabular}{|l|c c c|l c c|} \hline
 & **ALMA** & & & **MUSE** & & \\
Galaxy & Observing dates & \(t_{\rm int}\) & bmaj \(\times\) bmin & Observing dates & \(t_{\rm exp.}\) & Seeing \\
 & & (min) & (arcsec\({}^{2}\)) & & (min) & (arcsec) \\ \hline
MRC 0943-242 & 04/03/2016 - 20/05/2016 & 45 & \(2.09\times 1.33\) & 21/02/2014 - 18/01/2016 & 312 & 0.65 \\
TN J0205+2242 & 08/03/2016 - 13/09/2016 & 44 & \(2.28\times 1.81\) & 03/12/2015 - 08/12/2015 & 254 & 0.73 \\
TN J0121+1320 & 06/03/2016 - 24/09/2016 & 40 & \(1.98\times 1.47\) & 06/10/2015 - 28/08/2016 & 318 & 0.83 \\
4C+03.24 & 06/03/2016 & 38 & \(2.23\times 1.62\) & 17/06/2017 - 18/06/2017 & 75 & 0.63 \\
4C+19.71 & 06/03/2016 - 24/09/2016 & 39 & \(1.98\times 1.81\) & 08/06/2016 - 02/09/2016 & 350 & 1.03 \\
TN J1338-1942 & 16/03/2016 - 22/09/2016 & 40 & \(1.98\times 1.61\) & 30/04/2014 - 30/06/2014 & 535 & 0.77 \\
4C+04.11 & 05/03/2016 - 03/05/2016 & 43 & \(2.24\times 2.17\) & 03/12/2015 - 15/12/2015 & 254 & 0.88 \\ \hline
\end{tabular}
\end{table}
Table 1: ALMA and MUSE observation details for the \(2.9\leq z\leq 4.5\) radio galaxy sample. Column (1) lists the galaxy catalogue names. Column (2) indicates the start and end dates of the 12m ALMA observations and column (3) the on-source integration time (\(t_{\rm int}\)). Column (4) provides the full-width at half maximum (FWHM) of the synthesised beam along its major and minor axes. Column (5) lists the MUSE observing dates, column (6) the total exposure time (\(t_{\rm exp.}\)), and column (7) the seeing during the observations, obtained from the average FWHM of the brightest foreground star in the field.
Figure 1: The star-formation rate as a function of stellar mass for HzRGs and several classes of SFGs. The HzRGs from the ALMA+MUSE sample (orange) presented in this work are shown alongside an HzRG sample (Falkendal et al., 2019) in red. Lensed SFGs (lSFGs, in navy blue), compact SFGs (blue) and normal SFGs (sky blue) are shown. The main sequence of star-forming galaxies from Schreiber et al. (2015) at \(z=2.52\), the median redshift across all the galaxies depicted, is shown in grey with a 0.3 dex region of scatter in SFR, corresponding to a \(1\sigma\) dispersion in log\({}_{10}\)(SFR/M\({}_{\odot}\)yr\({}^{-1}\)) based on empirical results from a flux-limited sample of galaxies with _Spitzer_ MIPS (Noeske et al., 2007; Elbaz et al., 2007). The data point sizes are scaled as \(10z^{2}\) where \(z\) is the redshift of the galaxy. Literature references for the galaxy samples are provided in the text. We include a sample of _Herschel_-detected, lensed SFGs using their magnification-corrected SFR and \(M_{\bullet}\) measures (Sharon et al., 2013; Bothwell et al., 2013; Dessauges-Zavadsky et al., 2015; Nayyeri et al., 2017). The figure also shows a sample of six compact SFGs (Tadaki et al., 2015; Spilker et al., 2016; Popping et al., 2017; Tadaki et al., 2017; Barro et al., 2017). Additionally, a sample of normal SFGs has been adapted from Tadaki et al. (2015) and is shown in the figure as well.
### Ancillary data
Optical, near-infrared (NIR) and radio data are available for the HzRG sample which we have selected. The multi-wavelength dataset includes _Hubble Space Telescope_ (HST) wide-band imaging obtained with the Wide-field Planetary Camera 2 (WFPC2) with the F702W filter, which has a spectral coverage of \(5865.66-8433.19\) Å, useful for tracing the continua of O & B-type stars in high-\(z\) sources (Pentericci et al., 1997). The reduced HST images are accessible via the Hubble Legacy Archive (HLA).
The _Spitzer_ Space Telescope images of radio galaxies reveal emission from evolved stellar populations (Seymour et al., 2007). We have obtained rest-frame \(K\)-band detections from the Infrared Array Camera (IRAC; \(3.6-8.0~{}\mu\)m), in addition to images taken with the other _Spitzer_ instruments: the Infrared Spectrograph (IRS; \(16~{}\mu\)m) and the Multiband Imaging Photometer for _Spitzer_ (MIPS; \(24-160~{}\mu\)m). The _Spitzer_ images used in this study are available from a dedicated _Spitzer_ HzRGs (SHzRGs) archive. The near-IR, _Spitzer_ and _Herschel_ data (De Breuck et al., 2010; Drouart et al., 2014) provide stellar masses and star formation rates (or limits thereof) for our entire sample, which allows us to derive the evolutionary state of the galaxies relative to the Main Sequence of star-forming galaxies (see Fig. 1). We note that the stellar masses reported by Seymour et al. (2007) and De Breuck et al. (2010) are often upper limits due to potential contributions from the hot dust torus emission to the rest-frame \(1.6~{}\mu\)m photometry. However, this approach is conservative, as most of the stellar populations are consistent with the \(K\)-band photometry. As a result, the real stellar masses in Fig. 1 are expected to be close to the upper limits, which, unlike typical upper limits, have been measured from significantly deep photometry. For the radio imaging of these sources, we obtained archival C-band (4.8 GHz) and X-band (8.3 GHz) images from the Karl G. Jansky Very Large Array (VLA) that indicate the locations of the radio hotspots (Carilli et al., 1997; Pentericci et al., 2000).
## 4 Data analysis
### Systemic Redshift Estimation
The galaxies in our combined ALMA+MUSE sample have spectroscopic redshifts from the literature, based on a variety of line tracers for which we provide references in Table 2. With the advent of optical 3D data cubes, the complex kinematic structure of extended ionised gas haloes has become more apparent, revealing outflowing ionised gas from AGN host galaxies (e.g. Molyneux et al., 2019; Couto et al., 2020; Riffel et al., 2023). The presence of such components can lead to offsets of several hundreds of km s\({}^{-1}\) when estimating systemic redshifts, even when non-resonant lines such as He ii are used (e.g. Kolwa et al., 2019).
The He ii\(\lambda\)1640 recombination line is our benchmark for setting the systemic redshift of each galaxy. The He ii profiles are extracted from the galaxy positions in the MUSE data-cubes for all the sources except 4C+03.24 which does not have detected He ii emission at its core, and 4C+19.71, where the He ii line coincides with a spectral region affected by skyline residuals. The 1D spectra are extracted from aperture diameters of 5 pixels or 1\({}^{\prime\prime}\) at the positions of the host galaxies which are determined from the radio cores (Carilli et al., 1997; Pentericci et al., 2000) and the _Spitzer_ imaging which indicates the extent of the evolved stellar distribution (see Fig 3). The size of the aperture is chosen (i) to match the seeing element (of width \(0.9^{\prime\prime}\sim 1.0^{\prime\prime}\)) estimated from stars in the MUSE fields of view; (ii) to maximise the spectral S/N ratio; (iii) to avoid contamination from non-systemic line emission components further away from the core which may emerge within the line profile as the aperture size is increased.
We began by applying a Gaussian fit to the He ii profiles of MRC 0943-242, TN J0205+2242, TN J0121+1320 and 4C+04.11, where the Gaussian peak (\(A\)), width (\(\sigma\)) and centre (\(\lambda_{c}\)) were fit to the data with \(A\) as a free parameter, \(\sigma\) constrained within the range \(2-15\) Å, and \(\lambda_{c}\) allowed to deviate \(\pm 5\) Å from its initial value. The He ii emission of TN J1338-1942 is spatially extended and, when extracted to form a 1D spectrum, it is seen to have two distinct spectral profiles: one at the position of the host galaxy and the other offset in projected distance from the host galaxy, as illustrated in Fig. 2. As a result, a double Gaussian profile, with components that are independent of one another, is fit to the spectrum. The peak, width and line centres are constrained by the same boundary conditions as those used for the single Gaussian fits.
In the case of a single emission component, the Gaussian centre is used to infer the systemic redshift of the source. For a double emission component, we needed to carefully select which Gaussian component represents the systemic redshift. We therefore look for a spatial overlap between the He ii and \(K\)-band continuum from _Spitzer_/IRAC2 to confirm which emission component in He ii traces the host galaxy and should thus represent the systemic redshift.
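A minimal sketch of the bounded single-Gaussian fit and the redshift inference described above is given below, using scipy.optimize.curve_fit; the wavelength grid, flux values and initial guesses are synthetic placeholders, and a He ii rest wavelength of 1640.42 Å is assumed for the conversion to redshift.

```python
# Bounded Gaussian fit to an emission-line profile (wavelengths in Angstrom).
import numpy as np
from scipy.optimize import curve_fit

def gauss(lam, A, lc, sigma):
    return A * np.exp(-0.5 * ((lam - lc) / sigma) ** 2)

def fit_line(lam, flux, A0, lc0, sig0=5.0):
    # sigma constrained to 2-15 A; centre allowed to move +/- 5 A (as in text)
    p0 = [A0, lc0, sig0]
    bounds = ([0.0, lc0 - 5.0, 2.0], [np.inf, lc0 + 5.0, 15.0])
    popt, pcov = curve_fit(gauss, lam, flux, p0=p0, bounds=bounds)
    return popt, np.sqrt(np.diag(pcov))

# Synthetic example: a He ii line near 6435 A (roughly where it falls at z ~ 2.9)
lam = np.linspace(6400, 6480, 200)
flux = gauss(lam, 3.0, 6435.0, 4.0) + np.random.normal(0.0, 0.2, lam.size)
(A, lc, sig), errs = fit_line(lam, flux, A0=2.0, lc0=6435.0)
z_sys = lc / 1640.42 - 1.0     # assumed He II rest wavelength of 1640.42 A
```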
As a result, we obtain 1D spectral line models which are shown in Fig. 3, with the results being listed in Table 2. Where [C i] is detected, the He ii and [C i] redshifts are fully consistent within the margin of error. Given the low S/N of our [C i] data, the uncertainties of the [C i]
Figure 2: MUSE-detected He ii\(\lambda\)1640 emission in TN J1338-1942 integrated over \(8345.65-8362.38\) Å in panel A and \(8376-8393\) Å in panel B (grey-scale with black contours) against the _Spitzer_ IRAC2 continuum (red).
redshifts are higher than those of the He ii redshifts (Table 2); hence the current ALMA data may not be able to improve the overall systemic redshift accuracy, but they do allow for a consistency check. In two sources, the He ii line cannot be fit due to its low S/N. Here, the [C i] line is used to constrain the systemic redshift for 4C+03.24 and 4C+19.71. In 4C+03.24, He ii is not detected and we fix the redshift to the centre of the [C i] line (bottom panel of Fig. 3). In 4C+19.71, He ii is strongly affected by skyline residuals (e.g. Falkendal et al., 2021); even though some of the line emission is discernible, we show a Gaussian profile fixed to the [C i] redshift (top panel of Fig. 3) and label this _He ii fix_ in the figure.
### [C i] narrow-band imaging
Locating [C i] emission in projection is crucial in determining whether the traced gas is associated with the host galaxy or the extended CGM. We use the narrow-band images created with the procedure described in Section 3.1. The maps are spectrally integrated over velocity intervals which display the spatial extent of [C i] emission, by summing the consecutive channels which exceed the zero-flux level. For the three galaxies where [C i] is not detected, we do not construct a moment-0 map; nor do we perform continuum subtraction, because none of the galaxies show any significant continuum emission. The [C i] contours in Fig. 3 lie rather close to the AGN host galaxies detected with _Spitzer_, indicating that our faint [C i] detections are likely to be associated with the host galaxies.
### Neutral carbon spectral line fits
In this study, we trace molecular gas via the [C i] lines, which we fit using single Gaussian models. Our [C i] line spectra are extracted from primary-beam corrected image cubes in 2'' diameter apertures (\(\sim\)15 kpc at \(z=3.5\)) which have similar dimensions as the synthesised beams for each observation. We select this aperture size to ensure that (i) the full central part of the host galaxy is included, and (ii) the extracted 1D spectra have sufficient S/N to provide a stable convergence of the fitting algorithm. The sky co-ordinates of the host galaxies are already known from VLA radio continuum detections (Carilli et al., 1997; Pentericci et al., 2000) and _Spitzer_ near-infrared observations of the host galaxies (Seymour et al., 2007; De Breuck et al., 2010).
A high degree of certainty in the host galaxy positions is provided by the sub-arcsecond astrometry in both the MUSE and legacy VLA observations; hence we can extract spectra of the AGN host galaxies using the positions of the radio cores. Fixing the extraction locations, however, does not necessarily optimise the S/N, as the peak of the [C i] emission may have a small spatial offset from the AGN host galaxy. This is still acceptable for our purposes because we only aim to determine the [C i] content inside the AGN host galaxies, which can provide us with an indication of the molecular gas available for star-formation. The synthesised beams are 1'' in radius, which at \(z=3.5\) corresponds to a projected size of \(\sim\)15 kpc and sufficiently covers the host galaxy sizes represented by the _Spitzer_ IRAC \(K\)-band contours in the narrow-band images.
On the extracted 1D spectra, we fit the lines in a similar manner to that used for the single-peaked He ii lines (see Section 4.1). The extracted spectra are shown in the bottom left panels of Fig. 3 with the results of the fit convergence displayed in Table 3. The He ii and [C i] line fits are completely independent of one another.
### Neutral carbon line luminosity and molecular gas mass
From the [C i] spectral line fitting, we obtain [C i] line flux density measures to infer line luminosities and [C i]-derived H\({}_{2}\) masses for the radio-AGN host galaxies. For non-detections, we estimate 5\(\sigma\) flux density upper limits of \(S_{\rm [C\,\textsc{i}]}{\rm dV}=5\,\sigma_{\rm rms}\sqrt{\delta v\,\Delta v}\), where \(\sigma_{\rm rms}\) is the root-mean-square (RMS) level in the 1D spectrum estimated over a velocity range of -50 to 50 km s\({}^{-1}\), and \(\delta v\) denotes the average channel width. The velocity width, \(\Delta v\approx 100\) km s\({}^{-1}\), is based on the assumption that emission from the cold gas is traced by line dispersions of the order \(\sim\)100 km s\({}^{-1}\).
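For illustration, the limit formula can be evaluated with placeholder numbers (these are not the per-source values listed in Table 3):

```python
# 5-sigma flux limit: S dV = 5 * sigma_rms * sqrt(dv * Dv)
import numpy as np

sigma_rms = 0.3          # mJy/beam, illustrative rms in the 1D spectrum
dv, Dv = 40.0, 100.0     # km/s: channel width and assumed line width
SdV_limit = 5.0 * sigma_rms * np.sqrt(dv * Dv)   # ~95 mJy km/s for these inputs
```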
We calculate the line luminosity of [C i], \(L^{\prime}_{\rm [C\,\textsc{i}]}\) (e.g. Solomon et al., 1992), as,

\[L^{\prime}_{\rm [C\,\textsc{i}]}=3.25\times 10^{4}\ S_{\rm [C\,\textsc{i}]}\ {\rm dV}\ \nu_{\rm obs}^{-2}\ (1+z)^{-3}\ D_{\rm L}^{2}, \tag{1}\]

to obtain \(L^{\prime}_{\rm [C\,\textsc{i}]}\) in K km s\({}^{-1}\) pc\({}^{2}\). Here, \(S_{\rm [C\,\textsc{i}]}\)dV is the integrated flux in mJy km s\({}^{-1}\), \(\nu_{\rm obs}\) is the observed frequency of [C i] in GHz, \(D_{\rm L}\) is the luminosity distance of the source in Mpc and \(z\) is its redshift. Alternatively, the line luminosity in solar luminosities (L\({}_{\odot}\)) may be written as,

\[L_{\rm [C\,\textsc{i}]}=1.04\times 10^{-6}\ S_{\rm [C\,\textsc{i}]}\ {\rm dV}\ \nu_{\rm rest}\ (1+z)^{-1}\ D_{\rm L}^{2}, \tag{2}\]

where, as in equation 1, \(S_{\rm [C\,\textsc{i}]}\)dV is the integrated flux in mJy km s\({}^{-1}\), \(\nu_{\rm rest}=492.161\) GHz is the rest frequency of the [C i](1-0) line, \(D_{\rm L}\) is the luminosity distance of the source in Mpc and \(z\) is its redshift.

The [C i] line luminosity provides an inference for the molecular gas mass in solar masses (M\({}_{\odot}\)) via the equation,

\[M_{\rm H_{2},[C\,\textsc{i}]}=\frac{1375.8}{Q_{10}}\ \frac{D_{\rm L}^{2}}{(1+z)}\left[\frac{X_{\rm [C\,\textsc{i}]}}{10^{-5}}\right]^{-1}\ \left[\frac{A_{10}}{10^{-7}\,{\rm s^{-1}}}\right]^{-1}\ \left[\frac{S_{\rm [C\,\textsc{i}]}{\rm dV}}{\rm Jy\ km\ s^{-1}}\right]. \tag{3}\]

In equation 3, the luminosity distance, \(D_{\rm L}\), is in Mpc and the Einstein A-coefficient is A\({}_{10}=7.93\times 10^{-8}\) s\({}^{-1}\). We use a [C i](1-0) excitation factor of \(Q_{10}=0.48\) and a carbon abundance of \(X_{\rm [C\,\textsc{i}]}=3.0\times 10^{-5}\) (Weiss et al., 2005).
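Equations (1)-(3) can be evaluated with a short routine such as the sketch below; the flat \(\Lambda\)CDM cosmology (H\({}_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{\rm m}=0.3\)) and the example flux density are assumptions chosen purely for illustration, not values taken from Table 3.

```python
# Sketch of Eqs. (1)-(3): [C I](1-0) line luminosities and H2 mass.
from astropy.cosmology import FlatLambdaCDM
import numpy as np

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)   # assumed cosmology (illustrative)
NU_REST_CI = 492.161                      # GHz, [C I](1-0) rest frequency

def ci_quantities(SdV_mJy, z, Q10=0.48, X_CI=3.0e-5, A10=7.93e-8):
    DL = cosmo.luminosity_distance(z).value          # Mpc
    nu_obs = NU_REST_CI / (1.0 + z)                  # GHz
    # Eq. (1): L' in K km/s pc^2 (S dV in mJy km/s)
    Lprime = 3.25e4 * SdV_mJy * nu_obs**-2 * (1 + z)**-3 * DL**2
    # Eq. (2): L in solar luminosities
    L_sun = 1.04e-6 * SdV_mJy * NU_REST_CI * (1 + z)**-1 * DL**2
    # Eq. (3): M_H2 in solar masses (S dV converted to Jy km/s)
    M_H2 = ((1375.8 / Q10) * DL**2 / (1.0 + z)
            * (X_CI / 1e-5)**-1 * (A10 / 1e-7)**-1 * (SdV_mJy / 1e3))
    return Lprime, L_sun, M_H2

# e.g. an illustrative ~25 mJy km/s line at z = 2.92 gives M_H2 of order 5e9 Msun
Lp, L, M = ci_quantities(25.0, 2.92)
```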
Three galaxies in our sample are not detected in [C i](1-0). For these, we place 5\(\sigma\) upper limits on the [C i] flux density (in mJy km s\({}^{-1}\)), assuming a line-width of 100 km s\({}^{-1}\). To contextualise these non-detections, we recast our sensitivity limits in terms of molecular gas mass limits, assuming (as for our detections) Q\({}_{10}=0.48\) and \(X_{\rm [C\,\textsc{i}]}=3\times 10^{-5}\). This results in upper limits of \(M_{\rm H_{2},[C\,\textsc{i}]}<(0.5-1.0)\times 10^{10}\) M\({}_{\odot}\). Unfortunately, no CO observations have been reported for these three galaxies, hence we cannot determine whether the [C i] upper limits are consistent with CO-derived masses in these three sources.
## 5 Results and Discussion
### Neutral carbon within the host galaxies
Based on our results, we find that [C i] emission is fainter than expected compared to recent detections of this line in HzRGs such as the Spiderweb galaxy (Gullberg et al., 2016; Emonts et al., 2018) and PKS 0529-54 (Lelli et al., 2018; Man et al., 2019). At least three HzRGs in our observations are not detected in [C i], while the remaining four have \(2-3\sigma\) level line detections. Nevertheless, we consider this faint line emission worth reporting because it overlaps both spatially and spectrally with the He ii emission seen by MUSE which traces the warm ionised gas within the ISM. Additionally, the [C i] detections spatially overlap with the AGN based on
the VLA-detected synchrotron emission labelled in Fig. 3. Although deeper [C i] observations are required to accurately constrain the line widths and the spatial extent of cold gas traced by neutral carbon, we regard this faint [C i] emission as motivation for future searches of [C i] within other samples of AGN host galaxies.
In our results, the galaxy TN J0121+1320 is observed with a 2\(\sigma\) [C i] detection that traces the full velocity dispersion of the galaxy. Its observed line width of \(565\pm 194\) km s\({}^{-1}\) is consistent with that of the CO(3-2) detection in this galaxy (De Breuck et al., 2003). In the other galaxies from our sample, we observe narrow [C i] emission down to a minimum width of 40 km s\({}^{-1}\) (see Table 3). The four detections include the previously reported narrow [C i] line in 4C+19.71 (Falkendal et al., 2021). Due to our different extraction aperture, we recover a slightly different line profile from the same dataset, which is intermediate in line width between TN J0121+1320 and the two galaxies, 4C+03.24 and MRC 0943-242, whose line-widths are of the same order of magnitude (\(\sim\)50 km s\({}^{-1}\)). [C i] velocity dispersions of this width are not commonly observed and should be followed up with deeper observations to trace the full extent of the [C i] emission line. We note that there is good agreement in redshift between [C i] and He ii, especially in the galaxy MRC 0943-242 (see Fig. 3). This further justifies the use of [C i] lines as a proxy for the systemic redshifts of galaxies (see Section 4.1 for more details). Overall, we obtain flux density measures of S\({}_{[\rm C\,\textsc{i}]}\)dV \(\lesssim\) 100 mJy km s\({}^{-1}\), with non-detections reported as upper limits on the [C i] flux density. Both the [C i] line constraints and upper limits, as well as the inferred molecular gas masses, are shown in Table 3.
Generally, [C i] line-widths represent the gas kinematics of the cold atomic and molecular gas phase. In our study, the line-widths have been integrated over the full extent of the galactic disk. In MRC 0943-242 and 4C+03.24, the dispersions range from 40-50 km s\({}^{-1}\) and are insufficiently broad to cover the rotational velocity of the host galaxy; they can therefore only trace single molecular clouds within the galaxy. In contrast, we observe dispersions that are consistent with galaxy kinematics in 4C+19.71 and TN J0121+1320, which have line-widths of \(179\pm 60\) km s\({}^{-1}\) and \(565\pm 194\) km s\({}^{-1}\), respectively. Such dispersions are comparable to those observed in high-\(z\) dusty star-forming galaxies (DSFGs) and SMGs where FWHM \(\approx 200-1000\) km s\({}^{-1}\) (Alaghband-Zadeh et al., 2013; Bothwell et al., 2017; Nesvadba et al., 2019). In DSFGs and SMGs, X-ray and UV radiation from the AGN and star-forming regions, respectively, heat up gas sufficiently to sustain the broad line dispersions observed in cold gas tracers such as [C i].
Broad CO emission lines have also been detected in low-\(z\) ULIRGs, which have similar ISM thermodynamic conditions as high-\(z\) DSFGs and SMGs, where CO gas can exist in regions where cosmic rays are dominant (Bradford et al., 2003; Papadopoulos and Thi, 2013). Two galaxies from our sample, TN J0121+1320 and 4C+19.71, have similar line widths as the DSFGs and SMGs. The line widths in these two HzRGs may be tracing the gas kinematics of molecular clouds spread throughout the galaxy. We note, however, that significantly deeper [C i] observations would be required to obtain reliable kinematic inferences, as previously demonstrated in De Breuck et al. (2014) where shallow [C ii] data provided a rotation curve based on the cold gas tracers. In fact, these results have recently been refined using deeper detections in Lelli et al. (2021), demonstrating the value of obtaining follow-up observations of low-S/N data.
A more direct comparison of our results to previous observations is possible via [C i] detections in PKS 0529-54 and MRC 1138-262 (\(z\simeq 2.2\); Spiderweb Galaxy). In PKS 0529-54 (\(z\simeq 2.6\)), the shallow [C i](2-1) profile was interpreted in two ways: Man et al. (2019) associated both velocity components with distinct star-forming regions within the galaxy, while Lelli et al. (2018) interpreted the observations as evidence for a rotating disk with a circular velocity of 310 km s\({}^{-1}\). In an upcoming study of the same source, ALMA Cycle 6 observations provide [C i](2-1) line emission of FWHM \(=615\pm 56\) km s\({}^{-1}\) and [C i](1-0) line emission of FWHM \(=848\pm 230\) km s\({}^{-1}\)(Huang et al., 2023). Both emission components are broader than the values reported in Table 4 of Man et al. (2019). Additionally, the newly detected broadened line emission has made it possible to constrain the redshift from the deeper [C i] line observations.
For the Spiderweb galaxy, Gullberg et al. (2016) report a more complex [C i](2-1) morphology consisting of two spatial components: one of which contains a broad emission line of width 1100 km s\({}^{-1}\) and the other a line of width 270 km s\({}^{-1}\). Additionally, Emonts et al. (2018) report a similar velocity profile for the Spiderweb Galaxy in [C i](1-0) emission seen in observations that unfortunately lack the spatial resolution required to resolve the two spatial components along the spectral axis. In comparison, the [C i](1-0) line detections in 4C+03.24 and MRC 0943-242 from our results have line-widths which are a factor of \(5-10\) narrower than what is expected based on the previous studies we have discussed here (Gullberg et al., 2016; Emonts et al., 2018; Lelli et al., 2018; Man et al., 2019). In the case of MRC 0943-242, however, Gullberg et al. (2016) report a CO(8-7) FWHM of 43\(\pm\)13 km s\({}^{-1}\), fully consistent with the 40\(\pm\)14 km s\({}^{-1}\) we find in [C i].
In terms of [C i] velocity dispersion, the most anomalous source in our sample is TN J0121+1320 which has a [C i] line-width of \(\sim\)600 km s\({}^{-1}\) and is located above the Main Sequence of star-forming galaxies, as seen in Fig. 1. Within our sample, this source also has the highest SFR measure (see Table 3) which places it at the upper reach of the Main Sequence in Fig. 5 from Falkendal et al. (2019)
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
Galaxy & He ii redshift & [C i] redshift & Literature redshift & Reference \\ \hline
MRC 0943-242 & 2.9230 \(\pm\) 0.0001 & 2.9215 \(\pm\) 0.003 & 2.9230 \(\pm\) 0.0020 & Roettgering et al. (1997) \\
TN J0205+2242 & 3.5060 \(\pm\) 0.0003 & \(\dots\) & 3.5061 \(\pm\) 0.0004 & De Breuck et al. (2001) \\
TN J0121+1320 & 3.5190 \(\pm\) 0.0002 & 3.5230 \(\pm\) 0.004 & 3.5200 \(\pm\) 0.0007 & Nesvadba et al. (2007) \\
4C+03.24 & \(\dots\) & 3.5828 \(\pm\) 0.004 & 3.5699 \(\pm\) 0.0003 & van Ojik et al. (1996) \\
4C+19.71 & \(\dots\) & 3.5892 \(\pm\) 0.004 & 3.5935 \(\pm\) 0.0007 & Nesvadba et al. (2017) \\
TN J1338-1942 & 4.0959 \(\pm\) 0.0005 & \(\dots\) & 4.1057 \(\pm\) 0.0004 & Swinbank et al. (2015) \\
4C+04.11 & 4.5080 \(\pm\) 0.0002 & \(\dots\) & 4.5100 \(\pm\) 0.0001 & Nesvadba et al. (2017) \\ \hline
\end{tabular}
\end{table}
Table 2: Redshifts of galaxies from the ALMA+MUSE HzRG sample, named in column (1), based on the He ii \(\lambda\)1640 and [C i](1-0) line fitting shown in columns (2) and (3). The literature redshift is shown in column (4) with the cited source provided in column (5).
as well. In this work, we have obtained a tentative 2\(\sigma\) [C i] detection from which we infer a molecular gas mass of \(\sim 2.60\times 10^{10}\) M\({}_{\odot}\). Furthermore, the [C i] line dispersion and inferred H\({}_{2}\) mass are both consistent with the galaxy's position above the Main Sequence, which is evidence that it still has sufficient fuel for star-formation at a relatively high level (\(\sim\)626 M\({}_{\odot}\) yr\({}^{-1}\)) compared to the other HzRGs within our sample. For the HzRGs with narrow line emission, [C i] may be tracing sub-kpc star-forming regions within the galaxy.
Another anomalous case from our sample is TN J1338-1942, which has a relatively high star-formation rate of \(\sim\)461 M\({}_{\odot}\) yr\({}^{-1}\) but no traceable [C i] line emission within its host galaxy. Its inferred molecular gas mass is an upper limit of \(<1.08\times 10^{10}\) M\({}_{\odot}\). For this source, Falkendal et al. (2019) reported a 92 GHz ALMA detection which coincides spatially with the northern radio lobe. The spectral energy distribution (in Fig. 11 of Falkendal et al. (2019)) predicts equal contributions from synchrotron and thermal dust emission at \(\sim\)92 GHz. Given the multi-band _Herschel_ detection, we are certain that the high SFR measure is valid; however, the continuum data lack the
Figure 3: **Left:** MUSE He ii and ALMA [C i](1-0) line spectra of the host galaxies of MRC 0943-242 and TN J0205+2242, in the top and bottom panels, respectively. The 1\(\sigma\) (rms) noise levels in the [C i] spectra for MRC 0943-242 and TN J0205+2242 are 0.335 and 0.260 mJy beam\({}^{-1}\) respectively. The middle panel shows the MUSE spectral variance (\(\sigma_{V}\)). All spectra have been extracted from the data cubes with a 1\(\arcsec\) aperture centred at the peak of the \(K\)-band continuum. The ALMA [C i] channel binning is 9 and 11 km s\({}^{-1}\) for MRC 0943-242 and TN J0205+2242, respectively. When the radio core is not seen, the He ii is extracted from the peak of the _Spitzer_/IRAC (4.5 \(\mu\)m) continuum. The yellow vertical bars in the MUSE spectra indicate the locations of skylines with predicted \(f_{\lambda}>\)10\({}^{-16}\) erg s\({}^{-1}\) Å\({}^{-1}\) arcsec\({}^{-2}\) (Hanuschik 2003). The He ii line is decomposed into the emission component at the host galaxy (systemic redshift) and the blue-shifted emission represented by the blue line (b.1). **Right:**\(14\times 14\arcsec\) MUSE narrow-band image centered on the Ly\(\alpha\) line (the covered wavelength range is reported in the image) with _Spitzer_ contours overplotted in pink. The [C i] contours are shown in blue and are in units of mJy beam\({}^{-1}\) km s\({}^{-1}\). The orange crosses represent the hotspots of the radio lobes observed by the VLA in its C-band configuration. The contour levels represented for each data set are shown directly in the overlay plots.
spatial resolution to determine whether such a high SFR is spatially coincident with the AGN or rather with a component within a jet-cloud interaction region \(\sim\)10 kpc north of the galaxy nucleus. Higher spatial resolution thermal dust continuum detections are thus required to properly constrain the star-formation properties of this source.
We estimate the strength of atomic line cooling by calculating the ratio of the [C i] line luminosity to the L\({}_{\rm IR}\) luminosity (\(8-1000~{}\mu\)m). For the radio galaxies sampled in this work, these line luminosity ratios range from \((2-15)\times 10^{-6}\). A similarly radio-loud AGN host at \(z\simeq 2.2\) (MRC 1138-262) has a luminosity ratio of \(5.6\times 10^{-6}\) based on the L\({}_{\rm IR}\) (starburst component) reported in Seymour et al. (2012) and \(L^{\prime}_{\rm[C i]}\) from Emonts et al. (2018). Previous works have shown that the \(L^{\prime}_{\rm[C i]}\)/L\({}_{\rm IR}\) values for lensed SFGs and SMGs (at \(z=2-4\)) range from \((5-20)\times 10^{-6}\) (Nesvadba et al., 2019). For lensed, dusty star-forming galaxies, this range is \((2-18)\times 10^{-6}\) (Bothwell et al., 2017). For unlensed SMGs at \(z\sim 2.5\), this value ranges from \((5-30)\times 10^{-6}\) (Alaghband-Zadeh et al., 2013). A sample of Main Sequence galaxies at \(z\sim 1.2\) shows line luminosity ratios of \((4-187)\times 10^{-6}\) (Valentino et al., 2018). The \(L^{\prime}_{\rm[C i]}\)/L\({}_{\rm IR}\) values obtained for the high-\(z\) radio galaxy sample in this work are comparable to those for other high-\(z\) galaxy populations, but trace the lower end of their \(L^{\prime}_{\rm[C i]}\)/L\({}_{\rm IR}\) ranges. We summarise all \(L^{\prime}_{\rm[C i]}\)/L\({}_{\rm IR}\) ratios for our sample as well as those from the literature in Table 4.
Figure 3: continued: Here, we show the He ii and [C i] spectra alongside the multiwavelength narrow-band images for TN J0121+1320 (top) and 4C+03.24 (bottom). The 1\(\sigma\) (rms) noise levels in the [C i] spectra for TN J0121+1320 and 4C+03.24 are 0.311 and 0.189 mJy beam\({}^{-1}\), respectively. The aperture for [C i] spectral extraction is centered on the VLA-detected radio core position denoted by the white star. The ALMA [C i] binnings are 64 and 43 km s\({}^{-1}\) for TN J0121+1320 and 4C+03.24, respectively.
Figure 3: continued: Here, we show the He ii and [C i] spectra alongside the multiwavelength narrow-band images for 4C+19.71 (top), TN J1338-1942 (middle), and 4C+04.11 (bottom). The 1\(\sigma\) (rms) noise levels in the [C i] spectra for 4C+19.71, TN J1338-1942 and 4C+04.11 are 0.366, 0.208 and 0.352 mJy beam\({}^{-1}\), respectively. The ALMA [C i] spectra are extracted from the \(K\)-band continuum peak in 4C+19.71 and the VLA-detected radio cores in TN J1338-1942 and 4C+04.11. The [C i] binnings are 43, 48 and 13 km s\({}^{-1}\) for 4C+19.71, TN J1338-1942 and 4C+04.11, respectively.
### Comparing CO and [C i]-derived molecular gas in high-\(z\) radio galaxies
For two sources with both [C i] and CO detections, we can make a direct comparison between the molecular gas tracers. In MRC 0943-242, Gullberg et al. (2016a) report a CO(8-7) detection and a CO(1-0) upper limit. Scaling the CO(8-7) detection of 0.33 Jy km s\({}^{-1}\) in _Yggdrasil_ (the AGN host galaxy) to the 0.54 Jy km s\({}^{-1}\) measured in the companion galaxy _Loke_, which is detected in both CO(1-0) and CO(8-7), implies an inferred mass of \(M_{\rm H_{2},CO}\sim 1.4\times 10^{10}\) M\({}_{\odot}\). Such a mass, however, is more likely to be an upper limit than a proper constraint, as the CO line emission of _Yggdrasil_ will likely have a higher excitation level, due to the presence of the AGN, than that seen at the position of _Loke_. Our \(M_{\rm H_{2},[C\textsc{i}]}=(4.9\pm 2.7)\times 10^{9}\) M\({}_{\odot}\) is therefore consistent with these previously reported CO observations. Similarly, at the host galaxy of TN J0121+1320, the CO(4-3) line flux of 1.2 Jy km s\({}^{-1}\) from De Breuck et al. (2003) leads to an inferred mass of \(M_{\rm H_{2},CO(4-3)}\sim 7.0\times 10^{10}\) M\({}_{\odot}\), when we assume a CO(4-3)/CO(1-0) ratio of 0.5 and an \(\alpha_{\rm CO}=0.8\) M\({}_{\odot}/(\)K km s\({}^{-1}\) pc\({}^{2})\). This is a factor of \(\sim 3\) larger than the [C i]-inferred mass of M\({}_{\rm H_{2},[C\textsc{i}]}=(2.60\pm 1.46)\times 10^{10}\) M\({}_{\odot}\). The caveat here is that the CO(4-3) observations from De Breuck et al. (2003) were obtained over a synthesised beam of 8''\(\times\)4'' and may therefore include multiple components that are not included in the [C i] spectrum extracted over the 1'' diameter aperture in our results. As expected, the [C i] and CO gas estimates do not lead to equivalent results due to the assumptions on \(\alpha_{\rm CO}\), X\({}_{\rm[C\textsc{i}]}\) and Q\({}_{\rm 10}\), which are more appropriate for populations of galaxies but have large uncertainties for individual galaxies. If the CO-inferred H\({}_{2}\) mass were to be made equivalent to that of [C i], we would require a [C i] flux density of \(S\)d\(\rm V_{[C\,\textsc{i}]}\geq 264\) mJy km s\({}^{-1}\), which is inconsistent with the observations presented in this work.
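The CO(4-3)-based mass quoted above for TN J0121+1320 can be checked to order of magnitude with the sketch below, which assumes the brightness-temperature ratio CO(4-3)/CO(1-0) = 0.5, \(\alpha_{\rm CO}=0.8\) M\({}_{\odot}\)/(K km s\({}^{-1}\) pc\({}^{2}\)) and an illustrative flat \(\Lambda\)CDM cosmology; the exact result depends on the adopted cosmological parameters.

```python
# Order-of-magnitude check of the CO(4-3)-derived H2 mass for TN J0121+1320.
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)   # assumed cosmology (illustrative)
NU_REST_CO43 = 461.041                    # GHz, CO(4-3) rest frequency
z, SdV_Jy = 3.52, 1.2                     # redshift and CO(4-3) flux in Jy km/s

DL = cosmo.luminosity_distance(z).value                 # Mpc
nu_obs = NU_REST_CO43 / (1.0 + z)                        # GHz
Lp_co43 = 3.25e7 * SdV_Jy * nu_obs**-2 * (1 + z)**-3 * DL**2   # K km/s pc^2
Lp_co10 = Lp_co43 / 0.5      # assumed CO(4-3)/CO(1-0) line ratio of 0.5
M_H2_co = 0.8 * Lp_co10      # alpha_CO = 0.8 -> roughly (6-7)e10 Msun
```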
We conclude that the uncertainties on the H\({}_{2}\) masses derived from the high-\(J\) CO lines are too high to provide a detailed prediction for the detectability of [C i]. At the current depth, however, we have reached corresponding H\({}_{2}\) masses lower than those previously reported in HzRGs (e.g. Papadopoulos et al., 2000; De Breuck et al., 2003, 2005; Ivison et al., 2008; Emonts et al., 2014, 2015; Gullberg et al., 2016b). With sufficiently longer integration times on these sources, targeting the [C i] line with an interferometer such as ALMA, there is a good chance of detecting the line in the sources where we have derived upper limits.
### Star-formation efficiency
The star-formation efficiency (SFE) is defined as \(\rm SFE=SFR/M_{gas}\), where \(M_{gas}\) should include all gas phases (H i, H ii, H\({}_{2}\)). In HzRGs, the H\({}_{2}\) component was often assumed to be the dominant one (e.g. De Breuck et al., 2003), though in some cases the neutral H i gas absorbing the Ly\(\alpha\) can reach masses of order \(10^{10}\) M\({}_{\odot}\) (Gullberg et al., 2016b; Kolwa et al., 2019; Falkendal et al., 2019). Using the lower \(H_{2}\) masses obtained when concentrating on the AGN host galaxies only from our ALMA [C i] data, we now compare HzRGs with other high redshift galaxies in terms of SFE and gas fraction \(f_{\rm gas}=M_{gas}/(M_{gas}+M_{\star})\).
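A minimal sketch of how these two quantities are computed from their definitions is given below; the input numbers are illustrative round values of the right order of magnitude, not the tabulated measurements for any individual source.

```python
# SFE in Gyr^-1 and gas fraction from SFR (Msun/yr), M_gas and M_star (Msun).
def sfe_and_fgas(sfr, m_gas, m_star):
    sfe = sfr / m_gas * 1e9            # convert yr^-1 to Gyr^-1
    f_gas = m_gas / (m_gas + m_star)
    return sfe, f_gas

# Illustrative inputs: SFR = 40 Msun/yr, M_gas = 5e9 Msun, M_star = 1.5e11 Msun
sfe, f_gas = sfe_and_fgas(sfr=40.0, m_gas=5e9, m_star=1.5e11)
# -> SFE ~ 8 Gyr^-1, f_gas ~ 0.03
```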
For our HzRG sample, we use the molecular gas mass (or their limits) as derived from the [C i](1-0) emission. Due to the lack of systematic measurements, we neglect the H i component, which may lead to an under-estimate of \(M_{\rm gas}\) by a factor up to two. We then use the SFRs from Falkendal et al. (2019) to derive the SFE. To calculate the \(f_{\rm gas}\), we consider the radio galaxies' stellar masses from Seymour et al. (2007) and De Breuck et al. (2010), where the upper limits of 4C+03.24 (= USS1243+036) and 4C+19.71 (= MG2144+1928) are considered detections because their observed \(K-\)band detections are fully consistent with the _Spitzer_ 3.6 and 4.5 \(\mu\)m photometry. We enumerate our HzRGs in Fig. 4 from (6) to (10) where (6) MRC 0943-242, (7) TN J0121+1320, (8) 4C+03.24, (9) 4C+19.71 and (10) TN J1338-1942. Note TN J0205+2242 is not included as both SFR and M\({}_{\rm H_{2}}\) are upper limits, while 4C+04.11 does not have an estimate of the stellar mass. Furthermore, we provide the gas fractions and
\begin{table}
\begin{tabular}{l c c} \hline \hline
Source(s) & \(L^{\prime}_{\rm [C\,\textsc{i}]}/L_{\rm IR}\) & Reference \\
 & \((10^{-6})\) & \\ \hline
MRC 0943-242 & 5.8 & This work \\
TN J0205+2242 & \(\dots\) & \(\dots\) \\
TN J0121+1320 & 2.0 & \(\dots\) \\
4C+03.24 & 2.3 & \(\dots\) \\
4C+19.71 & 15 & \(\dots\) \\
TN J1338-1942 & \(<3.5\) & \(\dots\) \\
MRC 1138-262 & 8.2 & Emonts et al. (2018) \\ \hline
Lensed SFGs and SMGs & 5 -- 20 & Nesvadba et al. (2019) \\
Lensed DSFGs & 2 -- 18 & Bothwell et al. (2017) \\
Unlensed SMGs & 5 -- 30 & Alaghband-Zadeh et al. (2013) \\
Main Sequence galaxies & 4 -- 187 & Valentino et al. (2018) \\ \hline
\end{tabular}
\end{table}
Table 4: \(L^{\prime}_{\rm [C\,\textsc{i}]}/L_{\rm IR}\) ratios for the HzRGs in this work compared with those of other galaxy populations from the literature (see text).
star-formation efficiencies in Table 5 for the galaxies within our sample for which these values are either limits or constrained measurements.
We also include SFE and \(f_{\rm gas}\) for HzRGs from the literature and number them (1) to (5). These sources are (1) PKS 0529-549 at \(z\simeq 2.6\) from Man et al. (2019) who use [C i] to trace H\({}_{2}\); (2) 4C41.17 at \(z\simeq 3.8\) from De Breuck et al. (2005) who use CO(4-3); (3) MRC 0152-209 at \(z\simeq 1.9\) from Emonts et al. (2015) who use CO(1-0); (4) MRC 1138-262 at \(z\simeq 2.2\) from Gullberg et al. (2016b) who use [C i](2-1); and (5) 4C60.07 at \(z\simeq 3.8\) from Greve et al. (2004), who use CO(1-0). Fig. 4 also includes a sample of _Herschel_-detected, lensed SFGs (lSFGs) based on their magnification-corrected SFR and \(M_{\bullet}\) measures (Sharon et al., 2013; Bothwell et al., 2013b; Dessauges-Zavadsky et al., 2015; Nayyeri et al., 2017). In the figure, we have also included six compact SFGs (cSFGs) (Tadaki et al., 2015; Spilker et al., 2016; Popping et al., 2017; Tadaki et al., 2017; Barro et al., 2017). A sample of normal SFGs has been adapted from Tadaki et al. (2015) and is shown in the figure as well.
In Fig. 4, we compare the HzRG sample with high-redshift compact, normal and lensed star-forming galaxies as well as sub-millimeter galaxies. To derive SFE and \(f_{\rm gas}\), various tracers and methods have been used. The star-forming galaxies (SFGs) from Daddi et al. (2010) reside within clusters at \(z\sim 1.4-1.6\) and have molecular masses derived from CO(2-1). In Noble et al. (2017), SFGs at \(z\sim 1.6\) have H\({}_{2}\) gas masses derived from CO(2-1). In Hayashi et al. (2018), cluster-centric SFGs at \(z\sim 1.46\) have molecular gas masses derived from CO(2-1). The sub-millimeter galaxies (SMGs) in the diagram have molecular gas masses inferred from CO line emission in several transitions from \(J_{\rm up}=2-7\)(Bothwell et al., 2013a). Additionally, the lensed SFGs, compact SFGs and SFGs from Fig. 1 are included in the plot where the gas fractions are derived from [C i], CO or the dust continuum.
From the SFE-\(f_{\rm gas}\) plot (Fig. 4), we find that HzRGs have generally lower \(f_{\rm gas}\) and higher SFE than SFGs and SMGs. The HzRGs with constrained \(f_{\rm gas}\) and SFE have SFE \(\gtrsim 10\) Gyr\({}^{-1}\) and \(f_{\rm gas}\lesssim 0.3\). SFGs with \(M_{\bullet}\gtrsim 10^{11}\) M\({}_{\odot}\), close to the stellar masses of the HzRGs, also occupy this high-SFE, low-\(f_{\rm gas}\) region. This result agrees well with observations of galaxies at \(z\simeq 2-4\), where high-mass galaxies tend to have lower molecular gas abundances than their counterparts with stellar mass below \(10^{11}\) M\({}_{\odot}\), as displayed in Figs 5 and 6 of Tacconi et al. (2018). It is also consistent with the observation that AGN host galaxies appear to have lower CO(3-2) gas masses than non-active galaxies (Circosta et al., 2021).
### Why molecular gas may be undetected within the ISM of radio galaxies at high-\(z\)
Fig. 4 demonstrates that the high-\(z\) radio galaxies in this work and others tend to have significantly lower molecular gas fractions than SMGs and SFGs. Furthermore, [C i] line emission, a molecular gas tracer, is detected at the \(2-3\sigma\) level in 4/7 of the galaxies sampled in our work, while in 3/7 galaxies, [C i] emission is too faint to be detected in our \(\sim\)45-minute integration times. Additionally, the sampled galaxies devoid of [C i] emission do not have previous CO detections either, leading us to conclude that molecular gas is almost or fully depleted within the ISM of these radio-loud AGN host galaxies. In the sections that follow, we briefly discuss the physical mechanisms that would lead to such observations.
#### 5.4.1 High-\(z\) radio galaxies in a low star-formation rate phase?
Tentative evidence for a decline in SFR for HzRGs at \(1.3<z<4.0\) has been provided (Falkendal et al., 2019). In this study, 25 HzRGs are shown against the Main Sequence (M.S.) of star-forming galaxies from Schreiber et al. (2015) and Santini et al. (2017). We have provided a similar plot in Fig. 1, showing the Main Sequence from Schreiber et al. (2015). In Fig. 1, a reasonable majority of the HzRGs sampled sit within the 0.3 dex region of scatter around the M.S. average. Generally, sources located a factor of 10 below the Main Sequence are considered to be in a low star-forming phase (Rodighiero et al.,
\begin{table}
\begin{tabular}{l c c} \hline \hline
Galaxy (Fig. 4 ID) & \(f_{\rm gas}\) & SFE \\
 & & (Gyr\({}^{-1}\)) \\ \hline
MRC 0943-242 (6) & \(0.03\pm 0.02\) & \(8.38^{+8.2}_{-7.09}\) \\
TN J0121+1320 (7) & \(0.20\pm 0.11\) & \(24.1^{+6.6}_{-17.0}\) \\
4C+03.24 (8) & \(>\)0.03 & 21.7 \\
4C+19.71 (9) & \(>\)0.16 & 3.30 \\
TN J1338-1942 (10) & \(<\)0.09 & \(>\)42.6 \\ \hline
\end{tabular}
\end{table}
Table 5: The gas fractions (\(f_{\rm gas}\)) and star-formation efficiencies (SFE) of the high-\(z\) radio galaxies, where \(f_{\rm gas}\) and SFE are shown as either a limit or a constrained value. The host galaxy 4C+04.11 does not have a well-constrained \(M_{\bullet}\) and is, therefore, not shown.
Figure 4: Star-formation efficiency as a function of molecular gas fraction. The HzRGs from our sample (magenta) and from the literature (orange) are numbered as described in the text. We compare our sub-sample of HzRGs with star-forming galaxies at \(z\sim 1.4\) (H18; Hayashi et al., 2018), at \(z\sim 1.6\) (N17; Noble et al., 2017), at \(z\sim 1.4-1.6\) (D10; Daddi et al., 2010) as well as a \(z\sim 1.2-4.1\) sub-mm galaxy population (B13a; Bothwell et al., 2013a), lensed SFGs (lSFGs, navy blue), compact SFGs (cSFGs, blue) and SFGs (faint blue); see references in the text. HzRGs from the literature are numbered as (1) PKS 0529-549 at \(z\simeq 2.6\) (Man et al., 2019); (2) 4C41.17 at \(z\simeq 3.8\) (De Breuck et al., 2005); (3) MRC 0152-209 at \(z\simeq 1.9\) (Emonts et al., 2015); (4) MRC 1138-262 at \(z\simeq 2.2\) (Gullberg et al., 2016b); and (5) 4C60.07 at \(z\simeq 3.8\) (Greve et al., 2004). The hatched symbols represent galaxies in each sample with stellar masses of \(M_{\bullet}\)\(\geq 10^{11}\) M\({}_{\odot}\). The shaded region marks the high-SFE, low-\(f_{\rm gas}\) regime adopted in previous studies (e.g. Man et al., 2019). Details on the tracers used to obtain \(f_{\rm gas}\) are provided in the text. The HzRG uncertainties for sources (6) and (8) are propagated from SFR errors based on Drouart et al. (2014) and Falkendal et al. (2019), \(M_{\bullet}\) errors from De Breuck et al. (2010), and \(M_{\rm H_{2}}\) errors reported in Table 3.
2011). Furthermore, two of the sampled galaxies (TN J0121+1320 and TN J1338-1942) are vertically offset by \(\sim\)0.15 dex above the MS, while three (MRC 0943-242, 4C+03.24, 4C+19.71) are located \(\sim\)0.15 dex below it. TN J0205+2242 has a SFR upper limit of \(<\)84 M\({}_{\odot}\) yr\({}^{-1}\), such that even if a constrained SFR measure were eventually obtained, it would be low enough to place the galaxy significantly below the MS average. Because 4C+04.11 does not have a constrained SFR, we have no indication of its position in the MS diagram. Overall, we find that radio galaxies at \(1.3<z<4.5\) are more likely to exist in a predominantly low star-formation rate phase of their evolution, where they may lack sufficient molecular gas to continue fuelling star-formation in the ISM at the SFR predicted by the Main Sequence when a constant SFR is assumed from the Kennicutt-Schmidt law (Kennicutt, 1998).
A previous epoch of violent and rapid star-formation may have removed the supply of molecular gas (e.g. Scholtz et al., 2023), leaving only trace amounts and explaining the faint line emission in [C i] and CO observed within the host galaxies. This idea is supported by the [C i] line-widths in our sample, which, with the exception of TN J0121+1320 (line-width of \(\sim\)600 km s\({}^{-1}\)), are predominantly narrow (FWHM = \(40-200\) km s\({}^{-1}\)) and therefore uncharacteristic of galaxies undergoing major starbursts.
#### 5.4.2 AGN feedback effects on cold gas
The molecular gas available to fuel star-formation could also have been displaced from the ISM, dissociated or heated as radio jets propagate through the gas medium (McNamara and Nulsen, 2012; Fabian, 2012). Previous studies have provided sufficient evidence for mechanical feedback occurring in radio AGN host galaxies (Villar-Martin et al., 1999; Best et al., 2005; Merloni and Heinz, 2007; Nesvadba et al., 2008; Rosario et al., 2010; McNamara and Nulsen, 2012; Hardcastle et al., 2012; Ishibashi et al., 2014; Williams and Rottgering, 2015; Mahony et al., 2016; Nesvadba et al., 2017; Santoro et al., 2020). The notion that radio jets from the AGN cause cold gas removal has also been examined by theoretical predictions. Cosmological simulations of AGN feedback have demonstrated that galaxies within the stellar mass range \(M_{\bullet}\simeq 10^{10}-10^{11}\) M\({}_{\odot}\) undergo a higher level of quenching than their low mass counterparts (Weinberger et al., 2017; Nelson et al., 2019). In the case of radio AGN specifically, the kinetic feedback may be associated with powerful jets (Dave et al., 2020; Hardcastle and Croston, 2020; Thomas et al., 2020).
A causal link between starbursts in radio galaxies and gas-rich mergers has been suggested before (Ivison et al., 2012). In these sources, which are also luminous in the far-infrared, turbulence in the cold gas is induced by an energy injection from either X-ray radiation or mechanical feedback (or both), which results in the subsequent, slow termination of star-formation within the ISM (Papadopoulos et al., 2008, 2012). This result is similar to the merger-associated starbursts in the Spiderweb galaxy, a \(z=2.2\) radio-loud AGN, wherein turbulent gas dynamics have been traced via broad [C i] line emission of width FWHM \(\simeq 1100\) km s\({}^{-1}\).
It is possible that cold molecular gas has been removed from the galaxy ISM via negative kinetic AGN feedback events within our sample. The gas would be entrained by propagating radio jets and accelerated out of the ISM, resulting in the decline in cold molecular gas abundance that would ordinarily be traced via [C i] or CO line emission. We can briefly approximate the kinetic energy injection produced by radio jets at relativistic speeds \(v/\mathrm{c}\sim 0.1\). The kinetic jet powers typically measured for radio-loud AGN are within the range \(P_{\mathrm{jet}}\simeq 10^{46}-10^{48}\) erg s\({}^{-1}\) (Carvalho and O'Dea, 2002; Bicknell et al., 2003). Detailed calculations have already demonstrated that such relativistic jets produce sufficient power to drive outflows of molecular gas from the ISM of radio galaxies (Nesvadba et al., 2010, 2021).
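A rough, illustrative energy budget (not a calculation from this work) makes the same point: assuming a molecular reservoir of \(\sim 10^{10}\) M\({}_{\odot}\) expelled at \(\sim\)500 km s\({}^{-1}\) with perfect coupling (both placeholder values), jets in the quoted power range would need only a short time to supply the required kinetic energy.

```python
# Time for a jet of power P_jet to supply the kinetic energy needed to expel
# ~1e10 Msun of gas at ~500 km/s, assuming perfect coupling (illustrative only).
M_SUN_G = 1.989e33                         # g
M_gas = 1e10 * M_SUN_G                     # g
v_out = 500.0 * 1e5                        # cm/s
E_kin = 0.5 * M_gas * v_out**2             # ~2.5e58 erg
for P_jet in (1e46, 1e48):                 # erg/s
    t_yr = E_kin / P_jet / 3.156e7
    print(f"P_jet = {P_jet:.0e} erg/s -> t ~ {t_yr:.1e} yr")
```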
As stated in Section 2, our sample comprises jetted radio AGN host galaxies where the kinetic mode of feedback is likely to operate (Heckman and Best, 2014). Given the results from Falkendal et al. (2019), there is clear supporting evidence that a considerable number of HzRGs have low star-formation rates compared to common SFGs and that AGN feedback is responsible for removing molecular gas and shutting off the star formation. Hence, given the high jet powers, relatively low SFR, low gas fractions and high SFE's observed in our sample, mechanical AGN feedback is a possible cause for the removal of molecular gas from the ISM of the radio-loud AGN host galaxies in our sample.
In summary, due to the observed faintness of the molecular gas as traced via [C i] in AGN host galaxies with high stellar masses and powerful radio jets, we tentatively conclude that AGN feedback may be the cause of the low molecular gas fractions. However, we cannot rule out an earlier starburst as the cause of the depletion. Without a well constrained star-formation history of radio-AGN out to their first formation redshifts, we are limited from making strictly conclusive statements.
## 6 Conclusions
We have presented ALMA band 3 and 4 observations of [C i] \({}^{3}P_{1}\rightarrow{}^{3}P_{0}\) emission for seven radio galaxies (radio-loud AGN hosts) at redshifts \(z=2.9-4.5\), with the goal of tracing molecular hydrogen via neutral carbon emission. Knowing the locations of the host galaxies, we searched for [C i] emission at the prescribed co-ordinates.
In our sample, four galaxies are detected with \(2-3\sigma\) level [C i] line emission. The remaining three galaxies are reported as non-detections with [C i] flux densities given as 5\(\sigma\) upper limits. [C i] line widths within three of the seven sources in the sample range from \(\sim 40-180\) km s\({}^{-1}\), indicative of emission from bright molecular gas clouds within the ISM. In one of the galaxies, TN J0121+1320 (SFR \(\simeq 626\) M\({}_{\odot}\) yr\({}^{-1}\)), a [C i] line-width of \(\sim\)600 km s\({}^{-1}\) is measured, indicative of cold gas that is rotationally supported. Overall, we have obtained [C i] flux densities that provide molecular gas mass inferences, where the upper limits are \(M_{\mathrm{H_{2}}}<0.65\times 10^{10}\) M\({}_{\odot}\) and the constraints are in the range \(M_{\mathrm{H_{2}},[\mathrm{C}\textsc{i}]}\lesssim(0.5-3)\times 10^{10}\) M\({}_{\odot}\).
We compare the star-formation efficiencies (SFE) and molecular gas fractions (\(f_{\rm gas}\)) of our sample to other high-\(z\) radio galaxy populations as well as star-forming and sub-mm galaxies (SFGs and SMGs) at \(z\simeq 2\). Generally, we find that high-\(z\) radio galaxies have \(f_{\rm gas}<0.2\) and relatively high SFEs of \(4-45\) Gyr\({}^{-1}\). Furthermore, three galaxies in our sample (TN J0121+1320, 4C+03.24, and 4C+19.71) have gas fractions below 10%.
Based on these results, we consider two physical mechanisms that may explain the faintness of the [C i](1-0) emission within our sample. The first is that the galaxies have experienced vigorous starburst activity at early epochs in their evolution, which has led to a period of star-formation quiescence in which the molecular gas mass is close to depletion and cannot be traced by either [C i] or CO. This would be followed by a second mechanism, the kinetic mode of feedback that operates in jetted, high-\(z\) radio-loud AGN host galaxies such as the sources in our high-\(z\) radio galaxy sample, which would eject a significant amount of cold gas from the ISM of the galaxies, resulting in the faintness of the [C i] and CO line tracers. While such outflows have been observed in the ionized gas of a sample of
\(\sim\)50 HzRGs (Nesvadba et al., 2017), it is unclear if similar outflows are found in the cold molecular gas. Our current data are too shallow to allow for a proper conclusion on which mechanism is primarily responsible for the low molecular gas fractions we have observed in our sample of radio galaxies.
In future, we aim to conduct a follow-up study of the cold gas within the extended haloes of this sample as well as other \(z>2\) radio galaxies, with a greater focus on the mm/sub-mm continuum as well as the tracers of ionised gas observed within the optical window of MUSE. Generally, the warm and hot ionised components of the circumgalactic halo gas contribute a higher proportion of a galaxy's baryon budget (see Tumlinson et al. 2017) than the cold gas, which remains rather elusive in high-redshift galaxies at \(z>2\), as demonstrated by this study. Hence, a greater focus should be placed on investigating the warm and hot ionised gas within the baryonic haloes of jetted radio-AGN host galaxies.
## Acknowledgements
SK acknowledges the financial grants of the National Research Foundation (NRF) and the South African Radio Astronomy Observatory (SARAO; www.sarao.ac.za) whose contribution towards this research is hereby acknowledged (2020). The Inter-University Institute for Data Intensive Astronomy (IDiA; www.idia.ac.za) is thanked for their provision of computing resources utilised in this project (2020-). SK thanks Federico Lelli and Helmut Dannerbauer for providing clarifying remarks on early drafts. Thanks to Ryan Trainor for essential feedback on later versions of this paper. A tremendous thanks to the expert on all things related to carbon in galaxies, Padelis Papadopoulos, who provided invaluable feedback on many iterations of this paper throughout the grueling editing phase.
This paper made use of calibrated measurement sets provided by the European ALMA Regional Centre network (Hatziminiaoglou et al., 2015) through the calMS service (Petry et al., 2020).
AWSM acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Dunlap Fellowship at the Dunlap Institute for Astronomy & Astrophysics, funded through an endowment established by the David Dunlap family and the University of Toronto.
CMH acknowledges funding from a United Kingdom Research and Innovation grant (code: MR/V022830/1). For the purpose of open access, the authors have applied a Creative Commons Attribution (CC-BY) license to any author accepted version arising.
This paper is based on observations collected at the European Southern Observatory under ESO programmes 097.B-0323(B), 097.B-0323(C), 096.B-0752(A), 096.B-0752(B), 096.B-0752(C) and 096.B-0752(F).
This paper also makes use of the following ALMA observations: ADS/MO_ALMA#2015.1.00530.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
## Data Availability
The raw Atacama Large Millimeter/submillimeter Array (ALMA) data underlying this article are available in the _ALMA Science Portal_ at [https://almascience.eso.org/](https://almascience.eso.org/), and can be accessed with the project ID 2015.1.00530.S. The Multi-unit Spectroscopic Explorer (MUSE) data are available in the _ESO Science Archive Facility_ at [http://archive.eso.org/cms.html](http://archive.eso.org/cms.html). The processed MUSE data will be shared on reasonable request to the corresponding author. The _Hubble Space Telescope_ (HST) data are available in the _Hubble Legacy Archive_ at [https://hla.stsci.edu/](https://hla.stsci.edu/). The _Spitzer_ Space Telescope (SST) data are available on the _SHzRGs Archive_ at [http://www.eso.org/~cdebreuc/shzrg/](http://www.eso.org/~cdebreuc/shzrg/). The Karl G. Jansky Very Large Array (VLA) data are available from the VLA Legacy Data Archive at [https://data.nrao.edu/portal/](https://data.nrao.edu/portal/).
|
2305.00954 | Nearly Heisenberg-limited noise-unbiased frequency estimation by
tailored sensor design | We consider entanglement-assisted frequency estimation by Ramsey
interferometry, in the presence of dephasing noise from spatiotemporally
correlated environments.By working in the widely employed local estimation
regime, we show that even for infinite measurement statistics, noise renders
standard estimators biased or ill-defined. We introduce ratio estimators which,
at the cost of doubling the required resources, are insensitive to noise and
retain the asymptotic precision scaling of standard ones. While ratio
estimators are applicable also in the limit of Markovian noise, we focus on
non-Markovian dephasing from a bosonic bath and show how knowledge about the
noise spectrum may be used to maximize metrological advantage, by tailoring the
sensor's geometry. Notably, Heisenberg scaling is attained up to a logarithmic
prefactor by maximally entangled states. | Francisco Riberi, Gerardo Paz-Silva, Lorenza Viola | 2023-05-01T17:32:55Z | http://arxiv.org/abs/2305.00954v2 | # Nearly Heisenberg-limited noise-unbiased frequency estimation by tailored sensor design
###### Abstract
We consider entanglement-assisted frequency estimation by Ramsey interferometry, in the presence of dephasing noise from spatiotemporally correlated environments. By working in the widely employed _local_ estimation regime, we show that even for infinite measurement statistics, noise renders standard estimators biased or ill-defined. We introduce _ratio estimators_ which, at the cost of doubling the required resources, are insensitive to noise and retain the asymptotic precision scaling of standard ones. While ratio estimators are applicable also in the limit of Markovian noise, we focus on non-Markovian dephasing from a bosonic bath and show how knowledge about the noise spectrum may be used to maximize metrological advantage, by tailoring the sensor's geometry. Notably, Heisenberg scaling is attained up to a logarithmic prefactor by maximally entangled states.
High-precision estimation of transition frequencies (or energy splittings) is a fundamental task in quantum metrology, with implications ranging from atomic spectroscopy [1; 2; 3] to time-keeping with atomic clocks [4; 5; 6]. In the context of Ramsey interferometry [7; 8] with a quantum sensor comprising \(N\) probes, the use of initial entangled states can yield asymptotic precision bounds which surpass the optimal \(N^{-1/2}\)_standard quantum limit_ (SQL) achievable classically. Under ideal conditions, the ultimate \(N^{-1}\) precision bound is set by the _Heisenberg limit_ (HL), and saturated by maximally entangled Greenberger-Horne-Zeilinger (GHZ) states [9; 10; 11].
In practice, noise inevitably degrades the attainable precision, to an extent that depends on the model specifics [12]. In many quantum sensors, dephasing noise which couples through the same operators as the signal provides the dominant noise mechanism. While no gain over the SQL can be achieved for independent Markovian noise (that is, noise with no spatial and temporal correlations) [9; 13], temporal correlations can be exploited to achieve a superclassical _Zeno limit_ (ZL) \(\propto N^{-3/4}\)[14; 15; 16]. Strong temporal correlations have been detected in a variety of systems via quantum noise spectroscopy experiments, see e.g., [17; 18; 19; 20; 21; 22; 23], including for non-classical noise environments [24; 25; 26]. Spatial noise correlations also tend to emerge due to probe proximity [27; 28; 29; 30], making noise substantially more harmful than uncorrelated one. For perfectly correlated (collective) Markovian noise, GHZ states are the most fragile, resulting in an \(N\)-independent precision scaling [28; 31], and sub-SQL scaling is also precluded in the non-Markovian regime [32; 33]. For noise with partial spatial correlations, superclassical precision scaling can be restored by tailored error-correcting codes in the Markovian case [34], or by means of a randomized protocol for general temporally correlated scenarios [33].
While, as the above shows, the impact of noise on the scaling of precision has been extensively studied, far less attention has been devoted to the fact that noise may also introduce unwanted _bias_, compromising accuracy if unaccounted for. Most research on bias in quantum metrology has focused on estimation in the regime of limited data, using Bayesian approaches [35; 36; 37]. More recently, in the context of Markovian noise, bias due to finite-frequency error-corrected sensing was addressed in [38] through post-processing, while a purification-based protocol was proposed in [39] to mitigate bias due to imperfect knowledge of the noise model.
Here, we focus on spatiotemporally correlated dephasing and show how, even in the ideal limit of infinite measurement statistics and perfect noise knowledge, standard estimators for both GHZ and one-axis twisted (OAT) states become systematically biased and possibly ill-defined. We introduce _ratio estimators_ which are insensitive to dephasing by construction and match the asymptotic \(N\)-scaling of standard estimators. While no noise knowledge is needed for constructing ratio estimators, we further show that access to the noise spectral features is key to _optimizing_ the achievable precision scaling. We examine a setting where \(N\) qubits are placed on a one-dimensional (1D) regular lattice with tunable separation and couple to a bosonic bath. We show that engineering _negative_ noise correlations yields a far greater scaling advantage than achievable via randomized protocols that spatially decorrelate the probes on average [33]. Remarkably, OAT states saturate the \(N^{-3/4}\) ZL which is optimal for independent non-Markovian dephasing [16], whereas a novel nearly-Heisenberg \(N^{-1}\sqrt{\log(N)}\) scaling emerges for GHZ states.
_Setting.--_ We consider \(N\) qubit probes, each coupled to the target frequency \(b\), and a "parallel" dephasing noise environment (or bath). In the interaction picture with respect to the free bath Hamiltonian, \(H_{\rm B}\), the joint evolution is generated by
\[H_{\rm SB}(t)=\frac{1}{2}\sum_{n=1}^{N}\,\sigma_{n}^{z}\otimes\left[\,b+B_{n}( t)\right], \tag{1}\]
where \(\sigma_{n}^{u}\), \(u\in\{x,y,z\}\) are Pauli matrices and the bath operators \(\{B_{n}(t)\}\) describe a noise process which we take to be zero-mean, stationary, and Gaussian. Under the assumption that the initial joint state is factorized, \(\rho_{\rm SB}(0)\equiv\rho_{0}\otimes\rho_{\rm B}\), with \([\rho_{\rm B},H_{\rm B}]=0\), the statistical properties of the noise are fully captured by the two-point correlation functions, \(C_{nm}(t)\equiv\langle B_{n}(t)B_{m}(0)\rangle=\mathrm{Tr}_{\rm B}\{B_{n}(t)B _{m}(0)\rho_{\rm B}\}\). For a classical noise environment, \(B_{n}(t)\) and \(\langle\bullet\rangle\) denote a stochastic commuting process and an ensemble average, respectively. In general, the noise is non-trivially correlated both in space, across different qubits, and in time (non-Markovian); "\(\delta\)-correlated" (Markovian) noise is included as a special case, \(C_{nm}(t)=c_{nm}\delta(t)\), for some Hermitian matrix \(c_{nm}\) that encodes the
spatial noise correlations. In the frequency domain, the presence of temporal correlations translates into colored classical (\(+\)) and quantum (\(-\)) noise spectra, given by \(S_{nm}^{\pm}(\omega)\equiv\int_{-\infty}^{\infty}\!dt\,e^{-i\omega t}\langle[B_{n }(t),B_{m}(0)]_{\pm}\rangle=S_{nm}(\omega)\!\pm\!S_{mn}(-\omega)\)[40].
Let \(|\vec{\alpha}\rangle\equiv\bigotimes_{n=1}^{N}|\alpha_{n}\rangle\), with \(\alpha_{n}=\pm 1\) corresponding to the eigenstates \(\{|\!\uparrow\rangle,|\!\downarrow\rangle\}\) of \(\sigma_{n}^{z}\), denote the \(z\) basis. The time-evolved state may then be represented as \(\langle\vec{\alpha}|\rho(t)|\vec{\beta}\rangle=e^{ibt\sum_{n=1}^{N}(\beta_{n} -\alpha_{n})}e^{-\gamma(t)+i\varphi_{0}(t)+i\varphi_{1}(t)}\langle\vec{\alpha} |\rho_{0}|\vec{\beta}\rangle,\) in terms of real functions \(\gamma(t)\), \(\varphi_{0}(t)\), \(\varphi_{1}(t)\) which involve products of a state-dependent component and a corresponding time-dependent dynamic coefficient [33]. Irrespective of the classical or quantum nature of the noise, \(\gamma(t)\) governs the decay of off-diagonal coherence elements, whereas phase evolution is distinctive of a quantum, non-commuting environment. One may show that \(\varphi_{0}(t)\) arises from a unitary "Lamb-shift" contribution due to bath-mediated entanglement between the qubits, while \(\varphi_{1}(t)\) is linked to whether the dephasing is "random unitary," hence classical [33]. In particular,
\[\gamma(t)\equiv\sum_{n,m=1}^{N}(\alpha_{n}-\beta_{n})(\alpha_{m}-\beta_{m}) \kappa_{nm}(t), \tag{2}\]
where the decay dynamic coefficient is expressible in terms of a frequency overlap integral:
\[\kappa_{nm}(t)=\frac{1}{32\pi}\,\int_{-\infty}^{\infty}\!\!d\omega\,\frac{ \sin^{2}(\omega t/2)}{\omega^{2}}\,S_{nm}^{+}(\omega). \tag{3}\]
Structurally similar expressions hold for \(\varphi_{0}(t)\) and \(\varphi_{1}(t)\), except for the fact that the quantum spectra \(S_{nm}^{-}(\omega)\) are now involved [33; 41]. Under the assumption that the spectra have vanishing support above a high-frequency cutoff, say, \(S_{nm}^{\pm}(\omega)\approx 0\) for \(|\omega|\gtrsim\omega_{c}\), Eq. (3) implies a quadratic dependence upon time of \(\kappa_{nm}(t)\), hence of \(\gamma(t)\), in the short-time limit \(\omega_{c}t\ll 1\). This contrasts with the linear behavior (\(\dot{\gamma}(t)=\text{const}\)) that arises in the formal limit \(\omega_{c}\rightarrow\infty\) of Markovian noise, described by semigroup dynamics [12].
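As a quick numerical illustration of this short-time behaviour, the following sketch (our illustration; it only checks the scaling, with prefactor conventions left as in the main text and not reproduced) evaluates the overlap integral of Eq. (3) for a single qubit with an Ohmic-type spectrum \(S^{+}(\omega)\propto(|\omega|/\omega_{c})^{s}e^{-|\omega|/\omega_{c}}\) and verifies that \(\kappa(t)/t^{2}\) approaches a constant for \(\omega_{c}t\ll 1\).

```
import numpy as np
from scipy.integrate import quad

def kappa(t, s=3.0, wc=1.0, alpha=1.0):
    """Overlap integral of Eq. (3) for a single qubit (t_nm = 0), up to prefactors."""
    def spectrum(w):
        # Ohmic-type spectrum with exponential cutoff (zero-temperature limit)
        return alpha * wc * (abs(w) / wc) ** s * np.exp(-abs(w) / wc)
    def integrand(w):
        # sin^2(w t / 2) / w^2 written via sinc to avoid the w = 0 singularity
        return np.sinc(w * t / (2 * np.pi)) ** 2 * (t / 2) ** 2 * spectrum(w)
    val, _ = quad(integrand, -np.inf, np.inf)
    return val / (32 * np.pi)

# kappa(t) / t^2 flattens out in the short-time limit wc * t << 1
for t in (0.5, 0.1, 0.02, 0.004):
    print(f"t = {t:6.3f}   kappa/t^2 = {kappa(t) / t**2:.6f}")
```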
_Noisy frequency estimation._-- In a Ramsey-type estimation protocol, the \(N\) qubit probes are initialized in a (possibly entangled) state \(\rho_{0}\); the system is then left to evolve under Hamiltonian (1) for an "encoding" period of duration \(\tau\), after which a suitable observable, say, \(\mathcal{O}\), is measured. This process is repeated \(\nu\equiv T/\tau\gg 1\) times over a total duration \(T\), resulting in a vector of independently distributed measurement outcomes \(\vec{\mu}\equiv\{\mu_{1},\ldots,\mu_{\nu}\}\), from which information about \(b\) is extracted. An _estimator_ \(\hat{b}(\vec{\mu})\) is a function of the measurement outcomes that associates each set of outcomes with an estimate of \(b\). Let the mean and variance of \(\hat{b}\) be denoted, respectively, by \(\langle\hat{b}(\vec{\mu})\rangle\) and \(\Delta\hat{b}^{2}(\vec{\mu})=\langle(\hat{b}(\vec{\mu})-\langle\hat{b}(\vec{\mu})\rangle)^{2}\rangle\), where expectations \(\langle\cdot\rangle\) are taken over all possible measurement outcomes. The estimator is _unbiased_ if \(\langle\hat{b}(\vec{\mu})\rangle=b\) and is _asymptotically unbiased_ if the bias vanishes as \(\nu\rightarrow\infty\)[42].
If \(\hat{b}\) is unbiased, the variance associated with measuring \(\mathcal{O}\) is lower-bounded by the _quantum Cramer-Rao bound_[43; 44], \(\Delta\hat{b}^{2}\geq\Delta\hat{b}^{2}_{\text{QCR}}=(\nu F_{\text{Q}}[\rho_{b }(\tau),\mathcal{O}])^{-1}\), where \(F_{\text{Q}}[\rho_{b}(\tau),\mathcal{O}]\) is the _quantum Fisher information_ (QFI) of the pre-measurement state \(\rho_{b}(\tau)\) with respect to \(\mathcal{O}\). In the absence of noise, the QCR of initial GHZ states, \(|\text{GHZ}\rangle\equiv\frac{1}{\sqrt{2}}(|\!\uparrow\rangle^{\otimes N}+| \!\downarrow\rangle^{\otimes N})\), saturates the HL [3]. Likewise, spin-squeezed states, where the variance of the collective spin \(J_{u}\equiv\frac{1}{2}\sum_{n=1}^{N}\sigma_{n}^{u}\) is reduced along a particular direction, can also scale superclassically and are easier to generate and measure experimentally [45]. Here, we consider OAT states [46; 47; 48], \(|\text{OATS}\rangle=e^{-i\beta J_{x}}e^{-i\theta J_{x}^{2}}|\text{CSS}\rangle_{x}\), with \(\beta\), \(\theta\) rotation and squeezing angles set to minimize the initial variance along \(y\), and coherent spin states in the \(x\) direction, \(|\text{CSS}\rangle_{x}\equiv|+\rangle^{\otimes N}\), \(\sigma_{x}|+\rangle=|+\rangle\). For such an OAT state, the precision \(\Delta\hat{b}\) resulting from measuring \(J_{y}\) scales like \(N^{-5/6}\) in the noiseless scenario. Noise has a major impact on the estimation precision that any state may attain. At a basic level, it also calls for reconsidering the estimators the protocol hinges upon.
_Noise-induced bias and ratio estimator._-- Assume that, as common in Ramsey interferometric detection, we work in a _local_ estimation regime, whereby small deviations from a known frequency \(b_{0}\) are sensed [44; 8]. A widely used estimation technique is then provided by the _method of moments_[49]. First, a known functional relation between the target parameter \(b\) and the mean value of \(\mathcal{O}\) is established, say, \(\langle\mathcal{O}(\tau)\rangle=f(b)\). An estimate of the mean is obtained in terms of the sample mean, \(\langle\hat{\mathcal{O}}(\tau)\rangle\equiv\hat{f}(b)=\sum_{i=1}^{\nu}\mu_{i}/\nu\), where the locality assumption implies \(\hat{f}(b)=f(b_{0})+\epsilon\), with \(\epsilon\ll 1\), and convergence \(\hat{\mathcal{O}}(\tau)\rightarrow\langle\mathcal{O}(\tau)\rangle\) is ensured asymptotically by the (weak) law of large numbers. The estimator is then constructed by inverting the functional relationship \(\hat{b}\equiv f^{-1}(\hat{\mathcal{O}})=f^{-1}(\hat{f}(b))\). The estimator uncertainty is computed via error propagation, \(\Delta\hat{b}(\tau)=\nu^{-1/2}\Delta\mathcal{O}(\tau)/|\partial_{b}\langle \mathcal{O}(\tau)\rangle|\), with \(\Delta\mathcal{O}(\tau)^{2}=\langle\mathcal{O}^{2}(\tau)\rangle-\langle\mathcal{ O}(\tau)\rangle^{2}\). Crucially, this assumes the function \(f\) to be _one-to-one_, at least in a neighborhood of \(b_{0}\). Despite being ubiquitous in the literature, this inversion procedure becomes problematic in the presence of noise.
To illustrate the issue, consider an initial GHZ state evolving under Hamiltonian (1). Due to its symmetry, \(|\text{GHZ}\rangle\) is insensitive to phase evolution (\(\varphi_{0}(t)=\varphi_{1}(t)\equiv 0\)), and the QFI can be computed exactly [33], yielding \(\Delta_{\text{QCR}}\hat{b}(\tau)\geq e^{\gamma_{\text{GHZ}}(\tau)}/(N\sqrt{T\tau})\), with \(\gamma_{\text{GHZ}}(\tau)=\sum_{n,m=1}^{N}\kappa_{nm}(\tau)\) determined by Eq. (3). The bound can be saturated by measuring the survival probability \(p_{b}(\tau)\). In the absence of noise, \(p_{b,0}(\tau)=\frac{1}{2}[1+\cos(Nb\tau)]\) which, for a vector of outcomes \(\vec{\mu}\) containing \(\nu_{+}\) detections such that \(\hat{p}_{b,0}(\tau)=\nu_{+}/\nu\), leads to the estimator \(\hat{b}_{0}(\tau)\equiv\arccos[2(\nu_{+}/\nu)-1]/(N\tau)\). When dephasing is present, \(p_{b}(\tau)=\frac{1}{2}[1+\cos(Nb\tau)e^{-\gamma_{\text{GHZ}}(\tau)}]\). Thus, using the same inversion formula as in the noiseless scenario renders the estimator \(\hat{b}_{0}(\tau)\) systematically _biased_. Assuming _perfect_ knowledge of the decay factor \(e^{-\gamma_{\text{GHZ}}(\tau)}\), one could consider an estimator \(\hat{b}^{\prime}(\tau)\equiv\arccos[e^{\gamma_{\text{GHZ}}(\tau)}(2(\nu_{+}/\nu)-1)]/(N\tau)\). This, however, yields an imaginary result whenever the absolute value of the arccos argument exceeds one, which, due to the factor \(e^{\gamma_{\text{GHZ}}(\tau)}\), generically happens for a non-vanishing set of outcomes. The estimator \(\hat{b}^{\prime}(\tau)\) is thus ill-defined and quantities like \(\langle\hat{b}^{\prime}(\vec{\mu})\rangle\) are no longer meaningful, causing the moment estimation technique to break down.
An asymptotically unbiased noise-robust estimator can be constructed, at the cost of slightly increasing the required resources, by combining results from _two_ distinct
initializations and detections, each involving \(\nu\) repetitions over time \(T\). For an initial GHZ state, suppose we also estimate the probability of finding \(|\mathrm{GHZ}\rangle\) in \(|\mathrm{GHZ^{\prime}}\rangle=\frac{1}{\sqrt{2}}(|\!\uparrow\rangle^{\otimes N}+i|\!\downarrow\rangle^{\otimes N})\), given by \(p^{\prime}_{b}(\tau)=\frac{1}{2}[1+e^{-\gamma_{\mathrm{GHZ}}(\tau)}\sin(Nb\tau)]\). We can then solve for \(b\) from the ratio \((p^{\prime}_{b}(\tau)-\frac{1}{2})/(p_{b}(\tau)-\frac{1}{2})\) which, crucially, _cancels_ the effect of decay. Thus, for two vectors of outcomes \(\vec{\mu},\vec{\mu}^{\prime}\) containing \(\nu_{+}\) and \(\nu^{\prime}_{+}\) detections and a fixed total time of \(2T\), a _ratio estimator_ may be defined as
\[\hat{b}_{\text{R}}(\tau)\equiv\arctan\big{[}(\nu^{\prime}_{+}/\nu-\tfrac{1}{2 })/(\nu_{+}/\nu-\tfrac{1}{2})\big{]}/(N\tau), \tag{4}\]
with the sample mean and variance taken over _two_ sets of possible measurement outcomes, e.g., \(\langle\hat{b}_{\text{R}}(\vec{\mu},\vec{\mu}^{\prime})\rangle\). By using error propagation and invoking the locality assumption as before, the estimator variance is found as [41]
\[\Delta\hat{b}_{\text{R}}^{2}(\tau)=\Big{[}e^{2\gamma_{\text{GHZ}}(\tau)}- \tfrac{1}{2}\sin(2Nb\tau)^{2}\Big{]}/(N^{2}T\tau), \tag{5}\]
leading to the same \(N\)-scaling as the usual QFI bound; see [41].
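For the reader's convenience, the error-propagation step behind Eq. (5) can be sketched as follows (our reconstruction; the full derivation is in [41]). Writing \(X=\nu_{+}/\nu-\tfrac{1}{2}\) and \(Y=\nu^{\prime}_{+}/\nu-\tfrac{1}{2}\), with mean values \(\bar{X}=\tfrac{1}{2}e^{-\gamma_{\text{GHZ}}(\tau)}\cos(Nb\tau)\), \(\bar{Y}=\tfrac{1}{2}e^{-\gamma_{\text{GHZ}}(\tau)}\sin(Nb\tau)\) and binomial variances \(\Delta X^{2}=p_{b}(1-p_{b})/\nu\), \(\Delta Y^{2}=p^{\prime}_{b}(1-p^{\prime}_{b})/\nu\),

\[\Delta\hat{b}_{\text{R}}^{2}(\tau)\approx\frac{1}{(N\tau)^{2}}\,\frac{\bar{X}^{2}\,\Delta Y^{2}+\bar{Y}^{2}\,\Delta X^{2}}{(\bar{X}^{2}+\bar{Y}^{2})^{2}}=\frac{e^{2\gamma_{\text{GHZ}}(\tau)}-\tfrac{1}{2}\sin^{2}(2Nb\tau)}{N^{2}\,\nu\,\tau^{2}},\]

which reduces to Eq. (5) upon substituting \(\nu=T/\tau\).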
In Fig. 1(left), the limiting estimator function for \(\hat{b}_{\text{R}}(\tau)\), \(\arctan[\tan(Nb\tau)]/(N\tau)\), is compared with an average over all measurement outcomes for two different numbers of repetitions, showing excellent asymptotic convergence. In Fig. 1(right), the analytic expression \(\Delta\hat{b}_{\text{R}}^{2}(\tau)\) is compared against the sample variance for finite \(\nu\), showing remarkable agreement in the region where the estimator is linear. Note that the sample variance diverges around the points where the estimator is discontinuous, forcing our prior knowledge of the frequency to be confined to an interval of length \(\pi/(N\tau)\).
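The behaviour summarized in Fig. 1 is straightforward to reproduce numerically. The sketch below (our illustration; all parameter values are arbitrary) samples the two measurement records from the exact probabilities \(p_{b}(\tau)\) and \(p^{\prime}_{b}(\tau)\), and contrasts the decay-corrected arccos estimator, which becomes ill-defined for a sizeable fraction of records, with the ratio estimator of Eq. (4).

```
import numpy as np

rng = np.random.default_rng(0)
N, b, tau, gamma, nu = 100, 0.01, 0.2, 0.6, 400      # example values only

p  = 0.5 * (1 + np.exp(-gamma) * np.cos(N * b * tau))   # survival probability p_b
pp = 0.5 * (1 + np.exp(-gamma) * np.sin(N * b * tau))   # probability p'_b of |GHZ'>

trials, naive_bad, ratio_est = 2000, 0, []
for _ in range(trials):
    f  = rng.binomial(nu, p)  / nu          # sample mean, first setting
    fp = rng.binomial(nu, pp) / nu          # sample mean, second setting
    # decay-corrected arccos estimator: its argument can exceed 1 in magnitude
    if abs(np.exp(gamma) * (2 * f - 1)) > 1:
        naive_bad += 1                      # ill-defined for this record
    # ratio estimator, Eq. (4): the decay factor cancels in the ratio
    # (arctan2 is used instead of arctan only for numerical robustness)
    ratio_est.append(np.arctan2(fp - 0.5, f - 0.5) / (N * tau))

print(f"arccos estimator ill-defined in {naive_bad} of {trials} records")
print(f"ratio estimator: mean = {np.mean(ratio_est):.5f} (true b = {b}), "
      f"std = {np.std(ratio_est):.5f}")
```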
A similar noise-robust ratio estimator can be constructed for a Ramsey setup employing initial OAT states, by measuring along two transverse angular momentum components, \(J_{x}\) and \(J_{y}\). The resulting uncertainty scaling is a factor of \(\sqrt{2}\) larger than the one obtained via the method of moments for the same fixed total measurement time \(2T\), while circumventing potential inversion issues. For additional details, we refer the reader to the Supplement [41]. Having established that noise-induced bias may be circumvented by constructing an appropriately modified estimator, we now turn to the problem of maximizing the achievable estimation precision, by leveraging knowledge of the noise toward optimal sensor design.
_Noise-tailored lattice sensor.--_ As a concrete illustrative setting, consider a 1D dephasing spin-boson model, whereby the qubits interact with a collection of oscillators vibrating at frequencies \(\Omega_{k}\), in thermal equilibrium at inverse temperature \(\beta\). Then, \(B_{n}(t)=\sum_{k}(g_{k}\,e^{ikr_{n}}e^{i\Omega_{k}t}\,b_{k}^{\dagger}+\text{H.c.})\), in terms of bosonic operators \(b_{k}\), \(b_{k}^{\dagger}\), with \(g_{k}\in\mathbb{R}\) being a coupling strength for momentum mode \(k>0\). We assume a linear dispersion \(\Omega_{k}\equiv vk\), with \(v>0\) a speed parameter, and envision the qubits to be arranged in a regular lattice with tunable spacing \(x_{0}\), that is, the position of qubit \(n\) obeys \(r_{n}=nx_{0}\), \(1\leq n\leq N\). For a qubit pair \(n,m\), a _transit time_ proportional to their distance may then be defined by \(t_{nm}\equiv|n-m|x_{0}/v\).
Notably, for the above setting we have \(\varphi_{1}(t)\equiv 0\)[33]. The dynamic coefficients determining \(\gamma(t)\) are given in Eqs. (2)-(3), and, similarly, \(\varphi_{0}(t)\equiv\sum_{n,m=1}^{N}(\beta_{n}\beta_{m}-\alpha_{n}\alpha_{m})\xi_{nm}(t)\). To compute \(\kappa_{nm}(t)\) and \(\xi_{nm}(t)\), we further assume a continuum of bosonic modes, characterized by a spectral density of the form \(J(\omega)\equiv\alpha\omega_{c}(\omega/\omega_{c})^{s}e^{-\omega/\omega_{c}}\), with \(\alpha>0\) a dimensionless strength constant and \(s>0\) the so-called Ohmicity parameter. The noise classical and quantum spectra are then calculated as \(S^{+}_{nm}(\omega)=4\pi J(\omega)\cos(\omega t_{nm})\coth(\beta|\omega|/2)\), \(S^{-}_{nm}(\omega)=4\pi J(\omega)\cos(\omega t_{nm})\text{sgn}(\omega)\), where \(\text{sgn}(\omega)\) is the sign function. We assume that the operating temperature is sufficiently low for thermal effects to be negligible, \(\coth(\beta|\omega|/2)\approx 1\). The cutoff frequency \(\omega_{c}\) then defines a metrologically relevant short-time limit, given by \(\omega_{c}t\ll 1\), where the quantum advantage in estimation precision may be maximized [32; 33; 15]. In this regime, the dynamic coefficients can be approximated by \(\kappa_{nm}(t)\approx\kappa_{0}^{2}(\omega_{c}t)^{2}\delta_{1}(|n-m|x_{0})\) and \(\xi_{nm}(t)\approx\xi_{0}^{3}(\omega_{c}t)^{3}\delta_{2}(|n-m|x_{0})\), with constant factors \(\kappa_{0}^{2}\equiv\alpha\,\Gamma(s+1),\xi_{0}^{3}\equiv(\alpha/6)\,\Gamma(s+2)\) (\(\Gamma\) being the Euler Gamma function), and their temporal dependence factored from the relevant spatial correlations:
\[\delta_{1}(|n-m|x_{0}) = u^{s+1}\,T_{s+1}(u), \tag{6}\] \[\delta_{2}(|n-m|x_{0}) = u^{s+2}\,T_{s+2}(u), \tag{7}\]
where \(u\equiv[1+(|n-m|x_{0})^{2}]^{-1/2}\) and \(T_{n}(u)\) denotes the \(n\)-th order Chebyshev polynomial of the first kind. Provided that knowledge of the spectral density (hence the spatial correlation functional forms \(\delta_{\ell}\)) is available, and \(s>1\), the spatial correlations can be made _negative_ by tuning the distance \(x_{0}\). We now show how this leads to a drastic improvement in the scaling of the frequency uncertainty with respect to both collective noise [32] and our previous randomization protocol [33].
_Initial GHZ state_. To determine the GHZ optimal precision, we minimize the uncertainty in Eq. (5) with respect to time and lattice parameter \(x_{0}\) for \(Nb\tau=(2k+1)\,\pi/4\), \(k\in\mathbb{N}\). Since \(\varphi_{0}(t)\equiv 0\) for \(|\mathrm{GHZ}\rangle\), \(\xi_{nm}(t)\equiv 0\) as well. Taking advantage of the fact that, in the short-time limit, \(\gamma_{\text{GHZ}}(t)\approx(\omega_{c}t)^{2}F_{N}(x_{0})\), with \(F_{N}(x_{0})\equiv\kappa_{0}^{2}\sum_{n,m}^{N}\delta_{1}(x_{0}|n-m|)\), the minimization may be carried out with respect to each variable separately. Replacing in Eq. (5) and optimizing with respect to \(\tau\) leads to the optimal measurement time \(\tau_{\text{opt}}^{\text{GHZ}}=\tfrac{1}{2}\omega_{c}^{-1}F_{N}(x_{0})^{-1/2}\). It follows that the best sensing performance is achieved by minimizing the spatial function for the time-optimized uncertainty, \(\Delta\hat{b}_{\text{R}}(\tau_{\text{opt}}^{\text{GHZ}})\approx 2.96\,\sqrt{\omega_{c}/T}\,F_{N}(x_{0})^{1/4}\,N^{-1}\). While the details are provided
Figure 1: (Color online) **Performance of ratio estimator.** Left: Ratio estimator’s sample mean for \(\nu=30\) (red, solid), \(\nu=400\) (blue, dashed), and analytic \(\nu\to\infty\) mean value (grey, solid). Right: Ratio estimator’s sample variance for \(\nu=400\) and limiting analytic expression (grey, solid). In both cases, a GHZ state of \(N=100\) qubits is considered, subject to spin-boson dephasing from a 1D zero-temperature noise environment, with parameters: \(\alpha=1\), \(s=3\), \(\omega_{c}=1\), \(v=1\). Lattice spacing \(x_{0}=0.4296\) and measurement time \(\tau=0.2067\) are chosen to minimize uncertainty.
in [41], the resulting approximate (analytically) minimized spatial function, \(F_{N}(x_{0\text{ opt}}^{\text{GHZ}})\), can be shown to scale logarithmically in the \(N\gg 1\) limit: \(F_{N}(x_{0\text{ opt}}^{\text{GHZ}})\propto\mathcal{O}(\log(N)^{2})\). Accordingly, the optimal asymptotic sensing performance is
\[\Delta\hat{b}_{\text{R\, opt}}^{\text{GHZ}}\approx 2.96\,(\omega_{c}/T)^{1/2}\, \sqrt{\log(N)}\,N^{-1}, \tag{8}\]
which is _closer to Heisenberg scaling than any power law_.
Figure 2(left) demonstrates that the agreement between the analytic expression in Eq. (8) and the exact numerical optimization over \(x_{0},\tau\) is excellent even at finite \(N\gtrsim 20\), despite the fact that discrepancies between the lattice parameter that numerically minimizes \(g_{1}\) and its analytic approximation \(x_{0\text{ an}}^{\text{GHZ}}\) vanish only asymptotically. This indicates a high degree of robustness against deviations from the exact optimal lattice parameter and, in turn, against uncertainty in the characterization of the underlying noise spectral density.
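The spatial optimization above is simple to reproduce. The sketch below (our illustration, for the parameters of Fig. 1) evaluates \(\delta_{1}\) of Eq. (6), showing that it turns negative for suitable separations when \(s>1\), and numerically minimizes the spatial function \(F_{N}(x_{0})=\kappa_{0}^{2}\sum_{n,m}\delta_{1}(x_{0}|n-m|)\) over the lattice spacing.

```
import numpy as np
from numpy.polynomial import chebyshev
from scipy.special import gamma as Gamma

def delta1(d, s=3):
    """Spatial correlation of Eq. (6): u^(s+1) T_(s+1)(u), with u = 1/sqrt(1+d^2)."""
    u = 1.0 / np.sqrt(1.0 + np.asarray(d, dtype=float) ** 2)
    coeffs = np.zeros(s + 2)
    coeffs[s + 1] = 1.0                      # selects the Chebyshev polynomial T_(s+1)
    return u ** (s + 1) * chebyshev.chebval(u, coeffs)

def F_N(x0, N=100, s=3, alpha=1.0):
    """Spatial function F_N(x0) = kappa_0^2 * sum_{n,m} delta1(x0 |n-m|)."""
    idx = np.arange(N)
    dist = np.abs(idx[:, None] - idx[None, :]) * x0
    return alpha * Gamma(s + 1) * delta1(dist, s).sum()

# delta1 becomes negative at suitable separations (s > 1) ...
print([round(float(delta1(d)), 4) for d in (0.0, 0.5, 1.0, 2.0)])
# ... so F_N(x0) can be minimized over the lattice spacing x0
grid = np.linspace(0.05, 2.0, 400)
vals = [F_N(x) for x in grid]
print("optimal x0 ~", round(float(grid[int(np.argmin(vals))]), 3),
      " F_N at optimum ~", round(float(min(vals)), 3))
```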
_Initial OAT state._ We now analyze the performance of an initial OATS optimally squeezed along \(y\). Unlike GHZ states, OATS are not immune to the genuinely quantum contribution of the noise. For clarity, let us first consider the classical decay contribution, described as before by Eqs. (2)-(3), and assess the impact of phase \(\varphi_{0}(t)\) at a later stage. As detailed in [41], by performing a cumulant expansion over appropriate qubit operators [32], we may derive approximate expressions for the mean values \(\langle J_{v}(t)\rangle\), \(\langle J_{v}^{2}(t)\rangle\), \(v=x,y\), which are remarkably accurate in the short-time limit \(\omega_{c}t\ll 1\). It is then possible to expand the uncertainty in powers of \(t\), \(\Delta\hat{b}_{\text{R}}(t)\approx(\sqrt{Tt}\,h_{0}(N))^{-1}\left(\sum_{k=0}^ {2}a_{2k}(N,x_{0})\,(\omega_{c}t)^{2k}\right)^{1/2}\), where the coefficients \(h_{0}(N)\), \(a_{0}(N,x_{0})\), \(a_{2}(N,x_{0})\), and \(a_{4}(N,x_{0})\) can be evaluated analytically by the method used to compute the spatial function \(F_{N}(x_{0})\) for the GHZ state. This expansion is accurate in the region where the minimum occurs (see inset of Fig. 2(right)), and once again separates the spatial and temporal dependence of the uncertainty into two distinct contributions. The best performance can then be extracted by minimizing the expansion with respect to time and, subsequently, with respect to the lattice parameter [41]. For \(s>1\), we find that the noise-tailored sensor reaches the ZL,
\[\Delta\hat{b}_{\text{R\, opt}}^{\text{OATS}}\approx\omega_{c}/(3T)^{1/2}\, \Gamma(s+1)^{1/4}N^{-3/4}. \tag{9}\]
This is a mere \(N^{-1/12}\) scaling loss with respect to the noiseless scenario, and is found to be in full agreement with numerical minimization, see Fig. 2(right).
Accounting for the effect of quantum noise makes the expression for the mean values significantly more involved. This prevents us from deriving a tractable short-time expansion for the uncertainty and lengthens the computational time required to evaluate \(\Delta\hat{b}_{\text{R}}(\tau)\) numerically, limiting the accessible values of \(N\). Still, Fig. 2(right) (inset) shows that the uncertainty as a function of time has a nearly identical behavior as in the presence of classical noise alone, with the contribution of \(\varphi_{0}(t)\) entering as a small correction. In line with the above, numerical minimization in the range of \(N\) we were able to test leads to similar values for the optimal uncertainty as when quantum noise was disregarded, see Fig. 2(right). Such a behavior is plausible in light of the fact that, as in the GHZ case, \(\xi_{nm}(t)\propto(\omega_{c}t)^{3}\), as compared to \(\kappa_{nm}(t)\propto(\omega_{c}t)^{2}\), which causes the decay contributions to dominate in the short-time limit. Altogether, this strongly suggests that a ZL as in Eq. (9) is realized asymptotically, when quantum noise is included.
_Outlook.--_ We have constructed a noise-robust ratio estimator whose precision saturates the ZL for an OAT state under spin-boson dephasing with tunable spatial correlations, and reaches a novel scaling which is a \(\log(N)^{1/2}\) factor away from the HL for a GHZ state. This hints at the possibility that the HL may be reachable by further optimizing the initial state. We leave this to future study, along with the investigation of potential experimental realizations of our tunable lattice model, for instance in trapped-ion settings [50; 51].
Figure 2: **Noise-optimized superclassical precision scaling.** Left: GHZ optimal uncertainty vs. qubit number. Grey circles: Exact numerical optimization. Blue, solid line: Analytic expression for \(\Delta\hat{b}_{\text{opt}}^{\text{GHZ}}\). Orange, dashed: Asymptotic scaling limit, Eq. (8). Inset: GHZ uncertainty vs. time for \(N=100\) qubits and optimal lattice spacing \(x_{0}=0.4296\). Grey, solid: Exact expression. Blue, dashed: Short-time approximate uncertainty. Right: Optimal uncertainty vs. qubit number for an OATS state ideally squeezed along \(y\) before evolution. Blue triangles: Numerical optimization for purely classical noise. Gray circles: Numerical optimization including quantum noise. Orange, dashed: Asymptotic scaling limit, Eq. (9). Inset: OATS uncertainty vs. time for \(N=30\) qubits and optimal spacing \(x_{0}=0.46\). Blue, dashed: Purely classical dephasing. Grey, solid: Classical and quantum dephasing. Orange, dotted: Short-time approximate uncertainty. Noise parameters as in Fig. 1.
It is a pleasure to thank Augusto Smerzi for valuable input and clarifications, and Felix Beaudoin for a critical reading of the manuscript. L.V. also acknowledges early discussions with Felix Beaudoin on the issue of noise-induced bias. Work at Dartmouth was supported by the US NSF through Grant No. PHY-2013974, and the Constance and Walter Burke Special Projects Fund in QIS. Work at Griffith was supported (partially) by the Australian Government through the Australian Research Council's Discovery Projects funding scheme (project No. DP210102291).
|
2306.08680 | Temporally Extended Goal Recognition in Fully Observable
Non-Deterministic Domain Models | Goal Recognition is the task of discerning the correct intended goal that an
agent aims to achieve, given a set of goal hypotheses, a domain model, and a
sequence of observations (i.e., a sample of the plan executed in the
environment). Existing approaches assume that goal hypotheses comprise a single
conjunctive formula over a single final state and that the environment dynamics
are deterministic, preventing the recognition of temporally extended goals in
more complex settings. In this paper, we expand goal recognition to temporally
extended goals in Fully Observable Non-Deterministic (FOND) planning domain
models, focusing on goals on finite traces expressed in Linear Temporal Logic
(LTLf) and Pure Past Linear Temporal Logic (PLTLf). We develop the first
approach capable of recognizing goals in such settings and evaluate it using
different LTLf and PLTLf goals over six FOND planning domain models. Empirical
results show that our approach is accurate in recognizing temporally extended
goals in different recognition settings. | Ramon Fraga Pereira, Francesco Fuggitti, Felipe Meneguzzi, Giuseppe De Giacomo | 2023-06-14T18:02:00Z | http://arxiv.org/abs/2306.08680v1 | # Temporally Extended Goal Recognition in Fully Observable Non-Deterministic Domain Models
###### Abstract
_Goal Recognition_ is the task of discerning the correct intended goal that an agent aims to achieve, given a set of goal hypotheses, a domain model, and a sequence of observations (i.e., a sample of the plan executed in the environment). Existing approaches assume that goal hypotheses comprise a single conjunctive formula over a single final state and that the environment dynamics are deterministic, preventing the recognition of temporally extended goals in more complex settings. In this paper, we expand goal recognition to _temporally extended goals_ in _Fully Observable Non-Deterministic_ (fond) planning domain models, focusing on goals on finite traces expressed in _Linear Temporal Logic_ (ltl\({}_{f}\)) and _Pure Past Linear Temporal Logic_ (ppltl). We develop the first approach capable of recognizing goals in such settings and evaluate it using different ltl\({}_{f}\) and ppltl goals over six fond planning domain models. Empirical results show that our approach is accurate in recognizing temporally extended goals in different recognition settings.
## 1 Introduction
_Goal Recognition_ is the task of recognizing the intentions of autonomous agents or humans by observing their interactions in an environment. Existing work on goal and plan recognition addresses this task over several different types of domain settings, such as plan libraries (Avrahami-Zilberbrand and Kaminka, 2005), plan tree grammars (Geib and Goldman, 2009), classical planning domain models (Ramirez and Geffner, 2009, 2010; Sohrabi et al, 2016; Pereira et al, 2020), stochastic environments (Ramirez and Geffner, 2011), continuous domain models (Kaminka et al, 2018), incomplete discrete domain models (Pereira et al, 2019), and approximate control models (Pereira et al, 2019). Despite the ample literature and recent advances, most existing approaches to _Goal Recognition as Planning_ cannot recognize _temporally extended goals_, i.e., goals formalized in terms of time, e.g., the exact order in which a set of facts of a goal must be achieved in a plan. Recently, Aineto et al (2021) propose a general formulation of a temporal inference problem in deterministic planning settings. However, most of these approaches also assume that the observed actions' outcomes are deterministic and do not deal with unpredictable, possibly adversarial, environmental conditions.
Research on planning for _temporally extended goals_ in _deterministic_ and _non-deterministic_ domain settings has increased over the years, starting with the pioneering work on planning for temporally extended goals (Bacchus and Kabanza, 1998) and on planning via model checking (Cimatti et al, 1997). This continued with the work on integrating ltl goals into planning tools (Patrizi et al, 2011, 2013), and, most recently, the work of Bonassi et al (2023), introducing a novel Pure-Past Linear Temporal Logic encoding for planning in the _Classical Planning_ setting. Other existing work relates ltl goals with _synthesis_ for planning in non-deterministic domain models, often focused on the _finite trace_ variants of ltl (De Giacomo and Vardi, 2013, 2015;
Camacho et al, 2017, 2018; De Giacomo and Rubin, 2018; Aminof et al, 2020).
In this paper, we introduce the task of goal recognition in _discrete domains_ that are _fully observable_ and in which the outcomes of actions and observations are _non-deterministic_, possibly adversarial, i.e., _Fully Observable Non-Deterministic_ (fond), allowing the formalization of _temporally extended goals_ using two types of temporal logic on finite traces: _Linear-time Temporal Logic_ (ltl\({}_{f}\)) and _Pure-Past Linear-time Temporal Logic_ (ppltl) (De Giacomo et al, 2020).
The main contribution of this paper is three-fold. First, based on the definition of _Plan Recognition as Planning_ introduced in (Ramirez and Geffner, 2009), we formalize _the problem of recognizing temporally extended goals_ (expressed in ltl\({}_{f}\) or ppltl) in fond planning domains, handling both stochastic (i.e., strong-cyclic plans) and adversarial (i.e., strong plans) environments (Aminof et al, 2020). Second, we extend the probabilistic framework for goal recognition proposed in (Ramirez and Geffner, 2010), and develop a novel _probabilistic approach_ that reasons over executions of policies and returns a posterior probability distribution for the goal hypotheses. Third, we develop a _compilation approach_ that generates an augmented fond planning problem by compiling temporally extended goals together with the original planning problem. This compilation allows us to use any off-the-shelf fond planner to perform the recognition task in fond planning models with temporally extended goals.
We focus on fond domain models with stochastic non-determinism, and conduct an extensive set of experiments with different complex planning problems. We empirically evaluate our approach using different ltl\({}_{f}\) and ppltl goals over six fond planning domain models, including a real-world non-deterministic domain model (Nebel et al, 2013), and our experiments show that our approach is accurate in recognizing temporally extended goals in two different recognition settings: _offline recognition_, in which the recognition task is performed in "one-shot", and the observations are given at once and may contain missing information; and _online recognition_, in which the observations are received incrementally, and the recognition task is performed gradually.
## 2 Preliminaries
In this section, we briefly recall the syntax and semantics of _Linear-time Temporal Logics_ on finite traces (ltl\({}_{f}\)/ppltl) and review the concept and terminology of fond planning.
### Ltl\({}_{f}\) and Ppltl
_Linear Temporal Logic on finite traces_ (ltl\({}_{f}\)) is a variant of ltl introduced in (Pnueli, 1977) interpreted over _finite traces_. Given a set \(AP\) of atomic propositions, the syntax of ltl\({}_{f}\) formulas \(\varphi\) is defined as follows:
\[\varphi:=a\mid\neg\varphi\mid\varphi\land\varphi\mid\mathtt{O}\varphi\mid \varphi\,\mathcal{U}\,\varphi\]
where \(a\) denotes an atomic proposition in \(AP\), \(\mathtt{O}\) is the _next_ operator, and \(\mathcal{U}\) is the _until_ operator. Apart from the Boolean connectives, we use the following abbreviations: _eventually_ as \(\Diamond\varphi\doteq\textit{true}\,\mathcal{U}\,\varphi\); _always_ as \(\Box\varphi\doteq\neg\,\Diamond\neg\varphi\); _weak next_ \(\bullet\varphi\doteq\neg\mathtt{O}\neg\varphi\). A trace \(\tau=\tau_{0}\tau_{1}\)... is a sequence of propositional interpretations, where \(\tau_{m}\in 2^{AP}\) \((m\geq 0)\) is the \(m\)-th interpretation of \(\tau\), and \(|\tau|\) is the length of \(\tau\). We denote a finite trace formally as \(\tau\in(2^{AP})^{\star}\). Given a finite trace \(\tau\) and an ltl\({}_{f}\) formula \(\varphi\), we inductively define when \(\varphi\) _holds_ in \(\tau\) at position \(i\) \((0\leq i<|\tau|)\), written \(\tau,i\models\varphi\), as follows:
* \(\tau,i\models a\) iff \(a\in\tau_{i}\);
* \(\tau,i\models\neg\varphi\) iff \(\tau,i\not\models\varphi\);
* \(\tau,i\models\varphi_{1}\land\varphi_{2}\) iff \(\tau,i\models\varphi_{1}\) and \(\tau,i\models\varphi_{2}\);
* \(\tau,i\models\mathtt{O}\varphi\) iff \(i+1<|\tau|\) and \(\tau,i+1\models\varphi\);
* \(\tau,i\models\varphi_{1}\,\mathcal{U}\,\varphi_{2}\) iff there exists \(j\) such that \(i\leq j<|\tau|\) and \(\tau,j\models\varphi_{2}\), and for all \(k,\ i\leq k<j\), we have \(\tau,k\models\varphi_{1}\).
An ltl\({}_{f}\) formula \(\varphi\) is _true_ in \(\tau\), denoted by \(\tau\models\varphi\), iff \(\tau,0\models\varphi\). As advocated in (De Giacomo et al, 2020), we also use the _pure-past_ version of ltl\({}_{f}\), here denoted as ppltl, due to its compelling computational advantage compared to ltl\({}_{f}\) when goal specifications are _naturally_ expressed in a past fashion. ppltl refers _only_ to the past and has a natural interpretation on finite traces: formulas are satisfied if they hold in the current (i.e., last) position of the trace.
Given a set \(AP\) of propositional symbols, ppltl formulas are defined by:
\[\varphi:=a\mid\neg\varphi\mid\varphi\land\varphi\mid\ominus\varphi\mid\varphi\,\mathcal{S}\,\varphi\]
where \(a\in AP\), \(\ominus\) is the _before_ operator, and \(\mathcal{S}\) is the _since_ operator. Similarly to ltl\({}_{f}\), common abbreviations are the _once_ operator \(\blacklozenge\varphi\doteq\textit{true}\,\mathcal{S}\,\varphi\) and the _historically_ operator \(\blacksquare\varphi\doteq\neg\blacklozenge\neg\varphi\). Given a finite trace \(\tau\) and a ppltl formula \(\varphi\), we inductively define when \(\varphi\) _holds_ in \(\tau\) at position \(i\) \((0\leq i<|\tau|)\), written \(\tau,i\models\varphi\), as follows. For atomic propositions and Boolean operators it is as for ltl\({}_{f}\). For past operators:
* \(\tau,i\models\ominus\varphi\) iff \(i-1\geq 0\) and \(\tau,i-1\models\varphi\);
* \(\tau,i\models\varphi_{1}\,\mathcal{S}\,\varphi_{2}\) iff there exists \(k\) such that \(0\leq k\leq i\) and \(\tau,k\models\varphi_{2}\), and for all \(j\), \(k<j\leq i\), we have \(\tau,j\models\varphi_{1}\).
A ppltl formula \(\varphi\) is _true_ in \(\tau\), denoted by \(\tau\models\varphi\), if and only if \(\tau,|\tau|-1\models\varphi\). A key property of temporal logics that we exploit in this work is that, for every ltl\({}_{f}\)/ppltl formula \(\varphi\), there exists a _Deterministic Finite-state Automaton_ (DFA) \(\mathcal{A}_{\varphi}\) accepting the traces \(\tau\) satisfying \(\varphi\) (De Giacomo and Vardi, 2013; De Giacomo et al, 2020).
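To make the finite-trace semantics above concrete, the following sketch (our illustration, not part of the tooling used in this paper) evaluates ltl\({}_{f}\) and ppltl formulas, represented as nested tuples, over a trace given as a list of sets of atomic propositions; only the operators introduced above are covered.

```
def holds(phi, trace, i):
    """Finite-trace satisfaction: does formula phi hold in trace at position i?"""
    op = phi[0]
    if op == "true":   return True
    if op == "atom":   return phi[1] in trace[i]
    if op == "not":    return not holds(phi[1], trace, i)
    if op == "and":    return holds(phi[1], trace, i) and holds(phi[2], trace, i)
    if op == "next":   return i + 1 < len(trace) and holds(phi[1], trace, i + 1)
    if op == "until":                       # LTLf: phi1 U phi2
        return any(holds(phi[2], trace, j) and
                   all(holds(phi[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    if op == "before":                      # PPLTL: phi held at the previous instant
        return i - 1 >= 0 and holds(phi[1], trace, i - 1)
    if op == "since":                       # PPLTL: phi1 S phi2
        return any(holds(phi[2], trace, k) and
                   all(holds(phi[1], trace, j) for j in range(k + 1, i + 1))
                   for k in range(0, i + 1))
    raise ValueError(f"unknown operator: {op}")

eventually = lambda p: ("until", ("true",), p)    # eventually p  =  true U p
once       = lambda p: ("since", ("true",), p)    # once p        =  true S p

trace = [{"vAt_11"}, {"vAt_21"}, {"vAt_51"}]
print(holds(eventually(("atom", "vAt_51")), trace, 0))            # LTLf: position 0
print(holds(once(("atom", "vAt_11")), trace, len(trace) - 1))     # PPLTL: last position
```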
### Fond Planning
A _Fully Observable Non-deterministic Domain_ planning model (fond) is a tuple \(\mathcal{D}=(2^{\mathcal{F}},A,\alpha,tr)\)(Geffner and Bonet, 2013), where \(2^{\mathcal{F}}\) is the set of possible states and \(\mathcal{F}\) is a set of fluents (atomic propositions); \(A\) is the set of actions; \(\alpha(s)\subseteq A\) is the set of applicable actions in a state \(s\); and \(tr(s,a)\) is the non-empty set of successor states that follow action \(a\) in state \(s\). A domain \(\mathcal{D}\) is assumed to be compactly represented (e.g., in PDDL (McDermott et al, 1998)), hence its size is \(|\mathcal{F}|\). Given the set of _literals_ of \(\mathcal{F}\) as \(\mathit{Literals}(\mathcal{F})=\mathcal{F}\cup\{\neg f\mid f\in\mathcal{F}\}\), every action \(a\in A\) is usually characterized by \((\mathit{Pre}_{a},\mathit{Eff}_{a})\), where \(\mathit{Pre}_{a}\subseteq\mathit{Literals}(\mathcal{F})\) is the action preconditions, and \(\mathit{Eff}_{a}\) is the action effects. An action \(a\) can be applied in a state \(s\) if the set of fluents in \(\mathit{Pre}_{a}\) holds true in \(s\). The result of applying \(a\) in \(s\) is a successor state \(s^{\prime}\) non-deterministically drawn from one of the \(\mathit{Eff}_{a}^{\ast}\) in \(\mathit{Eff}_{a}=\{\mathit{Eff}_{a}^{1},...,\mathit{Eff}_{a}^{n}\}\). In fond planning, some actions have _uncertain outcomes_, such that they have _non-deterministic_ effects (i.e., \(|tr(s,a)|\geq 1\) in all states \(s\) in which \(a\) is applicable), and effects cannot be predicted in advance. PDDL expresses uncertain outcomes using the oneof(Bryce and Buffet, 2008) keyword, as widely used by several fond planners (Mattmuller et al, 2010; Muise et al, 2012). We define fond planning problems as follows.
Definition 1: A _fond_ planning problem is a tuple \(\mathcal{P}=(\mathcal{D},s_{0},G)\), where \(\mathcal{D}\) is a _fond_ domain model, \(s_{0}\) is an initial assignment to fluents in \(\mathcal{F}\) (i.e., initial state), and \(G\subseteq\mathcal{F}\) is the goal state.
Solutions to a fond planning problem \(\mathcal{P}\) are _policies_. A policy is usually denoted as \(\pi\), and formally defined as a partial function \(\pi:2^{\mathcal{F}}\to A\) mapping _non-goal_ states into applicable actions that eventually reach the goal state \(G\) from the initial state \(s_{0}\). A _policy_\(\pi\) for \(\mathcal{P}\) induces a set of possible _executions_\(\bar{E}=\{\bar{e}_{1},\bar{e}_{2},\dots\}\), that are state trajectories, possibly finite (i.e., histories) \((s_{0},\dots,s_{n})\), where \(s_{i+1}\in tr(s_{i},a_{i})\) and \(a_{i}\in\alpha(s_{i})\) for \(i=0,\dots,n-1\), or possibly infinite \(s_{0},s_{1},\dots\), obtained by choosing some possible outcome of actions instructed by the policy. A policy \(\pi\) is a solution to \(\mathcal{P}\) if every generated execution is such that it is finite and satisfies the goal \(G\) in its last state, i.e., \(s_{n}\vDash G\). In this case, we say that \(\pi\) is _winning_. Cimatti et al (2003) define three solutions to fond planning problems: _weak, strong_ and _strong-cyclic_ solutions. We formally define such solutions in Definitions 2, 4, and 3.
Definition 2: A _weak solution_ is a policy that achieves the goal state \(G\) from the initial state \(s_{0}\) under at least one selection of action outcomes; namely, such a solution has some chance of achieving the goal state \(G\).
Definition 3: A _strong-cyclic solution_ is a policy that guarantees to achieve the goal state \(G\) from the initial state \(s_{0}\) only under the assumption of fairness1. However, this type of solution may revisit states, so the solution cannot guarantee to achieve the goal state \(G\) in a fixed number of steps.
Footnote 1: The fairness assumption defines that all action outcomes in a given state have a non-zero probability.
Definition 4: A _strong solution_ is a policy that is guaranteed to achieve the goal state \(G\) from the initial state \(s_{0}\) regardless of the environment's non-determinism. This type of solution guarantees to achieve the goal state \(G\) in a finite number of steps while never visiting the same state twice.
In this work, we focus on _strong-cyclic solutions_, where the environment acts in an unknown but stochastic way. Nevertheless, our recognition approach applies to strong solutions as well, where the environment is purely adversarial (i.e., the environment may always choose effects against the agent).
As a running example, we use the well-known fond domain model called Triangle-Tireworld, where locations are connected by roads, and the agent can drive through them. The objective is to drive from one location to another. However, while driving between locations, a tire may go flat, and if there is a spare tire in the car's location, then the car can use it to fix the flat tire. Figure 1a illustrates a fond planning problem for the Triangle-Tireworld domain, where circles are locations, arrows represent roads, spare tires are depicted as tires, and the agent is depicted as a car. Figure 1b shows a policy \(\pi\) to achieve location \(22\). Note that, to move from location \(11\) to location \(21\), there are two arrows labeled with the action (move 11 21): (1) when moving does not cause the tire to go flat; (2) when moving causes the tire to go flat. The policy depicted in Figure 1b guarantees the success of achieving location \(22\) despite the environment's non-determinism.
In this work, we assume from _Classical Planning_ that the cost is \(1\) for all _non-deterministic_ instantiated actions \(a\in A\). In this example, policy \(\pi\), depicted in
Figure 1b, has two possible finite executions in the set of executions \(\tilde{E}\), namely \(\tilde{E}=\{\tilde{e}_{0},\tilde{e}_{1}\}\) (also reproduced by the simulation sketch after this list), such as:
* \(\tilde{e}_{0}\): [(move 11 21), (move 21 22)]; and
* \(\tilde{e}_{1}\): [(move 11 21), (changetire 21), (move 21 22)].
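Under the fairness assumption, these two executions are exactly what a simulation of the policy in Figure 1b produces, as in the sketch below (our illustration; the state encoding and the tire-flat probability are arbitrary, and only the action names from the text are used).

```
import random

# Policy of Figure 1b for the problem of Figure 1a; a state is (location, flat_tire)
policy = {
    ("11", False): "move 11 21",
    ("21", True):  "changetire 21",
    ("21", False): "move 21 22",
}

def apply_action(state, action, rng):
    loc, flat = state
    if action.startswith("changetire"):
        return (loc, False)                  # fixing the tire is deterministic
    dest = action.split()[-1]
    return (dest, rng.random() < 0.5)        # non-deterministic move: tire may go flat

def execute(rng, goal_loc="22"):
    state, trace = ("11", False), []
    while state[0] != goal_loc:
        action = policy[state]
        trace.append(action)
        state = apply_action(state, action, rng)
    return tuple(trace)

rng = random.Random(1)
executions = {execute(rng) for _ in range(50)}
for e in sorted(executions, key=len):
    print(list(e))       # prints exactly the two executions listed above
```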
## 3 FOND Planning for \(\text{{LTL}}_{f}\) and PPLTL Goals
We base our approach to goal recognition in fond domains for _temporally extended goals_ on fond planning for \(\text{{ltl}}_{f}\) and ppltl goals (Camacho et al, 2017, 2018; De Giacomo and Rubin, 2018). We formally define a fond planning problem for \(\text{{ltl}}_{f}/\text{{ppltl}}\) goals in Definition 5, as follows.
Definition 5: A fond planning problem for \(\text{{ltl}}_{f}/\text{{ppltl}}\) goals is a tuple \(\Gamma=(\mathcal{D},s_{0},\varphi)\), where \(\mathcal{D}\) is a standard fond domain model, \(s_{0}\) is the initial state, and \(\varphi\) is a goal formula, formally represented either as an \(\text{{ltl}}_{f}\) or a ppltl formula.
In fond planning for temporally extended goals, a policy \(\pi\) is a partial function \(\pi:(2^{\mathcal{F}})^{+}\to A\) mapping _histories_, i.e., sequences of _states_, into applicable actions. A policy \(\pi\) for \(\Gamma\) achieves a temporal formula \(\varphi\) if and only if the sequence of states generated by \(\pi\), despite the non-determinism of the environment, is accepted by \(\mathcal{A}_{\varphi}\).
Key to our recognition approach is using off-the-shelf fond planners for standard reachability goals to handle also temporally extended goals through an encoding of the automaton for the goal into an extended planning domain expressed in PDDL. Compiling temporally extended goals into planning domain models has a long history in the _Planning_ literature. In particular, Baier and McIlraith (2006) address _deterministic_ planning with special first-order quantified ltl goals on finite-state sequences.
Their technique encodes a _Non-Deterministic Finite-state Automaton_ (NFA), resulting from ltl formulas, into deterministic planning domains for which _Classical Planning_ technology can be leveraged. Our parameterization of objects of interest is somehow similar to their approach.
Starting from Baier and McIlraith (2006), again in the context of deterministic planning, Torres and Baier (2015) proposed a polynomial-time compilation of ltl goals on finite-state sequences into alternating automata, leaving non-deterministic choices to be decided at planning time. Finally, Camacho et al (2017, 2018) built upon Baier and McIlraith (2006) and Torres and Baier (2015), proposing a compilation in the context of fond domain models that simultaneously determinizes on-the-fly the NFA for ltl\({}_{f}\) and encodes it into PDDL. However, this encoding introduces substantial bookkeeping machinery, needed to remove any form of angelic non-determinism, which mismatches the devilish non-determinism of PDDL for fond.
Although inspired by these works, our approach differs in several technical details. We encode the DFA directly into a non-deterministic PDDL planning domain by taking advantage of the _parametric_ nature of PDDL domains that are then instantiated into propositional problems when solving a specific task. Given a fond planning problem \(\Gamma\) represented in PDDL, we transform \(\Gamma\) as follows. First, we transform the temporally extended goal formula \(\varphi\) (formalized either in ltl\({}_{f}\) or ppltl) into its corresponding DFA \(\mathcal{A}_{\varphi}\) through the highly-optimized MONA tool (Henriksen et al, 1995). Second, from \(\mathcal{A}_{\varphi}\), we build a _parametric_ DFA (PDFA), representing the lifted version of the DFA. Finally, the encoding of such a PDFA into PDDL yields an augmented fond domain model \(\Gamma^{\prime}\). Thus, we reduce fond planning for ltl\({}_{f}/\text{{ppltl}}\) to a standard fond planning problem solvable by any off-the-shelf fond planner.
### Translation to Parametric DFA
The use of _parametric_ DFAs is based on the following observations. In temporal logic formulas and, hence, in the corresponding DFAs, propositions are represented
Figure 1: Triangle-Tireworld domain and policy.
by domain fluents grounded on specific objects of interest. We can replace these propositions with predicates using object variables and then have a mapping function \(m^{obj}\) that maps such variables into the problem instance objects. In this way, we get a lifted and _parametric_ representation of the DFA, i.e., PDFA, which is merged with the domain. Here, the objective is to capture the entire dynamics of the DFA within the planning domain model itself. To do so, starting from the DFA we build a PDFA whose states and symbols are the lifted versions of the ones in the DFA. Formally, to construct a PDFA we use a mapping function \(m^{obj}\), which maps the set of objects of interest present in the DFA to a set of _free_ variables. Given the mapping function \(m^{obj}\), we can define a PDFA as follows.
**Definition 6**: _Given a set of object symbols \(\mathcal{O}\), and a set of free variables \(\mathcal{V}\), we define a mapping function \(m\) that maps each object in \(\mathcal{O}\) to a free variable in \(\mathcal{V}\)._
Given a DFA and the objects of interest for \(\Gamma\), we can construct a PDFA as follows:
**Definition 7**: _A PDFA is a tuple \(\mathcal{A}^{p}_{\varphi}=(\Sigma^{p},Q^{p},q_{0}^{p},\delta^{p},F^{p})\), where: \(\Sigma^{p}=\{\sigma^{p}_{0},...,\sigma^{p}_{n}\}=2^{\mathcal{F}}\) is the alphabet of fluents; \(Q^{p}\) is a nonempty set of parametric states; \(q_{0}^{p}\) is the parametric initial state; \(\delta^{p}:Q^{p}\times\Sigma^{p}\to Q^{p}\) is the parametric transition function; \(F^{p}\subseteq Q^{p}\) is the set of parametric final states. \(\Sigma^{p},Q^{p},q_{0}^{p},\delta^{p}\) and \(F^{p}\) can be obtained by applying \(m^{obj}\) to all the components of the corresponding DFA._
**Example 1**: _Given the \(\textsc{ltl}_{f}\) formula "\(\Diamond\)(\(vAt\) 51)", the object of interest "51" is replaced by the object variable \(x\) (i.e., \(m^{obj}(51)=x\)), and the corresponding DFA and PDFA for this \(\textsc{ltl}_{f}\) formula are depicted in Figures 2a and 2b._
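A minimal sketch of the lifting step in Example 1 (our illustration, using a dictionary-based automaton representation): the ground DFA for \(\Diamond(vAt\ 51)\) is turned into its parametric version by replacing the object of interest with a free variable according to \(m^{obj}\).

```
# Ground DFA for "eventually (vAt 51)": q0 initial, q1 accepting;
# transition guards are conditions on fluents grounded on the object "51".
dfa = {
    "states": {"q0", "q1"},
    "initial": "q0",
    "accepting": {"q1"},
    "transitions": [                     # (source, guard, target)
        ("q0", "(not (vAt 51))", "q0"),
        ("q0", "(vAt 51)",       "q1"),
        ("q1", "true",           "q1"),
    ],
}

def lift(dfa, m_obj):
    """Build the PDFA by replacing each object of interest with its free variable."""
    def sub(guard):
        for obj, var in m_obj.items():
            guard = guard.replace(obj, var)
        return guard
    return {**dfa,
            "params": sorted(m_obj.values()),
            "transitions": [(q, sub(g), q2) for q, g, q2 in dfa["transitions"]]}

pdfa = lift(dfa, {"51": "?x"})
for q, guard, q2 in pdfa["transitions"]:
    print(f"{q} --[{guard}]--> {q2}")    # guards now mention the variable ?x
```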
When the resulting new domain is instantiated, we implicitly get back the original DFA in the Cartesian product with the original instantiated domain. Note that this way of proceeding is similar to what is done in (Baier and McIlraith, 2006), where they handle \(\textsc{ltl}_{f}\) goals expressed in a special fol syntax, with the resulting automata (non-deterministic Buchi automata) parameterized by the variables in the \(\textsc{ltl}_{f}\) formulas.
### PDFA Encoding in PDDL
Once the PDFA has been computed, we encode its components within the planning problem \(\Gamma\), specified in PDDL, thus producing an augmented fond planning problem \(\Gamma^{\prime}=(\mathcal{D}^{\prime},s^{\prime}_{0},G^{\prime})\), where \(\mathcal{D}^{\prime}=(2^{\mathcal{F}^{\prime}},A^{\prime},\alpha^{\prime},tr^{\prime})\) and \(G^{\prime}\) is a propositional goal as in _Classical Planning_. Intuitively, additional parts of \(\Gamma^{\prime}\) are used to synchronize the dynamics between the domain and the automaton sequentially. Specifically, \(\Gamma^{\prime}\) is composed of the following components.
#### Fluents
\(\mathcal{F}^{\prime}\) has the same fluents in \(\mathcal{F}\) plus fluents representing each state of the PDFA, and a fluent called turnDomain, which controls the alternation between domain's actions and the PDFA's synchronization action. Formally, \(\mathcal{F}^{\prime}=\mathcal{F}\cup\{q\mid q\in Q^{p}\}\cup\{\textsc{turnDomain}\}\).
#### Domain Actions
Actions in \(A\) are modified by adding turnDomain in preconditions and the negated turnDomain in effects: \(\mathit{Pre}^{\prime}_{a}=\mathit{Pre}_{a}\cup\{\textsc{turnDomain}\}\) and \(\mathit{Eff}^{\prime}_{a}=\mathit{Eff}_{a}\cup\{\neg\textsc{turnDomain}\}\) for all \(a\in A\).
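For instance, this rewrite of the domain actions can be sketched as follows (our illustration on a dictionary-based action schema; the move action shown is simplified and omits its non-deterministic tire effect).

```
def add_turn_domain(action):
    """Add turnDomain to the precondition and its negation to the effects."""
    return {"name": action["name"],
            "pre": action["pre"] + ["(turnDomain)"],
            "eff": action["eff"] + ["(not (turnDomain))"]}

move = {"name": "move",
        "pre": ["(vAt ?from)", "(road ?from ?to)"],
        "eff": ["(vAt ?to)", "(not (vAt ?from))"]}
print(add_turn_domain(move))
```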
#### Transition Operator
The _transition_ function \(\delta^{p}\) of a PDFA is encoded as a new domain operator with conditional effects, called trans. Namely, we have \(\mathit{Pre}_{\texttt{trans}}=\{\neg\textsc{turnDomain}\}\) and \(\mathit{Eff}_{\texttt{trans}}=\{\textsc{turnDomain}\}\cup\{\textsc{when}\,(q^{p},\sigma^{p}),\texttt{then}\,\delta^{p}(q^{p},\sigma^{p})\cup\{\neg q\mid q\neq q^{p},q\in Q^{p}\}\}\), for all \((q^{p},\sigma^{p})\in\delta^{p}\). To exemplify how the transition PDDL operator is obtained, Listing 1 reports the transition operator for the PDFA in Figure 2b.
```
(:action trans
  :parameters (?x - location)
  :precondition (not (turnDomain))
  :effect (and (when (and (q0 ?x) (not (vAt ?x)))
                     (turnDomain))
               (when (or (and (q0 ?x) (vAt ?x)) (q1 ?x))
                     (and (q1 ?x) (not (q0 ?x)) (turnDomain)))))
```
Listing 1: PDDL trans operator for \(\varphi=\Diamond(vAt(51))\)
#### Initial and Goal States
The new initial condition is specified as \(s^{\prime}_{0}=s_{0}\cup\{q^{p}_{0}\}\cup\{\texttt{turnDomain}\}\). This comprises the initial condition of the previous domain \(D\) (\(s_{0}\)) plus the initial state of the PDFA and the predicate turnDomain. Considering the example in Figure 1a and the PDFA in Figure 2b, the new initial condition is as follows in PDDL:
```
(:init (and (road 11 21) ... (spare-in 21) (spare-in 12) ... (q0 51) (turnDomain)))
```
Listing 2: PDDL initial condition for \(\varphi=\Diamond(vAt(51))\)
The new goal condition is specified as \(G^{\prime}=\{\bigvee q_{i}\mid q_{i}\in F^{p}\}\cup\{\texttt{turnDomain}\}\), i.e., we want the PDFA to be in one of its accepting states and turnDomain, as follows:
```
(:goal (and (q1 51) (turnDomain)))
```
Listing 3: PDDL goal condition for \(\varphi=\Diamond(vAt(51))\)
We note that, both in the initial and goal conditions of the new planning problem, PDFA states are grounded back on the objects of interest thanks to the inverse of the mapping \(m^{obj}\).
Executions of a policy for our new fond planning problem \(\Gamma^{\prime}\) are \(\bar{e}^{\prime}:[a^{\prime}_{1},t_{1},a^{\prime}_{2},t_{2},\ldots,a^{\prime}_{n},t_{n}]\), where \(a^{\prime}_{i}\in A^{\prime}\) are the real domain actions, and \(t_{1},\ldots,t_{n}\) are sequences of synchronization trans actions, which, at the end, can be easily removed to extract the desired execution \(\bar{e}:[a^{\prime}_{1},a^{\prime}_{2},\ldots,a^{\prime}_{n}]\). In the remainder of the paper, we refer to the compilation just described as fond4ltl\({}_{f}\).
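Recovering the desired execution from an execution of the compiled problem therefore amounts to filtering out the synchronization actions, as in the minimal sketch below (the action strings are illustrative only).

```
def strip_sync(execution, sync_name="trans"):
    """Remove PDFA synchronization actions from an execution of the compiled problem."""
    return [a for a in execution if not a.startswith(sync_name)]

e_prime = ["move 11 21", "trans 51", "changetire 21", "trans 51",
           "move 21 22", "trans 51"]
print(strip_sync(e_prime))    # ['move 11 21', 'changetire 21', 'move 21 22']
```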
### Theoretical Property of the PDDL Encoding
We now study the theoretical properties of the encoding presented in this section. Theorem 3.1 states that solving fond planning for ltl\({}_{f}\)/ppltl goals amounts to solving standard fond planning problems for reachability goals. A policy for the former can be easily derived from a policy of the latter.
Theorem 3.1: _Let \(\Gamma\) be a fond planning problem with an ltl\({}_{f}\)/ppltl goal \(\varphi\), and \(\Gamma^{\prime}\) be the compiled fond planning problem with a reachability goal state. Then, \(\Gamma\) has a policy \(\pi:(2^{\mathcal{F}})^{+}\to A\) iff \(\Gamma^{\prime}\) has a policy \(\pi^{\prime}:(2^{\mathcal{F}^{\prime}})^{+}\to A^{\prime}\)._
Proof (\(\longrightarrow\)): We start with a policy \(\pi\) of the original problem that is winning by assumption. Given \(\pi\), we can always build a new policy, which we call \(\pi^{\prime}\), following the encoding presented in Section 3 of the paper. The newly constructed policy will modify histories of \(\pi\) by adding fluents and an auxiliary deterministic action trans, both related to the DFA associated with the ltl\({}_{f}\)/ppltl formula \(\varphi\). Now, we show that \(\pi^{\prime}\) is an executable policy and that it is winning for \(\Gamma^{\prime}\). To see the executability, we can just observe that, by construction of the new planning problem \(\Gamma^{\prime}\), all action effects \(\mathit{Eff}_{a^{\prime}}\) of the original problem \(\Gamma\) are not modified, and the auxiliary action trans only changes the truth value of the additional fluents given by the DFA \(\mathcal{A}^{p}_{\varphi}\) (i.e., automaton states). Therefore, the newly constructed policy \(\pi^{\prime}\) can be executed. To see that \(\pi^{\prime}\) is winning and satisfies the ltl\({}_{f}\)/ppltl goal formula \(\varphi\), we reason about all possible executions. For all executions, every time the policy \(\pi^{\prime}\) stops we can always extract an induced state trajectory of length \(n\) such that its last state \(s^{\prime}_{n}\) will contain one of the final states \(F^{p}\) of the automaton \(\mathcal{A}^{p}_{\varphi}\). This means that the induced state trajectory is accepted by the automaton \(\mathcal{A}^{p}_{\varphi}\). Then, by the results of De Giacomo and Vardi (2013) and De Giacomo et al (2020), we have that \(\tau\models\varphi\).
(\(\longleftarrow\)): From a winning policy \(\pi^{\prime}\) for the compiled problem, we can always project out all automata auxiliary trans actions, obtaining a corresponding policy \(\pi\). We need to show that the resulting policy \(\pi\) is winning, namely, it can be successfully executed on the original problem \(\Gamma\) and satisfies the ltl\({}_{f}\)/ppltl goal formula \(\varphi\). The executability follows from the fact that the deletion of trans actions and related auxiliary fluents from state trajectories induced by \(\pi\) does not modify any precondition/effect of original domain actions (i.e., \(a\in\mathcal{A}\)). Hence, under the right preconditions, any domain action can be executed. Finally, the satisfaction of the ltl\({}_{f}\)/ppltl formula \(\varphi\) follows directly from the results of De Giacomo and Vardi (2013) and De Giacomo et al (2020). Indeed, every execution of the winning policy \(\pi^{\prime}\) stops when reaching one of the final states \(F^{p}\) of the automaton \(\mathcal{A}^{p}_{\varphi}\) in the last state \(s_{n}\), thus every execution of \(\pi\) would satisfy \(\varphi\). Thus, the thesis holds.
## 4 Goal Recognition in FOND Planning Domains with Ltl\({}_{f}\) and Ppltl Goals
We now introduce our recognition approach that is able to recognize temporally extended (ltl\({}_{f}\) and ppltl) goals in fond planning domains. Our approach extends the probabilistic framework of Ramirez and Geffner (2010) to compute posterior probabilities over temporally extended goal hypotheses, by reasoning over the set of possible executions of policies \(\pi\) and the observations. Our goal recognition approach works in two stages: the _compilation stage_ and the _recognition stage_. In the next sections, we describe in detail how these two stages work. Figure 3 illustrates how our approach works.
### Goal Recognition Problem
We define the task of goal recognition in fond planning domains with ltl\({}_{f}\) and ppltl goals by extending the standard definition of _Plan Recognition as Planning_(Ramirez and Geffner, 2009), as follows.
Definition 8: A goal recognition problem in a fond planning setting with temporally extended goals (ltl\({}_{f}\) and/or ppltl) is a tuple \(\mathcal{T}_{\varphi}=\{\mathcal{D},s_{0},\mathcal{G}_{\varphi},Obs\}\), where: \(\mathcal{D}=(2^{\mathcal{F}},A,\alpha,tr)\) is a fond planning domain; \(s_{0}\) is the initial state; \(\mathcal{G}_{\varphi}=\{\varphi_{0},\varphi_{1},...,\varphi_{n}\}\) is the set of goal hypotheses formalized in ltl\({}_{f}\) or ppltl, including the intended goal \(\varphi^{*}\in\mathcal{G}_{\varphi}\); \(Obs=(o_{0},o_{1},...,o_{n})\) is a sequence of successfully executed (non-deterministic) actions of a policy \(\pi_{\varphi^{*}}\) that achieves the intended goal \(\varphi^{*}\), s.t. \(o_{i}\in A\).
Since we deal with non-deterministic domain models, an observation sequence \(Obs\) corresponds to a successful execution \(\bar{e}\) in the set of all possible executions \(\bar{E}\) of a _strong-cyclic policy_\(\pi\) that achieves the actual intended hidden goal \(\varphi^{*}\). In this work, we assume two recognition settings: _Offline Keyhole Recognition_, and _Online Recognition_. In _Offline Keyhole Recognition_ the observed agent is completely unaware of the recognition process (Armentano and Amandi, 2007), the observation sequence \(Obs\) is given at once, and it can be either _full_ or _partial_--in a _full observation sequence_, we observe all actions of an agent's plan, whereas, in a _partial observation sequence_, only a sub-sequence thereof. By contrast, in _Online Recognition_(Vered et al, 2016), the observed agent is also unaware of the recognition process, but the observation sequence is revealed incrementally instead of being given in advance and at once, as in _Offline Recognition_, thus making the recognition process an already much harder task.
An "ideal" solution for a goal recognition problem comprises a selection of the goal hypotheses containing only the single actual intended hidden goal \(\varphi^{*}\in\mathcal{G}\) that the observation sequence \(Obs\) of a plan execution achieves (Ramirez and Geffner, 2009, 2010). Fundamentally, there is no exact solution for a goal recognition problem, but it is possible to produce a probability distribution over the goal hypotheses and the observations, so that the goals that "best" explain the observation sequence are the most probable ones. We formally define a solution to a goal recognition problem in fond planning with temporally extended goals in Definition 9.
Definition 9: Solving a goal recognition problem \(\mathcal{T}_{\varphi}\) requires selecting a temporally extended goal hypothesis \(\hat{\varphi}\in\mathcal{G}_{\varphi}\) such that \(\hat{\varphi}=\varphi^{*}\), i.e., the hypothesis that best predicts or explains what the observation sequence \(Obs\) aims to achieve.
Existing recognition approaches often return either a probability distribution over the set of goals (Ramirez and Geffner, 2010; Sohrabi et al, 2016), or scores associated with each possible goal hypothesis (Pereira et al, 2020). Here, we return a probability distribution \(\mathbb{P}\) over the set of temporally extended goals \(\mathcal{G}_{\varphi}\) that "best" explains the observations sequence \(Obs\).
### Probabilistic Goal Recognition
We now recall the probabilistic framework for _Plan Recognition as Planning_ proposed in Ramirez and Geffner (2010). The framework sets the probability distribution for every goal \(G\) in the set of goal hypotheses \(\mathcal{G}\), and the observation sequence \(Obs\) to be a Bayesian posterior conditional probability, as follows:
\[\mathbb{P}(G\mid Obs)=\eta*\mathbb{P}(Obs\mid G)*\mathbb{P}(G) \tag{1}\]
Figure 3: Overview of our solution approach.
where \(\mathbb{P}(G)\) is the _a priori_ probability assigned to goal \(G\), \(\eta\) is a normalization factor inversely proportional to the probability of \(Obs\), and \(\mathbb{P}(Obs\,|\,G)\) is
\[\mathbb{P}(Obs\,|\,G)=\sum_{\pi}\mathbb{P}(Obs\,|\,\pi)\star\mathbb{P}(\pi\,|\,G) \tag{2}\]
\(\mathbb{P}(Obs\,|\,\pi)\) is the probability of obtaining \(Obs\) by executing a policy \(\pi\) and \(\mathbb{P}(\pi\,|\,G)\) is the probability of an agent pursuing \(G\) to select \(\pi\). Next, we extend the probabilistic framework above to recognize temporally extended goals in fond planning domain models.
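The role of Equation 1 can be illustrated with a minimal Python sketch that, given precomputed likelihoods \(\mathbb{P}(Obs\mid G)\), applies Bayes' rule with the normalization factor \(\eta\); the uniform prior used below is an assumption made purely for illustration.

```
def posterior_over_goals(likelihoods, priors=None):
    """likelihoods: dict goal -> P(Obs | G); priors: dict goal -> P(G)."""
    goals = list(likelihoods)
    if priors is None:                      # assume a uniform prior P(G)
        priors = {g: 1.0 / len(goals) for g in goals}
    unnormalized = {g: likelihoods[g] * priors[g] for g in goals}
    eta = 1.0 / sum(unnormalized.values())  # normalization factor
    return {g: eta * unnormalized[g] for g in goals}

print(posterior_over_goals({"G0": 0.03, "G1": 0.82, "G2": 0.74}))
```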
### Compilation Stage
We perform a _compilation stage_ that allows us to use any off-the-shelf fond planner to extract policies for temporally extended goals. To this end, we compile and generate new fond planning domain models \(\Gamma^{\prime}\) for the set of possible temporally extended goals \(\mathcal{G}_{\varphi}\) using the compilation approach described in Section 3. Specifically, for every goal \(\varphi\in\mathcal{G}_{\varphi}\), our compilation takes as input a fond planning problem \(\Gamma\), where \(\Gamma\) contains the fond planning domain \(\mathcal{D}\) along with an initial state \(s_{0}\) and a temporally extended goal \(\varphi\). Finally, as a result, we obtain a new fond planning problem \(\Gamma^{\prime}\) associated with the new domain \(\mathcal{D}^{\prime}\). Note that such a new fond planning problem \(\Gamma^{\prime}\) encodes new predicates and transitions that allow us to plan for temporally extended goals by using off-the-shelf fond planners.
Corollary 1: _Let \(\mathcal{T}_{\varphi}\) be a goal recognition problem over a set of \(\textsc{ltl}_{f}\)/ppltl goals \(\mathcal{G}_{\varphi}\) and let \(\mathcal{T}^{\prime}\) be the compiled goal recognition problem over a set of propositional goals \(\mathcal{G}\). Then, if \(\mathcal{T}^{\prime}\) has a set of winning policies that solve the set of propositional goals in \(\mathcal{G}\), then \(\mathcal{T}_{\varphi}\) has a set of winning policies that solve its \(\textsc{ltl}_{f}\)/ppltl goals._
Proof: From Theorem 3.1 we have a bijective mapping between policies of fond planning for \(\textsc{ltl}_{f}\)/ppltl goals and policies of standard fond planning. Therefore, the thesis holds.
### Recognition Stage
The stage in which we perform the goal recognition task comprises extracting policies for every goal \(\varphi\in\mathcal{G}_{\varphi}\). From such policies along with observations \(Obs\), we compute posterior probabilities for the goals \(\mathcal{G}_{\varphi}\) by matching the observations with all possible executions in the set of executions \(\vec{E}\) of the policies. To ensure compatibility with the policies, we assume the recognizer knows the preference relation over actions for the observed agent when unrolling the policy during search.
#### Computing Policies and the Set of Executions \(\vec{E}\) for \(\mathcal{G}_{\varphi}\)
We extract policies for every goal \(\varphi\in\mathcal{G}_{\varphi}\) using the new fond planning domain models \(\Gamma^{\prime}\), and for each of these policies, we enumerate the set of possible executions \(\vec{E}\). The aim of enumerating the possible executions \(\vec{E}\) for a policy \(\pi\) is to attempt to infer what execution \(\vec{e}\in\vec{E}\) the observed agent is performing in the environment. Environmental non-determinism prevents the recognizer from determining the specific execution \(\vec{e}\) the observed agent goes through to achieve its goals. The recognizer considers possible executions that are all paths to the goal with no repeated states. This assumption is partially justified by the fact that the probability of entering loops multiple times is low, and relaxing it is an important research direction for future work.
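One way to enumerate the executions \(\vec{E}\) of a policy under these assumptions is a depth-first traversal that never revisits a state; the sketch below assumes the policy is a mapping from states to actions and the non-deterministic transition function is a mapping from (state, action) pairs to possible successor states, which is an illustrative simplification rather than the actual implementation.

```
def enumerate_executions(policy, transitions, s0, goal_states):
    """Returns the action sequences of all loop-free paths induced by `policy`
    that reach a state in `goal_states`, starting from `s0`."""
    executions = []

    def dfs(state, visited, actions):
        if state in goal_states:
            executions.append(list(actions))
            return
        if state not in policy:
            return                          # policy undefined: dead end
        action = policy[state]
        for succ in transitions.get((state, action), []):
            if succ in visited:             # skip repeated states (no loops)
                continue
            dfs(succ, visited | {succ}, actions + [action])

    dfs(s0, {s0}, [])
    return executions
```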
After enumerating the set of possible executions \(\vec{E}\) for a policy \(\pi\), we compute the average distance of all actions in the set of executions \(\vec{E}\) to the goal state \(\varphi\) from the initial state \(s_{0}\). We note that strong-cyclic solutions may have infinitely many possible executions. However, here we consider executions that do not enter loops, and for those entering possible loops, we consider only the ones entering loops _at most_ once. Indeed, the computation of the average distance is not affected by the occurrence of possibly repeated actions: if the observed agent executes the same action repeatedly, this does not change its distance to the goal. The average distance aims to estimate "how far" every observation \(o\in Obs\) is from a goal state \(\varphi\). This average distance is needed because some executions \(\vec{e}\in\vec{E}\) may share the same action in their execution sequences but at different time steps. We refer to this average distance as \(\mathbf{d}\). For example, consider the policy \(\pi\) depicted in Figure 1(b). This policy \(\pi\) has two possible executions for achieving the goal state from the initial state, and these two executions share some actions, such as (move 11 21).

Based on the average distances \(\mathbf{d}\), we compute an _estimated score_ that expresses how far an observation sequence \(Obs\) is from a temporally extended goal \(\varphi\) in comparison to the other goals in the set of goal hypotheses \(\mathcal{G}_{\varphi}\). This means that the goal(s) with the lowest score(s) along the execution of the observed actions \(o\in Obs\) is (are) the one(s) that, most likely, the observation sequence \(Obs\) aims to achieve. We note that the average distance \(\mathbf{d}\) for those observations \(o\in Obs\) that are not in the set of executions \(\vec{E}\) of a policy \(\pi\) is set to a large constant number, i.e., to \(\mathbf{d}=e^{5}\). As part of the computation of this _estimated score_, we compute a _penalty value_ that directly affects the _estimated score_. This _penalty value_ penalizes those goals for which a pair of subsequent observations \(\langle o_{i-1},o_{i}\rangle\) in \(Obs\) has no order relation in the set of executions \(\vec{E}\) of these goals. We use the Euler constant \(e\) to compute this _penalty value_, formally defined as \(e^{\mathbf{p}(o_{i-1},o_{i})}\), in which we use \(\mathcal{R}(\vec{e})\) as the set of order relations of an execution \(\vec{e}\), where
\[\mathbf{p}(o_{i-1},o_{i})=\begin{cases}1,&\text{if }\forall\vec{e}\in\vec{E}\colon(o_{i-1}\prec o_{i})\notin\mathcal{R}(\vec{e})\\ 0,&\text{otherwise}\end{cases} \tag{3}\]
Equation 4 formally defines the computation of the _estimated score_ for every goal \(\varphi\in\mathcal{G}_{\varphi}\) given a pair of subsequent observations \(\langle o_{i-1},o_{i}\rangle\), and the set of goal hypotheses \(\mathcal{G}_{\varphi}\).
\[\frac{e^{\mathbf{p}(o_{i-1},o_{i})}*\mathbf{d}(o_{i},\varphi)}{\sum_{\varphi^{ \prime}\in\mathcal{G}_{\varphi}}\mathbf{d}(o_{i},\varphi^{\prime})} \tag{4}\]
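Read operationally, Equations 3 and 4 amount to the following sketch, where `avg_dist[goal][obs]` stores the average distance \(\mathbf{d}\), `order_relations[goal]` collects the order relations \(\mathcal{R}(\vec{e})\) of each execution, and observations absent from a goal's executions receive the large constant \(e^{5}\); these data structures are illustrative assumptions.

```
import math

UNSEEN = math.e ** 5   # distance assigned to observations outside a goal's executions

def penalty(prev_obs, obs, relations_per_execution):
    """Eq. 3: 1 if no execution of the goal orders prev_obs before obs, else 0.
    relations_per_execution: iterable of sets of (earlier, later) action pairs."""
    if prev_obs is None:
        return 0
    ordered = any((prev_obs, obs) in rel for rel in relations_per_execution)
    return 0 if ordered else 1

def estimated_score(goal, prev_obs, obs, avg_dist, order_relations, goals):
    """Eq. 4: penalized distance of obs to `goal`, normalized over all goals."""
    p = penalty(prev_obs, obs, order_relations[goal])
    numerator = math.exp(p) * avg_dist[goal].get(obs, UNSEEN)
    denominator = sum(avg_dist[g].get(obs, UNSEEN) for g in goals)
    return numerator / denominator
```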
**Example 2**: _To exemplify how we compute the estimated score for every goal \(\varphi\in\mathcal{G}_{\varphi}\), consider the recognition problem in Figure 4: \(s_{0}\) is \(vAt(11)\); the goal hypotheses \(\mathcal{G}_{\varphi}\) are expressed as \(\textsc{ltl}_{f}\) goals, such that \(\varphi_{0}=\Diamond vAt(51),\varphi_{1}=\Diamond vAt(33)\), and \(\varphi_{2}=\Diamond vAt(15)\); \(Obs=\{o_{0}:\text{(move 11 21)},o_{1}:\text{(changetire 22)}\}\). The intended goal \(\varphi^{*}\) is \(\varphi_{1}\). Before computing the estimated score for the goals, we first perform the compilation process presented before. Afterward, we extract policies for every goal \(\varphi\in\mathcal{G}_{\varphi}\), enumerate the possible executions \(\vec{E}\) for the goals \(\mathcal{G}_{\varphi}\) from the extracted policies, and then compute the average distance \(\mathbf{d}\) of all actions in the set of executions \(\vec{E}\) for the goals \(\mathcal{G}_{\varphi}\) from \(s_{0}\). The number of possible executions \(\vec{E}\) for each goal is: \(\varphi_{0}:|\vec{E}|=8\), \(\varphi_{1}:|\vec{E}|=8\), and \(\varphi_{2}:|\vec{E}|=16\). The average distances \(\mathbf{d}\) of all actions in \(\vec{E}\) for the goals are as follows:_
* \(\varphi_{0}\)_: (move 11 21) = 4.5, (changetire 21) = 4, (move 21 31) = 3, (changetire 31) = 2.5, (move 31 41) = 1.5, (changetire 41) = 1, (move 41 51) = 0;_
* \(\varphi_{1}\)_: (move 11 21) = 4.5, (changetire 21) = 4, (move 21 22) = 3, (changetire 22) = 2.5, (move 22 23) = 1, (move 23 33) = 0;_
* \(\varphi_{2}\)_: (move 11 21) = 6, (changetire 21) = 5.5, (move 21 22) = 4.5, (changetire 22) = 4, (move 22 23) = 3, (changetire 23) = 2.5, (move 23 24) = 1.5, (changetire 24) = 1, (move 24 15) = 0._
_Once we have the average distances \(\mathbf{d}\) of the actions in \(\vec{E}\) for all goals, we can then compute the estimated score for \(\mathcal{G}_{\varphi}\) for every observation \(o\in Obs\): \(o_{0}\) (move 11 21): \(\varphi_{0}=\frac{4.5}{4.5+6}=\) 0.43, \(\varphi_{1}=\frac{4.5}{4.5+6}=\) 0.43, \(\varphi_{2}=\frac{6}{4.5+6}=\) 0.57; and \(o_{1}\) (changetire 22): \(\varphi_{0}=\frac{e^{1}*e^{5}}{6.5}=\) 61.87, \(\varphi_{1}=\frac{2.5}{e^{5}+2.5}=\) 0.016, \(\varphi_{2}=\frac{4}{e^{5}+4}=\) 0.026. Note that for the observation \(o_{1}\), the average distance \(\mathbf{d}\) for \(\varphi_{0}\) is \(e^{5}=148.4\) because this observation is not an action for one of the executions in the set of executions for this goal (\(Obs\) aims to achieve the intended goal \(\varphi^{*}=\varphi_{1}\)). Furthermore, the penalty value is applied to \(\varphi_{0}\), i.e., \(e^{1}=2.71\). We can see that the estimated score of the intended goal \(\varphi_{1}\) is always the lowest for all observations \(Obs\), especially when we observe the second observation \(o_{1}\). Note that our approach correctly infers the intended goal \(\varphi^{*}\), even when observing just a few actions._
#### Computing Posterior Probabilities for \(\mathcal{G}_{\varphi}\)
To compute the posterior probabilities over the set of possible temporally extended goals \(\mathcal{G}_{\varphi}\), we start by computing the _average estimated score_ for every goal \(\varphi\in\mathcal{G}_{\varphi}\) for every observation \(o\in Obs\), and we formally define this computation as \(\mathcal{E}(\varphi,Obs,\mathcal{G}_{\varphi})\), as follows:
\[\mathcal{E}(\varphi,Obs,\mathcal{G}_{\varphi})=\left(\frac{\sum_{i=0}^{|Obs|} \frac{e^{\mathbf{p}(o_{i-1},o_{i})}*\mathbf{d}(o_{i},\varphi)}{\sum_{\varphi^{ \prime}\in\mathcal{G}_{\varphi}}\mathbf{d}(o_{i},\varphi^{\prime})}}{|Obs|}\right) \tag{5}\]
The _average estimated score_\(\mathcal{E}\) aims to estimate "how far" a goal \(\varphi\) is to be achieved compared to other goals (\(\mathcal{G}_{\varphi}\setminus\{\varphi\}\)) _averaging_ among all the observations in \(Obs\). The lower the _average estimated score_\(\mathcal{E}\) to a goal \(\varphi\)
Figure 4: Recognition problem example.
the more likely such a goal is to be the one that the observed agent aims to achieve. Consequently, \(\mathcal{E}\) (Equation 5) has two important properties, stated as follows.
Proposition 1: _Given that the sequence of observations \(Obs\) corresponds to an execution \(\vec{e}\in\vec{E}\) that aims to achieve the actual intended hidden goal \(\varphi^{*}\in\mathcal{G}_{\varphi}\), the average estimated score output by \(\mathcal{E}\) will tend to be the lowest for \(\varphi^{*}\) in comparison to the scores of the other goals (\(\mathcal{G}_{\varphi}\setminus\{\varphi^{*}\}\)), as observations increase in length._
Proposition 2: _If we restrict the recognition setting and define that the goal hypotheses \(\mathcal{G}_{\varphi}\) are not sub-goals of each other, and observe all observations in \(Obs\) (i.e., full observability), we will have the intended goal \(\varphi^{*}\) with the lowest score among all goals, i.e., \(\forall\varphi\in\mathcal{G}_{\varphi}\) is the case that \(\mathcal{E}(\varphi^{*},Obs,\mathcal{G}_{\varphi})\leq\mathcal{E}(\varphi,Obs,\mathcal{G}_{\varphi})\)._
After defining how we compute the _average estimated score_\(\mathcal{E}\) for the goals using Equation 5, we can define how our approach tries to maximize the probability of observing a sequence of observations \(Obs\) for a given goal \(\varphi\), as follows:
\[\mathbb{P}(Obs\mid\varphi)=[1+\mathcal{E}(\varphi,Obs,\mathcal{G}_{\varphi} )]^{-1} \tag{6}\]
Thus, by using the _estimated score_ in Equation 6, we can infer that the goals \(\varphi\in\mathcal{G}_{\varphi}\) with the lowest _estimated score_ will be the most likely to be achieved according to the probability interpretation we propose in Equation 5. For instance, consider the goal recognition problem presented in Example 2, and the _estimated scores_ we computed for the temporally extended goals \(\varphi_{0}\), \(\varphi_{1}\), and \(\varphi_{2}\) based on the observation sequence \(Obs\). From this, we have the following probabilities \(\mathbb{P}(Obs\mid\varphi)\) for the goals:
* \(\mathbb{P}(Obs\mid\varphi_{0})=[1+(31.15)]^{-1}=0.03\)
* \(\mathbb{P}(Obs\mid\varphi_{1})=[1+(0.216)]^{-1}=0.82\)
* \(\mathbb{P}(Obs\mid\varphi_{2})=[1+(0.343)]^{-1}=0.74\)
After normalizing these computed probabilities using the normalization factor \(\eta\)2, and assuming that the prior probability \(\mathbb{P}(\varphi)\) is equal to every goal in the set of goals \(\mathcal{G}_{\varphi}\), we can use Equation 6 to compute the posterior probabilities (Equation 1) for the temporally extended goals \(\mathcal{G}_{\varphi}\). We define the _solution_ to a recognition problem \(\mathcal{T}_{\varphi}\) (Definition 8) as a set of temporally extended goals \(\mathcal{G}_{\varphi}^{*}\) with the _maximum probability_, formally: \(\mathcal{G}_{\varphi}^{*}=\operatorname*{arg\,max}_{\varphi\in\mathcal{G}_{ \varphi}}\mathbb{P}(\varphi\mid Obs)\). Hence, considering the normalizing factor \(\eta\) and the probabilities \(\mathbb{P}(Obs\mid\varphi)\) computed before, we then have the following posterior probabilities for the goals in Example 2: \(\mathbb{P}(\varphi_{0}\mid Obs)=0.001\); \(\mathbb{P}(\varphi_{1}\mid Obs)=0.524\); and \(\mathbb{P}(\varphi_{2}\mid Obs)=0.475\). Recall that in Example 2, \(\varphi^{*}\) is \(\varphi_{1}\), and according to the computed posterior probabilities, we then have \(\mathcal{G}_{\varphi}^{*}=\{\varphi_{1}\}\), so our approach yields only the correct intended goal by observing just two observations.
Footnote 2: \(\eta=[\sum_{\varphi\in\mathcal{G}_{\varphi}}\mathbb{P}(Obs\mid\varphi)* \mathbb{P}(\varphi)]^{-1}\)
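Putting Equations 5, 6, and 1 together, the recognition stage can be summarized by the sketch below; `score(goal, prev_obs, obs)` stands for the per-observation estimated score of Equation 4, and uniform priors are assumed as in the example above.

```
def recognize(goals, observations, score):
    """Returns the posterior over `goals` and the most probable goal."""
    avg_scores = {}
    for goal in goals:                                   # Eq. 5
        total, prev = 0.0, None
        for obs in observations:
            total += score(goal, prev, obs)
            prev = obs
        avg_scores[goal] = total / len(observations)

    likelihoods = {g: 1.0 / (1.0 + avg_scores[g]) for g in goals}   # Eq. 6
    eta = 1.0 / sum(likelihoods.values())                # uniform priors cancel out
    posteriors = {g: eta * likelihoods[g] for g in goals}           # Eq. 1
    best = max(posteriors, key=posteriors.get)
    return posteriors, best
```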
Using the _average distance_\(\mathbf{d}\) and the _penalty value_\(\mathbf{p}\) allows our approach to disambiguate similar goals during the recognition stage. For instance, consider the following possible temporally extended goals: \(\varphi_{0}=\phi_{1}\,\mathcal{U}\,\phi_{2}\) and \(\varphi_{1}=\phi_{2}\,\mathcal{U}\,\phi_{1}\). Here, both goals have the same formulas to be achieved, i.e., \(\phi_{1}\) and \(\phi_{2}\), but in a different order. Thus, even having the same formulas to be achieved, the sequences of their policies' executions are different. Therefore, the average distances are also different, possibly a smaller value for the temporally extended goal that the agent aims to achieve, and the penalty value may also be applied to the other goal if two subsequent observations do not have any order relation in the set of executions for this goal.
#### Computational Analysis
The most expensive computational part of our recognition approach is computing the policies \(\pi\) for the goal hypotheses \(\mathcal{G}_{\varphi}\). Thus, we can say that our approach requires \(|\mathcal{G}_{\varphi}|\) calls to an off-the-shelf fond planner. Hence, the computational complexity of our recognition approach is linear in the number of goal hypotheses \(|\mathcal{G}_{\varphi}|\). In contrast, to recognize goals and plans in _Classical Planning_ settings, the approach of Ramirez and Geffner (2010) requires \(2*|\mathcal{G}|\) calls to an off-the-shelf _Classical_ planner. Concretely, to compute \(\mathbb{P}(Obs\mid G)\), Ramirez and Geffner's approach computes two plans for every goal and, based on these two plans, computes a _cost-difference_ between them and plugs it into a Boltzmann equation. For computing these two plans, this approach requires a non-trivial transformation process that modifies both the domain and problem, i.e., an augmented domain and problem that compute a plan that _complies_ with the observations, and another augmented domain and problem to compute a plan that _does not comply_ with the observations. Essentially, the intuition of Ramirez and Geffner's approach is that the lower the _cost-difference_ for a goal, the higher the probability for this goal, very similar to the intuition of our _estimated score_ \(\mathcal{E}\).
## 5 Experiments and Evaluation
We now present experiments and evaluations carried out to validate the effectiveness of our recognition approach. We empirically evaluate our approach over thousands of goal recognition problems using well-known
fond planning domain models with different types of temporally extended goals expressed in ltl\({}_{f}\) and ppltl.
The source code of our PDDL encoding for ltl\({}_{f}\) and ppltl goals3 and our temporally extended goal recognition approach4, as well as the recognition datasets and results are available on GitHub.
Footnote 3: [https://github.com/whitemech/FOND4LTLf](https://github.com/whitemech/FOND4LTLf)
Footnote 4: [https://github.com/ramonpereira/goal-recognition-ltlf_pltlf-fond](https://github.com/ramonpereira/goal-recognition-ltlf_pltlf-fond)
### Domains, Recognition Datasets, and Setup
For experiments and evaluation, we use six different fond planning domain models, most of which are commonly used in the AI Planning community to evaluate fond planners (Mattmuller et al, 2010; Muise et al, 2012): Blocks-World, Logistics, Tidyup, Tireworld, Triangle-Tireworld, and Zeno-Travel. The domain models involve practical real-world applications, such as navigating, stacking, picking up and putting down objects, and loading and unloading objects. Some of the domains combine more than one of the characteristics we just described, namely Logistics, Tidyup (Nebel et al, 2013), and Zeno-Travel, which involve navigating and manipulating objects in the environment. In practice, our recognition approach is capable of recognizing not only the set of facts of a goal that an observed agent aims to achieve from a sequence of observations, but also the _temporal order_ (e.g., _exact order_) in which the agent aims to achieve this set of facts. For instance, Tidyup is a real-world application domain whose purpose is to define planning tasks for a household robot that assists elderly people in a smart-home setting; for this domain, our approach would be able to monitor and assist the household robot in achieving its goals in a specific order.
Based on these fond planning domain models, we build different recognition datasets: a _baseline_ dataset using conjunctive goals (\(\phi_{1}\land\phi_{2}\)) and datasets with ltl\({}_{f}\) and ppltl goals.
For the ltl\({}_{f}\) datasets, we use three types of goals:
* \(\Diamond\phi\), where \(\phi\) is a propositional formula expressing that _eventually_\(\phi\) will be achieved. This temporal formula is analogous to a conjunctive goal;
* \(\Diamond(\phi_{1}\land\mathsf{O}(\Diamond\phi_{2}))\), expressing that \(\phi_{1}\) must hold before \(\phi_{2}\) holds. For instance, we can define a temporal goal that expresses the order in which a set of packages in Logistics domain should be delivered;
* \(\phi_{1}\,\mathcal{U}\,\phi_{2}\): \(\phi_{1}\) must hold _until_ \(\phi_{2}\) is achieved. For the Tidyup domain, we can define a temporal goal stating that no one can be in the kitchen until the robot cleans the kitchen.

For the ppltl datasets, we use two types of goals:
* \(\phi_{1}\land\mathsf{\Theta}\phi_{2}\), expressing that \(\phi_{1}\) holds and \(\phi_{2}\) held once. For instance, in the Blocks-World domain, we can define a past temporal goal that only allows stacking a set of blocks (a, b, c) once another set of blocks has been stacked (d, e);
* \(\phi_{1}\land(\neg\phi_{2}\,\mathcal{S}\,\phi_{3})\), expressing that the formula \(\phi_{1}\) holds and _since_\(\phi_{3}\) held \(\phi_{2}\) was not true anymore. For instance, in Zeno-Travel, we can define a past temporal goal expressing that person1 is at city1 and since the person2 is at city1, the aircraft must not pass through city2 anymore.
Thus, in total, we have six different recognition datasets over the six fond planning domains and temporal formulas presented above. Each of these datasets contains hundreds of recognition problems (\(\approx 390\) recognition problems per dataset), such that each recognition problem \(\mathcal{T}_{\varphi}\) in these datasets is comprised of a fond planning domain model \(\mathcal{D}\), an initial state \(s_{0}\), a set of possible goals \(\mathcal{G}_{\varphi}\) (expressed in either ltl\({}_{f}\) or ppltl), the actual intended hidden goal in the set of possible goals \(\varphi^{*}\in\mathcal{G}_{\varphi}\), and the observation sequence \(Obs\). We note that the set of possible goals \(\mathcal{G}_{\varphi}\) contains very similar goals (i.e., \(\varphi_{0}=\phi_{1}\,\mathcal{U}\,\phi_{2}\) and \(\varphi_{1}=\phi_{2}\,\mathcal{U}\,\phi_{1}\)), and all possible goals can be achieved from the initial state by a strong-cyclic policy. For instance, for the Tidyup domain, we define the following ltl\({}_{f}\) goals as possible goals \(\mathcal{G}_{\varphi}\):
* \(\varphi_{0}=\Diamond((\text{wiped desk1})\land\mathsf{O}(\Diamond(\text{on book1 desk1})))\);
* \(\varphi_{1}=\Diamond((\text{on book1 desk1})\land\mathsf{O}(\Diamond(\text{wiped desk1})))\);
* \(\varphi_{2}=\Diamond((\text{on cup1 desk2})\land\mathsf{O}(\Diamond(\text{wiped desk2})))\);
* \(\varphi_{3}=\Diamond((\text{wiped desk2})\land\mathsf{O}(\Diamond(\text{on cup1 desk2})))\).
Note that some of the goals described above share the same formulas and fluents, but some of these formulas must be achieved in a different order, e.g., \(\varphi_{0}\) and \(\varphi_{1}\), and \(\varphi_{2}\) and \(\varphi_{3}\). We note that the recognition approach we developed in this paper is very accurate in discerning the order in which the intended goal is to be achieved based on just a few observations, i.e., executions of the agent in the environment (Table 1).
As we mentioned earlier in the paper, an observation sequence contains a sequence of actions that represent an execution \(\vec{e}\) in the set of possible executions \(\vec{E}\) of policy \(\pi\) that achieves the actual intended hidden goal \(\varphi^{*}\), and as we stated before, this observation sequence \(Obs\) can be full or partial. To generate the observations \(Obs\) for \(\varphi^{*}\) and build the recognition problems, we extract strong-cyclic policies using different fond planners, such as PRP and MyND. A full observation sequence represents an execution (a sequence of executed
actions) of a strong-cyclic policy that achieves the actual intended hidden goal \(\varphi^{*}\), i.e., 100% of the actions of \(\vec{e}\) being observed. A partial observation sequence is represented by a sub-sequence of actions of a full execution that aims to achieve the actual intended hidden goal \(\varphi^{*}\) (e.g., an execution with "missing" actions, due to a sensor malfunction). In our recognition datasets, we define four levels of observability for a partial observation sequence: 10%, 30%, 50%, or 70% of its actions being observed. For instance, for a full observation sequence \(Obs\) with 10 actions (100% of observability), a corresponding partial observation sequence with 10% of observability would have only one observed action, and for 30% of observability three observed actions, and so on for the other levels of observability.
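For concreteness, a partial observation sequence at a given observability level can be derived from a full execution as in the sketch below, which keeps a fixed fraction of the actions while preserving their relative order; this is an illustration of the procedure described above rather than the exact script used to build the datasets.

```
import random

def partial_observations(full_execution, observability, seed=0):
    """full_execution: list of actions; observability: fraction in (0, 1]."""
    rng = random.Random(seed)
    n_kept = max(1, round(observability * len(full_execution)))
    kept = sorted(rng.sample(range(len(full_execution)), n_kept))
    return [full_execution[i] for i in kept]

full = ["(move 11 21)", "(changetire 21)", "(move 21 22)", "(changetire 22)",
        "(move 22 23)", "(move 23 33)"]
print(partial_observations(full, 0.3))   # keeps two of the six actions, in order
```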
We ran all experiments using the PRP planner (Muise et al, 2012) on a single core of a 12-core Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz with 16GB of RAM, with a maximum memory usage limit of 8GB and a 10-minute timeout for each recognition problem. We note that we are unable to provide a _direct comparison_ of our approach against existing recognition approaches in the literature because most of these approaches perform a non-trivial process that transforms a recognition problem into planning problems to be solved by a planner (Ramirez and Geffner, 2010; Sohrabi et al, 2016). Even when adapting such a transformation to work in fond settings with temporally extended goals, we cannot guarantee that it will work properly in the problem setting we propose in this paper.
### Evaluation Metrics
We evaluate our goal recognition approach using widely known metrics in the _Goal and Plan Recognition_ literature (Ramirez and Geffner, 2009; Vered et al, 2016; Pereira et al, 2020). To evaluate our approach in the _Offline Keyhole Recognition_ setting, we use four metrics, as follows:
* _True Positive Rate_ (_TPR_) measures the fraction of times that the intended hidden goal \(\varphi^{*}\) was correctly recognized, e.g., the percentage of recognition problems in which our approach correctly recognized the intended goal. A **higher**_TPR_ indicates better accuracy, measuring how often the intended hidden goal had the highest probability \(P(\varphi\,|\,Obs)\) among the possible goals. _TPR_ (Equation 7) is the ratio between true positive results5, and the sum of true positive and false negative results6; Footnote 5: _True positive results_ represent the number of correct goals that have been correctly recognized. Footnote 6: _False negative results_ represent the number of correct goals that have not been recognized. \[TPR=\frac{TP}{TP+FN}=1-FNR \tag{7}\]
* _False Positive Rate_ (_FPR_) is a metric that measures how often goals other than the intended goal are recognized (wrongly) as the intended ones. A **lower**_FPR_ indicates better accuracy. _FPR_ is the ratio between false positive results7, and the sum of false positive and true negative results8; Footnote 7: _False positive results_ are the number of incorrect goals that have been recognized as the correct ones. Footnote 8: _True negative results_ are the number of incorrect goals that have not been recognized. \[FPR=\frac{FP}{FP+TN} \tag{8}\]
* _False Negative Rate_ (_FNR_) aims to measure the fraction of times in which the intended correct goal was recognized incorrectly. A **lower**_FNR_ indicates better accuracy. _FNR_ (Equation 9) is the ratio between false negative results and the sum of false negative and true positive results; \[FNR=\frac{FN}{FN+TP}=1-TPR \tag{9}\]
* _F1-Score_ (Equation 10) is the harmonic mean of precision and sensitivity (i.e., _TPR_), representing the trade-off between true positive and false positive results. The **highest possible value** of an _F1-Score_ is 1.0, indicating perfect precision and sensitivity, and the **lowest possible value** is 0. Thus, **higher**_F1-Score_ values indicate better accuracy. \[F1\text{-}Score=\frac{2TP}{2TP+FP+FN} \tag{10}\]
In contrast, to evaluate our approach in the _Online Recognition_ setting, we use the following metric:
* _Ranked First_ is a metric that measures the number of times the intended goal hypothesis \(\varphi^{*}\) has been correctly ranked first as the most likely intended goal, and **higher** values for this metric indicate better accuracy for performing online recognition.
In addition to the metrics mentioned above, we also evaluate our recognition approach in terms of _recognition time_ (_Time_), which is the average time in seconds to perform the recognition process (including the calls to a fond planner).
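The offline metrics above reduce to simple counts over the evaluated recognition problems; the sketch below assumes that, for each problem, we are given the goal hypotheses, the set of goals predicted as most likely, and the true hidden goal, and that TP/FP/TN/FN are counted per goal hypothesis, which is one possible convention.

```
def offline_metrics(problems):
    """problems: iterable of (goal_hypotheses, predicted_goals, true_goal) tuples."""
    tp = fp = tn = fn = 0
    for hypotheses, predicted, true_goal in problems:
        for goal in hypotheses:
            if goal in predicted and goal == true_goal:
                tp += 1
            elif goal in predicted:
                fp += 1
            elif goal == true_goal:
                fn += 1
            else:
                tn += 1
    return {"TPR": tp / (tp + fn),               # Eq. 7
            "FPR": fp / (fp + tn),               # Eq. 8
            "FNR": fn / (fn + tp),               # Eq. 9
            "F1":  2 * tp / (2 * tp + fp + fn)}  # Eq. 10
```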
### Offline Keyhole Recognition Results
We now assess how accurate our recognition approach is in the _Keyhole Recognition_ setting. Table 1 shows three inner tables that summarize and aggregate the average results of all the six datasets for four different metrics, namely _Time_, _TPR_, _FPR_, and _FNR_. \(|\mathcal{G}_{\varphi}|\) represents the average number of goals in the datasets, and \(|Obs|\) the average number of observations. Each row in these inner tables represents the observation level, varying from 10% to 100%. Figure 5 shows the performance of our approach by comparing the results using _F1-Score_ for the six types of temporal formulas we used for evaluation. Table 2 shows in much more detail the results for each of the six datasets we used for evaluating our recognition approach.
#### Offline Results for Conjunctive and Eventuality Goals
The first inner table shows the average results comparing the performance of our approach between conjunctive goals and temporally extended goals using the _eventually_ temporal operator \(\diamondsuit\). We refer to this comparison as the _baseline_ since these two types of goals have the same semantics. We can see that the results for these two types of goals are very similar for all metrics. Moreover, it is also possible to see that our recognition approach is very accurate and performs well at all levels of observability, yielding high _TPR_ values and low _FPR_ and _FNR_ values for more than 10% of observability. Note that for 10% of observability and \(\textsc{ltl}_{f}\) goals \(\diamondsuit\varphi\), the _TPR_ average value is 0.74, which means that for 74% of the recognition problems our approach correctly recognized the intended temporally extended goal when observing, on average, only 3.85 actions. Figure 5(a) shows that our approach yields higher _F1-Score_ values (i.e., greater than 0.79) for these types of formulas when dealing with more than 50% of observability.
#### Offline Results for \(\textsc{ltl}_{f}\) Goals
Regarding the results for the two types of \(\textsc{ltl}_{f}\) goals (second inner table), it is possible to see that our approach proves to be accurate for all metrics at all levels of observability, apart from the results for 10% of observability for \(\textsc{ltl}_{f}\) goals in which the formulas must be recognized in a certain order. Note that our approach is accurate even when observing just a few actions (2.1 for 10% and 5.4 for 30%), but not as accurate as for more than 30% of observability. Figure 5(b) shows that our approach yields higher _F1-Score_ values (i.e., greater than 0.75) when dealing with more than 30% of observability.
#### Offline Results for \(\textsc{ppltl}\) Goals
Finally, as for the results for the two types of \(\textsc{ppltl}\) goals, it is possible to observe in the last inner table that the overall average number of observations \(|Obs|\) is less than the average for the other datasets, making the task of goal recognition more difficult for the \(\textsc{ppltl}\) datasets. Yet, we can see that our recognition approach remains accurate when dealing with fewer observations. We can also see that the values of _FNR_ increase for low observability, but the _FPR_ values are, on average, below \(\approx 0.15\). Figure 5(c) shows that the _F1-Score_ values of our approach gradually increase as the percentage of observability increases.
### Online Recognition Results
With the experiments and evaluation in the _Keyhole Offline_ recognition setting in place, we now proceed to present the experiments and evaluation in the _Online_ recognition setting. As noted before, performing the recognition task in the _Online_ recognition setting is usually harder than in the offline setting, as the recognition task has to be performed incrementally and gradually, with the observations revealed step-by-step, rather than by analyzing all observations at once, as in the offline recognition setting.
Figure 6 exemplifies how we evaluate our approach in the _Online_ recognition setting. To do so, we use the _Ranked First_ metric, which measures how many times over the observation sequence the correct intended goal \(\varphi^{*}\) has been ranked first as the _top-1_ goal over the goal hypotheses \(\mathcal{G}_{\varphi}\). The recognition problem example depicted in Figure 6 has five goal hypotheses (y-axis), and ten actions in the observation sequence (x-axis). As stated before, the recognition task in the _Online_ setting is done gradually, step-by-step, so at every step our approach essentially ranks the goals according to the probability distribution over the goal hypotheses \(\mathcal{G}_{\varphi}\). We can see that in the example in Figure 6 the correct goal \(\varphi^{*}\) is _Ranked First_ six times (at the observation indexes: 4, 6, 7, 8, 9, and 10) over the observation sequence with ten observations, which means that the correct intended goal \(\varphi^{*}\) is _Ranked First_ (i.e., as the _top-1_, with the highest probability among the goal hypotheses \(\mathcal{G}_{\varphi}\)) 60% of the time in the observation sequence for this recognition example.
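For a single online recognition problem, the _Ranked First_ percentage can be computed as in the following sketch, where `rank_goals(prefix)` is assumed to return the current probability of each goal hypothesis given the observations seen so far.

```
def ranked_first_percentage(observations, true_goal, rank_goals):
    ranked_first = 0
    for step in range(1, len(observations) + 1):
        probabilities = rank_goals(observations[:step])   # recomputed at every step
        top_goal = max(probabilities, key=probabilities.get)
        if top_goal == true_goal:                         # true goal ranked top-1
            ranked_first += 1
    return 100.0 * ranked_first / len(observations)
```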
We aggregate the average recognition results of all the six datasets for the _Ranked First_ metric as a histogram, by considering full observation sequences that represent executions (sequences of executed actions) of strong-cyclic policies that achieve the actual intended goal \(\varphi^{*}\), and we show such results in Figure 7. The
Figure 5: F1-Score comparison.
Figure 6: Online Recognition example.
Figure 7: Online Recognition Histogram.
results represent the overall percentage (including the standard deviation - black bars) of the time that the correct intended goal \(\varphi^{*}\) has been ranked first over the observations. The average results indicate that our approach is, in general, accurate in correctly recognizing the _temporal order_ of the facts in the goals in the _Online_ recognition setting, yielding _Ranked First_ percentage values greater than 58%.
Figures 8, 9, 10, 11, 12, and 13 show the _Online_ recognition results separately for all six domain models and the different types of temporally extended goals. By analyzing the _Online_ recognition results more closely, we see that our approach converges to ranking the correct goal as the _top-1_ mostly after a few observations. This means that it is commonly hard to disambiguate among the goals at the beginning of the execution, which, in turn, directly affects the overall _Ranked First_ percentage values (as we can see in Figure 7). We can observe that our approach struggles to disambiguate and correctly recognize the intended goal for some recognition problems and some types of temporal formulas. Namely, our approach has struggled to disambiguate when dealing with ltl\({}_{f}\) Eventuality goals in Blocks-World (see Figure 8a), for most temporally extended goals in Tidyup (see Figure 10), and for ltl\({}_{f}\) Eventuality goals in Zeno-Travel (see Figure 13a).
## 6 Related Work and Discussion
To the best of our knowledge, existing approaches to _Goal and Plan Recognition as Planning_ cannot explicitly recognize temporally extended goals in non-deterministic environments. Seminal and recent work on _Goal Recognition as Planning_ relies on deterministic planning techniques Ramirez and Geffner (2009); Sohrabi et al (2016); Pereira et al (2020) for recognizing conjunctive goals. By contrast, we propose a novel problem formalization for goal recognition, addressing temporally extended goals (ltl\({}_{f}\) or ppltl goals) in fond planning domain models. While our probabilistic approach relies on the probabilistic framework of Ramirez and Geffner (2010), we address the challenge of computing \(\mathbb{P}(Obs\,|\,G)\) in a completely different way.
There exist different techniques for _Goal and Plan Recognition_ in the literature, including approaches that rely on plan libraries Avrahami-Zilberbrand and Kaminka (2005), context-free grammars Geib and Goldman (2009), and Hierarchical Task Networks (HTN) Holler et al (2018). Such approaches rely on hierarchical structures that represent the knowledge of how to achieve the possible goals, and this knowledge can be seen as potential strategies for achieving the set of possible goals. Note that the temporal constraints of temporally extended goals can be adapted and translated to such hierarchical knowledge. For instance, context-free grammars are expressive enough to encode temporally extended goals Chiari et al (2020). ltl\({}_{f}\) has the expressive power of the star-free fragment of regular expressions and is hence captured by context-free grammars. However, unlike regular expressions, ltl\({}_{f}\) uses negation and conjunction liberally, and the translation to regular expressions is computationally costly. Note that being equally expressive is not a meaningful indication of the complexity of transforming one formalism into another. De Giacomo et al (2020) show that, while ltl\({}_{f}\) and ppltl have the same expressive power, the best translation techniques known are worst-case 3EXPTIME.
As far as we know, there are no encodings of ltl\({}_{f}\)-like specification languages into HTN, and its difficulty is unclear. Nevertheless, combining HTN and ltl\({}_{f}\) could be interesting for further study. HTN techniques focus on the knowledge about the decomposition property of traces, whereas ltl\({}_{f}\)-like solutions focus on the knowledge about dynamic properties of traces, similar to what is done in verification settings.
Most recently, Bonassi et al (2023) develop a novel Pure-Past Linear Temporal Logic PDDL encoding for planning in the _Classical Planning_ setting.
## 7 Conclusions
We have introduced a novel problem formalization for recognizing _temporally extended goals_, specified in either ltl\({}_{f}\) or ppltl, in fond planning domain models. We have also developed a novel probabilistic framework for goal recognition in such settings, and implemented a compilation of temporally extended goals that allows us to reduce the problem of fond planning for ltl\({}_{f}\)/ppltl goals to standard fond planning. We have shown that our recognition approach yields high accuracy for recognizing temporally extended goals (ltl\({}_{f}\)/ppltl) in different recognition settings (_Keyhole Offline_ and _Online_ recognition) at several levels of observability.
As future work, we intend to extend and adapt our recognition approach to deal with spurious (noisy) observations, and to recognize not only the temporally extended goals but also anticipate the policy that the agent is executing to achieve its goals.
###### Acknowledgements.
This work has been partially supported by the EU H2020 project AIPlan4EU (No. 101016442), the ERC Advanced Grant WhiteMech (No. 834228), the EU ICT-48 2020 project TAILOR (No. 952215), the PRIN project RIPER (No. 20203FFYLK), and the PNRR MUR project FAIR (No. PE0000013).
2310.05442 | Establishing Trustworthiness: Rethinking Tasks and Model Evaluation | Language understanding is a multi-faceted cognitive capability, which the
Natural Language Processing (NLP) community has striven to model
computationally for decades. Traditionally, facets of linguistic intelligence
have been compartmentalized into tasks with specialized model architectures and
corresponding evaluation protocols. With the advent of large language models
(LLMs) the community has witnessed a dramatic shift towards general purpose,
task-agnostic approaches powered by generative models. As a consequence, the
traditional compartmentalized notion of language tasks is breaking down,
followed by an increasing challenge for evaluation and analysis. At the same
time, LLMs are being deployed in more real-world scenarios, including
previously unforeseen zero-shot setups, increasing the need for trustworthy and
reliable systems. Therefore, we argue that it is time to rethink what
constitutes tasks and model evaluation in NLP, and pursue a more holistic view
on language, placing trustworthiness at the center. Towards this goal, we
review existing compartmentalized approaches for understanding the origins of a
model's functional capacity, and provide recommendations for more multi-faceted
evaluation protocols. | Robert Litschko, Max Müller-Eberstein, Rob van der Goot, Leon Weber, Barbara Plank | 2023-10-09T06:32:10Z | http://arxiv.org/abs/2310.05442v2 | # Establishing Trustworthiness: Rethinking Tasks and Model Evaluation
###### Abstract
Language understanding is a multi-faceted cognitive capability, which the Natural Language Processing (NLP) community has striven to model computationally for decades. Traditionally, facets of linguistic intelligence have been compartmentalized into tasks with specialized model architectures and corresponding evaluation protocols. With the advent of large language models (LLMs) the community has witnessed a dramatic shift towards general purpose, task-agnostic approaches powered by generative models. As a consequence, the traditional compartmentalized notion of language tasks is breaking down, followed by an increasing challenge for evaluation and analysis. At the same time, LLMs are being deployed in more real-world scenarios, including previously unforeseen zero-shot setups, increasing the need for trustworthy and reliable systems. Therefore, we argue that it is time to rethink what constitutes tasks and model evaluation in NLP, and pursue a more holistic view on language, placing trustworthiness at the center. Towards this goal, we review existing compartmentalized approaches for understanding the origins of a model's functional capacity, and provide recommendations for more multifaceted evaluation protocols.
Footnote *: Equal contribution.
\({}^{*}\)Trust arises from knowledge of origin as well as from knowledge of functional capacity.
_Trustworthiness - Working Definition_
_David G. Hays, 1979_
## 1 Introduction
Understanding natural language requires a multitude of cognitive capabilities which act holistically to form meaning. Modeling this ability computationally is extremely difficult, thereby necessitating a compartmentalization of the problem into _isolated tasks_ which are solvable with available methods and resources (Schlangen, 2021). Undoubtedly as of late 2022, we are witnessing a paradigm shift: Powerful LLMs, in the form of instruction-tuned, prompt-based generative models such as ChatGPT and GPT-4 (Wei et al., 2022; Touvron et al., 2023; Taori et al., 2023; OpenAI, 2023; Bubeck et al., 2023, _inter alia_), have found widespread adoption reaching far beyond the NLP community. Part of this success story is the casting of heterogeneous NLP tasks into sequence-to-sequence tasks (Raffel et al., 2020; Sanh et al., 2022; Wang et al., 2022); which in turn enables extreme multi-task learning, and cross-task transfer learning.
This is in stark contrast to the traditional compartmentalized NLP paradigm (visualized in Figure 1), wherein a human-motivated language task with an input _expression_ and an output _expectation_ is clearly formalized into a _dataset_ with machine-readable inputs and outputs. Both feature design and model development are highly task-specific--often manually curated. Paired with evaluation protocols for comparing model predictions with
Figure 1: **Contemporary NLP Paradigm** with language tasks formalized as datasets for which models produce predictions. Recent LLMs break down this compartmentalization (dashed lines), impacting all stages of the cycle. We argue that establishing _trust_ requires rethinking every facet of this framework, as formalization and evaluation become increasingly difficult.
human expectations via formalized metrics or qualitative judgement, this general methodology has been widely adopted and trusted.1 However, with contemporary LLMs this compartmentalization is breaking down--having severe impacts on all stages of the cycle. Therefore, a persistent and critical question regains importance: _How can trust be established between the human and the model?_
Footnote 1: While not without deficiencies, evaluation protocols were arguably more heterogeneous and established than today w.r.t. quantitative/qualitative evaluation, human judgements etc.
As early as 44 years ago, Hays (1979) offers an attempt and provides a definition of _trustworthiness_ (cf. quote). Today, the topic of trustworthiness is an ongoing discussion deserving special attention Baum et al. (2017); Eisenstein (2022); Clarke et al. (2023). We argue that to establish trust, it is time to rethink how we deal with tasks and their evaluation. Why now? It is getting increasingly hard to predict a priori when we can expect models trained on web-scale data to work well. Were we to live in a hypothetical world with full knowledge of origin and functional capacity, then each task instance could be routed to the right model(s) to not only tap into the LLMs' full potential, but to also enable trust in their predictions. Today, the absence of this knowledge is directly linked to our lack of trust in deploying models in real-world scenarios.
In this position paper, we synthesize contemporary work distributed throughout different subfields of NLP and ML into a conceptual framework for trust, guided by Hays (1979)'s definition and centered around _knowledge facets_ as a guiding principle for all aspects of the model development and evaluation cycle. We outline high-level desiderata (§2), and suggest directions on how to gain trust by providing starting points of facets (§3) aimed to stimulate uptake and discussion. In §4 we discuss how trustworthiness relates to user trust.
## 2 Desiderata for Trustworthy LLMs
LLMs today pose a conundrum: They are seemingly universally applicable, having high functional capacity, however, the larger the model, the less we appear to know about the origins of its capabilities. How did we get here, which aspects contribute to trustworthiness, and what did we lose on the way? In the following, we aim to provide a brief history of central trust desiderata (**D1-4**), discussing how our knowledge of functional capacity and its origins has changed over time.
### D1. Knowledge about Model Input
In the beginnings of NLP, researchers followed strict, task-specific formalizations and had precise control over which "ingredients"2 go into model training and inference (i.e., manual feature engineering). Neural models have caused a shift towards _learning_ representations, improving performance at the cost of interpretability. While analogy tasks Mikolov et al. (2013) have enabled analyses of how each word-level representation is grounded, contemporary representations have moved to the subword level, and are shared across words and different languages, obscuring our knowledge of the origin of their contents, and requiring more complex lexical semantic probing Vulic et al. (2020, 2023). This is amplified in today's instruction-based paradigm in which tasks are no longer formalized by NLP researchers and expert annotators but are formulated as natural language expressions by practitioners and end users Ouyang et al. (2022). The cognitive process of formalizing raw model inputs into ML features has been incrementally outsourced from the human to the representation learning algorithm, during which we lose knowledge over functional capacity.
Footnote 2: We refer to ingredients as explicit inputs and LLM’s parametric knowledge De Cao et al. (2021); Mallen et al. (2023).
### D2. Knowledge about Model Behaviour
In the old compartmentalized view of NLP, higher-level tasks are typically broken down into pipelines of subtasks Manning et al. (2014), where inspecting intermediate outputs improves our knowledge about model behaviour. Recently however, LLMs are usually trained on complex tasks in an end-to-end fashion Glasmachers (2017), which makes it more difficult to expose intermediate outputs and analyze error propagation. Over time we have gained powerful black-box models, but have lost the ability to interpret intermediate states and decision boundaries, thus increasing uncertainty and complexity. Because, as of today, we cannot build models that always provide factually correct, up-to-date information, we cannot trust these models to be employed at a large scale, in real-world scenarios, where reliability and transparency are key. In this regard, pressing questions are e.g., how _hallucination_ and _memorization_ behaviour can be explained Dziri et al. (2022); Mallen et al. (2023), how models behave when trained on many languages Conneau et al. (2020); Choenni et al. (2023), what internal features are overwritten when trained on different tasks sequentially (_catastrophic forgetting_; e.g., McCloskey and Cohen, 1989; French, 1999), how to improve models' ability to know when they do not know (_model uncertainty_; e.g., Li et al., 2022), or how LLMs utilize skills and knowledge distributed in their model parameters.
### D3. Knowledge of Evaluation Protocols
The emergence of LLMs has raised the question of how to evaluate general-purpose models. Many recent efforts have followed the traditional NLP evaluation paradigm and summarized LLM performance into evaluation metrics across existing benchmark datasets (Sanh et al., 2022; Wang et al., 2022; Scao et al., 2022; Wei et al., 2022; Touvron et al., 2023). This estimates LLM performance for tasks covered by the benchmark dataset and thus establishes trust when applying the model to the same task. However, the situation is different when LLMs are used to solve tasks outside of the benchmark, which is often the case for real-world usage of LLMs (Ouyang et al., 2022). Then, the expected performance becomes unclear and benchmark results become insufficient to establish trust. One proposal to solve this issue is to evaluate on a wide variety of task-agnostic user inputs and report an aggregate metric (Ouyang et al., 2022; Chung et al., 2022; Wang et al., 2023; Dettmers et al., 2023). This approach has the potential to cover a wider range of use cases; however, it relies mostly on manual preference annotations from human labelers or larger LLMs, which is costly and has no accepted protocol yet.
### D4. Knowledge of Data Origin
So far, we discussed trust desiderata from the viewpoint of knowledge of functional capacity. Beyond this, a model's behaviour is also largely influenced by its training data. Knowledge about data provenance helps us make informed decisions about whether a given LLM is a good match for the intended use case. Therefore, open access to data must be prioritized. In compartmentalized NLP, models are trained and evaluated on well-known, manually curated, task-specific datasets. Today's models are instead trained on task-heterogeneous corpora at web scale, typically of unknown provenance. For novel tasks, this means we do not know how well relevant facets (e.g., language, domain) are represented in the training data. For existing tasks, it is unclear if a model has seen test instances in its large training corpus (i.e., test data leakage; Piktus et al., 2023), blurring the lines between traditional train-dev-test splits and overestimating the capabilities of LLMs. To compound matters further, models are not only trained on natural, but also on generated data, and unknown data provenance is also becoming an issue as annotators start to use LLMs (Veselovsky et al., 2023). LLMs trained on data generated by other LLMs can lead to a "curse of recursion" where (im-)probable events are over/underestimated (Shumailov et al., 2023).
## 3 What Can We Do to Gain Trust Now and in Future?
In a world where generative LLMs seemingly dominate every benchmark and are claimed to have reached human-level performance on many tasks,3 we advocate that now is the time to treat trust as a first-class citizen and place it at the center of model development and evaluation. To operationalize the concept of trust, we denote with _knowledge facets_ (henceforth, facets) all factors that improve our knowledge of functional capacity and knowledge of origin. Facets can be local (instance) or global (datasets, tasks). They refer to 1) descriptive knowledge such as meta-data or data/task provenance, and 2) inferred knowledge; for example which skills are exploited. We next propose concrete suggestions on how facets can help us gain trust in LLMs based on the desiderata in SS2.
Footnote 3: For example, GPT-4 reportedly passed the bar exam and placed top at GRE exams, see [https://openai.com/research/gpt-4](https://openai.com/research/gpt-4).
**Explain Skills Required versus Skills Employed.** It is instructive to think of prompt-based generative LLMs as instance-level problem solvers and, as such, we need to understand a-priori _the necessary skills for solving instances_ (local facets) as well as knowing _what skills are actually employed during inference_. Most prior work aims to improve our understanding of tasks and the skills acquired to solve them by studying models trained specifically for each task, and can be broadly classified into: (i) linguistically motivated approaches and (ii) model-driven approaches (**D1**). Linguistic approaches formalize skills as cognitive abilities, which are studied, e.g., through probing tasks (Adi et al., 2017; Conneau et al., 2018; Amini and Ciaramita, 2023), checklists (Ribeiro et al., 2020) and linguistic profiling (Miaschi et al., 2020, 2021; Sarti et al., 2021). Model-driven approaches attribute regions in the model parameter space to
skills (Ansell et al., 2022; Wang et al., 2022; Ponti et al., 2023; Ilharco et al., 2023). The former can be seen as describing global facets (i.e., the overall functional capacity of black-box models), while the latter identifies local facets (i.e., skill regions in model parameters). To establish trust, we need to know what skills are required to solve instances, which is different from which skills are exercised by a model at inference time, as described next.
Besides knowledge about skills needed to solve a task, it is important to gain knowledge about what skills are actually being applied by an LLM. This is linked to explainability and transparency, corresponding to (i) understanding the knowledge4 that goes into the inference process (**D1**), and (ii) the inference process itself in terms of applied skills (**D2**), e.g., examinations of LLMs' "thought processes". Regarding (i), existing work includes attributing training instances to model predictions (Pruthi et al., 2020; Weller et al., 2023) and explaining predictions through the lens of white-box models (Frosst and Hinton, 2017; Aytekin, 2022; Hedderich et al., 2022). They are, however, often grounded in downstream task data and thus do not provide insights connected to the knowledge memorized by LLMs during pre-training (_global facets_). Regarding (ii), existing approaches include guiding the generation process through intermediate steps (Wei et al., 2022; Wang et al., 2023; Li et al., 2023) and pausing the generation process to call external tools (Schick et al., 2023; Shen et al., 2023; Paranjape et al., 2023; Mialon et al., 2023). Their shortcoming is that they operate on the input level, and similarly do not capture cases where pre-existing, model-internal knowledge is applied. Furthermore, prior work has shown that LLMs follow the path of least resistance. That is, neural networks are prone to predict the right thing for the wrong reasons (McCoy et al., 2019; Schramowski et al., 2020), which can be caused by spurious correlations (Eisenstein, 2022).5 On the path to gaining trust, we advocate for LLMs that are able to attribute their output to internal knowledge and the skills used to combine that knowledge. Alternatively, LLMs could be accompanied by white-box explanation models that serve at least as a proxy for explaining the inference process.
Footnote 4: Including acquired knowledge such as common sense and world knowledge (Li et al., 2022; De Bruyn et al., 2022).
Footnote 5: “The sentiment of a movie should be invariant to the identity of the actors in the movie” (Eisenstein, 2022)
**Facilitate Representative and Comparable Qualitative Analysis.** Today, the standard target for NLP papers proposing a new model is to beat previous models on a certain _quantitative_ benchmark. We argue that if datasets and metrics are well-designed and well-grounded in skills/capabilities, they can be used as an indicator of progress.6 On the other hand, findings from negative results might be obscured without _faceted quantitative analysis_: even when obtaining lower scores on a benchmark, sub-parts of an NLP problem may be better solved compared to the baseline, but go unnoticed (**D3**). We therefore cannot trust reported SOTA results as long as the facets that explain how well sub-problems are solved remain hidden. Complementary to holistic quantitative explanations, as proposed by HELM (Liang et al., 2022), we call for a holistic qualitative evaluation where benchmarks come with _standardized qualitative evaluation protocols_, which facilitates comparable qualitative meta-analysis. This proposal is inspired by the manually-curated GLUE diagnostics annotations (Wang et al., 2018), which describe examples by their linguistic phenomena.7 Recycling existing tasks and augmenting them with diagnostic samples to study LLMs provides a very actionable direction for applying existing compartmentalization in a more targeted, trustworthy way. Diagnostic samples should ideally represent the full spectrum of cognitive abilities required to solve a task. Designing these samples is however a complex task. We hypothesize that the set of required skills varies between tasks and should ideally be curated by expert annotators.
Footnote 6: Note that baseline comparisons can still be obscured by unfair comparisons (Ruffinelli et al., 2020).
Footnote 7: [https://gluebenchmark.com/diagnostics/](https://gluebenchmark.com/diagnostics/)
**Be Explicit about Data Provenance.** In ML, it is considered good practice to use stratified data splits to avoid overestimation of performance on dev/test splits based on contamination. Traditionally, this stratification was done based on, e.g., source, time, author, language (cross-lingual), or domain (cross-domain). Recent advances have hinted at LLMs' ability to solve new tasks, and even to obtain new, i.e., emergent abilities (Wei et al., 2022). These are in fact similar cross-\(\mathcal{X}\) settings, where \(\mathcal{X}\) is no longer a property at the level of dataset sampling, but of the broader task setup. We call for always employing a cross-\(\mathcal{X}\) setup (**D4**), whether it is based on data sampling, tasks, or capabilities, urging practitioners to make this choice explicit. Transparency about data provenance and test data leakage improves our trust in reported results. In practice, these data provenance facets are also valuable for identifying inferred knowledge such as estimated dataset/instance difficulty (Swayamdipta et al., 2020; Rodriguez et al., 2021; Ethayarajh et al., 2022), especially when used in conjunction with the aforementioned diagnostic facets.
Data provenance is also important when drawing conclusions from benchmark results (**D3**). Tedeschi et al. (2023) question the notion of superhuman performance and claims of tasks being solved (i.e., overclaiming model capabilities), and criticize how benchmark comparisons "do not incentivize a deeper understanding of the systems' performance". The authors discuss how external factors can cause variation in human-level performance (incl. annotation quality) and lead to unfair comparisons. Similarly, underclaiming LLMs' capabilities also obfuscates our knowledge of their functional capacity (Bowman, 2022). Additionally, in a recent study domain experts find the accuracy of LLMs to be mixed (Peskoff and Stewart, 2023). It is therefore important to be explicit about the limitations of benchmarks (Raji et al., 2021) and faithful in communicating model capabilities. At the same time, it is an ongoing discussion whether reviewers should require (i.e, disincentivize the absence of) closed-source baseline models such as ChatGPT and GPT-4, which do not meet our trust desiderata (Rogers et al., 2023). Closed-source models that sit behind APIs typically evolve over time and have unknown data provenance, thus lacking both knowledge of origin (**D4**), and the consistency of its functional capacity. Consequently, they make _untrustworthy baselines_ and should not be used as an isolated measure of progress.
## 4 Trustworthiness and User Trust
So far we have discussed different avenues for improving our knowledge about LLM's functional capacity and origin, paving the way for establishing trustworthiness. From a user perspective it is essential to not only understand knowledge facets but also how they empirically impact _user trust_ in a collaborative environment. This is especially important in high-risk scenarios such as in the medical and legal domain. One could argue, if LLMs such as ChatGPT are already widely adopted, do we already trust LLMs (too much)? To better understand user trust we need interdisciplinary research and user experience studies on human-AI collaboration. Specifically, we need to know what users do with the model output across multiple interactions (e.g., verify, fact check, revise, accept). For example, Gonzalez et al. (2021) investigate the connection between explanations (**D2**) and user trust in the context of question answering systems. In their study users are presented with explanations in different modalities and either accept (trust) or reject (don't trust) candidate answers. Similarly, Smith-Renner et al. (2020) discuss how generated explanations can promote over-reliance or undermine user trust. A closely related question is how the faithfulness of explanations affect user trust (Atanasova et al., 2023; Chiesurin et al., 2023). For a comprehensive overview on user trust we refer to the recent survey by Bach et al. (2022).
While such controlled studies using human feedback are cost and time intensive, the minimum viable alternative for establishing trust may simply be the publication of a model's input-output history. In contrast to standalone metrics and cherry-picked qualitative examples, access to prior predictions enables post-hoc _knowledge of model behaviour_ (**D2**), even without direct access to the model. This democratizes the ability to verify functional capacity and helps end users seeking to understand how well a model works for their task.
In summary, evaluating user trust is an integral part of trustworthiness and goes hand in hand with careful qualitative analyses and faceted quantitative evaluation. Towards this goal, we believe LLM development needs to be more human-centric.
## 5 Conclusions
In this position paper, we emphasize that the democratization of LLMs calls for the need to rethink tasks and model evaluation, placing trustworthiness at its center. We adopt a working definition of trustworthiness and establish desiderata required to improve our knowledge of LLMs (§2), followed by suggestions on how trust can be gained by outlining directions guided by what we call _knowledge facets_ (§3). Finally, we draw a connection between trustworthiness as knowledge facets and user trust as means to evaluate their impact on human-AI collaboration (§4).
### Limitations
To limit the scope of this work, we did not discuss the topics of social and demographic biases [14], discrimination of minority groups [13] and hate speech as factors influencing our trust in LLMs. Within our proposed desiderata, this facet would fall under 'Knowledge of Data Origin' (§2), in terms of understanding where model-internal knowledge and the associated biases originate from (**D4**).
Our proposed multi-faceted evaluation protocols rely strongly on human input--either via qualitative judgements and/or linguistically annotated diagnostic benchmarks (§3). We acknowledge that such analyses require more time and resources compared to evaluation using contemporary, automatic metrics, and may slow down the overall research cycle. While we believe that slower, yet more deliberate analyses are almost exclusively beneficial to establishing trust, our minimum effort alternative of publishing all model predictions can also be used to build user trust (§4). This simple step closely mirrors the scientific method, where hypotheses must be falsifiable by anyone [12]. Identifying even a single incorrect prediction for a similar task in a model's prediction history can already tell us plenty about the model's trustworthiness.
## Acknowledgements
We thank the anonymous reviewers for their insightful comments. This research is supported by the Independent Research Fund Denmark (DFF) Sapere Aude grant 9063-00077B and ERC Consolidator Grant DIALECT 101043235.
|
2303.13318 | Implicit Active Flux methods for linear advection | In this work we develop implicit Active Flux schemes for the scalar advection
equation. At every cell interface we approximate the solution by a polynomial
in time. This allows to evolve the point values using characteristics and to
update the cell averages using fluxes obtained by integrating this polynomial.
The resulting schemes have order of convergence up to five, but show only
moderate oscillations with high frequencies for discontinuous solutions. In
numerical experiments we compare the different methods and show an application
to network flows. | Wasilij Barsukow, Raul Borsche | 2023-03-23T14:50:44Z | http://arxiv.org/abs/2303.13318v2 | ###### Abstract
In this work we develop implicit Active Flux schemes for the scalar advection equation. At every cell interface we approximate the solution by a polynomial in time. This allows to evolve the point values using characteristics and to update the cell averages using fluxes obtained by integrating this polynomial. The resulting schemes have order of convergence up to five, but show almost no oscillations with high frequencies for discontinuous solutions. In numerical experiments we compare the different methods and show an application to network flows.
Keywords: linear advection, implicit methods, Active Flux
Implicit Active Flux methods for linear advection
Wasilij Barsukow1, Raul Borsche2
Footnote 1: Bordeaux Institute of Mathematics, Bordeaux University and CNRS/UMR5251, Talence, 33405 France, [email protected]
Footnote 2: University of Kaiserslautern-Landau, Gottlieb-Daimler-Strasse 48, 67663 Kaiserslautern, Germany, [email protected]
## 1 Introduction
Linear advection is the simplest hyperbolic PDE and is widely used as a starting point for the development of numerical methods for conservation laws. It is the perfect testbed for studying properties of numerical methods, e.g. the analysis of linear (von Neumann) stability or the order of convergence. Apart from being a prototype for nonlinear problems or systems of equations, there are some applications relying directly on the advection equation [20, 15, 6] or the wave equation in 1-d, which can be diagonalized with characteristic variables.
A special class of these problems considers advection phenomena on networks. Scalar equations are used for modeling supply chains [15] or district heating systems [6]. The wave equation on networks is considered e.g. in [28, 11, 5, 3]. All these applications demand highly accurate and efficient numerical methods and there is an overwhelming amount of possible schemes [20, 27]. However, most of these schemes are explicit and have to obey some kind of CFL condition bounding the size of the time step relative to the advection speed. In order to avoid such a restriction the scheme has to be implicit [8, 9, 30, 10, 16].
In this paper we want to extend these ideas from standard finite difference/finite volume methods to the recently developed Active Flux method. Active Flux was elaborated in [13, 14], but it began its existence as a method for linear advection, when it was proposed in [29] as "Scheme V". It has been extended to numerical methods for other conservation laws, e.g. the Euler equations of ideal hydrodynamics, see [17, 4]. Its main difference compared to classical finite differences or finite volume approaches is that it evolves point values and cell averages simultaneously. Active Flux shall be briefly reviewed next.
Consider a one-dimensional equidistant grid with cells \([x_{i-\frac{1}{2}},x_{i+\frac{1}{2}}]\), \(i\in\mathbb{Z}\) and spacing \(\Delta x\). The degrees of freedom of Active Flux are cell averages \(\{\bar{q}_{i}\}_{i\in\mathbb{Z}}\) and point values
\(\{q_{i+\frac{1}{2}}\}_{i\in\mathbb{Z}}\) located at cell interfaces such that
\[\bar{q}_{i}(t)\simeq\frac{1}{\Delta x}\int_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2} }}q(t,x)\,\mathrm{d}x,\qquad\qquad q_{i+\frac{1}{2}}(t)\simeq q(t,x_{i+\frac{1} {2}}).\]
The point values are evolved independently of the averages, contrary to e.g. the parabolic spline method [31], where the point values are computed at each time step from the given averages. The evolution of the cell averages is immediately obvious: Integrating a conservation law \(\partial_{t}q+\partial_{x}f(q)=0\) over the cell yields
\[\frac{\mathrm{d}}{\mathrm{d}t}\bar{q}_{i}(t)+\frac{f(q_{i+\frac{1}{2}}(t))-f(q _{i-\frac{1}{2}}(t))}{\Delta x}=0 \tag{1}\]
The order of accuracy of the update of the average is entirely given by the order of accuracy of the point value update, as equation (1) is exact.
There exist several suggestions for the point value update, all of them being explicit in time:
1. In [1, 2] it has been proposed to replace the space derivative by a (suitably upwinded) finite difference that uses the point values and averages, and thus to write down a semidiscretization of the conservation law for the point value. Together with (1), the two equations can then be updated in time using a standard, explicit (e.g. Runge-Kutta) method. Depending on the choice of the finite difference, these methods are stable for CFL numbers well below 1.
2. Initially, it was proposed in [29] to define a parabolic reconstruction in every cell, whose average matches \(\bar{q}_{i}\) and which passes through \(q_{i\pm\frac{1}{2}}\) at the endpoints of the cell, and to use it as initial data for a characteristics-based update of the point values. This means that the point value \(q_{i+\frac{1}{2}}(t^{n+1})\) at some time \(t^{n+1}\) is found at the foot point of the characteristic which passes through \(x_{i+\frac{1}{2}}\) at time \(t^{n+1}\). This ensures upwinding and stability. The natural CFL condition prevents the foot point of the characteristic from being further away than in a neighbouring cell. Von Neumann stability results agree with this "physical" stability bound, see [29, 7] for further details.
In deriving implicit methods, we follow both strategies in order to be able to compare the results:
1. As in [1, 2] we propose to integrate a semidiscretization of the conservation law for the point value, as well as (1) implicitly in time using standard methods.
2. We propose new implicit one-step Active Flux methods. The derivation of these methods is, on the one hand, based on the usage of characteristics as in (ii) above, on the other hand on the idea of reconstruction in time, that was used for finite difference methods in [12]. We analyze all methods in a certain class and identify those which are stable.
We find that the methods resulting from the latter approach are largely superior, which might be due to the fact that the time and space discretizations are not separated.
Note that the resulting methods have compact stencils due to the additional degrees of freedom of Active Flux. This is particularly advantageous whenever boundary conditions
are imposed. Active Flux offers the other great advantage of having a point value located just at the cell interface, where a Dirichlet boundary condition can be imposed immediately and unambiguously. We believe that this paper is also the starting point for investigations towards implicit Active Flux methods for more complex problems.
The paper is organized as follows. Section 2.1 presents implicit Active Flux methods based on strategy (i), and Section 2.2 presents those obtained through characteristics and reconstruction in time, following (ii). In Section 3 the implementation of Dirichlet boundaries is discussed, which demonstrates the advantages associated to using an Active Flux method. Numerical examples are shown in Section 4.
## 2 Implicit Active Flux schemes
We aim at solving the scalar advection equation with fixed speed \(u\in\mathbb{R}\)
\[\partial_{t}q+u\partial_{x}q =0, q\colon\mathbb{R}_{0}^{+}\times I\to\mathbb{R}, \tag{2}\]
on a compact domain \(I\subset\mathbb{R}\), endowed with either periodic or an inflow Dirichlet boundary condition and an initial condition \(q_{0}\colon I\to\mathbb{R}\), \(q(0,x)=q_{0}(x)\).
### Semi-discrete methods integrated implicitly
Semi-discrete Active Flux methods have been first introduced in [1]. The evolution equation of the cell average is trivially given by (1). In order to obtain an evolution equation for the point value, a finite difference approximation to the spatial derivative is used. A third-order approximation is (see [2])
\[\frac{\mathrm{d}}{\mathrm{d}t}q_{i+\frac{1}{2}}(t)=-u\frac{2q_{i-\frac{1}{2}}( t)-6\bar{q}_{i}(t)+4q_{i+\frac{1}{2}}(t)}{\Delta x}. \tag{3}\]
Note that there is no notion of a conservative update for a point value, and the only condition that needs to be imposed is stability. Equations (1) and (3) are a coupled system of ODEs that can be solved with standard methods. Explicit Runge-Kutta schemes were used in [2]; here the system shall be integrated in time implicitly.
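As an illustration, the following minimal sketch (assuming NumPy/SciPy; the function and variable names are ours, not from the paper) advances the coupled system (1), (3) with backward Euler on a periodic grid; the higher-order implicit Runge-Kutta methods listed next follow the same pattern with their respective Butcher tableaux.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def semidiscrete_operator(N, u, dx):
    """Linear operator L with d/dt [p; a] = L [p; a], where p[j] ~ q_{j+1/2}
    (point value at the right interface of cell j) and a[j] ~ qbar_j,
    for the third-order semi-discretization (1), (3) on a periodic grid."""
    rows, cols, vals = [], [], []
    def add(r, col, v):
        rows.append(r); cols.append(col); vals.append(v)
    for j in range(N):
        jm = (j - 1) % N
        # (3):  d/dt q_{j+1/2} = -u (2 q_{j-1/2} - 6 qbar_j + 4 q_{j+1/2}) / dx
        add(j, jm, -2*u/dx); add(j, N + j, 6*u/dx); add(j, j, -4*u/dx)
        # (1):  d/dt qbar_j   = -u (q_{j+1/2} - q_{j-1/2}) / dx
        add(N + j, j, -u/dx); add(N + j, jm, u/dx)
    return sp.csr_matrix((vals, (rows, cols)), shape=(2*N, 2*N))

# backward Euler: (I - dt L) x^{n+1} = x^n  (first order in time, for illustration)
N, u, CFL = 100, 1.0, 3.0
dx = 2.0 / N
dt = CFL * dx / u
L = semidiscrete_operator(N, u, dx)
M = sp.identity(2*N, format="csc") - dt * L

xj = (np.arange(N) + 1) * dx                    # interface positions x_{j+1/2} on [0, 2]
p0 = np.sin(np.pi * xj)                         # point values of sin(pi x)
a0 = (np.cos(np.pi*(xj - dx)) - np.cos(np.pi*xj)) / (np.pi*dx)  # exact cell averages
x = np.concatenate([p0, a0])
for _ in range(100):
    x = spsolve(M, x)                           # one implicit time step
```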
We have studied the following methods:
* backward Euler
* Crank-Nicolson
* Radau IA (3rd order) \[\begin{array}{c|cc}0&1/4&-1/4\\ 2/3&1/4&5/12\\ \hline&1/4&3/4\\ \end{array}\]
* Radau IIA (3rd order) \[\begin{array}{c|cc}1/3&5/12&-1/12\\ 1&3/4&1/4\\ \hline&3/4&1/4\\ \end{array}\]
* DIRK (Crouzeix, 3rd order) \[\begin{array}{c|cc}1/2+\sqrt{3}/6&1/2+\sqrt{3}/6&0\\ 1/2-\sqrt{3}/6&-\sqrt{3}/3&1/2+\sqrt{3}/6\\ \hline&1/2&1/2\\ \end{array}\]
for (1) and (3), and find all of them to be stable experimentally. Obviously only the latter methods yield the necessary (3rd) order of accuracy that corresponds to that of the spatial discretization. The Radau methods are of true multi-step nature, which entails a significant increase in the linear system that needs to be solved at every time step. Numerical results are shown in Section 4.
### Single-step methods
#### 2.2.1 Derivation
Classical explicit schemes are often based on a polynomial interpolation in space in each cell. These polynomials are used to compute the fluxes at the interfaces, e.g. by solving a Riemann problem, which for linear advection amounts to tracing back the characteristics. This procedure, however, does not pair well with large time steps, as the characteristics will move a distance larger than the size of a single cell. In principle, it is possible to keep track of the correct cell, which leads to large-time-step methods [19].
Here, however, to overcome this restriction we choose an interpolation in time at cell interfaces. The values at different spatial locations are transported by the characteristics to the interface. Note that this can be done at the current time level \(t^{n}\) and also on the next time level \(t^{n+1}\). These values are then interpolated polynomially, and this interpolation in time mixes values from both time levels into one polynomial at the interface.
This procedure is illustrated in Figure 1. In the following we give a detailed description of the construction of the methods.
We aim at obtaining an implicit numerical method that has a compact stencil. Thus, for the interpolation at an interface we consider a combination of the values adjacent to
the interface at both times \(t^{n}\) and \(t^{n+1}\). For the interface at \(x_{i+\frac{1}{2}}\) these are
\[\bar{q}_{i}^{n},q_{i+\frac{1}{2}}^{n},\bar{q}_{i+1}^{n},\qquad\text{ and}\qquad\bar{q}_{i}^{n+1},q_{i+\frac{1}{2}}^{n+1},\bar{q}_{i+1}^{n+1}. \tag{4}\]
We therefore can choose among polynomials up to degree \(5\) and thus construct schemes of at most \(6^{\text{th}}\) order of accuracy. Depending on the degree of the polynomial, we can choose among the following equations to find a reconstruction polynomial \(q_{i+\frac{1}{2}}^{\text{recon}}\) in time:
\[q_{i+\frac{1}{2}}^{\text{recon}}(t^{n}) =q_{i+\frac{1}{2}}^{n} \tag{5}\] \[q_{i+\frac{1}{2}}^{\text{recon}}(t^{n+1}) =q_{i+\frac{1}{2}}^{n+1} \tag{6}\] \[\frac{u}{\Delta x}\int_{t^{n+1}}^{t^{n+1}+\frac{\Delta x}{u}}q_{i+\frac{1}{2}}^{\text{recon}}(t)\,\mathrm{d}t =\bar{q}_{i}^{n+1}\] \[\frac{u}{\Delta x}\int_{t^{n+1}-\frac{\Delta x}{u}}^{t^{n+1}}q_{i+\frac{1}{2}}^{\text{recon}}(t)\,\mathrm{d}t =\bar{q}_{i+1}^{n+1} \tag{7}\] \[\frac{u}{\Delta x}\int_{t^{n}}^{t^{n}+\frac{\Delta x}{u}}q_{i+\frac{1}{2}}^{\text{recon}}(t)\,\mathrm{d}t =\bar{q}_{i}^{n}\] \[\frac{u}{\Delta x}\int_{t^{n}-\frac{\Delta x}{u}}^{t^{n}}q_{i+\frac{1}{2}}^{\text{recon}}(t)\,\mathrm{d}t =\bar{q}_{i+1}^{n}. \tag{8}\]
Note that the interpolation can exceed the time interval \([t^{n},t^{n+1}]\), as indicated in Figure 1.
Once the polynomial is determined, the update of the cell averages follows the classical update formula
\[\bar{q}_{i}^{n+1} =\bar{q}_{i}^{n}-\Delta t\frac{\hat{f}_{i+\frac{1}{2}}^{n+\frac{1 }{2}}-\hat{f}_{i-\frac{1}{2}}^{n+\frac{1}{2}}}{\Delta x} \tag{9}\]
with
\[\hat{f}_{i+\frac{1}{2}}^{n+\frac{1}{2}} :=\frac{1}{\Delta t}\int_{t^{n}}^{t^{n+1}}f\left(q_{i+\frac{1}{2}} ^{\text{recon}}(t)\right)\,\mathrm{d}t. \tag{10}\]
The point values can be updated by tracing back the characteristic
\[q_{i+\frac{3}{2}}^{n+1} =q_{i+\frac{1}{2}}^{\text{recon}}\left(t^{n+1}-\frac{\Delta x}{u }\right). \tag{11}\]
Note that the reconstruction generally depends on values at the time level \(t^{n+1}\). Thus, equations (11) with (10) and (9) are implicit formulas, with the unknowns appearing linearly.
The stencil of the method involves at most the following values
\[\bar{q}_{i}^{n},\,q_{i+\frac{1}{2}}^{n},\,\bar{q}_{i+1}^{n},\qquad\bar{q}_{i}^{n+1},\,q_{i+\frac{1}{2}}^{n+1},\,\bar{q}_{i+1}^{n+1}\qquad\text{to update }q_{i+\frac{3}{2}}^{n+1},\] \[\bar{q}_{i-1}^{n},\,q_{i-\frac{1}{2}}^{n},\,\bar{q}_{i}^{n},\,q_{i+\frac{1}{2}}^{n},\,\bar{q}_{i+1}^{n},\qquad\bar{q}_{i-1}^{n+1},\,q_{i-\frac{1}{2}}^{n+1},\,\bar{q}_{i}^{n+1},\,q_{i+\frac{1}{2}}^{n+1},\,\bar{q}_{i+1}^{n+1}\qquad\text{to update }\bar{q}_{i}^{n+1}.\]
Thus the update involves only values of neighboring cells.
In order to easily refer to the different methods, we will use a **pictorial representation**. The 6 symbols (boxes/circles) in this pictogram represent the 6 degrees of freedom possibly involved in the reconstruction in time, as in (4). The point values are represented by circles and the averages by boxes, with the upper row denoting time level \(t^{n+1}\) (implicit) and the lower the time level \(t^{n}\) (explicit). Finally, the black symbols are those actually in use for the reconstruction. For simplicity, in Section A, the stable methods are also given a unique **identifier** consisting of their order of accuracy and a capital letter.
Below we illustrate the construction of the schemes on a particular example.
#### 2.2.2 Example
In this example we aim to construct a scheme of order three. Thus the reconstruction polynomial \(q^{\text{recon}}_{i+\frac{1}{2}}\) has to be quadratic and we can choose 3 equations out of (5)-(8). For example, one might select (5), (6) and (7), i.e. those involving \(q^{n}_{i+\frac{1}{2}},q^{n+1}_{i+\frac{1}{2}},\bar{q}^{n+1}_{i+1}\). Then, after applying the interpolation described in Section 2.2.1 and some calculations one finds
\[q^{\text{recon}}_{i+\frac{1}{2}}(t) =q^{n}_{i+\frac{1}{2}}+(t-t^{n})\frac{2u\left(q^{n}_{i+\frac{1}{2 }}-q^{n+1}_{i+\frac{1}{2}}+3c\left(\bar{q}^{n+1}_{i+1}c-q^{n}_{i+\frac{1}{2}}+ (1-c)q^{n+1}_{i+\frac{1}{2}}\right)\right)}{c(3c-2)\Delta x}\] \[+(t-t^{n})^{2}\frac{3u^{2}\left(-2\bar{q}^{n+1}_{i+1}c+q^{n}_{i+ \frac{1}{2}}+(-1+2c)q^{n+1}_{i+\frac{1}{2}}\right)}{c(3c-2)\Delta x^{2}}\]
where \(c=\frac{u\Delta t}{\Delta x}\) is the CFL number. From here, (11) gives
\[q^{n+1}_{i+\frac{3}{2}}=\frac{6\bar{q}^{n+1}_{i+1}(c-1)c+q^{n}_{i+\frac{1}{2}}-\left(1-4c+3c^{2}\right)q^{n+1}_{i+\frac{1}{2}}}{c(3c-2)}\]
Moreover, (10) yields the numerical flux
\[\hat{f}^{n+\frac{1}{2}}_{i+\frac{1}{2}}=u\frac{\bar{q}^{n+1}_{i+1}c^{2}+(c-1)\left(q^{n}_{i+\frac{1}{2}}+q^{n+1}_{i+\frac{1}{2}}-cq^{n+1}_{i+\frac{1}{2}}\right)}{3c-2}\]
and inserting it into (9) finally gives (having brought all the terms on one side of the equation)
\[0 =\bar{q}^{n}_{i}(2-3c)+\bar{q}^{n+1}_{i+1}c^{3}-\bar{q}^{n+1}_{i} (2-3c+c^{3})\] \[+(c-1)c\Big{(}-q^{n}_{i-\frac{1}{2}}+(c-1)q^{n+1}_{i-\frac{1}{2} }+q^{n}_{i+\frac{1}{2}}+q^{n+1}_{i+\frac{1}{2}}-cq^{n+1}_{i+\frac{1}{2}}\Big{)}\]
Following the notation introduced above, this method is denoted by its pictogram; the corresponding identifier from Section A is 3C.
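To make the construction concrete, here is a minimal sketch (assuming NumPy/SciPy; the indexing convention and names are ours) of one time step of this scheme on a periodic grid: the point-value update and the cell-average update derived above are collected into one sparse linear system for all values at \(t^{n+1}\).

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def step_3C(p_old, a_old, c):
    """One implicit step of the third-order single-step scheme 3C on a periodic
    grid.  p_old[j] ~ q_{j+1/2}^n, a_old[j] ~ qbar_j^n, with cell j = [x_{j-1/2},
    x_{j+1/2}].  Unknown vector: x = [p_new (N values), a_new (N values)]."""
    N = len(p_old)
    rows, cols, vals = [], [], []
    rhs = np.zeros(2*N)
    def add(r, col, v):
        rows.append(r); cols.append(col); vals.append(v)
    for j in range(N):
        jp, jm = (j + 1) % N, (j - 1) % N
        # point-value update (11), written as a linear equation (row j):
        # c(3c-2) q_{j+3/2}^{n+1} + (1-4c+3c^2) q_{j+1/2}^{n+1} - 6c(c-1) qbar_{j+1}^{n+1} = q_{j+1/2}^n
        add(j, jp, c*(3*c - 2))
        add(j, j, 1 - 4*c + 3*c**2)
        add(j, N + jp, -6*c*(c - 1))
        rhs[j] = p_old[j]
        # cell-average update (9) with the flux above (row N + j):
        add(N + j, N + j, 2 - 3*c + c**3)
        add(N + j, N + jp, -c**3)
        add(N + j, jm, -c*(c - 1)**2)
        add(N + j, j, c*(c - 1)**2)
        rhs[N + j] = (2 - 3*c)*a_old[j] - c*(c - 1)*p_old[jm] + c*(c - 1)*p_old[j]
    A = sp.csc_matrix((vals, (rows, cols)), shape=(2*N, 2*N))
    x = spsolve(A, rhs)
    return x[:N], x[N:]
```

Here the right-hand side collects the explicit (time level \(t^{n}\)) contributions; splitting the same assembly into the matrices acting on the new and on the old degrees of freedom is what enters the fixed-grid stability analysis of the next section.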
#### 2.2.3 Stability of single-step methods
Von Neumann, or \(\ell^{2}\) stability analysis aims at quantifying whether Fourier modes \(\exp(\mathbb{I}kx)\) of spatial frequency \(k\in\mathbb{R}\) are amplified or damped by the numerical method. To this end, the ansatz for the solution at continuous level is taken as
\[q(t,x)=\hat{q}(t)\exp(\mathbb{I}kx) \tag{12}\]
and the ansatz for the numerical solution (on equidistant grids) as
\[Q_{i}^{n}:=\left(\begin{array}{c}q_{i+\frac{1}{2}}^{n}\\ \bar{q}_{i}^{n}\end{array}\right)=\underbrace{\left(\begin{array}{c}\hat{q}^{n}\\ \hat{\bar{q}}^{n}\end{array}\right)}_{=:\hat{Q}^{n}}\exp(\mathbb{I}ki\Delta x). \tag{13}\]
Define \(\beta:=k\Delta x\). Observe that despite dealing with merely a scalar equation, Active Flux has two kinds of degrees of freedom, which are evolved independently. The fact of having two distinct functions is mirrored by having two equations as well. On the one hand, the discrete Fourier transform (13) of the numerical method reduces (9) and (11) to the system
\[\hat{Q}^{n+1}=A^{-1}B\hat{Q}^{n}, \tag{14}\]
where \(A\in\mathscr{M}^{2\times 2}(\mathbb{C})\) is associated with the implicit part, and \(B\in\mathscr{M}^{2\times 2}(\mathbb{C})\) with the explicit one. The non-singularity of \(A\) is a prerequisite of a solvable method, and can be assumed. Note that \(A\) and \(B\) depend on \(k\). The ansatz \(\hat{Q}^{n}=\hat{Q}^{0}z^{n}\) for some yet to be determined \(z\in\mathbb{C}\) yields
\[\hat{Q}^{0}z=A^{-1}B\hat{Q}^{0}\]
i.e. \(z\) must be an eigenvalue of \(A^{-1}B\) (which depends on \(k\)).
On the other hand, considering the ansatz (12) for the advection equation \(\partial_{t}q+u\partial_{x}q=0\) implies
\[q(t+\Delta t,x)=\hat{q}(t+\Delta t)\exp(\mathbb{I}kx)=q(t,x-u \Delta t)=\hat{q}(t)\exp(\mathbb{I}kx)\exp(-\mathbb{I}ku\Delta t)\]
Thus, we have
\[\hat{q}(t+\Delta t)=\hat{q}(t)\exp(-\mathbb{I}ku\Delta t) \tag{15}\]
and consequently \(|\hat{q}(t+\Delta t)|=|\hat{q}(t)|\). A natural stability requirement for the numerical methods therefore is \(|z|\leq 1\) for both eigenvalues.
We have studied von Neumann stability of all the 20 methods of third, 15 methods of fourth, 6 methods of fifth and the unique method of sixth order that result from the procedure described above (a total of 42 methods). In many cases, the eigenvalues of the complex-valued \(2\times 2\) matrix \(A^{-1}B\) could be determined analytically3. For example one finds for (3D) the values \(z=0\) and
Footnote 3: One observes that the computation of the inverse is, in fact, unnecessary, since \(0=\det(A^{-1}B-z\mathbb{I})\) is equivalent to \(0=\det A\det(A^{-1}B-z\mathbb{I})=\det(B-Az)\).
\[z=-\frac{2+\cos\beta-\mathbb{I}c\sin\beta}{2-c^{2}+\cos\beta+c^{ 2}\cos\beta+2\mathbb{I}c\sin\beta}.\]
In the remaining cases, we applied the algorithm of [21] (originally due to Schur [24, 25]), which allows to determine whether the zeros of a polynomial are contained in the unit disc without actually computing them. We applied the algorithm to a sampling of values of \(\beta\) and for \(c<10\).
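For the analytic expression above one can go one step further: a short computation gives \(|\text{denominator}|^{2}-|\text{numerator}|^{2}=c^{2}(1-\cos\beta)^{2}(c^{2}-1)\), so this eigenvalue satisfies \(|z|\leq 1\) for all \(\beta\) precisely when \(c\geq 1\). The following sketch (assuming NumPy; names are ours) samples the formula to confirm this numerically.

```python
import numpy as np

def z_3D(beta, c):
    """Non-vanishing von Neumann eigenvalue of method (3D), as given above."""
    num = 2 + np.cos(beta) - 1j*c*np.sin(beta)
    den = 2 - c**2 + np.cos(beta) + c**2*np.cos(beta) + 2j*c*np.sin(beta)
    return -num/den

beta = np.linspace(0.0, np.pi, 401)
for c in np.linspace(1.0, 10.0, 181):
    assert np.all(np.abs(z_3D(beta, c)) <= 1.0 + 1e-12), f"instability at c={c}"
```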
It is also possible to analyze the stability of the methods by considering a fixed grid size \(N\) with periodic boundaries and analyzing the eigenvalues of the \(2N\times 2N\) update
matrices. We also performed this type of stability analysis for CFL numbers ranging from \(1\) to \(10\) on a grid of \(100\) cells. The two methods of stability analysis gave the same results.
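The fixed-grid variant of this check is straightforward once the two matrices of the fully discrete update are available (e.g., by splitting the assembly of Section 2.2.2 into the coefficients of the new and the old degrees of freedom); a small helper (assuming NumPy; the name is ours) is:

```python
import numpy as np

def spectral_radius(A_impl, B_expl):
    """Largest |eigenvalue| of the update matrix A^{-1} B for a fixed periodic
    grid, where A_impl multiplies the degrees of freedom at t^{n+1} and
    B_expl those at t^n (dense 2N x 2N arrays).  Values <= 1 (up to
    round-off) indicate stability."""
    G = np.linalg.solve(A_impl, B_expl)
    return np.max(np.abs(np.linalg.eigvals(G)))
```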
Among the methods studied, i.e. among all the methods of the form (9)-(11), whose reconstruction in time involves at most the degrees of freedom mentioned in (4),
* \(1\) is marginally stable,
* \(14\) have some finite CFL number \(c_{\min}\geq 0\) above which they are stable.
Among the \(14\) stable methods, \(12\) are stable for \(c>2\) or better, and \(8\) are stable for \(c>1\) or better. One among them is even unconditionally stable. These results are summarized in Tables 1 and 2.
#### 2.2.4 Analysis of numerical diffusion and dispersion
So-called diffusion and dispersion errors of a linear numerical method convey the information how Fourier modes of different spatial frequencies \(k\) are amplified/damped and how
\begin{table}
\begin{tabular}{c|c|c|c||c} & order \(3\) & order \(4\) & order \(5\) & total \\ \hline stable \(\forall c\) & & \(1\) & & \(1\) \\ \hline stable for \(c>1\) & \(4\) & \(3\) & \(1\) & \(8\) \\ \hline stable for \(c>2\) & \(6\) & \(3\) & \(3\) & \(12\) \\ \hline \hline stable above some \(c\) & \(8\) & \(3\) & \(3\) & \(14\) \\ \end{tabular}
\end{table}
Table 1: Numbers of stable methods by order of accuracy and type of CFL conditions. The divisions are cumulative (and thus not exclusive): for example a method with a minimum CFL number of \(1\) is also counted in “stable for \(c>2\)” and “stable above some \(c\)”.
Table 2: Overview of the individual methods (implicit/explicit degrees of freedom used in the reconstruction in time) and their stability.
far their speed of propagation differs from the analytical value. By comparing (15) with (14) one observes that the eigenvalues \(z\) of \(A^{-1}B\) (which are functions of \(\beta:=k\Delta x\) and \(c\)) need to be compared to \(\exp(-\mathbb{I}ku\Delta t)=\exp(-\mathbb{I}\beta c)\). It thus makes sense to quantify the numerical diffusion by computing \(|z|\), and the numerical dispersion by computing \(\frac{\arg z}{-\beta c}\), the analytic value being in both cases 1 for all values of \(\beta\). This analysis quantifies the behaviour of a numerical method beyond merely requiring stability \(|z|\leq 1\).
Typical examples of diffusion and dispersion curves are shown in Figures 2-3. Note that the eigenvalues depend on both \(\beta\) and \(c\); the Figure shows \(|z|\) and \(\frac{\arg z}{-\beta c}\) as functions of \(\beta\in[0,\pi]\) for \(c=3\). As there are two eigenvalues \(z\), in principle, two curves appear for each method. One eigenvalue converges to the analytic one, whereas the other is spurious; it is sometimes zero, in which case it is not shown.
One observes that generally the error both in the diffusion and the dispersion increases with \(\beta\). The diffusion is monotone, which is good, because then waves traveling at wrong speeds are damped. The importance of such behaviour has been emphasized in [23] for explicit Active Flux methods and contrasted to that of other methods. The practical importance of damping those waves becomes obvious for the marginally stable method, which has \(|z|=1\) for the non-vanishing eigenvalue and therefore does not damp waves which have wrong speeds. The corresponding numerical results (see Figure 11 in Section 4.2.3) show significantly more oscillations than the other stable methods.
In many cases a fine interplay of diffusion and dispersion is desirable. Generally speaking, methods with less damping resolve sharp features better. Sharp features correspond
Figure 2: Examples of diffusion curves \(|z|\) for \(c=3\), shown as functions of \(\beta\) for different methods, using their identifiers from Section A. In principle, there are two eigenvalues per method, but often one of the eigenvalues is identically 0; it then is not shown here. Stability requires \(|z|\leq 1\) for all \(\beta\); the method (4D) is marginally stable with the non-zero eigenvalue fulfilling \(|z|=1\)\(\forall\beta\), i.e. equal to the analytical value. This, however, is not good, as this method is not damping waves moving at the wrong speed, i.e. those having large dispersion errors (see Figure 3).
to high values of \(\beta\) and thus it is of interest to quantify how quickly the diffusion increases towards higher \(\beta\). Recall that, given \(c\) and \(\beta\), the time evolution of a Fourier mode is \(|\hat{Q}^{n}|=|\hat{Q}^{0}||z|^{n}=|\hat{Q}^{0}||z|^{\frac{T}{\Delta t}}\). For example, consider \(u=1\), \(c=3\), \(\Delta x=1/50\). Then \(\Delta t=\frac{3}{50}\), and at \(T=8\) one has \(|\hat{Q}^{n}|=|\hat{Q}^{0}||z|^{n}=|\hat{Q}^{0}||z|^{\frac{400}{3}}\). This is the setup of the numerical test in Section 4.2. The mode has decayed by half if \(|z|^{\frac{400}{3}}=\frac{1}{2}\), i.e. \(|z|\simeq 0.995\).
It thus makes sense to compute, for a given method, the maximum "frequency" \(\beta_{1/2}\) of the Fourier modes that has not yet decayed by half. The wavelength of these modes is \(\frac{2\pi\Delta x}{\beta_{1/2}}\). Measuring this frequency corresponds to measuring the half-width4 of the diffusion curves (such as the ones shown in Fig. 2) at \(|z|=0.995\). These half-widths are shown in Figure 4 for all the stable methods for \(c=3\). From the plot one can see, for example, that for method (3C) features on a length scale \(\sim\)20 \(\Delta x\), corresponding to \(\beta\simeq 0.31\), will have been diffused away by half by the time \(T=8\). For the method (5C) the corresponding \(\beta\) is 0.75 and the length scale \(\sim 8\Delta x\). This is in agreement with the numerical results of Section 4.
Footnote 4: The diffusion curves are bell-shaped and symmetric around \(\beta=0\). The half width is thus the distance along the abscissa between \(\beta=0\) and the location where the curve has a value \(|z|=0.995\).
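The half-width \(\beta_{1/2}\) can be extracted from any diffusion curve by a one-dimensional root search; a minimal sketch (assuming NumPy/SciPy and using, as an example, the eigenvalue formula of method (3D) quoted in Section 2.2.3; names are ours):

```python
import numpy as np
from scipy.optimize import brentq

def absz_3D(beta, c):
    """|z| of the non-vanishing eigenvalue of method (3D)."""
    num = 2 + np.cos(beta) - 1j*c*np.sin(beta)
    den = 2 - c**2 + np.cos(beta) + c**2*np.cos(beta) + 2j*c*np.sin(beta)
    return np.abs(num/den)

def half_width(absz, c, level=0.995):
    """Largest beta in (0, pi] with |z| >= level, assuming |z| decreases
    monotonically from |z(0)| = 1 (as observed for the stable methods)."""
    if absz(np.pi, c) >= level:
        return np.pi
    return brentq(lambda b: absz(b, c) - level, 1e-8, np.pi)

beta_half = half_width(absz_3D, c=3.0)
print(beta_half, 2*np.pi/beta_half)   # beta_{1/2} and the wavelength in units of dx
```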
## 3 Dirichlet boundary conditions
On a finite interval \(I=[x_{\rm L},x_{\rm R}]\), the advection equation (2) with positive velocity \(u>0\) has to be equipped with an initial condition \(q_{0}\) and boundary data \(b\) on the left end of the
Figure 3: Examples of dispersion curves \(\frac{\arg z}{-\beta c}\) for \(c=3\), shown as functions of \(\beta\) for different methods, using their identifiers from Section A. Only the physical eigenvalue is shown.
domain
\[\partial_{t}q+u\partial_{x}q =0\] \[q(t,x_{\mathrm{L}}) =b(t)\quad\forall t>0\] \[q(0,x) =q_{0}(x)\quad\forall x\in I.\]
In the following we will discuss the modifications due to the boundary condition that need to be applied to the first two cells.
The main steps are illustrated in Figure 5. The values \(q_{\frac{1}{2}}^{n+1},q_{\frac{3}{2}}^{n+1}\) of the first two interfaces, as well as the first cell average \(\bar{q}_{1}^{n+1}\) are directly taken from the boundary data. By tracing back the characteristics we find
\[q_{\frac{1}{2}}^{n+1}=b(t^{n+1}),\quad\bar{q}_{1}^{n+1}=\frac{u}{\Delta x}\int_ {t^{n+1}-\frac{\Delta x}{u}}^{t^{n+1}}b(t)\mathrm{d}t,\quad q_{\frac{3}{2}}^{n +1}=b\left(t^{n+1}-\frac{\Delta x}{u}\right).\]
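In code these inflow values are obtained directly from the boundary data; a minimal sketch (assuming SciPy for the quadrature; names are ours):

```python
from scipy.integrate import quad

def inflow_values(b, t_new, dx, u):
    """Degrees of freedom near the inflow boundary at time t^{n+1}, obtained by
    tracing the characteristics back into the boundary data b(t)."""
    q_half = b(t_new)                                   # q_{1/2}^{n+1}
    integral, _ = quad(b, t_new - dx/u, t_new)          # integral of b over one crossing time
    qbar_1 = u/dx * integral                            # average of the first cell
    q_three_half = b(t_new - dx/u)                      # q_{3/2}^{n+1}
    return q_half, qbar_1, q_three_half
```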
Thus, it is easy to obtain the new values in the vicinity of the left (inflow) boundary by tracing the characteristics back to the boundary. The situation is different at the outflow end, where boundary conditions must not be given. Assume that the grid is divided into \(N\) cells (see Figure 6). According to (11) the right-most point value \(q_{N+\frac{1}{2}}\) is updated as
\[q_{N+\frac{1}{2}}^{n+1}=q_{N-\frac{1}{2}}^{\mathrm{recon}}\left(t^{n+1}-\frac {\Delta x}{u}\right) \tag{16}\]
Figure 4: Half-width of the diffusion curves at \(|z|=0.995\) for all the stable methods at \(c=3\), i.e. the plot shows the value of \(\beta\) for which \(|z|\) attains the value \(0.995\). The choice of values is explained in the text, and is made to fit the numerical tests shown in Section 4. Thus, for values of \(\beta\) less than the one shown the Fourier modes have not decayed by half by the time \(T=8\). Identifiers from Section A are used to distinguish the different methods. The methods have been sorted in the order of increasing \(\beta_{1/2}\), and thus methods further on the right are able to resolve sharp features better/for longer. Unsurprisingly, there is a correlation with the order of the method; however, for methods of the same order, large differences in this ability can be observed.
i.e. it only uses upwind information from the left because the reconstruction \(q_{N-\frac{1}{2}}^{\text{recon}}\) involves only the values \(\bar{q}_{N-1}^{n(+1)},q_{N-\frac{1}{2}}^{n(+1)},\bar{q}_{N}^{n(+1)}\) by definition.
This is different for the average \(\bar{q}_{N}^{n+1}\), because according to (9) its update involves the quadrature of the two reconstructions in time \(q_{N-\frac{1}{2}}^{\text{recon}}\) and \(q_{N+\frac{1}{2}}^{\text{recon}}\). Three cases need to be distinguished:
1. Assume that the reconstruction in time at \(x_{i+\frac{1}{2}}\) does not use the downwind cell averages \(\bar{q}_{i+1}^{n(+1)}\). Then, the reconstruction in time \(q_{N+\frac{1}{2}}^{\text{recon}}\) uses upwind values \(\bar{q}_{N}^{n(+1)},q_{N+\frac{1}{2}}^{n(+1)}\) only and so does the update of the average \(\bar{q}_{N}^{n+1}\). Those schemes can be applied directly. Further we note that such schemes can use an iterative procedure, marching the values from the left to the right: We first compute the update for the next point value and thereafter the update for the average. This substitutes solving the linear system for the values at \(t^{n+1}\), since the corresponding matrix is triangular.
2. Assume next that the reconstruction in time at \(x_{i+\frac{1}{2}}\) involves the implicit downwind cell average \(\bar{q}_{i+1}^{n+1}\) (and possibly its explicit counterpart), i.e. at \(x_{N+\frac{1}{2}}\) we need the non-available cell average \(\bar{q}_{N+1}^{n+1}\). The update equation for the last cell average according to
Figure 5: Scheme at the inflow boundary for the Dirichlet problem. The red values in the first cell can be read off directly by tracing back the characteristic (dashed lines) to the given boundary (green) at \(x_{\frac{1}{2}}\).
Figure 6: The outflow boundary. The values \(\bar{q}_{N+1}^{n(+1)}\) (in red) are not available to define a reconstruction in time at \(x_{N+\frac{1}{2}}\). As explained in the text, in most cases it is not necessary to compute this reconstruction.
(9) reads \[\bar{q}_{N}^{n+1}=\bar{q}_{N}^{n}-\frac{u}{\Delta x}\int_{t^{n}}^{t^{n+1}}\left(q_ {N+\frac{1}{2}}^{\text{recon}}(t)-q_{N-\frac{1}{2}}^{\text{recon}}(t)\right)\, \mathrm{d}t\] (17) where the right-hand-side depends, among other values, on \(q_{N+1}^{n+1}\). However, as the entire method is implicit, there is no notion of which equation updates which variable. Equation (17) can equally well be seen as an update equation for \(q_{N+1}^{n+1}\) that involves only upwind values. What matters is that as many independent equations are provided as there are variables. A different view of the same, inspired by the 'triangular'-scheme from the case above, is to say that the update equation for the cell average (9) is not an update of \(\bar{q}_{i}^{n+1}\) using the value \(\bar{q}_{i+1}^{n+1}\), but an update of \(\bar{q}_{i+1}^{n+1}\) using \(\bar{q}_{i}^{n+1}\). Thus the matrix becomes triangular and all values can be computed in an iterative fashion. We thus propose to shift Equation (17), and in fact all the equations for the cell average, by one cell to the left. The index-shifted Equation (17) would be counted as the equation updating \(q_{N}^{n+1}\), and, in general, Equation (9) would be counted as the update of the cell average \(q_{i-1}^{n+1}\). The problem thus is shifted to the inflow boundary, where missing values are readily available by the procedure described earlier. For example, one can directly compute the integral of the flux \(f_{1/2}^{n+1/2}=\int_{t^{n}}^{t^{n+1}}b(t)\mathrm{d}t\) as indicated in green in Figure 5. Which values are affected by this special treatment depends on the stencil of the reconstruction in time, but any of them can be found in the Dirichlet boundary data upon tracing the characteristics. Also, due to the compact stencil only a few degrees of freedom are updated directly using the boundary.
3. At last we consider schemes using the explicit downwind cell average \(\bar{q}_{i+1}^{n}\), but not the implicit one. The update equation (17) would involve the unavailable cell average \(\bar{q}_{N+1}^{n}\), but not \(\bar{q}_{N+1}^{n+1}\). It cannot thus, by shifting the index, be considered an update equation for \(\bar{q}_{N}^{n+1}\) and the trick from above cannot be applied. Fortunately, upon inspection of Table 2, one observes that there is only one stable5 method which involves \(\bar{q}_{i+1}^{n}\), but does not involve \(\bar{q}_{i+1}^{n+1}\) in its reconstruction in time at \(x_{i+\frac{1}{2}}\): it is (3G), and we skip searching for a fix. Footnote 5: on domains with periodic boundaries, that is.
## 4 Numerical tests
### Convergence studies
We tested the order of convergence for all stable schemes discussed above. The test with the initial condition \(\sin(2\pi x)\) was run with a CFL number of about 3 up to \(T=10\). The grids start with 20 cells and are refined up to 640 cells. The errors on a grid with \(N\) cells are computed as
\[L^{1} =\Delta x\sum_{i=1}^{N}\left|\bar{q}_{i}-\bar{q}_{i}^{\,\text{exact}}\right|, \qquad\qquad \ell^{1} =\frac{1}{N}\sum_{i=1}^{N+1}\left|q_{i-\frac{1}{2}}-q_{i-\frac{1}{2}}^{\,\text{exact}}\right|.\]
The results for some selected schemes are displayed in Figure 7. All schemes show the expected order of convergence and the \(\ell^{1}\)- and \(L^{1}\)-errors are always of similar size.
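Given the errors on this sequence of grids, the observed order follows in the usual way from the ratio of errors on consecutive refinements; a minimal helper (assuming NumPy; the name is ours):

```python
import numpy as np

def observed_order(errors, refinement=2.0):
    """Experimental order of convergence between consecutive grids, each
    `refinement` times finer than the previous one."""
    e = np.asarray(errors, dtype=float)
    return np.log(e[:-1] / e[1:]) / np.log(refinement)
```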
### Smooth and discontinuous profiles
Next we compare all the proposed methods in the same numerical test proposed in [18], which combines smooth and discontinuous profiles. For the Active Flux schemes we use a discretization with 100 cells for the interval \([0,2]\) using periodic boundaries and a speed \(u=1\). This coincides with the number of degrees of freedom as proposed for the classical method on 200 cells, which is what was used in [18]. We run all methods with a CFL number of about 3 up to a final time \(T=8\), i.e. 4 grid revolution periods.
For reference we show the exact solution and the Finite Difference schemes of order 3 and 4 described in [12, 26]. Their solutions are computed on a grid with 200 cells with the same time step as the Active Flux scheme (CFL number of about 6), such that the memory and computational time are comparable. The 4th order Finite Difference method is marginally stable, which explains the large amount of oscillations.
#### 4.2.1 Semi-discrete method integrated implicitly
Figure 8 shows results obtained by integrating the semi-discretization from Section 2.1 implicitly with standard methods. With the second-order Crank-Nicolson method almost no features of the solution are preserved. As the spatial discretization of the semi-discretization (3) is third-order accurate, only third-order time-integrators are of relevance. The DIRK scheme is more diffusive than the reference third-order finite difference method, although third order itself. Only the Radau IA and Radau IIA methods, whose curves visually coincide, give results similar to the third order finite difference method.
Figure 8: 3rd order semi-discrete scheme integrated with common implicit methods. The curves corresponding to the two Radau methods are on top of each other.
Figure 9: Numerical test of the three third-order schemes (\(\times\) point values, \(+\) averages), among them (3B, \(-\)) and (3F, \(-\)).
#### 4.2.2 Third-order methods
In Figures 9 and 10 we show the schemes which use three values for the approximation at the interface and are thus of order 3. We do not show (3G) and (3H) (both being unstable for \(\mathrm{CFL}=3\)). All schemes perform at least as well as the third order finite difference method. The scheme (3B) almost coincides with this reference. All schemes show significant influence of numerical diffusion, but, mostly, the four solution features can be recognized. The methods with an upwind bias in the explicit part, i.e. (3F) and (3E), are significantly better than the other ones. They also clearly outperform the semi-discrete methods used with standard time-integrators (discussed in Section 4.2.1), especially since the Radau methods require two sub-steps compared to a single direct update. The differences between the results of the direct methods become smaller for larger CFL numbers (e.g. \(\mathrm{CFL}=6\)).
#### 4.2.3 Fourth-order methods
In Figure 11 four schemes of order 4 are shown. All give more accurate results than third-order methods. Scheme (4D) plays a particular role, displaying significantly more oscillations than the other Active Flux methods. As can be seen in Table 2, it is marginally stable (the non-vanishing eigenvalue has norm 1), which might explain the oscillations. Moreover, the stencil of this method is such that the interpolation at the interface uses only average values and thus the update is independent of the point values. It is therefore not surprising that method 4D behaves similarly to the fourth-order reference Finite Difference method. In the solutions of the other three methods the features can be identified much better. Scheme (4C), which has an upwind-bias in the explicit stencil seems to have a tendency of overshooting the exact solution.
#### 4.2.4 Fifth-order methods
Finally, schemes of order 5 are shown in Figure 12. One observes again a significant improvement in comparison to the fourth-order schemes. All features of the solution are captured and can be clearly distinguished. The three schemes only differ in the choice of the explicit stencil, and a more upwind-focused stencil in the explicit part seems to improve the result slightly.
### 4.3 Network
As a final test we consider a network of six edges and four nodes, shown in Figure 13. The lengths of the edges are
\[\ell_{1}=5,\quad\ell_{2}=\ell_{3}=\ell_{5}=20,\quad\ell_{4}=\ell_{6}=30.\]
On each edge we solve the advection equation with the speeds
\[u_{1}=u_{3}=u_{4}=u_{6}=1,\quad u_{2}=2,\quad u_{5}=\frac{20}{10+\pi},\]
oriented according to the direction of the edges.
At the nodes \(N_{1},N_{2},N_{3}\) suitable coupling conditions have to be imposed. At the two splitting nodes \(N_{1}\) and \(N_{2}\) we distribute the incoming flux according to fixed parameters \(\alpha_{1}=\frac{3}{4}\) and \(\alpha_{2}=\frac{2}{3}\) such that
\[q_{2}(t,0)=\alpha_{1}q_{1}(t,\ell_{1}), q_{3}(t,0)=(1-\alpha_{1})q_{1}(t,\ell_{1}),\] \[q_{4}(t,0)=\alpha_{2}q_{2}(t,\ell_{2}), q_{5}(t,0)=(1-\alpha_{2})q_{2}(t,\ell_{2}).\]
The coupling at node \(N_{3}\) is immediate by the conservation of mass
\[q_{6}(t,0)=q_{5}(t,\ell_{5})+q_{3}(t,\ell_{3}).\]
Finally, we impose at the first edge the Dirichlet boundary condition
\[q_{1}(t,0)=b(t)=\sin{(\Omega t)}\]
with \(\Omega=\frac{2\pi}{3}\) and an initial condition
\[q_{1}(0,x)=\exp{\left(-4(x-\ell_{1}/2)^{2}\right)} q_{e}(0,x)=0\quad\text{for}\quad e=2,\ldots,6\]
Note that the configuration is chosen such that a signal starting in edge \(1\) will be split at node \(N_{1}\) into two parts. One signal is traveling directly along edge \(3\) to node \(N_{3}\), while the other one takes a detour via \(N_{2}\). Apart from losing some of its strength to edge \(4\), it arrives delayed at node \(N_{3}\) as compared to the signal from edge \(3\).
By defining the edge-crossing times \(\tau_{e}:=\ell_{e}/u_{e}\), \(e=1,\ldots,6\), the exact solution for large times \(t\) at the nodes \(N_{2}\) and \(N_{3}\) has the form
\[q_{4}(t,0)=\alpha_{1}\alpha_{2}b(t-\tau_{1}-\tau_{2}-\tau_{4}),\] \[q_{6}(t,0)=\alpha_{1}(1-\alpha_{2})b(t-\tau_{1}-\tau_{2}-\tau_{5 }-\tau_{6})+(1-\alpha_{1})b(t-\tau_{1}-\tau_{3}-\tau_{6}).\]
Figure 13: Sketch of the network with three nodes and six edges.
Since the parameters \(\alpha_{1},\alpha_{2}\) and \(\Omega\) are chosen such that they satisfy
\[\alpha_{1}(1-\alpha_{2}) =1-\alpha_{1}\] \[\Omega(\tau_{2}+\tau_{5}) =\Omega\tau_{3}+\pi\]
we have a destructive interference at \(q_{6}(t,0)\). This means that the exact solution on edge 6 is equal to zero once the pulse from the initial condition has passed.
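The following is a minimal first-order upwind finite-volume sketch of this network problem in Python. It is only a reference implementation of the coupling, boundary and initial conditions above and is not the Active Flux scheme studied in this paper; the mesh width, time step and final time are illustrative choices.

```python
import numpy as np

# Minimal first-order upwind reference for the network advection problem.
# NOT the Active Flux scheme of the paper; it only illustrates the coupling
# at N1, N2, N3, the Dirichlet inflow and the initial pulse.

lengths = {1: 5.0, 2: 20.0, 3: 20.0, 4: 30.0, 5: 20.0, 6: 30.0}
speeds  = {1: 1.0, 2: 2.0, 3: 1.0, 4: 1.0, 5: 20.0 / (10.0 + np.pi), 6: 1.0}
alpha1, alpha2 = 3.0 / 4.0, 2.0 / 3.0
Omega = 2.0 * np.pi / 3.0

dx = 1.0 / 8.0
dt = 0.9 * dx / max(speeds.values())        # upwind is stable for u*dt/dx <= 1

# cell averages on each edge; Gaussian pulse on edge 1, zero elsewhere
q = {e: np.zeros(int(round(lengths[e] / dx))) for e in lengths}
x1 = (np.arange(q[1].size) + 0.5) * dx
q[1] = np.exp(-4.0 * (x1 - lengths[1] / 2.0) ** 2)

def step(q, t):
    # ghost (inflow) values from the boundary and coupling conditions
    inflow = {
        1: np.sin(Omega * t),               # Dirichlet condition b(t)
        2: alpha1 * q[1][-1],               # splitting at node N1
        3: (1.0 - alpha1) * q[1][-1],
        4: alpha2 * q[2][-1],               # splitting at node N2
        5: (1.0 - alpha2) * q[2][-1],
        6: q[5][-1] + q[3][-1],             # conservation of mass at N3
    }
    new = {}
    for e, qe in q.items():
        nu = speeds[e] * dt / dx            # local CFL number of edge e
        upwind = np.concatenate(([inflow[e]], qe[:-1]))
        new[e] = qe - nu * (qe - upwind)
    return new

t = 0.0
while t < 60.0:
    q = step(q, t)
    t += dt

print("max |q| on edge 6:", np.abs(q[6]).max())
```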
Figure 14 shows the results of a simulation using Method (4B) with \(\text{CFL}=3\) and \(\Delta x=\frac{1}{8}\) on all edges.
After a transitional phase, the steady flow is reached. We observe that, indeed, the solution on edge 6 is very small. Figure 15 shows the values on that particular edge for two simulations with \(\Delta x=\frac{1}{4}\) and \(\Delta x=\frac{1}{8}\). Although the inflow at the first edge has amplitude 1, the cancellation is accurate up to \(10^{-4}\), and the residual decreases with resolution.
Figure 14: Numerical solution of the flow on the network. |
2308.09032 | Probing Spin-Induced Quadrupole Moments in Precessing Compact Binaries | Spin-induced quadrupole moments provide an important characterization of
compact objects, such as black holes, neutron stars and black hole mimickers
inspired by additional fields and/or modified theories of gravity. Black holes
in general relativity have a specific spin-induced quadrupole moment, with
other objects potentially having differing values. Different values of this
quadrupole moment lead to modifications of the spin precession dynamics, and
consequently modifications to the inspiral waveform. Based on the spin-dynamics
and the associated precessing waveform developed in our previous work, we
assess the prospects of measuring spin-induced moments in various black hole,
neutron star, and black-hole mimicker binaries. We focus on binaries in which
at least one of the objects is in the mass gap (similar to the $2.6 M_\odot$
object found in GW190814). We find that for generic precessing binaries, the
effect of the spin-induced quadrupole moments on the precession is sensitive to
the nature of the mass-gap object, i.e., whether it is a light black hole or a
massive neutron star, so that this is a good probe of the nature of these
objects. For precessing black-hole mimicker binaries, this waveform also
provides significantly tighter constraints on their spin-induced quadrupole
moments than the previous results obtained without incorporating the precession
effects of spin-induced quadrupole moments. We apply the waveform to sample
events in GWTC catalogs to obtain better constraints on the spin-induced
quadrupole moments, and discuss the measurement prospects for events in the
O$4$ run of the LIGO-Virgo-KAGRA Collaboration. | Zhenwei Lyu, Michael LaHaye, Huan Yang, Béatrice Bonga | 2023-08-17T15:14:56Z | http://arxiv.org/abs/2308.09032v2 | # Probing Spin-Induced Quadrupole Moments in Precessing Compact Binaries
###### Abstract
Spin-induced quadrupole moments provide an important characterization of compact objects, such as black holes, neutron stars and black hole mimickers inspired by additional fields and/or modified theories of gravity. Black holes in general relativity have a specific spin-induced quadrupole moment, with other objects potentially having differing values. Different values of this quadrupole moment lead to modifications of the spin precession dynamics, and consequently modifications to the inspiral waveform. Based on the spin dynamics and the associated precessing waveform developed in our previous work, we assess the prospects of measuring spin-induced moments in various black hole, neutron star, and black-hole mimicker binaries. We focus on binaries in which at least one of the objects is in the mass gap (similar to the \(2.6M_{\odot}\) object found in GW190814). We find that for generic precessing binaries, the effect of the spin-induced quadrupole moments on the precession is sensitive to the nature of the mass-gap object, i.e., whether it is a light black hole or a massive neutron star, so that this is a good probe of the nature of these objects. For precessing black-hole mimicker binaries, this waveform also provides significantly tighter constraints on their spin-induced quadrupole moments than the previous results obtained without incorporating the precession effects of spin-induced quadrupole moments. We apply the waveform to sample events in GWTC catalogs to obtain better constraints on the spin-induced quadrupole moments, and discuss the measurement prospects for events in the O4 run of the LIGO-Virgo-KAGRA collaboration.
## I Introduction
In the past seven years, the LIGO-Virgo-KAGRA collaboration (LVK) has detected more than one hundred binary black hole merger events, and a handful of events involving neutron stars (be they black hole-neutron star or binary neutron star systems) [1, 2, 3, 4]. In the event catalogs, if the gravitational wave (GW) measurement for the mass of an object within the binary is greater than \(5M_{\odot}\), the object has been identified as a "black hole" by convention. Similarly, if the mass is less than \(2M_{\odot}\) (less than \(3M_{\odot}\) in GWTC-3 [4]), it is identified as a "neutron star".
While this classification system is convenient for bookkeeping purposes, it comes with two inherent issues. First, if the mass distributions of black holes and neutron stars overlap, we potentially misidentify objects if we only use their masses. Second, this system fails to say anything about objects lying between these bounds, in the so-called mass gap. With the unexpected discovery of the \(2.6M_{\odot}\) object in GW190814 (which can be either a heavy neutron star [5, 6] or a light black hole [7, 8, 9, 10]), we are forced to confront this second issue if we want to determine the nature of this and similar objects. The nature of these objects can provide insight into their formation mechanism. For example, these objects may also appear in extreme mass ratio inspirals as relevant sources for space-borne gravitational wave detection [11]. Their relative abundance in "wet" (accretion-disk assisted) [12, 13] and "dry" (scattering assisted) [14, 15] formation channels can be used to constrain supernovae explosion mechanisms, which is related to possible delayed fall-back accretion that strongly affects the remnant mass. Being able to classify the nature of mass-gap objects correctly is increasingly important.
In principle, there are different methods to distinguish between a mass-gap neutron star and a black hole. While a massive neutron star has an electromagnetic (EM) counterpart (such as short gamma-ray emission and/or kilonova emission), a black hole does not. Consequently, we may distinguish between the two on the basis of the signature of an EM counterpart [16, 17]. This is, however, limited by the fact that EM counterpart detection is not always available (e.g., see the EM follow-ups for GW190814 [18, 19, 20, 21, 22]), either due to faint emission sources or poor sky localization capabilities. The ability to probe the nature of mass-gap objects may also be compromised, as the EM signature seems to be greatly influenced by the eccentricity, spin and mass ratio of the system. These are often not accurately constrained by the gravitational wave measurement, in part because a large portion of the parameter space is less explored from the modelling perspective.
Hence, we need a method of distinguishing between a massive neutron star and a light black hole via the gravitational waveform alone. There are several potential gravitational-wave observables that can distinguish between these objects: the tidal deformability, the horizon absorption, and the spin-induced quadrupole moment (SIQM).
The most promising observable for lower-mass (\(\leq 2M_{\odot}\)) objects should be the (dimensionless) tidal Love number, which is constrained to be \(\Lambda(1.4M_{\odot})\leq 800\) for the low-spin prior in GW170817 [23, 24]. However, it is known that the tidal Love number drops dramatically with increasing masses - for objects with masses reaching up to \(2.6\ M_{\odot}\), the |
2303.04664 | Centroid-centered Modeling for Efficient Vision Transformer Pre-training | Masked Image Modeling (MIM) is a new self-supervised vision pre-training
paradigm using a Vision Transformer (ViT). Previous works can be pixel-based or
token-based, using original pixels or discrete visual tokens from parametric
tokenizer models, respectively. Our proposed centroid-based approach, CCViT,
leverages k-means clustering to obtain centroids for image modeling without
supervised training of the tokenizer model.
This non-parametric centroid tokenizer only takes seconds to create and is
faster for token inference. The centroids can represent both patch pixels and
index tokens with the property of local invariance. Specifically, we adopt
patch masking and centroid replacing strategies to construct corrupted inputs,
and two stacked encoder blocks to predict corrupted patch tokens and
reconstruct original patch pixels. Experiments show that our CCViT achieves
84.4% top-1 accuracy on ImageNet-1K classification with ViT-B and 86.0% with
ViT-L. We also transfer our pre-trained model to other downstream tasks. Our
approach achieves competitive results with recent baselines without external
supervision and distillation training from other models. | Xin Yan, Zuchao Li, Lefei Zhang | 2023-03-08T15:34:57Z | http://arxiv.org/abs/2303.04664v2 | # Centroid-centered Modeling for Efficient Vision Transformer Pre-training
###### Abstract
Masked Image Modeling (MIM) is a new self-supervised vision pre-training paradigm using a Vision Transformer (ViT). Previous works can be pixel-based or token-based, using original pixels or discrete visual tokens from parametric tokenizer models, respectively. Our proposed approach, **CCViT**, leverages k-means clustering to obtain centroids for image modeling without supervised training of a tokenizer model. The centroids represent both patch pixels and index tokens and have the property of local invariance. The non-parametric centroid tokenizer takes only seconds to create and is faster at token inference. Specifically, we adopt patch masking and centroid replacement strategies to construct corrupted inputs, and two stacked encoder blocks to predict the corrupted patch tokens and reconstruct the original patch pixels. Experiments show that the ViT-B model with only 300 epochs achieves 84.3% top-1 accuracy on ImageNet-1K classification and 51.6% on ADE20K semantic segmentation. Our approach achieves competitive results with BEiTv2 without distillation training from other models and outperforms other methods, such as MAE.
## 1 Introduction
Over the past several years, the triumphs of deep learning in computer vision have hinged crucially upon Convolutional Neural Networks (CNNs) [23, 27, 29]. As indicated by prior work [37], these convolutional layers encode an inductive bias of spatial equivariance, thus producing remarkable results. Motivated by the success of the Transformer [42] in Natural Language Processing (NLP), the Vision Transformer (ViT) [15] has been shown to produce exceptional results in image modeling and other tasks. However, empirical studies have shown that the Vision Transformer requires a larger volume of data than CNN-based models.
The large demand for labeled training data in transformer-based models has been successfully addressed in NLP by self-supervised learning. In particular, BERT [13] proposed Masked Language Modeling (MLM) to solve this problem. Such a mask-then-predict task masks out some proportion of the input data and then learns to predict the masked target, showing its strength in leveraging large-scale unlabeled data.
Inspired by the MLM method in NLP, multiple studies have attempted to introduce the mask-then-predict task into computer vision. Regarding discrete visual tokens as the target, BEiT [2] proposed a pre-training task similar to MLM, namely Masked Image Modeling (MIM). They use a visual tokenizer to convert an image into tokens; the tokenizer is obtained in an extra training stage with a decoder via the discrete variational autoencoder (dVAE) method [38]. During pre-training, the Transformer takes corrupted images with a blockwise mask as input, and learns to recover the masked tokens. In contrast, MAE [21] explicitly reconstructs the raw pixels of the image
Figure 1: **The proposed CCViT architecture.** We view centroids from two aspects: token indices and patch pixels. Our centroid-centered pre-training aims at predicting the indices of centroids, and also implicitly reconstructing the pixels of centroids. During pre-training, we apply a blockwise mask to some proportion of the patches (e.g., 40%) and replace a proportion of the remaining patches (e.g., 10%) with the corresponding centroids. All of the corrupted patches are fed into the ViT Encoder.
using an additional decoder. Unlike BEiT, the encoder is applied only to visible patches, and mask tokens are introduced before the decoder together with the encoded patches.
In contrast to NLP, where language tokens are essentially the only target, various reconstruction targets have emerged in previous works in computer vision, including visual tokens [2, 14, 31, 35, 48], high-level features [11], vanilla pixels [21] and original image features [43], owing to the different information density of vision and language. We can broadly categorize these targets into two types of models: token-based MIM, with tokens or high-level features as the pre-training objective, which typically requires an additional training stage of a parametric tokenizer model that abstracts images from continuous to discrete; and pixel-based MIM, which encourages the model to reconstruct raw pixels or original features such as HOG without a tokenizer.
However, both methods have drawbacks. First, both introduce a redundant module to convert latent representations into raw pixels: pixel-based MIM such as MAE needs a redundant decoder that is discarded in fine-tuning, while token-based MIM such as BEiT [2, 35] needs a tokenizer model to convert image pixels to discrete tokens, which is only utilized in the pre-training stage. Moreover, the visual tokens from parametric tokenizers are incapable of representing the corresponding image patches, since the tokenizer generates visual tokens based on an abstraction of the entire image instead of a single patch, resulting in a global rather than a local perspective. Thus, even if a certain patch remains unchanged, its token may change if pixels from other patches are modified. This contradicts the actual structure of images, where there is a significant correlation between adjacent patches, and mask-then-predict relies on this correlation between tokens for masking inference. This also explains why achieving a high token prediction accuracy in the pre-training stage is typically challenging for token-based MIM models.
In this work, we introduce **CCViT**, which stands for **C**entroid-**c**entered **V**ision **T**ransformer, as shown in Figure 1. For our centroid-based tokenizer, we utilize an efficient clustering method (such as the k-means algorithm) to identify the index of the nearest centroid for each patch as its token id. Unlike the parametric tokenizers used in BEiT, which consume large training resources before the pre-training stage, the centroids can be obtained in only a few seconds using a very small proportion of the training set. It is also faster to obtain the token indices of a batch of images compared with the tokenizer methods in token-based MIM. More importantly, we only perform clustering on the training set of ImageNet-1K [12] to obtain the centroids, without potentially introducing large or private datasets such as DALL-E [2, 38], or distilling from large models such as CLIP [35, 36] into the tokenizer.
We also propose a novel perspective on MIM, namely centroid-based MIM, which further streamlines the process and requires no additional training cost. In our centroid-based MIM, we focus on the clustering centroids, which can be regarded both as centroid patch pixels and as centroid index tokens. During the pre-training stage, we split the image into several patches and mask out some proportion of the image patches, similar to previous MIM. We also replace some patches with the corresponding nearest centroid pixels to encourage the model to align the pixel representations of centroids with their corresponding token indices. We feed all of the corrupted patches into our backbone Vision Transformer encoder, which consists of a stacked token ViT block and pixel ViT block. The pixel ViT block takes as input the patch representations from an intermediate layer of the token ViT block and the CLS representation from the last layer. We employ two learning objectives: centroid token prediction and original pixel reconstruction. The patch representation from the token ViT block is used to evaluate the cross-entropy loss against the target tokens, and the patch representation from the pixel ViT block is used to evaluate the mean squared error against the target pixels.
We conduct self-supervised pre-training of our model for base-size Vision Transformers (ViT-B) on ImageNet-1K and fine-tune the pre-trained model on two downstream tasks, i.e., image classification and semantic segmentation. Experimental results show that our method achieves excellent performance for image representation. Specifically, our efficient centroid-based MIM outperforms the prior token-based MIM [2] and pixel-based MIM [21] at equivalent ViT size and epochs. Ablation studies show that jointly learning both views of the centroids, pixels and tokens, performs better than learning either form alone, which indicates that the model benefits from the dual-natured property of centroids. Further comparison shows that, relative to token-based MIM models, we greatly reduce the cost of constructing the tokenizer, while, compared to pixel-based MIM, pre-training is more computationally economical since the decoder is removed. We also compare the noise resistance of the centroid tokenizer and vanilla tokenizers, and demonstrate that our proposed centroid-based MIM has better noise resistance.
## 2 Related Work
Self-supervised visual learning has been explored over the years to introduce this learning paradigm into vision pre-training. Various methods use different pretext tasks for pre-training, including jigsaw puzzles [34], colorization [28] and predicting rotation [19]. Contrastive learning is also a trend for visual representation learning [7, 9, 20, 22, 44]. These methods typically rely on data augmentation approaches. Early studies also applied clustering methods to learn visual representations [3, 4, 30, 46]. Most recently, iGPT [6] creates a 9-bit color palette by clustering (R, G, B) pixel values using k-means with \(k=512\) and uses the clustered token sequence as the direct input via both auto-regressive and BERT objectives. In comparison, our method uses original image patches as the input, and the centroid indices as the pre-training objective. SplitMask [16] also demonstrates the effectiveness of using clustering, but focuses on pre-training on smaller datasets and transfer performance.
Masked image modeling has seen widespread application in the field of visual pre-training as a counterpart of the masked language modeling method in NLP, i.e., BERT [13]. Since ViT [15] first overcame the architectural obstacle, masked image modeling (MIM) has rapidly achieved remarkable success [5, 8, 16]. MIM randomly masks some proportion of an image and reconstructs it in the pre-training stage. Because the reconstruction targets in the pre-training stage differ from the primarily token-based objectives in NLP, there are two mainstream paradigms among MIM methods, i.e., token-based MIM and pixel-based MIM. Token-based MIM derives from the prior art BEiT [2], and needs a parametric tokenizer to generate tokens or high-level features as pre-training targets. To be specific, BEiT and VIMPAC [40] use an offline discrete VAE tokenizer from DALL-E [38, 39]. PeCo [14] regards MoCov3 [10] as the perceptual model in VQGAN [17] training. mc-BEiT [31] also focuses on perceptual similarity and constructs a codebook for pre-training. iBOT [48], SdAE [11] and data2vec [1] use a self-distillation method. BEiTv2 [35] also utilizes a distillation method and introduces vector-quantized knowledge distillation [41] to train the tokenizer using CLIP [36]. Pixel-based MIM with a non-parametric tokenizer, such as MAE [21] and SplitMask [16], considers vanilla pixels or patches as pre-training targets instead of tokens and needs a redundant decoder. MaskFeat [43] further introduces handcrafted HOG features as the targets. Different from these works, our centroid-based MIM uses a non-parametric tokenizer to model both tokens and pixels in pre-training. We use the k-means clustering algorithm to generate centroids, which avoids the cost of training a tokenizer model.
## 3 Masked Image Modeling
MIM, first proposed in BEiT, has demonstrated remarkable results in recent works [2, 21, 35, 43, 48] and has become the new paradigm in visual pre-training. MIM introduced BERT-style pre-training into computer vision and has successfully replicated the success seen in NLP.
Specifically, it first splits an input 2D image into a sequence of patches to match the input format of the standard Transformer, and then masks out a proportion of the patches. The pre-training objective of token-based MIM is to reconstruct corrupted images using the visual context at a higher semantic level, i.e., discrete visual tokens rather than pixels. Formally, given an input image \(x\in\mathbb{R}^{C\times H\times W}\), it is first split into \(n=HW/P^{2}\) patches and flattened to \(\{x_{i}^{p}\}_{i=1}^{n}\), where \((P,P)\) is the resolution of a
Figure 2: Comparison of pre-training architectures between BEiT, MAE and ours. Our input consists of masked patches, patches replaced with centroids, and original patches. Our model architecture does not feature a redundant decoder, which further streamlines the process. We use both tokens and pixels as our pre-training objectives via different loss functions, namely the cross-entropy loss and the mean squared error, respectively.
patch and \(x^{p}\in\mathbb{R}^{n\times C\times P^{2}}\). A mask is sampled with the mask ratio \(r_{m}\) on the input image \(x\) to generate the corrupted image \(\hat{x}\) according to the masked positions \(\mathcal{M}\):
\[\hat{x}=\{x_{i}^{p}\mathbf{E}_{p}\mid i\notin\mathcal{M}\}_{i=1}^{n}\bigcup\{e_{m} \mid i\in\mathcal{M}\}_{i=1}^{n} \tag{1}\]
where \(e_{m}\) is the learnable mask token embedding, \(x_{i}^{p}\mathbf{E}_{p}\) indicates the process of patch embedding calculation.
The objective of pre-training is to recover the visual tokens \(\{t_{i}\}_{i=1}^{n}\) obtained by the tokenizer, where \(t\in\mathcal{V}^{n}\) and \(\mathcal{V}=\{1,\dots,K\}\) is the \(K\)-size codebook of the tokenizer, which contains discrete token indices. The corrupted image is encoded into \(\psi(\hat{x})\in\mathbb{R}^{n\times d}\) through the ViT encoder. The representation is fed into a linear classification head \(lin.:\mathbb{R}^{d}\rightarrow\mathbb{R}^{K}\) and a softmax operator to obtain the probability \(p_{\mathrm{MIM}}(t_{i}\mid\hat{x})=\mathrm{softmax}_{t_{i}}(lin.\circ\psi( \hat{x}))\) of each token. The model is learned with the cross-entropy loss, which maximizes the log-likelihood of the correct tokens \(t_{i}\) given the corrupted image \(\hat{x}\):
\[\max\sum_{x}\mathbb{E}_{\mathcal{M}}\left[\sum_{i\in\mathcal{M}}\log p_{ \mathrm{MIM}}\left(t_{i}\mid\hat{x}\right)\right] \tag{2}\]
where \(p_{\mathrm{MIM}}\left(t_{i}\mid\hat{x}\right)\) is the softmax probability after feeding \(\hat{x}\) into the ViT encoder to predict the correct tokens \(t_{i}\).
## 4 Our Method
In our CCViT, we propose centroid-based MIM, which uses clustering centroids to model the images. The centroids have two views, i.e., a pixel view and a token view. We can directly obtain the pixels, as each centroid is an image patch, and we can also obtain the tokens via the indices of the centroids; each centroid patch corresponds to its index. Owing to this dual-natured property, the output can be either pixels or tokens, with different loss functions. We feed all the patches into the backbone ViT encoder, including original patches, masked patches and patches replaced with centroids. Figure 2 shows the main comparison of the pre-training architectures of our CCViT and the prior arts BEiT and MAE.
### Centroid-based Tokenizer
Token-based MIM needs to first convert the continuous images to discrete tokens in order to pre-train the model with a patch classification objective implemented as a cross-entropy loss. It usually employs a parametric tokenizer model trained on a large-scale dataset. Take the image tokenizer via a discrete variational autoencoder (dVAE) used in BEiT as an example. Given the input image \(x\in\mathbb{R}^{C\times H\times W}\), it is split into \(n=HW/P^{2}\) patches \(\{x_{i}^{p}\}_{i=1}^{n}\). There are two modules: a tokenizer and a decoder. The tokenizer \(\theta(t\mid x)\) maps images to discrete tokens
Figure 3: Overview of our CCViT. Before pre-training, we use k-means clustering to obtain the centroids and their indices. During pre-training, we mask out some patches and randomly replace some of the remaining patches with centroids. All the patches are flattened and fed into the encoder after the embedding layers. The pre-training objectives are both the centroid index tokens and the original pixels.
and the decoder \(\delta(x\mid t)\) reconstructs the original image using the tokens \(t\). As the discrete tokens are non-differentiable, BEiT employs the Gumbel-softmax relaxation [24, 33] to train the parametric tokenizer. The reconstruction objective in the tokenizer training stage is:
\[\mathbb{E}_{t\sim\theta(t\mid x)}\left[\log\delta\left(x\mid t\right)\right] \tag{3}\]
Our centroid-based tokenizer does not require extensive resources to train an additional parametric tokenizer model. We only utilize a clustering method, such as the k-means algorithm, to obtain a set of centroids. The quality of the centroids obtained via k-means does not rely heavily on high-quality or large-scale image data and can be achieved using only a small proportion (e.g. 4% of ImageNet) of the training set used for pre-training, thereby circumventing the risk of implicitly introducing additional datasets, which often occurs with the parametric tokenizer models in token-based MIM. We use the k-means algorithm to partition the patches into \(K\) clusters. Each patch \(x_{i}^{p}\) is flattened into a vector of dimension \(D=C\times P^{2}\). The \(N\) resulting \(D\)-dimensional vectors \(\mathcal{X}=\{x_{i}^{v}\in\mathbb{R}^{D}\}_{i=1}^{N}\) are used to find \(K\) centroids \(\{\mathcal{C}_{k}\in\mathbb{R}^{D}\}_{k=1}^{K}\) that minimize the following cost:
\[\mathbb{E}(\mathcal{C}_{1},\ldots,\mathcal{C}_{K})=\frac{1}{N}\sum_{i=1}^{N} \|x_{i}^{v}-\mathcal{C}_{a(i)}\|_{2} \tag{4}\]
where \(N\) is the number of training vectors used for k-means clustering and \(a(i)\) is the assignment function defined by \(a(i)=\operatorname*{arg\,min}_{k\in\{1,\ldots,K\}}\|x_{i}^{v}-\mathcal{C}_{k} \|_{2}\). Our centroid-based tokenizer has the property of local invariance, as we operate on each patch separately, from a purely local perspective.
To convert continuous patches to discrete token indices, for an image patch \(x_{i}^{p}\in\mathbb{R}^{C\times P^{2}}\) with flattened vector \(x_{i}^{v}\in\mathbb{R}^{D}\), its index \(t_{i}\in\{1,\ldots,K\}\) is obtained by looking up the index of the nearest centroid:
\[t_{i}=\operatorname*{arg\,min}_{k\in\{1,\ldots,K\}}\|x_{i}^{v}-\mathcal{C}_{k }\|_{2} \tag{5}\]
Notably, since we only perform clustering and do not change the dimension of each patch, the centroid itself is a patch that can be easily visualized. This gives our centroid-based non-parametric tokenizer its dual-natured property: it can be viewed in terms of both pixels and tokens. In other words, we can recover an approximate version of the original pixels of the input patches from the token indices.
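As an illustration, a minimal NumPy sketch of the tokenizer defined by Eqs. (4)-(5) could look as follows. The helper names (`patchify`, `assign_tokens`) and the toy data are ours, not from a released implementation, and the clustering step is assumed to have already produced the `centroids` array (the paper obtains it with Faiss k-means, see Section 5.1).

```python
import numpy as np

def patchify(images, P=16):
    """Split (B, C, H, W) images into flattened (B*n, C*P*P) patch vectors."""
    B, C, H, W = images.shape
    nh, nw = H // P, W // P
    x = images.reshape(B, C, nh, P, nw, P).transpose(0, 2, 4, 1, 3, 5)
    return x.reshape(B * nh * nw, C * P * P)

def assign_tokens(patches, centroids):
    """Eq. (5): index of the nearest centroid for every patch vector."""
    d2 = ((patches[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

# toy demo: random images and K=8 random "centroids"
# (the paper uses K=8192 centroids obtained by k-means on ImageNet patches)
rng = np.random.default_rng(0)
images = rng.random((2, 3, 224, 224)).astype(np.float32)
patches = patchify(images)                        # (2*196, 768)
centroids = rng.random((8, 768)).astype(np.float32)
tokens = assign_tokens(patches, centroids)
approx_pixels = centroids[tokens]                 # the "pixel view" of the tokens
print(patches.shape, tokens[:10])
```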
### Image Corruption For Modeling
Self-supervised pre-training is essentially a process of corrupting the image and then using the internal correlations of the image to restore the original image, or some form of it. In our CCViT, we employ two corruption strategies: blockwise masking and centroid replacement. Blockwise masking [2] is more challenging than random masking, as it provides less visual context. In centroid replacement, we randomly replace some proportion of the remaining patches with the corresponding centroids. This technique allows the model to learn the mapping relationships between centroids and indices, and between the approximate centroid pixels and the raw pixels. This makes our model more noise resistant and thus achieves better results.
Formally, given the masking ratio \(r_{m}\) and replacement ratio \(r_{re}\), the masked positions are \(\mathcal{M}\in\{1,\ldots,n\}^{r_{m}\times n}\) and the replaced positions are \(\mathcal{R}\in\{1,\ldots,n\}^{r_{re}\times n}\). Note that the replaced positions are always chosen among the unmasked positions, i.e., \(\mathcal{M}\bigcap\mathcal{R}=\emptyset\). An input image \(x\in\mathbb{R}^{C\times H\times W}\) with patch series \(\{x_{i}^{p}\}_{i=1}^{n}\) can thus be corrupted into \(\tilde{x}\):
\[\begin{split}\tilde{x}=\{x_{i}^{p}\mathbf{E}_{p}\mid i\notin\mathcal{ M}\bigcup\mathcal{R}\}_{i=1}^{n}\bigcup\{e_{m}\mid i\in\mathcal{M}\}_{i=1}^{n}\\ \bigcup\{C_{a(i)}\mathbf{E}_{p}\mid i\in\mathcal{R}\}_{i=1}^{n}\end{split} \tag{6}\]
where \(e_{m}\) is the learnable mask token embedding, \(\mathcal{C}_{a(i)}\) is the nearest centroid, and \(a(i)\) is the assignment function defined above, \(a(i)=\operatorname*{arg\,min}_{k\in\{1,\ldots,K\}}\|x_{i}^{v}-\mathcal{C}_{k} \|_{2}\).
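A sketch of the corruption in Eq. (6) is given below. For brevity it uses a uniformly random mask instead of blockwise masking and a zero vector in place of the learnable mask embedding \(e_{m}\); both simplifications are ours, not the paper's.

```python
import numpy as np

def corrupt(patches, centroids, tokens, r_m=0.4, r_re=0.1, rng=None):
    """Eq. (6): mask a fraction r_m of the patches and replace a further
    fraction r_re of the *unmasked* patches by their nearest centroid."""
    rng = rng or np.random.default_rng()
    n = patches.shape[0]
    perm = rng.permutation(n)
    n_mask, n_rep = int(r_m * n), int(r_re * n)
    masked = perm[:n_mask]                              # positions M
    replaced = perm[n_mask:n_mask + n_rep]              # positions R, disjoint from M
    out = patches.copy()
    out[masked] = 0.0                                   # stand-in for the learnable e_m
    out[replaced] = centroids[tokens[replaced]]         # swap in the centroid patch
    corrupted = np.concatenate([masked, replaced])      # T = M ∪ R
    return out, corrupted
```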
### Vision Transformer Backbone
We use ViT as our backbone network as illustrated in Figure 3. The input is the corrupted image patches \(\tilde{x}\) with the corrupted positions \(\mathcal{T}=\mathcal{M}\bigcup\mathcal{R}\) and is flattened and mapped to \(d\)-dimensional patch embeddings \(\tilde{x_{i}}\mathbf{E}_{p}\), where \(\mathbf{E}_{p}\in\mathbb{R}^{d\times(C\times P^{2})}\) is a mapping projection implemented as a single 2D convolutional layer. We also add a learnable CLS token \(\mathbf{E}_{\text{CLS}}\) for global representation learning. The embeddings are then fed into ViT Transformer layers for contextualized encoding. To retain positional information, we add a positional bias in the attention calculation.
Due to the local perspective of our centroid-based modeling, we need the CLS token to explicitly aggregate a global representation, to compensate for the semantic loss caused by downsampling the original image patches to the centroids. We add a pixel ViT block that utilizes the CLS token to gather global information, which has been proven effective in prior token-based MIM works [18, 35]. The input vectors \(\mathbf{H}_{0}=[\mathbf{E}_{\text{CLS}},\tilde{x_{1}}\mathbf{E}_{p},\ldots,\tilde{x_{n}} \mathbf{E}_{p}]\) pass through the \(L\)-layer token ViT block \(\psi_{t}^{L}(\tilde{x})\) to predict the masked and replaced token indices. We concatenate the early representations from the \(l\)-th layer \([\mathbf{h}_{1}^{l},\ldots,\mathbf{h}_{n}^{l}]\) with the CLS token \(\mathbf{h}_{\text{CLS}}^{L}\) from the \(L\)-th (last) layer.
The concatenated vector \(\mathbf{H}_{p}=[\mathbf{h}_{\text{CLS}}^{L},\mathbf{h}_{1}^{l},\ldots,\mathbf{h}_{n}^{l}]\) is used as input to the pixel ViT block for raw pixel reconstruction. The pixel block \(\psi_{p}^{2}(\mathbf{H}_{p})\) is composed of only two layers since the token form of a centroid is closely related to the
pixel form. To obtain the predicted tokens and the reconstructed image, the representations from the token block and the pixel block are mapped into the centroid index space and the original pixel space using two different linear heads \(lin.1(\cdot)\) and \(lin.2(\cdot)\).
Two losses are thus computed. The cross-entropy loss measures the likelihood of the ground-truth centroid indices under the predicted index probabilities \(p_{\mathrm{CIM}}(t_{i}\mid\tilde{x})=\mathrm{softmax}_{t_{i}}(lin.1\circ\psi_{t}^{L}(\tilde{x}))\) computed from the output of the token block. The mean squared error (MSE) quantifies the dissimilarity between the image patches generated by the pixel block \(\psi_{p}^{2}(\mathbf{H}_{p})^{\mathcal{T}}\) and the original image patches \({x^{p}}^{\mathcal{T}}\). The total loss of our CCViT pre-training can be formulated as:
\[\mathcal{L}_{\mathrm{CE}} =-\sum_{\tilde{x}}\sum_{i\in\mathcal{T}}\log p_{\mathrm{CIM}}\left(t_{i}\mid\tilde{x}\right) \tag{7}\] \[\mathcal{L}_{\mathrm{MSE}} =\sum_{\tilde{x}}\sum_{i\in\mathcal{T}}\frac{1}{\lambda(\tilde{x}^{\mathcal{T}})}\|{x^{p}}^{\mathcal{T}}-\psi_{p}^{2}(\mathbf{H}_{p})^{\mathcal{T}}\|_{2}\] \[\mathcal{L}_{\mathrm{CIM}} =\mathcal{L}_{\mathrm{CE}}+\mathcal{L}_{\mathrm{MSE}}\]
where \(\lambda(\tilde{x}^{\mathcal{T}})\) is the number of elements, and \(\psi_{p}^{2}(\mathbf{H}_{p})^{\mathcal{T}}\) indicates that we feed in all patches (masked, replaced and original) but only compute the loss on the corrupted portion (masked and replaced patches).
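In PyTorch-like code, the objective in Eq. (7) could be assembled as below; the tensor names and shapes are assumptions for illustration and do not come from the authors' implementation.

```python
import torch
import torch.nn.functional as F

def cim_loss(token_logits, pixel_pred, target_tokens, target_pixels, corrupted):
    """
    token_logits : (B, n, K) output of the token block + linear head lin.1
    pixel_pred   : (B, n, D) output of the pixel block + linear head lin.2
    target_tokens: (B, n)    centroid indices t_i
    target_pixels: (B, n, D) original patch pixels x^p
    corrupted    : (B, n)    boolean mask of the positions in T = M ∪ R
    """
    ce = F.cross_entropy(token_logits[corrupted], target_tokens[corrupted])
    mse = F.mse_loss(pixel_pred[corrupted], target_pixels[corrupted])
    return ce + mse

# toy shapes only, to show the expected tensor layout
B, n, K, D = 2, 196, 8192, 768
loss = cim_loss(torch.randn(B, n, K), torch.randn(B, n, D),
                torch.randint(0, K, (B, n)), torch.randn(B, n, D),
                torch.rand(B, n) < 0.5)
print(loss.item())
```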
## 5 Experiments
We perform experiments on downstream tasks using our pre-trained model, including image classification and semantic segmentation, in accordance with the standardized evaluation protocols that have been adopted in previous works [2]. We also conduct a brief ablation study on critical components of our model.
### Pre-training Setup
**Centroids Clustering.** We adopt a clustering method on each patch to get the centroids and their corresponding indices. We use the Faiss-GPU [25] library for the k-means algorithm and set the number of clusters to \(K=8192\) for fair comparison with existing works. We use a very small proportion (50K images) of the well-established ImageNet-1K training set for training the centroids, as this set is also used in the pre-training stage, so no information from other datasets is implicitly introduced. In our experiment, each image is resized to \(224\times 224\) resolution and split into \(14\times 14\) image patches, so each patch fed to k-means has a feature size of \(d=3\times 16\times 16=768\). According to the Faiss guidance on k-means clustering, there is no consistent improvement of the k-means quantizer beyond 20 iterations and \(1000\times K\) training points. We therefore randomly choose 50 images from each class of the ImageNet-1K training set to train the centroids with 20 iterations, namely 50K images and 9.8M image patches, which is only 4% of the training set. This clustering stage only needs about 150 seconds, which is a significant improvement compared with the days required to train parametric tokenizer models.
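A sketch of this clustering step with the Faiss library is shown below, using the settings quoted above (\(K=8192\), 20 iterations, 768-dimensional patches); the random placeholder data and the choice of CPU execution are ours.

```python
import numpy as np
import faiss

# Placeholder patch vectors; in the paper these are the 9.8M flattened
# 16x16x3 patches from 50K ImageNet-1K training images.
patches = np.random.rand(100_000, 768).astype(np.float32)

# K = 8192 centroids, 20 iterations. The paper uses the GPU implementation
# (Faiss-GPU); set gpu=True when faiss-gpu is installed. With this random
# placeholder, the CPU run takes a few minutes.
kmeans = faiss.Kmeans(d=768, k=8192, niter=20, verbose=False, gpu=False)
kmeans.train(patches)
centroids = kmeans.centroids                     # (8192, 768)

# token inference is a nearest-centroid lookup
_, token_ids = kmeans.index.search(patches[:196], 1)
print(token_ids.ravel()[:10])
```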
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
**Method** & **Non-parametric** & **Pre-train Data** & **Supervision** & **Epoch** & **Top-1 Acc** & **mIoU** \\ \hline Supervised ViT [15, 21] & \(\blacktriangledown\) & IN1K & Label & - & 77.9 & 47.4\({}^{\dagger}\) \\ \hline \hline \multicolumn{7}{l}{_Token-based with parametric tokenizer_} \\ BEiT [2] & \(\blacktimes\) & IN1K+DALL-E & DALL-E & 300 & 82.9 & 44.7 \\ BEiT [2] & \(\blacktimes\) & IN1K+DALL-E & DALL-E & 800 & 83.2 & 45.6 \\ BEiTv2 [35] & \(\blacktimes\) & IN1K+CLIP-B & CLIP-B & 300 & 85.0 & 52.7 \\ BEiTv2 [35] & \(\blacktimes\) & IN1K+CLIP-B & CLIP-B & 1600 & **85.5** & **53.1** \\ mc-BEiT [31] & \(\blacktimes\) & IN1K & VQGAN & 300 & 83.9 & - \\ mc-BEiT [31] & \(\blacktimes\) & IN1K & VQGAN & 800 & 84.1 & 47.0 \\ PeCo [14] & \(\blacktimes\) & IN1K & VQGAN & 300 & 84.1 & 46.7 \\ PeCo [14] & \(\blacktimes\) & IN1K & VQGAN & 800 & 84.5 & 48.5 \\ iBOT [48] & \(\blacktimes\) & IN1K & Self-Distillation & 1600 & 84.0 & 50.0 \\ \hline \multicolumn{7}{l}{_Pixel-based_} \\ SplitMask [16] & \(\blacktriangledown\) & IN1K/ADE20K & Patch & 300 & 83.6 & 45.7 \\ MaskFeat [43] & \(\blacktriangledown\) & IN1K & HOG & 300 & 83.6 & - \\ MaskFeat [43] & \(\blacktriangledown\) & IN1K & HOG & 1600 & 84.0 & - \\ MAE [21] & \(\blacktriangledown\) & IN1K & Pixel & 1600 & 83.6 & 48.1 \\ \hline \multicolumn{7}{l}{_Centroid-based_} \\
**Ours** & \(\blacktriangledown\) & **IN1K** & **Centroids** & **300** & **84.3** & **48.4** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Fine-tuning results of top-1 accuracy (%) of image classification on ImageNet-1K and mIoU (%) of semantic segmentation on ADE20K. \({}^{\dagger}\): result reproduced by MAE.
**Centroid-based MIM.** We follow the settings used in prior MIM methods [2, 35, 31] for fair comparison. We use the ImageNet-1K training set without labels to pre-train our model via self-supervised learning. We set the image resolution to \(224\times 224\) and the patch size to \(16\times 16\), and pre-train a base-size Vision Transformer (ViT-Base/16). We use blockwise masking with a ratio of 40% and centroid replacement with a ratio of 10%, corresponding to 75 patches and 20 of the remaining patches, respectively. The base-size ViT has \(L=12\) layers in the token block, and we append a two-layer pixel block whose input is the output of the \(l=9\)-th layer concatenated with the CLS token from the last (\(L=12\)-th) layer of the token block. We pre-train our model for 300 epochs using 4 NVIDIA GeForce RTX 3090 GPUs. To keep the same batch size as existing works, we accumulate gradients over 4 steps, resulting in a total batch size of 2048. We use AdamW [26, 32] with \(\beta_{1}=0.9,\beta_{2}=0.98\) for optimization. The learning rate is set to 1.5e-3 with 10 warmup epochs and cosine learning rate decay. We do not use dropout in this work. More details can be found in Appendix A.
### Image Classification
We only use the token ViT block in the fine-tuning step for fair comparison, as it is a complete ViT-Base encoder. For the image classification task, we evaluate fine-tuning accuracy on ImageNet-1K. We adopt top-1 accuracy after fine-tuning for 100 epochs as the metric for this task. More details can be found in Appendix B.
We present the image classification results in Table 1. Our base-size model is pre-trained for only 300 epochs and reaches 84.3% top-1 accuracy on the ImageNet-1K classification task, outperforming the MIM baseline BEiT by +1.4% and MAE by +0.7%, even though the latter is pre-trained for 1600 epochs. We also outperform all the pixel-based works. This verifies the effectiveness of our CCViT with centroid-based MIM, as we only use the ImageNet-1K dataset and an easily constructed, parameter-free supervision.
We also surpass most methods at the same 300 epochs. However, our model is 0.7% worse than BEiTv2, which uses very strong supervision from CLIP and implicitly utilizes CLIP's data. Our model also achieves competitive results compared with models pre-trained for more epochs. This demonstrates that centroids are a more efficient pre-training objective for classification than tokens or pixels alone.
### Semantic Segmentation
Semantic segmentation aims to predict the class of each pixel and can be considered a pixel-level classification task, so it is often adopted to evaluate pre-trained models. We use the ADE20K [47] benchmark and report mean intersection over union (mIoU) averaged over all semantic categories. For the model architecture, we use ViT-Base/16 as the backbone and UPerNet [45] as the semantic segmentation head. For fair comparison, we fine-tune for 160k steps with a batch size of 16. We set the learning rate to 8e-5 with learning rate decay and use AdamW as the optimizer. More details can be found in Appendix C.
As shown in Table 1, our method achieves 48.4% mIoU and outperforms BEiT and MAE by +3.7% and +0.3%, respectively. We also surpass most methods at the same 300 epochs on semantic segmentation, but again fall short of BEiTv2, which implicitly distills CLIP. According to BEiT, the performance on ADE20K can be further improved by intermediate fine-tuning on ImageNet-1K. We conduct this experiment and compare with BEiT and mc-BEiT. In Table 2 we report the semantic segmentation performance after intermediate fine-tuning on ImageNet-1K for 100 epochs. Our CCViT method achieves 51.6% mIoU, gaining +3.2% over our pre-training-only model, and significantly outperforms the prior arts BEiT and mc-BEiT by +3.9% and +0.8% mIoU, respectively.
### Ablation Studies
In this section, we ablate the critical components of our model, using the ViT-Base/16 model for comparison. The models are evaluated on image classification on ImageNet-1K and semantic segmentation on ADE20K. We set pre-training to 300 epochs and fine-tuning to 100 epochs for classification and 160k steps for segmentation.
First, we analyze the different pre-training targets in our model. Note that we always use tokens, i.e., centroid indices, as the target after the token block, while after the pixel block we can use either a token target or a pixel target. According to Table 3, using both tokens and pixels as targets achieves the best results (84.30 vs 84.21 and 48.35 vs 47.89). This shows that it is beneficial to learn both the token and the pixel form in centroid-based modeling. Next, we ablate the random replacement corruption strategy. We find that our model with random replacement performs better than with only
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Method** & **Supervision** & **Epoch** & **mIoU** \\ \hline BEiT [2] & DALL-E & 800 & 47.7 \\ mc-BEiT [31] & VQGAN & 800 & 50.8 \\ \hline
**Ours** & Centroids & **300** & **51.6** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Intermediate Fine-tuning on ADE20k, which is pre-trained and fine-tuned on ImageNet-1K classification.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Tokens** & **Pixel** & **Rep.** & **Top-1 Acc** & **mIoU** \\ \hline \(\bigvee\) & \(\bigtimes\) & \(\bigtimes\) & 84.12 & 47.42 \\ \(\bigvee\) & \(\bigtimes\) & \(\bigvee\) & 84.21 & 47.89 \\ \(\bigvee\) & \(\bigvee\) & \(\bigvee\) & 84.30 & 48.35 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation study for different pre-training setting on ImageNet-1K classification and ADE20K segmentation. “Rep.” means using random replacing strategy.
blockwise masking on both ImageNet-1K classification and ADE20K semantic segmentation (84.21 vs 84.12 and 47.89 vs 47.42). This indicates that the random replacement technique encourages the model to learn the alignment between pixels and tokens.
### Further Analysis on Centroid-based Tokenizer
Performance on downstream tasks is closely related to the tokenizer. Tokenizers using more datasets (such as DALL-E [35] in BEiT [2]) or distilled from huge models (such as CLIP [36] in BEiTv2 [35]) often achieve better results. Such parametric tokenizers are so large that they can even be fine-tuned on downstream tasks alone, without encoders. Moreover, they consume a lot of training resources before pre-training and need a long time to infer visual tokens. Patches from images usually have the property of local invariance, i.e., if a patch in one image is replaced, the other patches are not affected, yet previous visual tokenizers fail to maintain this property. To show the noise resistance of our centroid-based tokenizer, we first mask out some patches to investigate whether the remaining patch tokens change. We also add noise such as Gaussian noise and Gaussian blur to the image and observe the ratio of unchanged token indices.
As shown in Table 4, our non-parametric centroid tokenizer is more stable under noise than BEiT and BEiTv2. The tokenizers of BEiT and BEiTv2 change the corresponding tokens even if the pixels of a patch are not changed (only 1.41% and 3.97% of tokens are kept unchanged), which means that they cannot guarantee a local correspondence between image patches and visual tokens. Therefore, the tokens from parametric tokenizers cannot independently represent the semantics of a single patch. In our method, each centroid is obtained from a single patch, so it does not suffer from the inconsistency caused by noise; the clustering extracts the key information of the patch, which is more robust to noise.
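The stability measurement itself is easy to reproduce in a few lines; the sketch below perturbs patches with Gaussian noise and reports the fraction of unchanged nearest-centroid indices. The pixel scale, noise level and toy data are assumptions, so the numbers it prints are not those of Table 4.

```python
import numpy as np

def nearest_centroid(patches, centroids):
    d2 = ((patches[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def unchanged_token_ratio(patches, centroids, sigma, rng=None):
    """Fraction of patches whose nearest-centroid index survives additive
    Gaussian pixel noise of standard deviation sigma."""
    rng = rng or np.random.default_rng(0)
    noisy = patches + rng.normal(0.0, sigma, patches.shape)
    return float((nearest_centroid(patches, centroids)
                  == nearest_centroid(noisy, centroids)).mean())

# toy demo with synthetic patches clustered around 64 centroids
rng = np.random.default_rng(1)
centroids = rng.random((64, 768))
patches = centroids[rng.integers(0, 64, size=200)] + rng.normal(0, 0.01, (200, 768))
print(unchanged_token_ratio(patches, centroids, sigma=0.05))
```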
In Table 5, we also show the training and inference speed and masked-token prediction accuracy of our centroid-based tokenizer and the tokenizers in BEiT and BEiTv2. For training speed, the centroid-based tokenizer only needs about 158s on a single NVIDIA RTX 3090 GPU to construct, while the tokenizers in BEiT and BEiTv2 need several days1. For inference speed, under the same batch size of 64 images and the same environment (a single RTX 3090), our model only uses 9.3ms, while the BEiT and BEiTv2 tokenizers use 15.7 and 74.8ms, respectively. It is also worth noting that our tokenizer occupies very little GPU memory, so the batch size can be increased to fully utilize the parallel capability of the GPU, which would make our speed advantage even more pronounced.
Footnote 1: This depends on the hardware environment and the data size. The BEiTv2 tokenizer took 7 days of training on a single NVIDIA RTX 3090 GPU.
In addition, to verify our conclusion that previous tokenizers lose local invariance, we also show the token prediction accuracy in the pre-training stage. The centroid-based tokenizer outperforms the BEiT and BEiTv2 tokenizers by large margins. On the one hand, this indicates that our model can predict tokens from the visual context; on the other hand, BEiT and BEiTv2 do not learn enough inter-patch relations to perform masking inference via inter-patch correlations. This reflects the fact that the improvement of BEiTv2 is essentially brought about by discrete feature distillation.
## 6 Conclusion and Future Work
Existing token-based MIM with parametric tokenizer models suffers from the tokenizer training cost, while pixel-based MIM needs a redundant decoder to align with the vanilla pixels. In this work, we propose a novel centroid-centered ViT pre-training framework, in which centroid-based MIM is employed to model the image. We use simple and effective k-means clustering to obtain the centroids, and adopt the mask-then-predict paradigm to learn the token and pixel representations simultaneously. The tokenizer construction only needs several seconds and a very small proportion of the pre-training data (4%). Our centroid-based tokenizer also has the property of local invariance, which is better suited to pre-training context-dependent architectures. Good fine-tuning
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c}{**Mask**} & \multicolumn{3}{c}{**Gaussian Noise**} & \multicolumn{3}{c}{**Gaussian Blur**} \\ \cline{2-10} & 0.1 & 0.2 & 0.5 & 1 & 10 & 25 & 0.5 & 1 & 2 \\ \hline BEiT & 34.34 & 14.17 & 1.41 & 88.02 & 32.54 & 9.31 & 61.18 & 25.32 & 6.93 \\ BEiTv2 & 59.61 & 33.56 & 3.97 & 95.03 & 57.43 & 24.02 & 83.52 & 61.29 & 0.08 \\ \hline
**Ours** & **90.01** & **80.02** & **50.05** & **98.94** & **88.61** & **72.28** & **96.38** & **86.39** & **66.72** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison of the noise resistance ability of different tokenizers. We report the ratio of unchanged tokens. We explore different mask ratios and different noise standard deviations \(\sigma\) in Gaussian noise and Gaussian blur. Visualization of different noises can be found in Appendix D.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Method** & **Training** & **Inference** & **Accuracy** & **Memory** \\ \hline BEiT & Days & 15.7\(\pm\)2.5ms & 4.96 & 13.2G \\ BEiTv2 & Days & 74.8\(\pm\)0.6ms & 9.49 & 21.5G \\ \hline
**Ours** & **158s** & **9.3\(\pm\)0.5ms** & **40.83** & **3.4G** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Performance of different tokenizers.
results of our CCViT on downstream tasks, including classification and segmentation, demonstrate its effectiveness. Further analysis shows that our CCViT is more robust to noise. In the future, we will advance this work by scaling up the data size and model size and by pre-training for longer when more hardware resources are available. We will also explore distillation approaches within our method to obtain better results, since distillation is orthogonal to our contribution.
|
2309.01747 | The HHMP decomposition of the permutohedron and degenerations of torus
orbits in flag varieties | Let $Z\subset Fl(n)$ be the closure of a generic torus orbit in the full flag
variety. Anderson-Tymoczko express the cohomology class of $Z$ as a sum of
classes of Richardson varieties. Harada-Horiguchi-Masuda-Park give a
decomposition of the permutohedron, the moment map image of $Z$, into
subpolytopes corresponding to the summands of the Anderson-Tymoczko formula. We
construct an explicit toric degeneration inside $Fl(n)$ of $Z$ into Richardson
varieties, whose moment map images coincide with the HHMP decomposition,
thereby obtaining a new proof of the Anderson-Tymoczko formula. | Carl Lian | 2023-09-04T18:06:13Z | http://arxiv.org/abs/2309.01747v4 | # The Hhimp decomposition of the permutohedron and degenerations of torus orbits in flag varieties
###### Abstract.
Let \(Z\subset\operatorname{Fl}(n)\) be the closure of a generic torus orbit in the full flag variety. Anderson-Tymoczko express the cohomology class of \(Z\) as a sum of classes of Richardson varieties. Harada-Horiguchi-Masuda-Park give a decomposition of the permutohedron, the moment map image of \(Z\), into subpolytopes corresponding to the summands of the Anderson-Tymoczko formula. We construct an explicit toric degeneration inside \(\operatorname{Fl}(n)\) of \(Z\) into Richardson varieties, whose moment map images coincide with the HHMP decomposition, thereby obtaining a new proof of the Anderson-Tymoczko formula.
## 1. Introduction
Let \(\mathbb{C}^{n}\) be a complex vector space of dimension \(n\) with the standard action of an \(n\)-dimensional torus \(T\). Let \(\operatorname{Fl}(n)\) be the variety of complete flags in \(\mathbb{C}^{n}\), which inherits a standard \(T\)-action. Let \(Z\subset\operatorname{Fl}(n)\) be the closure of the \(T\)-orbit of a generic point. The cycle class \([Z]\) in \(H^{*}(\operatorname{Fl}(n))\) was computed by Anderson-Tymoczko. (We work throughout with rational coefficients.)
**Theorem 1**.: _[_1_]_ _We have_
\[[Z]=\sum_{w\in S_{n-1}}\sigma_{\iota(w)}\sigma_{\overline{\iota}(w_{0}w)}\]
_in \(H^{(n-1)(n-2)}(\operatorname{Fl}(n))\)._
See §2 for notation. The class \([Z]\) is equal to that of a _regular semisimple Hessenberg variety_ in \(\operatorname{Fl}(n)\) with Hessenberg function \(h(i)=i+1\), which in turn is cut out by degeneracy loci whose classes are determined by the work of Fulton [4]. The coefficients of \([Z]\) when expressed in the Schubert basis were previously determined by Klyachko [8, Theorem 4] in terms of representation theory, and a positive combinatorial interpretation was recently given by Nadeau-Tewari [10].
The form of Theorem 1 suggests that one might hope for the existence of a (toric) degeneration of \(Z\) into the union of Richardson varieties (generically transverse intersections of two Schubert varieties) \(Z_{w}\) in \(\operatorname{Fl}(n)\) of class \(\sigma_{\iota(w)}\sigma_{\overline{\iota}(w_{0}w)}\). The purpose of this note is to construct such a degeneration, thereby giving a new proof of Theorem 1.
We show moreover that this degeneration is already encoded in a polyhedral decomposition of the permutohedron \(\operatorname{Perm}(n)\) given by Harada-Horiguchi-Masuda-Park [6]. Their decomposition, which we refer to as the HHMP decomposition, is given by a union
\[\operatorname{Perm}(n)=\bigcup_{w\in S_{n-1}}\operatorname{GZ}(w)\]
of subpolytopes, indexed by permutations \(w\in S_{n-1}\), which are faces of the _Gelfand-Zetlin_ polytope. The correspondence between the subpolytopes \(\mathrm{GZ}(w)\) and the classes \(\sigma_{\iota(w)}\sigma_{\overline{\iota}(w_{0}w)}\)
appearing in the Anderson-Tymoczko formula is also shown to be volume-preserving, suitably interpreted.
More precisely, we identify the subpolytopes \(\mathrm{GZ}(w)\) with the moment map images of the components \(Z_{w}\) appearing in the special fiber of our degeneration. The \(Z_{w}\) are themselves orbit closures of special points in \(\mathrm{Fl}(n)\), and their moment map images are therefore _flag matroid polytopes_\(\mathrm{FM}(w)\). We summarize the new results below.
**Theorem 2**.: _There exists an embedded toric degeneration of \(Z\subset\mathrm{Fl}(n)\) into irreducible components \(Z_{w}\subset\mathrm{Fl}(n)\), which are equal to \(T\)-orbit closures of special flags \(\mathcal{L}_{w}\in\mathrm{Fl}(n)\), and which all appear with multiplicity 1. Furthermore:_
1. _(Theorem_ 18_)_ \(Z_{w}\) _is the_ \(T\)_-orbit closure of a special flag_ \(\mathcal{L}_{w}\in\mathrm{Fl}(n)\)_, whose associated flag matroid polytope_ \(\mathrm{FM}(w)\) _is equal to the polytope_ \(\mathrm{GZ}(w)\) _appearing in the HHMP decomposition._
2. _(Theorem_ 22_)_ \(Z_{w}\) _is a Richardson variety of class_ \(\sigma_{\iota(w)}\sigma_{\overline{\iota}(w_{0}w)}\)_._
Combining these conclusions yields a new proof of Theorem 1. Theorems 18 and 22 are largely implicit in the work of other authors. Nadeau-Tewari [11, §6-7] identify the polytopes \(\mathrm{GZ}(w)\) with _Bruhat interval polytopes_ in the sense of Tsukerman-Williams [12]. Bruhat interval polytopes are in turn flag matroid polytopes [12, Proposition 2.9] and moment map images of Richardson varieties [12, Remark 7.11]. Our main contribution is the explicit construction of the degeneration reproving the Anderson-Tymoczko formula; we give self-contained proofs of Theorems 18 and 22 for completeness.
It may be of interest to construct similar degenerations for the more general Hessenberg varieties considered by Anderson-Tymoczko, whose classes are computed by a similar formula, or for torus orbit closures in other Lie types, where we are not aware of such formulas. Outside of the toric setting, we have carried out similar degenerations on the Grassmannians in connection with counting curves on projective spaces, see [9, §8].
We review preliminaries in §2 and the HHMP decomposition in §3. We construct the degeneration of \(Z\) in §4. We prove Theorem 18 in §5, which implies that the \(Z_{w}\) are the only components that appear in our degeneration. We prove Theorem 22 in §6, which completes the proof of Theorem 1. We explain how our degeneration pushes forward to Grassmannians in §7, recovering a result of Berget-Fink [2, Theorem 5.1].
### Acknowledgments
This paper was written with the support of an NSF postdoctoral fellowship, DMS-2001976. We are grateful to Philippe Nadeau and Vasu Tewari for many helpful comments and references to the literature, and to Matt Larson for pointing out an oversight in the original draft of this work.
## 2. Preliminaries
### Permutations
Let \([n]=\{1,2,\ldots,n\}\). Permutations \(w\in S_{n}\) are understood to be functions \(w:[n]\to[n]\).
**Definition 3**.: _Let \(w\in S_{n}\) be a permutation. For \(j\in[n]\), we define \(r_{j}(w)\in[1,n+1-j]\) to be the integer such that \(w^{-1}(j)\) is the \(r_{j}\)-th largest integer among \(w^{-1}(j),w^{-1}(j+1),\ldots,w^{-1}(n)\). We also write \(\overrightarrow{r}(w)=(r_{1},\ldots,r_{n})\)._
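For example, if \(w\in S_{3}\) is given by \(w(1)=2\), \(w(2)=3\), \(w(3)=1\), then \(w^{-1}(1)=3\), \(w^{-1}(2)=1\), \(w^{-1}(3)=2\). Among \(w^{-1}(1),w^{-1}(2),w^{-1}(3)\) the value \(w^{-1}(1)=3\) is the largest, so \(r_{1}=1\); among \(w^{-1}(2),w^{-1}(3)\) the value \(w^{-1}(2)=1\) is the second largest, so \(r_{2}=2\); and \(r_{3}=1\) trivially. Hence \(\overrightarrow{r}(w)=(1,2,1)\).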
In this way, permutations \(w\) are clearly in bijection with integer vectors \(\overrightarrow{r}\) of length \(n\) with \(r_{j}\in[1,n+1-j]\). It will be convenient to pass between these two indexings of elements
of \(S_{n}\). We will often write \(r_{j}\) and \(\overrightarrow{r}\) instead of \(r_{j}(w)\) and \(\overrightarrow{r}(w)\) when the permutation \(w\) has been fixed.
**Definition 4**.: _The length \(\ell(w)\) of a permutation \(w\in S_{n}\) is the minimal number of simple transpositions \((i,i+1)\) which need to be composed to obtain \(w\). We denote by \(w_{0}\in S_{n}\) the longest permutation, given by \(w_{0}(i)=n+1-i\)._
**Definition 5**.: _Let \(\iota:S_{n-1}\hookrightarrow S_{n}\) be the inclusion sending \(w:[n-1]\to[n-1]\) to the permutation \(\iota(w):[n]\to[n]\) with \(\iota(w)(n)=n\) and \(\iota(w)(i)=w(i)\) for \(i=1,\ldots,n-1\)._
_Let \(\overline{\iota}:S_{n-1}\hookrightarrow S_{n}\) be the inclusion sending \(w:[n-1]\to[n-1]\) to the permutation \(\iota(w):[n]\to[n]\) with \(\iota(w)(1)=1\) and \(\iota(w)(i)=w(i-1)+1\) for \(i=2,\ldots,n\)._
### Schubert and Richardson varieties
Fix an \(n\)-dimensional vector space \(\mathbb{C}^{n}\), and let \(\operatorname{Fl}(n)\) be the space of complete flags \(0\subset L_{1}\subset\cdots\subset L_{n-1}\subset\mathbb{C}^{n}\). An additive basis of the cohomology \(H^{*}(\operatorname{Fl}(n))\) is given by the classes of Schubert varieties, defined as follows. Let \(F=(0\subset F_{1}\subset\cdots\subset F_{n-1}\subset\mathbb{C}^{n})\) be a fixed flag, and let \(w\in S_{n}\) be a permutation.
**Definition 6**.: _We define the Schubert variety \(\Sigma_{w}^{F}\subset\operatorname{Fl}(n)\) to be the locus of flags \(\mathcal{L}\in\operatorname{Fl}(n)\) satisfying_
\[\dim(L_{i}\cap F_{n-j})\geq\#\left(\{w(1),\ldots,w(i)\}\cap\{j+1,\ldots,n\} \right).\]
_The class \([\Sigma_{w}^{F}]\in H^{2\ell(w)}(\operatorname{Fl}(n))\), which does not depend on \(F\), is denoted \(\sigma_{w}\)._
Let \(F^{\prime}=(0\subset F^{\prime}_{1}\subset\cdots\subset F^{\prime}_{n-1} \subset\mathbb{C}^{n})\) be a second fixed flag, transverse to \(F\), in the sense that \(\dim(F_{j}\cap F^{\prime}_{k})=\max(0,j+k-n)\) for all \(j,k\). Let \(w^{\prime}\in S_{n}\) be a second permutation.
**Definition 7**.: _The intersection \(\Sigma_{w}^{F}\cap\Sigma_{w^{\prime}}^{F^{\prime}}\) is called a Richardson variety._
The following facts are standard: Schubert varieties are irreducible and reduced of codimension \(\ell(w)\), and Richardson varieties are, when non-empty, irreducible and reduced of codimension \(\ell(w)+\ell(w^{\prime})\).
### Flag matroids and rank polytopes
In this paper, we deal only with realizable flag matroids, which come from complete flags in \(\operatorname{Fl}(n)\). See [3] for a survey on flag matroids and their associated polytopes.
Fix, as before, an \(n\)-dimensional vector space \(\mathbb{C}^{n}\), and fix in addition a basis \(\langle e_{1},\ldots,e_{n}\rangle\). Let \(A\) be a non-singular \(n\times n\) matrix. The columns of \(A\) define a complete flag \(\mathcal{L}(A)\) in \(\mathbb{C}^{n}\) by taking \(L_{i}\) to be the span of the first \(i\) column vectors of \(A\).
**Definition 8**.: _For any \(S\subset[n]\) and \(j\in[n]\), let \(A_{S,j}\) denote the sub-matrix obtained by taking the rows of \(A\) indexed by \(S\) and the first \(j\) columns of \(A\). Then, the flag matroid associated to \(A\) is the data of the rank function \(\operatorname{rank}_{A}:\mathcal{P}([n])\to\mathbb{Z}_{\geq 0}\) defined by_
\[\operatorname{rank}_{A}(S)=\sum_{j=0}^{n-1}\operatorname{rank}(A_{S,j}).\]
We will often abuse terminology, identifying the matrix \(A\) with its associated flag and flag matroid.
**Definition 9**.: _Let \(A\) be a non-singular \(n\times n\) matrix. Then, the flag matroid polytope \(\operatorname{FM}(A)\subset\mathbb{R}_{\geq 0}^{n}\) is defined to be the locus cut out by the equation_
\[z_{1}+\cdots+z_{n}=\frac{n(n-1)}{2}\]
_and the inequalities_
\[z_{S}:=\sum_{i\in S}z_{i}\leq\operatorname{rank}_{A}(S)\]
_for any subset \(S\subset\{1,2,\ldots,n\}\)._
_More generally, given a sequence \(\lambda\) of real numbers \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{n}\geq 0\), the weighted flag matroid polytope \(\operatorname{FM}(\lambda,A)\subset\mathbb{R}_{\geq 0}^{n}\) is defined to be the locus of vectors \(\overrightarrow{z}=(z_{1},\ldots,z_{n})\) cut out by the equation_
\[z_{1}+\cdots+z_{n}=\lambda_{1}+\cdots+\lambda_{n}\]
_and the inequalities_
\[z_{S}\leq\sum_{j=1}^{n}(\lambda_{j}-\lambda_{j+1})\operatorname{rank}(A_{S,j}),\]
_for any subset \(S\subset\{1,2,\ldots,n\}\), where by convention we set \(\lambda_{n+1}=0\). Taking \(\lambda=(n-1,n-2,\ldots,0)\) recovers the definition of \(\operatorname{FM}(A)\)._
The required upper bound on \(z_{S}\) may be re-written as
\[\sum_{j=1}^{n}(\operatorname{rank}(A_{S,j})-\operatorname{rank}(A_{S,j-1})) \lambda_{j},\]
where the coefficient in front of \(\lambda_{j}\) is \(1\) if adding the \(j\)-th column to \(A_{S,j-1}\) to obtain \(A_{S,j}\) increases the rank, and \(0\) otherwise.
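The quantities in Definitions 8 and 9 are easy to evaluate numerically. The following is a minimal illustrative sketch (our own, not part of the original text; function names are arbitrary), which treats \(A_{S,0}\) as an empty matrix of rank \(0\) and checks that for \(\lambda=(n-1,\ldots,1,0)\) the weighted bound coincides with the flag matroid rank, consistent with Definition 9.

```python
import numpy as np
from itertools import combinations

def rank_Sj(A, S, j):
    """Rank of the submatrix A_{S,j}: rows indexed by S (1-based), first j columns."""
    if j == 0 or len(S) == 0:
        return 0
    rows = [s - 1 for s in sorted(S)]
    return int(np.linalg.matrix_rank(A[np.ix_(rows, list(range(j)))]))

def flag_matroid_rank(A, S):
    """rank_A(S) = sum_{j=0}^{n-1} rank(A_{S,j}), as in Definition 8."""
    n = A.shape[0]
    return sum(rank_Sj(A, S, j) for j in range(n))

def weighted_bound(A, S, lam):
    """Right-hand side sum_j (rank(A_{S,j}) - rank(A_{S,j-1})) * lambda_j of Definition 9."""
    n = A.shape[0]
    return sum((rank_Sj(A, S, j) - rank_Sj(A, S, j - 1)) * lam[j - 1] for j in range(1, n + 1))

# A generic matrix defines the uniform flag matroid; lambda = (n-1, ..., 0) recovers FM(A),
# so the weighted bound should coincide with the flag matroid rank for every subset S.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
lam = [3, 2, 1, 0]
for k in range(1, 5):
    for S in combinations(range(1, 5), k):
        assert weighted_bound(A, list(S), lam) == flag_matroid_rank(A, list(S))
print("bounds agree for all S on a generic 4x4 matrix")
```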
**Definition 10**.: _Suppose that \(A\) has the property that \(A_{S,j}\) has maximal rank for any \(S,j\). Then, \(A\) is said to define the uniform flag matroid._
_The permutohedron \(\operatorname{Perm}(n)\subset\mathbb{R}^{n}\) is the flag matroid polytope \(\operatorname{FM}(A)\) associated to the uniform flag matroid. Similarly, the weighted permutohedron \(\operatorname{Perm}(\lambda)\subset\mathbb{R}^{n}\) is the weighted flag matroid polytope \(\operatorname{FM}(\lambda,A)\) for the uniform flag matroid._
Note that a generic matrix \(A\) defines the uniform flag matroid. The permutohedron is equivalently the convex hull of the points \((w(1)-1,\ldots,w(n)-1)\), where \(w\) ranges over all permutations in \(S_{n}\). Similarly, the weighted permutohedron is the convex hull of the points \((\lambda_{w(1)},\ldots,\lambda_{w(n)})\) for any \(\lambda\). For any \(A\), we have \(\operatorname{FM}(\lambda,A)\subset\operatorname{Perm}(\lambda)\).
### The moment map
The main references for this section are the work of Gel'fand-Serganova [5] and Kapranov [7].
Let
\[p:\operatorname{Fl}(n)\hookrightarrow\prod_{r=1}^{n-1}\operatorname{Gr}(r,n) \hookrightarrow\prod_{r=1}^{n-1}\mathbb{P}^{\binom{n}{r}-1}\]
be the Plücker embedding. Let
\[\mu_{r}:\mathbb{P}^{\binom{n}{r}-1}\to\mathbb{R}^{n}\]
be the map
\[\mu_{r}([x_{I}])=\frac{\sum_{I}|x_{I}|^{2}e_{I}}{\sum_{I}|x_{I}|^{2}}\]
where \(I\) ranges over all \(r\)-element subsets of \([n]\), and \(e_{I}\) is the indicator vector \(\sum_{i\in I}e_{i}\in\mathbb{R}^{n}\). Then, the _moment map_ \(\mu:\operatorname{Fl}(n)\to\mathbb{R}^{n}\) is obtained by composing \(p\) with the maps \(\mu_{r}\) on the factors of \(\prod_{r=1}^{n-1}\mathbb{P}^{\binom{n}{r}-1}\) and summing the results.
The key property of \(\mu\) is the following. Let \(A\) be a non-singular \(n\times n\) matrix, let \(\mathcal{L}(A)\in\operatorname{Fl}(n)\) be the associated flag, and let \(Z_{A}\) be the \(T\)-orbit closure of \(\mathcal{L}(A)\). Then, the image of \(Z_{A}\) under \(\mu\) is equal to the flag matroid polytope \(\operatorname{FM}(A)\). Moreover, the dimension of \(Z_{A}\) as a subvariety of \(\operatorname{Fl}(n)\) is equal to the dimension of \(\operatorname{FM}(A)\) as a polytope.
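For small \(n\), the moment map can be evaluated directly from the Plücker coordinates of the flag of column spans. The sketch below is our own illustration (not from the paper; the random seed and torus element are arbitrary); it computes \(\mu(\mathcal{L}(A))\) and can be used to check numerically that torus translates of a flag map into the corresponding flag matroid polytope.

```python
import numpy as np
from itertools import combinations

def moment_map(A):
    """mu(L(A)) = sum_r mu_r(p(L_r)), where L_r is the span of the first r columns of A."""
    n = A.shape[0]
    mu = np.zeros(n)
    for r in range(1, n):
        num, den = np.zeros(n), 0.0
        for I in combinations(range(n), r):
            x_I = np.linalg.det(A[list(I), :r])   # Pluecker coordinate of L_r
            weight = abs(x_I) ** 2
            e_I = np.zeros(n)
            e_I[list(I)] = 1.0                    # indicator vector sum_{i in I} e_i
            num += weight * e_I
            den += weight
        mu += num / den
    return mu

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
print(moment_map(A).round(3), moment_map(A).sum())             # coordinates sum to n(n-1)/2 = 6
print(moment_map(np.diag([1.0, 5.0, 0.2, 3.0]) @ A).round(3))  # a torus translate of the same flag
```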
Toric degenerations of \(Z_{A}\) inside \(\operatorname{Fl}(n)\) correspond to flag matroidal polyhedral subdivisions of \(\operatorname{FM}(A)\). More precisely, consider a \(1\)-parameter toric degeneration of \(Z_{A}\) with irreducible components \(Z_{1},\ldots,Z_{m}\) on the special fiber. Then, the \(Z_{i}\) are all reduced, and equal to orbit closures of flags \(\mathcal{L}_{i}\in\operatorname{Fl}(n)\), whose flag matroid polytopes \(\operatorname{FM}(A_{i})\) (where \(A_{i}\) is obtained by choosing appropriate bases for the components of the flags \(\mathcal{L}_{i}\)) form a polyhedral subdivision of \(\operatorname{FM}(A)\).
## 3. The HHMP decomposition
In this section, we review the decomposition of \(\operatorname{Perm}(\lambda)\) given by Harada-Horiguchi-Masuda-Park [6].
**Definition 11**.: _Fix a sequence \(\lambda\) of real numbers \(\lambda_{1}\geq\cdots\geq\lambda_{n}\geq 0\). The Gelfand-Zetlin (\(\operatorname{GZ}\)) polytope \(\operatorname{GZ}(\lambda)\) is defined as follows. Consider the diagram_
\[\begin{array}{cccccc}\lambda_{1}&x_{1,2}&x_{1,3}&\cdots&x_{1,n-1}&x_{1,n}\\ &\lambda_{2}&x_{2,3}&\ddots&x_{2,n-1}&x_{2,n}\\ &&\ddots&\ddots&\vdots&\vdots\\ &&&\lambda_{n-2}&x_{n-2,n-1}&x_{n-2,n}\\ &&&&\lambda_{n-1}&x_{n-1,n}\\ &&&&&\lambda_{n}\end{array}\]
_where we also set \(x_{i,i}=\lambda_{i}\) for \(i=1,2,\ldots,n\)._
_Then, \(\operatorname{GZ}(\lambda)\subset\mathbb{R}^{n(n-1)/2}\) is defined to be the subset of vectors \(\overrightarrow{x}=(x_{k,\ell})_{1\leq k<\ell\leq n}\) for which any three variables appearing in the configuration_
\[\begin{array}{cc}a&b\\ &c\end{array}\]
_above satisfies \(a\geq b\geq c\)._
**Definition 12**.: _Let \(w\in S_{n-1}\) be a permutation and write \(\overrightarrow{r}=\overrightarrow{r}(w)=(r_{1},\ldots,r_{n-1})\) for the corresponding vector. Define the face \(\operatorname{GZ}(\lambda,w)\subset\operatorname{GZ}(\lambda)\) to be the subset of \(\operatorname{GZ}(\lambda)\) of points satisfying the equations_
\[x_{i,i+j}=x_{i,i+j-1}\ \text{for}\ i\in[1,r_{j}-1],\quad\text{ and }\quad x_{i,i+j}=x_{i+1,i+j}\ \text{for}\ i\in[r_{j}+1,n-j].\]
_for all \(j=1,2,\ldots,n-1\)._
The key property of \(\operatorname{GZ}(\lambda,w)\) is the following. Suppose that the coordinates \(x_{i,i+j-1}\) are given for some fixed \(j\in[1,n-1]\) and all \(i=1,2,\ldots,n-j+1\). Then, all but one of the entries \(x_{i,i+j}\) is determined by the above equations; the unique entry which is not is \(x_{r_{j},r_{j}+j}\) (this entry exists because \(r_{j}\in[1,n-j]\)), which is constrained by the inequality
\[x_{r_{j}+1,r_{j}+j}\leq x_{r_{j},r_{j}+j}\leq x_{r_{j},r_{j}+j-1}.\]
Because the entries \(x_{i,i}=\lambda_{i}\) are fixed, the dimension of any face \(\operatorname{GZ}(\lambda,w)\) is easily seen to be equal to \(n-1\).
**Definition 13**.: _Define the map \(\Phi:\operatorname{GZ}(\lambda)\to\operatorname{Perm}(\lambda)\) as follows. For \(\overrightarrow{x}\in\operatorname{GZ}(\lambda)\) and \(j=0,1,\ldots,n-1\), write_
\[y_{j}=x_{1,1+j}+\cdots+x_{n-j,n}.\]
_Then, define_
\[\Phi(\overrightarrow{x})=(y_{0}-y_{1},\ldots,y_{n-2}-y_{n-1},y_{n-1}).\]
**Theorem 14**.: _[_6_, Proposition 5.2]_ _The map \(\Phi\) is a bijection upon restriction to the union of faces \(\operatorname{GZ}(\lambda,w)\), where \(w\) ranges over all permutations in \(S_{n-1}\)._
The images of the \(\operatorname{GZ}(\lambda,w)\) in \(\operatorname{Perm}(\lambda)\), which we abusively denote \(\operatorname{GZ}(\lambda,w)\), therefore give a polyhedral decomposition of \(\operatorname{Perm}(\lambda)\), which we refer to as the _HHMP decomposition_.
We recall the construction of the inverse map \(\operatorname{Perm}(\lambda)\to\bigcup_{w\in S_{n-1}}\operatorname{GZ}( \lambda,w)\). Suppose \(\overrightarrow{z}=(z_{1},\ldots,z_{n})\in\operatorname{Perm}(\lambda)\). Then, there exists an integer \(r_{1}\in[1,n-1]\) (not necessarily unique) for which \(z_{1}\in[\lambda_{r_{1}+1},\lambda_{r_{1}}]\), and thus
\[\lambda_{r_{1}+1}\leq(\lambda_{r_{1}+1}+\lambda_{r_{1}})-z_{1}\leq\lambda_{r_{1}}.\]
Set now \(x_{i,i+1}=x_{i,i}=\lambda_{i}\) for \(i\in[1,r_{1}-1]\), set \(x_{i,i+1}=x_{i+1,i+1}=\lambda_{i+1}\) for \(i\in[r_{1}+1,n-1]\), and set \(x_{r_{1},r_{1}+1}=(\lambda_{r_{1}}+\lambda_{r_{1}+1})-z_{1}\). Now, set \(\lambda^{\prime}=(x_{1,2},\ldots,x_{n-1,n})\), and iterate this procedure. Namely, define \(x_{i,i+j}\) with \(j>1\) inductively via the inverse map \(\operatorname{Perm}(\lambda^{\prime})\to\bigcup_{w^{\prime}\in S_{n-2}}\operatorname{GZ}(\lambda^{\prime},w^{\prime})\).
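The recursive step just described is straightforward to implement. The sketch below is ours (it assumes its input genuinely lies in \(\operatorname{Perm}(\lambda)\), and breaks ties at the boundary between faces by taking the smallest admissible \(r_{j}\)); it returns the vector \(\overrightarrow{r}\), and hence the face \(\operatorname{GZ}(\lambda,w)\), containing a given point.

```python
def hhmp_r_vector(lam, z, tol=1e-9):
    """Recursive inverse map: given z in Perm(lam), return r = (r_1, ..., r_{n-1})."""
    lam = list(lam)
    r_vec = []
    for z1 in z[:-1]:
        # choose r with lam_{r+1} <= z1 <= lam_r (not necessarily unique)
        r = next(j for j in range(1, len(lam)) if lam[j] <= z1 + tol)
        r_vec.append(r)
        # slicing Perm(lam) at first coordinate z1 produces Perm(lam'), where
        # lam_r and lam_{r+1} are replaced by the single entry lam_r + lam_{r+1} - z1
        lam = lam[:r - 1] + [lam[r - 1] + lam[r] - z1] + lam[r + 1:]
    return r_vec

print(hhmp_r_vector([3, 2, 1, 0], [1.5, 1.5, 1.5, 1.5]))   # barycentre of Perm(3,2,1,0)
print(hhmp_r_vector([3, 2, 1, 0], [2, 0, 3, 1]))           # a vertex of Perm(3,2,1,0)
```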
## 4. The toric degeneration
We now come to our main construction, giving an explicit toric degeneration of \(Z\) in \(\operatorname{Fl}(n)\) into a union of special orbit closures \(Z_{w}\subset\operatorname{Fl}(n)\).
Strictly speaking, the construction of this section does less than this; it will only follow from our computations of flag matroid polytopes in the next section that our construction gives the desired degeneration. More precisely, we describe a sequence of degenerations of irreducible subschemes of \(\operatorname{Fl}(n)\), starting with \(Z\). In each degeneration, we will identify two distinct irreducible subschemes of the flat limit, some of which will be ignored. We iterate the construction until reaching the subschemes \(Z_{w}\subset\operatorname{Fl}(n)\). Note that \(\dim(Z)=n-1\), and that the dimensions of the limit subschemes can only go down at each step.
We will find in the next section that, in fact, the \(Z_{w}\) also have dimension \(n-1\), which implies that all of the intermediate subschemes in the sequence of degenerations from \(Z\) to \(Z_{w}\) also have dimension \(n-1\). Therefore, all of these subschemes are in fact components of the flat limits of the corresponding degenerations. Moreover, we will find, by the existence of the HHMP decomposition, that no components other than the \(Z_{w}\) can appear in the end, which implies that the limit components we identify in each of the intermediate degenerations are the only ones. However, we emphasize that we will not need any of these conclusions in the construction of this section.
### The first row
Let \(A\) be a generic \(n\times n\) matrix, which we identify with its corresponding flag \(\mathcal{L}(A)\in\operatorname{Fl}(n)\). More generally, for \(j=0,1,\ldots,n-1\), let \(A^{j}\) be an \(n\times n\) matrix which is zero in the left-most \(j\) entries of its first row, but generic otherwise.
Fix now \(j\in[1,n-1]\), and consider a degeneration in which the \(j\)-th entry in the first row of \(A^{j-1}\) is sent to zero. More precisely, consider the matrix
\[A_{t}^{j-1}=\begin{bmatrix}0&\cdots&0&t&a_{1,j+1}&a_{1,j+2}&\cdots&a_{1,n}\\ a_{2,1}&\cdots&a_{2,j-1}&a_{2,j}&a_{2,j+1}&a_{2,j+2}&\cdots&a_{2,n}\\ &&&\vdots&&&\\ a_{n,1}&\cdots&a_{n,j-1}&a_{n,j}&a_{n,j+1}&a_{n,j+2}&\cdots&a_{n,n}\end{bmatrix}\]
whose entries are taken in \(\mathbb{C}[[t]]\), and its corresponding \(T\)-orbit closure \(Z_{t}^{j-1}\) in \(\operatorname{Fl}(n)\). The family of \(T\)-orbit closures \(Z_{t}^{j-1}\subset\operatorname{Fl}(n)\times\operatorname{Spec}\mathbb{C}[[t]]\), where \(t\neq 0\), is clearly \(T\)-equivariant. It is also clear that the \(T\)-orbit closure \(Z^{j}\subset\operatorname{Fl}(n)\) of \(A^{j}\) appears in the flat limit of \(Z_{t}^{j-1}\) as \(t\to 0\).
Consider now the \(1\)-parameter family of matrices
\[\begin{bmatrix}0&\cdots&0&t&a_{1,j+1}&a_{1,j+2}&\cdots&a_{1,n}\\ a_{2,1}t&\cdots&a_{2,j-1}t&a_{2,j}t&a_{2,j+1}t&a_{2,j+2}t&\cdots&a_{2,n}t\\ &&\vdots&&&\\ a_{n,1}t&\cdots&a_{n,j-1}t&a_{n,j}t&a_{n,j+1}t&a_{n,j+2}t&\cdots&a_{n,n}t\end{bmatrix} \in Z_{t}^{j-1}.\]
The limit of the corresponding flags in \(\operatorname{Fl}(n)\) is the flag defined by the matrix
\[A_{j}^{+}:=\begin{bmatrix}0&\cdots&0&1&1&0&\cdots&0\\ a_{2,1}&\cdots&a_{2,j-1}&a_{2,j}&0&a_{2,j+2}&\cdots&a_{2,n}\\ &&&\vdots&&&\\ a_{n,1}&\cdots&a_{n,j-1}&a_{n,j}&0&a_{n,j+2}&\cdots&a_{n,n}\end{bmatrix}.\]
Indeed, the first \(j\) columns are obtained by dividing by \(t\), the \((j+1)\)-th column is obtained by substituting \(t=0\), and the remaining columns are obtained by subtracting the appropriate multiple of the \((j+1)\)-th column, and then dividing by \(t\). The resulting matrix is non-singular by the genericity assumption.
Let \(Z_{+}^{j}\) be the \(T\)-orbit closure of \(A_{j}^{+}\) in \(\operatorname{Fl}(n)\). Then, \(Z_{+}^{j}\) is also a subscheme of the flat limit of \(Z_{t}^{j-1}\) as \(t\to 0\). Therefore, the flat limit of \(Z_{t}^{j-1}\) as \(t\to 0\) contains the irreducible subschemes \(Z^{j},Z_{+}^{j}\). In the case \(j=n-1\), we will throw out the subscheme \(Z^{n-1}\); we may safely do this because it is easily checked to have dimension \(n-2\) (in fact, it is contained in \(Z_{+}^{n-1}\)). On the other hand, when \(j<n-1\), note that \(Z^{j}\neq Z_{+}^{j}\), because a general point of \(Z_{+}^{j}\) has
\[\begin{bmatrix}1\\ 0\\ \vdots\\ 0\end{bmatrix}\in L_{j+1}\]
but a general point of \(Z^{j}\) does not.
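The distinction between the two limit orbit closures can also be seen numerically. The following small sketch is our own (the dimensions and random seed are arbitrary): it builds a matrix of each of the two shapes \(A^{j}\) and \(A_{j}^{+}\) and checks whether \(e_{1}\) lies in the span \(L_{j+1}\) of the first \(j+1\) columns.

```python
import numpy as np

rng = np.random.default_rng(3)
n, j = 5, 2

A_j = rng.standard_normal((n, n))
A_j[0, :j] = 0.0                                   # A^j: left-most j entries of row 1 vanish

A_plus = rng.standard_normal((n, n))               # A_j^+: row 1 supported on columns j, j+1,
A_plus[0, :] = 0.0                                 # and column j+1 supported on row 1
A_plus[0, j - 1] = A_plus[0, j] = 1.0              # (columns are 1-based in the text)
A_plus[1:, j] = 0.0

e1 = np.zeros(n); e1[0] = 1.0

def contains_e1(A, k):
    """Does L_k, the span of the first k columns of A, contain e_1?"""
    return np.linalg.matrix_rank(np.column_stack([A[:, :k], e1])) == k

print(contains_e1(A_plus, j + 1), contains_e1(A_j, j + 1))   # True, False (generically)
```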
To summarize, we perform \(n-1\) degenerations in total: the flat limit of \(Z_{t}^{j-1}\) contains the distinct orbit closures \(Z^{j},Z_{+}^{j}\) when \(j=1,2,\ldots,n-2\), and the flat limit of \(Z_{t}^{n-2}\) contains the orbit closure \(Z_{+}^{n-1}\). We are therefore left with the collection of subschemes \(Z_{+}^{1},\ldots,Z_{+}^{n-1}\).
As explained above, it will turn out that the \(Z_{+}^{j}\) all have dimension \(n-1\), which implies a posteriori that they appear with multiplicity \(1\) as components of the flat limits of their
corresponding degenerations. It will furthermore turn out that we have identified all of the components of the flat limits of the degenerations, so in particular that \([Z]\) is equal to the sum of the classes \([Z_{+}^{j}]\). However, we will continue not to assume this in what follows.
### Iterating over rows
For \(j=1,2,\ldots,n-1\), we repeat the degeneration on the matrix \(A_{j}^{+}\). More precisely, consider the \((n-1)\times(n-1)\) matrix obtained by ignoring the first row and the \((j+1)\)-th column of \(A_{j}^{+}\). Then, we send, from left to right, the entries in the top row of this new matrix to zero, extracting two subschemes of the flat limit, except in the \((n-2)\)-nd step, in which case we only extract one. We continue this procedure until reaching the last row.
We now describe the matrices whose orbit closures are reached at the end of this process. Let \(w\in S_{n-1}\) be a permutation. We define an \(n\times n\) matrix \(A_{w}\) by filling in its rows from top to bottom. At step \(i\), for \(i=1,2,\ldots,n-1\), we begin by placing the symbol \(\star_{2}\) in row \(i\) and column \(w^{-1}(i)+1\). Then, we place the symbol \(\star_{1}\) in row \(i\) and the right-most column that is on the one hand to the left of column \(w^{-1}(i)+1\) but on the other hand does not contain the symbol \(\star_{2}\) in any row above row \(i\). Such a column exists, because the symbol \(\star_{2}\) is never placed in the first column of \(A_{w}\) in rows \(1,2,\ldots,n-1\). In row \(n\), place the symbol \(\star_{2}\) in column \(1\). Then, place a \(0\) in all remaining entries of \(A_{w}\).
Finally, replace all symbols \(\star_{1},\star_{2}\) with generic complex numbers. When \(w=(3712546)\), so that \(w^{-1}=(3416572)\), we obtain the below matrix \(A_{w}\), where on the right the symbols \(\ast\) denote generic (and distinct) complex numbers.
\[\begin{bmatrix}0&0&\star_{1}&\star_{2}&0&0&0&0\\ 0&0&\star_{1}&0&\star_{2}&0&0&0\\ \star_{1}&\star_{2}&0&0&0&0&0&0\\ 0&0&0&0&0&\star_{1}&\star_{2}&0\\ 0&0&\star_{1}&0&0&\star_{2}&0&0\\ 0&0&\star_{1}&0&0&0&0&\star_{2}\\ \star_{1}&0&\star_{2}&0&0&0&0&0\\ \star_{2}&0&0&0&0&0&0&0\end{bmatrix}\to\begin{bmatrix}0&0&\ast&\ast&0&0&0&0\\ 0&0&\ast&0&\ast&0&0&0\\ \ast&\ast&0&0&0&0&0&0\\ 0&0&0&0&0&\ast&\ast&0\\ 0&0&\ast&0&0&\ast&0&0\\ 0&0&\ast&0&0&0&0&\ast\\ \ast&0&\ast&0&0&0&0&0\\ \ast&0&0&0&0&0&0&0\end{bmatrix}\]
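The placement rule above is easy to automate. The following sketch is our own (the function name is not from the paper); it returns the support pattern of \(A_{w}\), with \(2\) marking \(\star_{2}\) and \(1\) marking \(\star_{1}\), and reproduces the \(8\times 8\) example just displayed.

```python
def A_w_pattern(w):
    """Support pattern of A_w for w in S_{n-1} (one-line notation, 1-based values)."""
    n = len(w) + 1
    winv = {w[k]: k + 1 for k in range(n - 1)}
    M = [[0] * n for _ in range(n)]
    used = set()                                       # columns already holding a star_2
    for i in range(1, n):
        c2 = winv[i] + 1                               # star_2 in row i, column w^{-1}(i)+1
        M[i - 1][c2 - 1] = 2
        c1 = max(c for c in range(1, c2) if c not in used)
        M[i - 1][c1 - 1] = 1                           # star_1: right-most free column to its left
        used.add(c2)
    M[n - 1][0] = 2                                    # row n: star_2 in column 1
    return M

for row in A_w_pattern([3, 7, 1, 2, 5, 4, 6]):         # the example w = (3712546)
    print(row)
```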
Observe that matrix \(A_{w}\) is invertible (if the non-zero entries are sufficiently generic), because the expansion of the determinant of \(A_{w}\) contains at least one non-zero summand corresponding to the permutation \(\overline{\iota}(w)\). We therefore obtain a flag \(\mathcal{L}_{w}:=\mathcal{L}(A_{w})\in\operatorname{Fl}(n)\) and an orbit closure \(Z_{w}:=\overline{T\cdot\mathcal{L}_{w}}\subset\operatorname{Fl}(n)\) from \(A_{w}\). The following is now straightforward:
**Proposition 15**.: _The orbit closures \(Z_{w}\), where \(w\) ranges over all permutations in \(S_{n-1}\), appear in the special fibers of the sequence of degenerations of \(Z\) described above._
We still do not claim that \(Z_{w}\) appear as _components_ of the special fibers. This is proven in the next section.
## 5. GZ faces as flag matroid polytopes
To the orbit closures \(Z_{w}\subset\operatorname{Fl}(n)\) appearing in the special fibers of the degenerations described in the previous section, we may associate flag matroid polytopes \(\operatorname{FM}(\lambda,w)\). The purpose of this section is to identify these flag matroid polytopes with the polytopes \(\operatorname{GZ}(\lambda,w)\) appearing in the HHMP decomposition.
**Definition 16**.: _Define \(\operatorname{FM}(\lambda,w)=\operatorname{FM}(\lambda,A_{w})\) to be the flag matroid polytope associated to \(A_{w}\)._
**Lemma 17**.: _Let \(w\in S_{n-1}\) be a permutation and let \(\overrightarrow{r}=(r_{1},\ldots,r_{n-1})\) be the corresponding integer vector. Let \(w^{\prime}\in S_{n-2}\) be the permutation corresponding to \(\overrightarrow{r}^{\prime}=(r_{2},\ldots,r_{n-1})\). Suppose \(S\subset\{2,3,\ldots,n\}\) is any subset and write \(S^{+}=\{1\}\cup S\)._
_Then, we have the following:_
* \(\operatorname{rank}((A_{w})_{S^{+},j})=\operatorname{rank}((A_{w})_{S,j})= \operatorname{rank}((A_{w^{\prime}})_{S,j})\) _if_ \(j\leq r_{1}-1\)_,_
* \(\operatorname{rank}((A_{w})_{S^{+},r_{1}})=\operatorname{rank}((A_{w})_{S,r_ {1}-1})+1=\operatorname{rank}((A_{w^{\prime}})_{S,r_{1}-1})+1\)_, and_
* \(\operatorname{rank}((A_{w})_{S^{+},j})=\operatorname{rank}((A_{w})_{S,j})+1= \operatorname{rank}((A_{w^{\prime}})_{S,j-1})+1\) _if_ \(j\geq r_{1}+1\)_._
Note that matrix \(A_{w^{\prime}}\) is obtained from \(A_{w}\) by deleting the first row and the \((r_{1}+1)\)-th column. Our convention above is that we index the rows of \(A_{w^{\prime}}\) by \(2,3,\ldots,n\), but shift the indices on the columns to the right of the deleted one so that they are labelled \(1,2,\ldots,n-1\).
Proof.: If \(j\leq r_{1}-1\), then \((A_{w})_{S,j}=(A_{w^{\prime}})_{S,j}\) are both obtained from \((A_{w})_{S^{+},j}\) by deleting a zero row, so the first line is immediate.
We have \(\operatorname{rank}((A_{w})_{S^{+},r_{1}})=\operatorname{rank}((A_{w})_{S,r_ {1}-1})+1\) because the first \(r_{1}-1\) columns of \((A_{w})_{S^{+},r_{1}}\) contain a zero row on top of \((A_{w})_{S,r_{1}-1}\), and the last column has a non-zero entry in the first row. Furthermore, we have \((A_{w})_{S,r_{1}-1}=(A_{w^{\prime}})_{S,r_{1}-1}\), so we obtain the second line.
To compute \(\operatorname{rank}((A_{w})_{S^{+},j})\) for any \(j\geq r_{1}+1\), we may replace the non-zero entry in the first row and \(r_{1}\)-th column by zero, by adding the appropriate multiple of the \((r_{1}+1)\)-th column. The resulting matrix has exactly one non-zero entry in the first row, in column \(r_{1}+1\). The submatrix given by the remaining rows is precisely \((A_{w})_{S,j}\), so we have \(\operatorname{rank}((A_{w})_{S^{+},j})=\operatorname{rank}((A_{w})_{S,j})+1\). Furthermore, the \((r_{1}+1)\)-st column of \((A_{w})_{S,j}\) is zero, which is deleted to obtain \((A_{w^{\prime}})_{S,j-1}\), so the last line follows.
**Theorem 18**.: _For any \(\lambda,w\), we have \(\operatorname{FM}(\lambda,w)=\operatorname{GZ}(\lambda,w)\)._
Proof.: We induct on \(n\). The base cases \(n=1,2\) are trivial and left to the reader. Assume the conclusion for \(n-1\), and fix \(w\in S_{n-1}\) and the corresponding \(\overrightarrow{r}\). We first show that \(\operatorname{GZ}(\lambda,w)\subset\operatorname{FM}(\lambda,w)\). Let \(\overrightarrow{z}=\Phi(\overrightarrow{x})\in\operatorname{GZ}(\lambda,w)\) be any point. By assumption, we have \(x_{i,i+1}=\lambda_{i}\) for \(i=1,2,\ldots,r_{1}-1\), \(x_{i,i+1}=\lambda_{i+1}\) for \(i=r_{1}+1,\ldots,n-1\), and \(x_{r_{1},r_{1}+1}=\lambda_{r_{1}}+\lambda_{r_{1}+1}-z_{1}\).
Write now:
\[\overrightarrow{z}^{\prime} =(z_{2},\ldots,z_{n}),\] \[\overrightarrow{x}^{\prime} =(x_{i,j})_{1\leq i<j-1\leq n-1},\] \[\lambda^{\prime} =(\lambda_{1},\cdots,\lambda_{r_{1}-1},\lambda_{r_{1}}+\lambda_{r_{1}+1}-z_{1},\lambda_{r_{1}+2},\ldots,\lambda_{n}),\] \[\overrightarrow{r}^{\prime} =(r_{2},\ldots,r_{n-1}),\]
and furthermore let \(w^{\prime}\in S_{n-2}\) be the permutation corresponding to \(\overrightarrow{r}^{\prime}\). Then, we have \(\overrightarrow{z}^{\prime}=\Phi(\overrightarrow{x}^{\prime})\in\operatorname{ GZ}(\lambda^{\prime},w^{\prime})=\operatorname{FM}(\lambda^{\prime},w^{\prime})\), by the inductive hypothesis.
The \((n-1)\times(n-1)\) matrix \(A_{w^{\prime}}\) is obtained by deleting the first row and \((r_{1}+1)\)-th column of \(A_{w}\). The \((r_{1}+1)\)-th column of \(A_{w}\) is zero except in the first row. For any \(S\subset\{2,3,\ldots,n\}\), we therefore have (again, using the convention that the rows of \(A_{w^{\prime}}\) are labelled \(2,\ldots,n\) but
the columns are labelled \(1,\ldots,n-1\))
\[z_{S} \leq\sum_{j\in[1,n-1]}(\operatorname{rank}((A_{w^{\prime}})_{S,j})- \operatorname{rank}((A_{w^{\prime}})_{S,j-1}))\lambda_{j}^{\prime}\] \[=\sum_{j\in[1,r_{1}-1]\cup[r_{1}+2,n]}(\operatorname{rank}((A_{w} )_{S,j})-\operatorname{rank}((A_{w})_{S,j-1}))\lambda_{j}\] \[\qquad+(\operatorname{rank}((A_{w})_{S,r_{1}})-\operatorname{ rank}((A_{w})_{S,r_{1}-1}))(\lambda_{r_{1}}+\lambda_{r_{1}+1}-z_{1}),\]
where we have applied Lemma 17.
This implies on the one hand that
\[z_{S} \leq\sum_{j\in[1,r_{1}]\cup[r_{1}+2,n]}(\operatorname{rank}((A_{ w})_{S,j})-\operatorname{rank}((A_{w})_{S,j-1}))\lambda_{j}\] \[=\sum_{j\in[1,n]}(\operatorname{rank}((A_{w})_{S,j})-\operatorname {rank}((A_{w})_{S,j-1}))\lambda_{j},\]
because \(\lambda_{r_{1}}+\lambda_{r_{1}+1}-z_{1}\leq\lambda_{r_{1}}\), and \((A_{w})_{S,r_{1}+1}\) and \((A_{w})_{S,r_{1}}\) differ only by a zero column.
On the other hand, by Lemma 17, we have
\[z_{S} \leq\sum_{j\in[1,r_{1}-1]\cup[r_{1}+2,n]}(\operatorname{rank}((A_ {w})_{S,j})-\operatorname{rank}((A_{w})_{S,j-1}))\lambda_{j}\] \[\qquad+(\operatorname{rank}((A_{w})_{S,r_{1}})-\operatorname{ rank}((A_{w})_{S,r_{1}-1}))(\lambda_{r_{1}}+\lambda_{r_{1}+1}-z_{1})\] \[\leq\sum_{j\in[1,r_{1}-1]\cup[r_{1}+2,n]}(\operatorname{rank}((A_ {w})_{S^{+},j})-\operatorname{rank}((A_{w})_{S^{+},j-1}))\lambda_{j}\] \[\qquad+(\operatorname{rank}((A_{w})_{S^{+},r_{1}+1})-\operatorname {rank}((A_{w})_{S^{+},r_{1}}))\lambda_{r_{1}+1}+(\lambda_{r_{1}}-z_{1})\]
and so
\[z_{S^{+}}\leq\sum_{j\in[1,n]}(\operatorname{rank}((A_{w})_{S^{+},j})- \operatorname{rank}((A_{w})_{S^{+},j-1}))\lambda_{j},\]
because \(\operatorname{rank}((A_{w})_{S^{+},r_{1}})-\operatorname{rank}((A_{w})_{S^{+ },r_{1}-1})=1\). The vector \(\overrightarrow{z}\in\operatorname{GZ}(\lambda,w)\) therefore satisfies all needed inequalities for \(\operatorname{FM}(\lambda,w)\), so we conclude that \(\operatorname{GZ}(\lambda,w)\subset\operatorname{FM}(\lambda,w)\).
Conversely, suppose that \(\overrightarrow{z}\in\operatorname{FM}(\lambda,w)\). In particular, we have
\[z_{1}\leq\lambda_{r_{1}},\]
because \((A_{w})_{\{1\},j}=0\) for \(j\leq r_{1}-1\), and also
\[z_{2}+\cdots+z_{n}\leq\sum_{j\in[1,r_{1}]\cup[r_{1}+2,n]}\lambda_{j},\]
because the \((r_{1}+1)\)-th column of \((A_{w})_{\{2,\ldots,n\},r_{1}+1}\) is zero. In particular, we have \(z_{1}\in[\lambda_{r_{1}+1},\lambda_{r_{1}}]\).
Define again
\[\overrightarrow{z}^{\prime} =(z_{2},\ldots,z_{n}),\] \[\lambda^{\prime} =(\lambda_{1},\cdots,\lambda_{r_{1}-1},\lambda_{r_{1}}+\lambda_{r _{1}+1}-z_{1},\lambda_{r_{1}+2},\ldots,\lambda_{n}),\] \[\overrightarrow{r}^{\prime} =(r_{2},\ldots,r_{n-1}),\]
and let \(w^{\prime}\in S_{n-2}\) be the permutation corresponding to \(\overrightarrow{r}^{\prime}\). It now suffices to show that \(\overrightarrow{z}^{\prime}\in\operatorname{FM}(\lambda^{\prime},w^{\prime})\), because by the inductive hypothesis, we may conclude that \(\overrightarrow{z}^{\prime}=\Phi(\overrightarrow{x}^{\prime})\) for some \(\overrightarrow{x}^{\prime}\in\operatorname{GZ}(\lambda^{\prime},w^{\prime})\), and combining the data of \(\overrightarrow{x}^{\prime}\) and \(\lambda^{\prime}\) gives a point \(\overrightarrow{x}\in\operatorname{GZ}(\lambda,w)\) with \(\Phi(\overrightarrow{x})=\overrightarrow{z}\).
That \(\overrightarrow{z}^{\prime}\in\operatorname{FM}(\lambda^{\prime},w^{\prime})\) amounts to the requirement that, for all \(S\subset\{2,3,\ldots,n\}\), we have
\[z_{S} \leq\sum_{j\in[1,n-1]}(\operatorname{rank}((A_{w^{\prime}})_{S,j} )-\operatorname{rank}((A_{w^{\prime}})_{S,j-1}))\lambda^{\prime}_{j}\] \[=\sum_{j\in[1,r_{1}-1]\cup[r_{1}+2,n]}(\operatorname{rank}((A_{w })_{S,j})-\operatorname{rank}((A_{w})_{S,j-1}))\lambda_{j}\] \[\qquad+(\operatorname{rank}((A_{w^{\prime}})_{S,r_{1}})- \operatorname{rank}((A_{w^{\prime}})_{S,r_{1}-1}))(\lambda_{r_{1}}+\lambda_{r _{1}+1}-z_{1})\] \[=\sum_{j\in[1,r_{1}-1]\cup[r_{1}+2,n]}(\operatorname{rank}((A_{w })_{S^{+},j})-\operatorname{rank}((A_{w})_{S^{+},j-1}))\lambda_{j}\] \[\qquad+(\operatorname{rank}((A_{w})_{S^{+},r_{1}+1})- \operatorname{rank}((A_{w})_{S^{+},r_{1}}))(\lambda_{r_{1}}+\lambda_{r_{1}+1} -z_{1})\]
by Lemma 17. If \(\operatorname{rank}((A_{w^{\prime}})_{S,r_{1}})=\operatorname{rank}((A_{w^{ \prime}})_{S,r_{1}-1})\), then using the formula on the second line, the required inequality is exactly the upper bound on \(z_{S}\) in the definition of \(\operatorname{FM}(\lambda,w)\). Indeed, in this case, the left-most columns of \((A_{w})_{S,r_{1}}\) and \((A_{w})_{S,r_{1}+1}\) (which is zero in the latter case) do not increase the rank. On the other hand, if \(\operatorname{rank}((A_{w^{\prime}})_{S,r_{1}})-\operatorname{rank}((A_{w^{ \prime}})_{S,r_{1}-1})=1\), or equivalently, if \(\operatorname{rank}((A_{w})_{S^{+},r_{1}+1})-\operatorname{rank}((A_{w})_{S^{ +},r_{1}})=1\), then using the formula on the last line, the required inequality follows from the upper bound on \(z_{S^{+}}\) in the definition of \(\operatorname{FM}(\lambda,w)\). Therefore, we conclude that \(\overrightarrow{z}^{\prime}\in\operatorname{FM}(\lambda^{\prime},w^{\prime})\), completing the proof.
**Corollary 19**.: _The orbit closure \(Z_{w}=\overline{T\cdot\mathcal{L}_{w}}\subset\operatorname{Fl}(n)\) is irreducible of dimension \(n-1\)._
Proof.: The irreducibility is immediate from the fact that \(Z_{w}\) is an orbit closure of a point. The moment map image of \(Z_{w}\) is equal to \(\operatorname{FM}(\lambda,w)=\operatorname{GZ}(\lambda,w)\), which has dimension \(n-1\), hence \(Z_{w}\) has dimension \(n-1\).
**Corollary 20**.: _We have an equality of cycles_
\[[Z]=\sum_{w\in S_{n-1}}[Z_{w}]\]
_on \(\operatorname{Fl}(n)\)._
Proof.: The fact that the \(Z_{w}\) all have dimension \(n-1\) implies that all of the intermediate subschemes in the degeneration of \(Z\) to \(Z_{w}\) have dimension \(n-1\), and are components of the corresponding flat limits. Thus, the construction of §4 is in fact a sequence of toric degenerations of \(Z\) into a union of components, necessarily of multiplicity \(1\), _containing_ the \(Z_{w}\). On the other hand, the moment map images of \(Z_{w}\), which are equal to \(\operatorname{GZ}(w)\) by Theorem 18, already give a polyhedral subdivision of \(\mu(Z)=\operatorname{Perm}(n)\), namely, the HHMP decomposition. Therefore, no other components can appear at the end of this sequence of degenerations, and the claim follows.
**Remark 21**.: _The above analysis shows that the data of the components \(Z_{w}\) appearing in the end of our degeneration of \(Z\) are essentially encoded by the HHMP decomposition of \(\operatorname{Perm}(n)\). The components \(Z_{w}\) are also in bijection with decreasing binary trees, which give a recursive
way to obtain the HHMP decomposition by slicing by hyperplanes, see [11, Theorem 6.5]. We expect that this slicing procedure more precisely encodes the sequence of intermediate degenerations described in §4, but we have not checked this carefully._
## 6. The Anderson-Tymoczko formula
To complete the proof of the Anderson-Tymoczko formula, it suffices to compute \([Z_{w}]\). We do so by identifying the \(Z_{w}\) as Richardson varieties. Recall that we have fixed a basis \(\mathbb{C}^{n}=\langle e_{1},\ldots,e_{n}\rangle\). Write in addition \(H_{i}=\langle e_{1},\ldots,\widehat{e_{i}},\ldots,e_{n}\rangle\). For any subset \(S\subset[n]\), write \(H_{S}=\cap_{i\in S}H_{i}\).
**Theorem 22**.: _Let \(w\in S_{n-1}\) be a permutation._
_Let \(F\) be the flag \(0\subset H_{[1,n-1]}\subset H_{[1,n-2]}\subset\cdots\subset H_{1}\subset \mathbb{C}^{n}\)._
_Let \(F^{\prime}\) be the transverse flag \(0\subset H_{[2,n]}\subset H_{[3,n]}\subset\cdots\subset H_{n}\subset\mathbb{C} ^{n}\)._
_Then, the orbit closure \(Z_{w}\) is equal (as a scheme) to the Richardson variety \(\Sigma_{\iota(w)}^{F}\cap\Sigma_{\overline{\iota}(w_{0}w)}^{F^{\prime}}\)._
Proof.: Both \(Z_{w}\) and \(\Sigma_{\iota(w)}^{F}\cap\Sigma_{\overline{\iota}(w_{0}w)}^{F^{\prime}}\) are irreducible and reduced of dimension \(n-1\), so it suffices to show that \(Z_{w}\subset\Sigma_{\iota(w)}^{F}\) and \(Z_{w}\subset\Sigma_{\overline{\iota}(w_{0}w)}^{F^{\prime}}\).
We begin by showing that \(Z_{w}\subset\Sigma_{\overline{\iota}(w_{0}w)}^{F^{\prime}}\). This amounts to the statement that, for a general point \(\mathcal{L}\in Z_{w}\), we have
\[\dim(L_{i}\cap H_{[n+1-j,n]}) \geq\#\left(\{\overline{\iota}(w_{0}w)(1),\ldots,\overline{ \iota}(w_{0}w)(i)\}\cap\{j+1,\ldots,n\}\right)\] \[=\#\left(\{1,n+1-w(1),\ldots,n+1-w(i-1)\}\cap\{j+1,\ldots,n\}\right)\] \[=\#\left(\{w(1),\ldots,w(i-1)\}\cap\{1,\ldots,n-j\}\right)\]
for any \(i,j\). Recall that \(L_{i}\) is the subspace of \(\mathbb{C}^{n}\) spanned by the first \(i\) columns of \(A_{w}\). By construction, the \((k+1)\)-st column vector of \(A_{w}\), for \(k=1,2,\ldots,i-1\), lies in \(H_{[w(k)+1,n]}\), corresponding to the fact that all entries below the symbol \(\star_{2}\) are zero. Therefore, at least \(\#\left(\{w(1),\ldots,w(i-1)\}\cap\{1,\ldots,n-j\}\right)\) of the columns \(2,\ldots,i\) of \(A_{w}\) are vectors in \(H_{[n+1-j,n]}\), establishing the needed inequality.
Now, we show that \(Z_{w}\subset\Sigma_{\iota(w)}^{F}\). This amounts to the statement that, for a general point \(\mathcal{L}\in Z_{w}\), we have
\[\dim(L_{i}\cap H_{[1,j]}) \geq\#\left(\{\iota(w)(1),\ldots,\iota(w)(i)\}\cap\{j+1,\ldots,n\}\right)\] \[=\#\left(\{w(1),\ldots,w(i)\}\cap\{j+1,\ldots,n-1\}\right).\]
First, if \(w(i)\leq j\), then \(\dim(L_{i}\cap H_{[1,j]})\geq\dim(L_{i-1}\cap H_{[1,j]})\) and
\[\#\left(\{w(1),\ldots,w(i)\}\cap\{j+1,\ldots,n-1\}\right)=\#\left(\{w(1), \ldots,w(i-1)\}\cap\{j+1,\ldots,n-1\}\right),\]
so we may replace \(i\) by \(i-1\) and proceed by induction. We therefore assume that \(w(i)>j\).
Let \(A_{w}^{ij}\) denote the submatrix of \(A_{w}\) of entries in the first \(i\) columns and \(j\) rows. Suppose that the symbol \(\star_{1}\) appears in \(A_{w}^{ij}\), in row \(\ell\leq j\). We claim that the symbol \(\star_{2}\) in the same row as \(\star_{1}\) must also appear in \(A_{w}^{ij}\). Indeed, if this were not the case, then because \(\star_{2}\) appears in column \(i+1\) in a row below row \(j\), and in particular in no row above row \(\ell\), the symbol \(\star_{1}\) would have been placed in row \(\ell\) to the right of column \(i\), a contradiction.
Let \(\alpha=\#\left(\{w(1),\ldots,w(i-1)\}\cap\{j+1,\ldots,n-1\}\right)\) be the number of appearances of the symbol \(\star_{2}\) below row \(j\) and between columns \(2\) and \(i\), inclusive. Then, there are at most \(i-1-\alpha\) appearances of the symbol \(\star_{2}\) in \(A_{w}^{ij}\). All appearances of the symbol \(\star_{1}\) in \(A_{w}^{ij}\) must
appear in the same rows as the symbols \(\star_{2}\) in \(A_{w}^{ij}\). Thus, for a subset \(S\subset\{1,2,\ldots,j\}\) of cardinality at least \(j-(i-1-\alpha)\), the corresponding rows of \(A_{w}^{ij}\) are zero.
We therefore have \(L_{i}\subset H_{S}\). Because \(H_{[1,j]}\subset H_{S}\) has codimension at most \(i-1-\alpha\), we also have
\[\dim(L_{i}\cap H_{[1,j]})\geq i-(i-1-\alpha)=\alpha+1,\]
which is exactly the required inequality, because \(w(i)>j\) by assumption. This completes the proof.
Proof of Theorem 1.: Theorem 22 implies that \([Z_{w}]=\sigma_{\iota(w)}\sigma_{\overline{\iota}(w_{0}w)}\). The Anderson-Tymoczko formula now follows from Corollary 20.
## 7. Grassmannians
The degeneration of \(Z\) to the union of the \(Z_{w}\) pushes forward to a toric degeneration of a generic torus orbit closure on any variety of partial flags in \(\mathbb{C}^{n}\) to a union of special ones. We focus on the case of Grassmannians, describing the components that survive under the push-forward and the associated polyhedral subdivision.
Let \(\pi:\operatorname{Fl}(n)\to\operatorname{Gr}(r,n)\) be the map remembering only the component \(L_{r}\) of a flag \(\mathcal{L}\).
**Lemma 23**.: _The map \(\pi\) has positive-dimensional fibers on \(Z_{w}\), and thus sends \([Z_{w}]\) to zero under push-forward, unless_
\[w(1)>w(2)>\cdots>w(r)=1<w(r+1)<\cdots<w(n-1). \tag{1}\]
Proof.: We first prove that if \(\pi_{*}[Z_{w}]\neq 0\), then we must have \(w(1)>w(2)>\cdots>w(r)\). Assume instead that for some \(s<r\), we have \(w(1)>\cdots>w(s)\) and \(w(s)<w(s+1)\). Then, we claim that if the needed conditions
\[\dim(L_{i}\cap H_{[n+1-j,n]}) \geq(\#\{w(1),\ldots,w(i-1)\}\cap\{1,\ldots,n-j\})\] \[\dim(L_{i}\cap H_{[1,j]}) \geq(\#\{w(1),\ldots,w(i)\}\cap\{j+1,\ldots,n-1\})\]
hold for all \(i\neq s\) and all \(j\), then in fact, they hold for all \(i\) and \(j\). This implies that if \(\mathcal{L}\in Z_{w}\), then replacing \(L_{s}\) with any subspace \(L_{s}^{\prime}\) with \(L_{s-1}\subset L_{s}^{\prime}\subset L_{s+1}\) yields a flag \(\mathcal{L}^{\prime}\in Z_{w}\). If furthermore \(s<r\), then this shows that any fiber of \(\pi\) upon restriction to \(Z_{w}\) is positive-dimensional, a contradiction.
We now prove the claim. We first have
\[\dim(L_{s}\cap H_{[1,j]}) \geq\dim(L_{s-1}\cap H_{[1,j]})\] \[\geq\#(\{w(1),\ldots,w(s-1)\}\cap\{j+1,\ldots,n-1\})\] \[=\#(\{w(1),\ldots,w(s)\}\cap\{j+1,\ldots,n-1\})\]
unless \(w(1)>\cdots>w(s)>j\), in which case we need to prove that \(L_{s}\subset H_{[1,w(s)-1]}\). On the other hand, if \(w(s)<w(s+1)\), then we have
\[\dim(L_{s+1}\cap H_{[1,w(s)-1]})\geq\#(\{w(1),\ldots,w(s+1)\}\cap\{w(s), \ldots,n-1\})=s+1,\]
so in fact we have the stronger statement that \(L_{s+1}\subset H_{[1,w(s)-1]}\).
Similarly, we have
\[\dim(L_{s}\cap H_{[n+1-j,n]}) \geq\dim(L_{s+1}\cap H_{[n+1-j,n]})-1\] \[\geq(\#\{w(1),\ldots,w(s)\}\cap\{1,\ldots,n-j\})-1\] \[=(\#\{w(1),\ldots,w(s-1)\}\cap\{1,\ldots,n-j\})\]
unless \(w(1)>\cdots>w(s)>n-j\), in which case the required statement is simply that \(\dim(L_{s}\cap H_{[n+1-j,n]})\geq 0\). This proves the claim.
Similarly, one proves by downward induction that \(w(i)<\cdots<w(n-1)\) for \(i\geq r\).
If instead (1) holds, then the Schubert variety \(\Sigma_{\iota(w)}^{F}\subset\operatorname{Fl}(n)\) pushes forward under \(\pi\) to the Schubert variety \(\Sigma_{\lambda}^{F}\subset\operatorname{Gr}(r,n)\), where \(\lambda\) is the partition
\[(w(1)-r,w(2)-(r-1),\ldots,w(r-1)-2,0).\]
On the other hand, the Schubert variety \(\Sigma_{\overline{\iota}(w_{0}w)}^{F^{\prime}}\subset\operatorname{Fl}(n)\) is the pullback of \(\Sigma_{\overline{\lambda}}^{F^{\prime}}\subset\operatorname{Gr}(r,n)\), where \(\overline{\lambda}\) is the complement of \(\lambda\) inside the rectangle \((n-r-1)^{r-1}\).
By the projection formula, it follows that the class of a generic torus orbit closure in \(\operatorname{Gr}(r,n)\) is given by
\[\pi_{*}[Z]=\sum_{\lambda\subset(n-r-1)^{r-1}}\sigma_{\lambda}\sigma_{ \overline{\lambda}},\]
which was obtained using different methods by Berget-Fink [2, Theorem 5.1].
A typical special orbit closure corresponding to a summand on the right hand side is represented by an \(n\times r\) matrix of the form
\[A_{\lambda}=\begin{bmatrix}0&0&0&*\\ 0&0&0&*\\ 0&0&0&*\\ 0&0&*&*\\ 0&0&*&0\\ 0&*&*&0\\ 0&*&0&0\\ 0&*&0&0\\ 0&*&0&0\\ *&*&0&0\\ *&0&0&0\\ *&0&0&0\end{bmatrix},\]
obtained by restricting to the first \(r\) columns of the matrix \(A_{w}\). The above example corresponds to the permutation \(w=(10,6,4,1,2,3,5,7,8,9,11)\in S_{n-1}\) with \(n-1=11\), or equivalently to the partition \(\lambda=(6,3,2)\subset(7)^{3}\).
The \(w(i)\)-th row of \(A_{\lambda}\) has non-zero entries in columns \(i\) and \(i+1\) for \(i=1,2,\ldots,r-1\), and each additional row has exactly one non-zero entry in such a way that the non-zero entries in every column are contiguous. In this way, the Schubert conditions corresponding to the cycles \(\sigma_{\lambda},\sigma_{\overline{\lambda}}\) are visible, corresponding to the zeroes appearing above and below the path of non-zero entries, respectively.
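These structures can be checked directly on the example. The sketch below is our own (the support pattern is transcribed from the displayed matrix, and the random seed is arbitrary): it recovers \(\lambda\) from \(w\), and verifies that the rank of the first \(k\) rows of \(A_{\lambda}\) jumps by one exactly at \(k\in\{w(1),\ldots,w(r)\}\), which is the rank data that cuts out the subpolytopes described next.

```python
import numpy as np

w = [10, 6, 4, 1, 2, 3, 5, 7, 8, 9, 11]        # the example above; n = 12, r = 4
r = w.index(1) + 1
lam = [w[k] - (r - k) for k in range(r)]       # lambda_k = w(k) - (r+1-k)
print(lam)                                     # [6, 3, 2, 0], i.e. lambda = (6, 3, 2)

# support pattern of A_lambda as displayed, filled with generic entries
pattern = np.array([[0,0,0,1],[0,0,0,1],[0,0,0,1],[0,0,1,1],[0,0,1,0],[0,1,1,0],
                    [0,1,0,0],[0,1,0,0],[0,1,0,0],[1,1,0,0],[1,0,0,0],[1,0,0,0]])
A = pattern * np.random.default_rng(2).standard_normal(pattern.shape)

# rank of the first k rows of A_lambda, for k = 0, 1, ..., 12
ranks = [0] + [int(np.linalg.matrix_rank(A[:k, :])) for k in range(1, 13)]
jumps = [k for k in range(1, 13) if ranks[k] == ranks[k - 1] + 1]
print(jumps)                                   # [1, 4, 6, 10] = sorted {w(1), ..., w(r)}
```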
The degeneration of \(\pi(Z)\) into special orbit closures \(Z_{\lambda}\subset\operatorname{Gr}(r,n)\) corresponds to a matroid decomposition of the hypersimplex \(\Delta(r,n)\subset\mathbb{R}^{n}\) cut out by the equation \(z_{1}+\cdots+z_{n}=r\) and the inequalities \(0\leq z_{i}\leq 1\). Namely, the subpolytopes \(\Delta(r,n)_{\lambda}\subset\Delta(r,n)\) are cut out by the inequalities
\[z_{[1,w(i)-1]}\leq r-i\leq z_{[1,w(i)]}\]
for \(i=1,2,\ldots,r-1\). This decomposition thus encodes a new proof of the Berget-Fink formula. |
2305.16850 | Pulsar as a Weber detector of gravitational waves and a probe to its
internal phase transitions | It is believed that cores of neutron stars provide a natural laboratory where
exotic high baryon density QCD phases may exist. The theoretically well
established {\it neutron superfluid phase} is also believed to be found only
inside neutron stars. Focus on neutron stars has intensified in recent years
with the direct detection of gravitational waves (GWs) from binary neutron star
(BNS) merger, which has allowed the possibility of directly probing the
properties of the interior of a neutron star. A remarkable phenomenon
manifested by rapidly rotating neutron stars is in their {\it avatar} as {\it
Pulsars}. The accuracy of pulsar timing allowed the first indirect detection of
GWs from a BNS system and opened up a few exciting possibilities. Any pulsar
deformation, even if incredibly tiny, can leave imprints on the pulses by
introducing tiny perturbations of the moment of inertia (MI) tensor components.
While the diagonal MI components of the perturbed MI tensor affect the pulse
timings, the off-diagonal components lead to the pulsar's wobbling, thereby
affecting the pulse profile. This opens up an opportunity to explore various
phase transitions inside a pulsar core by induced density fluctuations through
the observable effects on the pulse timing and profile. Such perturbations also
naturally induce a rapidly changing quadrupole moment of the star, thereby
providing a new source of GW emission. Another remarkable possibility arises
when we consider the effect of an external GW on a neutron star. With the
possibility of detecting any minute changes in its configuration through pulse
observations, the neutron star has the potential to perform as a Weber detector
of GWs. This brief review focuses on these specific aspects of a pulsar,
specifically on the type of physics that can be probed by utilizing the effect
of changes in the MI tensor on pulse properties. | Partha Bagchi, Oindrila Ganguly, Biswanath Layek, Anjishnu Sarkar, Ajit M. Srivastava | 2023-05-26T12:03:00Z | http://arxiv.org/abs/2305.16850v2 | # QCD, Gravitational Waves, and Pulsars
###### Abstract
Investigations of the phase diagram of quantum chromodynamics (QCD) have revealed exotic new possibilities. Experimental investigations with relativistic heavy ion collision experiments (RHICE) have been able to probe the relatively low baryon chemical potential regime of the QCD phase diagram, namely the quark-gluon plasma (QGP) phase. An entirely new spectrum of phases is expected to arise at very high baryon chemical potential, which may remain out of reach of these terrestrial experiments. These phases are collectively referred to as _color superconducting phases_. Attention is naturally directed towards astrophysics where gravity assisted ultra high baryon density objects naturally occur. The densest such object, which can be directly observed, is a neutron star. It is thus speculated that cores of neutron stars provide a natural laboratory where such exotic phases of QCD may exist. In fact, the theoretically well established _neutron superfluidity phase_ is also believed to be found only inside neutron stars. Focus on neutron stars has tremendously intensified in recent years from a completely different angle. Even though the first _indirect_ detection of gravitational waves had come long ago from a binary neutron star (BNS) system, it is the direct detection of gravitational waves by LIGO/Virgo from BNS merger events which has allowed the possibility of directly probing the properties of the interior of a neutron star. A truly remarkable phenomenon manifested by rapidly rotating neutron stars is in their _avatar_ as _Pulsars_. The accuracy of pulsar timing can reach the level of one part in \(10^{15}\), comparable to that of atomic clocks. Indeed, it was such a great accuracy which had allowed the first indirect detection of gravitational waves from a BNS system. Such an incredible accuracy of pulse timings points to a very interesting possibility. Any deformation of the pulsar, even if it is extremely tiny, has the potential of leaving its imprints on the pulses through the introduction of tiny perturbations in the entire moment of inertia (MI) tensor. While the diagonal components of the perturbed MI tensor affect the pulse timings, the off-diagonal components lead to wobbling of the pulsar, directly affecting the pulse profile. This opens up a new window of opportunity for exploring various phase transitions occurring inside a pulsar core, through induced density fluctuations, which may be observable as perturbations in the pulse timing as well as its profile. Such perturbations also naturally induce a rapidly changing quadrupole moment of the star, thereby providing a new source of gravitational wave emission. Another remarkable possibility arises when we consider the effect of an external gravitational wave on a neutron star. With the possibility of detecting any minute changes in its configuration through pulse observations, the neutron star has the potential of performing as a Weber detector of gravitational waves. This brief review will focus on these specific aspects of a pulsar. Specifically, the focus will be on the type of physics which can be probed by utilizing the effect of changes in the MI tensor of the pulsar on pulse properties.
pacs: 12.38.Mh,97.60.Gb,95.55.Ym,04.80.Nn,26.60.+c
## I Introduction
The cosmos has always proved to be the ultimate laboratory where physical systems may exist in extreme environments, even those which are beyond the reach of any terrestrial experiments. The early hot and dense stages of the Universe are one such case where extremely high temperatures are achieved. Fortunately, some of those
stages, with temperatures reaching \(10^{12}\) K (a few hundred MeV), can now be partially probed in terrestrial experiments, namely relativistic heavy-ion collision experiments (RHICE) [1; 2]. This possibility has put the physics of the quark-gluon plasma (QGP) phase of QCD matter at centre stage, the phase which is believed to have existed in the Universe when it was a few tens of microseconds old. Experimental observations at RHIC have already shown completely unexpected results, for example, a near-perfect fluid nature of QGP with a value of the shear viscosity to entropy ratio which is close to the proposed lowest bound on this number [3]. With continued efforts in RHICE with varying collision energy, it has been possible to extensively investigate a certain part of the phase diagram of quantum chromodynamics (QCD) which corresponds to relatively low baryon number density. As the early Universe was filled with matter with extremely low baryon number density, one can claim that conditions like those in the early Universe have been recreated in the laboratory (at least for the strongly interacting matter part).
At the same time, theoretical investigations have revealed the possibility of an entirely new spectrum of phases of strongly interacting matter which are expected to arise at very high baryon chemical potential [4]. It is reasonably clear now that much of this extremely rich part of the QCD phase diagram may remain out of reach of these terrestrial experiments. These phases are collectively referred to as the _color superconducting phases_ [5; 6]. Attention is thus naturally directed towards astrophysics where gravity assisted ultra high baryon density objects routinely occur. Extreme conditions of high baryon density are expected to be reached in supernova explosions, in neutron stars, and in matter undergoing collapse to a black hole. The densest such object, which can be directly observed at present, is a neutron star. It is speculated that cores of neutron stars provide a natural laboratory where various exotic phases of QCD may occur [7]. Even exotic forms of matter, stable only under extreme conditions of density and pressure, may form in these objects, such as strange stars [8; 9; 10; 11; 12]. Interestingly, the theoretically very well established _neutron superfluidity phase_ has never been seen in any terrestrial experiment. At the same time it is expected to routinely occur inside neutron stars [13]. In fact, superfluid vortices in such a phase provide the most convincing explanation of the phenomenon of pulsar glitches [14].
Neutron stars have long been investigated theoretically and experimentally, especially with pulsar observations. Pulsars are rapidly rotating neutron stars with pulse timings which are observed on earth with incredible accuracy, reaching one part in \(10^{15}\) for certain pulsars, comparable to that of atomic clocks. This extreme accuracy of pulse timings had allowed the first indirect detection of gravitational waves from a binary neutron star (BNS) system [15; 16]. Neutron star physics has acquired a centre stage recently with the advent of gravitational wave detectors. After the first direct detection of gravitational waves (GWs) by LIGO coming from a binary black hole merger event [17], the stage was set for the detection of GWs from spiral-in of other compact dense objects. The first such event of binary neutron star merger was detected by LIGO/Virgo in 2017 [18] and that opened the remarkable possibility of directly probing the properties of the interior of neutron stars.
Neutron stars thus acquire a unique status of providing a laboratory for probing microphysics of exotic phases of QCD on one hand, while providing a window to probe the physics of its interior using GW detectors on earth in BNS merger events on the other. This remarkable story of neutron stars still allows for one more chapter, that relating to its _avatar_ as a pulsar with extreme accuracy of pulsar timing observations. Such an incredible accuracy of pulse timings points to a very interesting possibility. Any deformation of the neutron star, even if it is extremely tiny, has the potential of leaving its imprints on the pulses through the introduction of tiny perturbations in the entire moment of inertia (MI) tensor. Clearly, it will directly affect the pulse timing. However, a general deformation of NS will change the entire MI tensor, including its off-diagonal components. The diagonal components of the perturbed MI tensor will affect the pulse timings; at the same time, the perturbed off-diagonal components will induce wobbling of the pulsar. Wobbling of the pulsar (on top of any previously present) will directly affect the profile of pulses as observed on earth. Thus observations of changes in pulse timings, along with any accompanying changes in the pulse profile, will contain information about details of minute changes in the configuration of NS, e.g. density perturbations inside the NS, or its overall deformations. This opens up a new window of opportunity for exploring various phase transitions occurring inside a pulsar core, through induced density fluctuations, which may be observable as perturbations in the pulse timing as well as its profile [19; 20; 21]. Such perturbations also naturally induce a rapidly changing quadrupole moment of the star, thereby providing a new source of gravitational wave emission [19]. Another remarkable possibility arises when we consider the effect of an external gravitational wave on a neutron star. With the possibility of detecting any minute changes in its configuration through pulse observations, the neutron star has the potential of performing as a Weber detector of gravitational waves. The possibility of such _Weber_ detectors, spread out in space, with their signals (carrying imprints of any GWs) monitored at earth, has tremendous potential, especially in allowing us to re-visit GW events whose signal may have passed through earth in the past [23].
In this brief review we will focus on this particular aspect of NS physics, namely the range of phenomena which can be probed by utilizing the effect of changes in the configuration of the NS on high precision measurements of pulses coming from a pulsar. We will begin in Section II with a discussion of salient features of the QCD phase diagram, especially the high baryon density regime. We will briefly discuss theoretical expectations of different phases
in this regime. We will also connect with the experimental situation and discuss what parts of QCD phase diagram can be probed by the present and future relativistic heavy-ion collision experiments, and which regimes may remain out of reach of these terrestrial experiments, leading us towards cosmos, in particular to neutron stars. Section III will then be devoted to basic physics of neutron stars, including the superfluid phase in its interior as well as the possibility of exotic QCD phases in the inner core. There are excellent reviews on this subject (as well as on the subject matter of Section II, i.e. QCD phase diagram). We will only recollect essential parts of these discussions for self-completeness of the discussion here. Thus, we will also briefly discuss how recent gravitational wave detections have allowed the probe of NS interior properties. Section IV will be devoted to pulsars recalling the extreme accuracy of pulsar timing observations. We will also recall here the first (indirect) detection of gravitational waves (GWs) by pulsar observations, as well as ongoing attempts of pulsar timing arrays for detection of ultra low frequency GWs. In Section V we will discuss various proposals from the literature for possible observational signatures of various phases in NS interiors. Among these, glitches take a prominent role as well established signals for the existence of superfluid phase in NS interior. We will discuss here difficulty of this explanation in accounting for relatively recent observations of anti-glitches. We will also discuss various proposals for detection of the exotic color superconducting phases of QCD in NS interior. Here we will also point to a new possibility where possibly the highest observable baryon density phases could arise, that is in the matter undergoing collapse to a black hole. For example, for a stellar mass black hole, the Schwarzschild radius is about one third of the typical neutron star radius. Thus it is possible that baryon densities in matter collapsing to black hole may become about 20 times larger than that in a neutron star (depending on the density profile, also the density contrast will be smaller for a more massive black hole). Certainly, it will only be transient, lasting only for tiny fractions of seconds, but still may allow observational signatures of novel QCD phase transition which can occur with typical local time scale of QCD, i.e. fm/c.
In Section VI we will discuss the implications of phase transitions occurring inside a pulsar on its pulses. The consequences of a phase transition (for example from nuclear matter to QGP) occurring in the core of a neutron star in terms of its effect on the moment of inertia have been discussed in the literature with its observational implications on the spin rate of the neutron star [24; 25]. These discussions primarily focused on the change in the equation of state during the phase transition, and hence the main implications related to changes in the diagonal components of the moment of inertia tensor affecting the spin rate of the neutron star. However, phase transitions necessarily produce density fluctuations, which perturb the entire moment of inertia tensor, including its off-diagonal components. This was discussed by some of us in [19; 20], pointing out that phase transition induced density fluctuations modify the entire moment of inertia tensor of the pulsar, which affects pulse timings, but also induces modulations of the pulse profile. The detailed modification of the pulses carries information about the statistical nature of the density fluctuations, and hence the precise nature of the phase transition occurring inside the NS interior. In Section VII we will discuss a special case in detail when the density fluctuations are modelled in terms of random components of MI tensor added to the unperturbed diagonal MI tensor of the neutron star [21]. We will see here the effect on pulse timings as well as the nature of modulations expected in the pulse profile. We will discuss that even for very tiny density fluctuations, even if pulse timing changes remain extremely small, pulse profile modification may become relatively large. This is because while pulse timing changes will be proportional to typical density fluctuation magnitude \(\epsilon\), the pulse profile modification will be proportional to \(\epsilon/\eta^{2}\) where \(\eta\) is the NS deformation parameter. (\(\eta\) is typically very small, \(\sim 10^{-8}-10^{-4}\). This observation will play an important role in later discussion when we discuss possible detection of external GWs using NS deformations.) We will also briefly discuss how the same technique can be used to detect other perturbations occurring in neutron stars, e.g. collision with an asteroid.
In Section VIII we will discuss how phase transitions occurring inside NS may provide a new _high frequency_ source of GWs through density fluctuation induced rapidly changing quadrupole moment. Section IX will change the direction of discussion towards the effects of an external gravitational wave on the neutron star configuration. Clearly, expected deformations in NS will be extremely tiny. However, we will discuss how it may be possible to detect even such tiny deformations utilizing the impressive accuracy of pulsar timing observations, and in particular possible changes in the pulse profile from induced wobbling of pulsar (recalling discussion from the results of Section VII that even if pulse timing changes remain very small, pulse profile modulations may become relatively large). We will therefore conclude in this section that pulsars will effectively act as _remotely stationed_ Weber detectors whose GW perturbed signals may be observable on earth [22]. A very important part of discussion here will involve the so called _ringing_ of the pulsar which will allow folding of very large number of pulses, thereby tremendously increasing the signal to noise ratio, exactly what is done in a Weber detector. We will make a point here on the importance of the quantity the _Quality factor_\(\mathbf{Q}\) of the NS matter which will directly determine the strength of the ringing effect. While much focus has been there in the literature in calculating the shear viscosity to entropy ratio of the QCD matter (in the QGP phase as well as the hadronic phase), there is no such discussion for the Quality factor \(\mathbf{Q}\).
This possibility of pulsars, spread across the cosmos, acting as Weber detectors leads to a truly remarkable possibility of re-visiting past GW events. We will discuss this in Section X: some GW event (a collision of black holes or neutron stars, a supernova explosion, etc.) far away, whose signal may have passed through earth in the past, may become observable on earth again via observations of certain pulsars which are also affected by the same GW source and transmit the perturbed pulses, which are then later detected on earth [23]. Knowing the location of the GW source and the various pulsar coordinates, it is possible to predict at what time in the future such GW events may become observable again through specific pulsars. Even the GWs from the earliest recorded supernova SN185, observed in AD 185, may be observable through observations of perturbed pulses of specific pulsars (in this case, via pulsars J0900-3144 and J1858-2216, with the perturbed pulsar signals reaching earth during 2016-2049). The final Section XI will present conclusions and future directions, where we will discuss various limitations of these proposals, as well as what specific efforts are needed to make these proposed techniques more effective.
## II QCD phase diagram
This section will discuss salient features of the QCD phase diagram, and possible experimental probes of its different regimes. The earliest known part of the _QCD phase diagram_ relates to the liquid/gas transition of nucleonic matter. Indeed, it was the liquid drop model of the nucleus which led Lise Meitner to propose the theory of fission [26; 27]. Further structure in this phase diagram emerged from astrophysics, with the notion of neutron superfluidity in the cores of neutron stars [13]. Up to this stage, it would be more appropriate to call it the phase diagram of nucleonic (hadronic) matter. With the discovery of asymptotic freedom, Quantum Chromodynamics (QCD) was established as the fundamental theory of strong interactions, with quarks and gluons as the fundamental constituents and gluons as the mediators of the color force among them. Hadrons are composed of quarks, and the nuclear (hadronic) interactions arise as residual interactions from this fundamental color interaction. With the color gauge group SU(3), the QCD Lagrangian is
\[L_{QCD}=-\frac{1}{4}F^{a}_{\mu\nu}F^{a\mu\nu}+\sum_{\alpha}\overline{\psi}_{ \alpha}\left(i\gamma^{\mu}D_{\mu}-m_{\alpha}\right)\psi_{\alpha}\]
where \(\alpha=u,d,c,s,t,b\) is the flavor index for quarks.
The covariant derivative is
\[D_{\mu}\psi_{\alpha}=\left(\partial_{\mu}-ig_{s}T^{a}A^{a}_{\mu}\right)\psi_{\alpha}\]
Here \(g_{s}\) is the strong interaction coupling constant. \(\psi_{\alpha}\) is in the 3-dimensional fundamental representation of color \(SU(3)\), with the generators \(T^{a}=\frac{\lambda^{a}}{2}\), \(a=1,\ldots,8\). \(A^{a}_{\mu}\) are the 8 gluon fields and \(\lambda^{a}\) are the Gell-Mann matrices. The field strength tensor is
\[F^{a}_{\mu\nu}=\partial_{\mu}A^{a}_{\nu}-\partial_{\nu}A^{a}_{\mu}+g_{s}f^{abc}A^{b}_{\mu}A^{c}_{\nu}\]
where \(f^{abc}\) are the antisymmetric structure constants of the Lie algebra of SU(3). As isolated quarks are not seen, confinement is an essential feature of the color interaction: isolated objects can only be color singlets. The running of the strong coupling \(g_{s}\) with energy exhibits the famous _asymptotic freedom_. Writing \(\frac{g_{s}^{2}}{4\pi}\equiv\alpha_{s}\),
\[\alpha_{s}\left(Q^{2}\right)=\frac{4\pi}{\left(11-\frac{2n_{F}}{3}\right)\ln\left(Q^{2}/\Lambda^{2}\right)}\]
Here \(\Lambda\) is the QCD scale, typically taken to be of order 200 MeV, and \(n_{F}\) is the number of quark flavors. Thus the color interaction becomes weaker at large momentum transfer, equivalently at short distances. On the other hand, at large distances the interaction becomes very strong, which is consistent with the notion of color confinement.
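As a quick numerical illustration of the running coupling formula above, the following short Python sketch evaluates \(\alpha_{s}(Q^{2})\) for a few momentum transfers. The choices \(n_{F}=3\) and \(\Lambda=200\) MeV are illustrative values consistent with the text, not a precision determination of the coupling.

```python
import math

def alpha_s(Q_GeV, n_f=3, Lambda_GeV=0.2):
    """One-loop running coupling: alpha_s(Q^2) = 4*pi / [(11 - 2*n_f/3) * ln(Q^2/Lambda^2)].

    Q_GeV      : momentum transfer in GeV
    n_f        : number of active quark flavors (illustrative choice)
    Lambda_GeV : QCD scale, taken as 0.2 GeV as in the text
    """
    b0 = 11.0 - 2.0 * n_f / 3.0
    return 4.0 * math.pi / (b0 * math.log(Q_GeV**2 / Lambda_GeV**2))

# The coupling weakens as Q grows (asymptotic freedom):
for Q in (1.0, 2.0, 10.0, 100.0):
    print(f"Q = {Q:6.1f} GeV   alpha_s ~ {alpha_s(Q):.3f}")
```

The printed values simply confirm the trend stated above: the coupling decreases logarithmically as \(Q\) increases.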
With asymptotic freedom, it is natural to expect that the behavior of strongly interacting matter (a system of hadrons) at ultra-high temperatures, or ultra-high densities, may differ qualitatively from the standard confining hadronic phase. In such extreme regimes the typical interaction between quarks occurs at very large energies/short distances, and hence becomes much weaker. In the limiting case quarks and gluons should become almost free, hence the notion of an almost ideal gas of quarks and gluons, the quark-gluon plasma (QGP) phase of strongly interacting matter. Extreme temperatures are found only in the very early stages of the Universe, while very large baryon densities are expected to arise in the cores of neutron stars. These two systems thus provide natural laboratories for the QGP phase of the QCD phase diagram. However, the QGP phase in the early Universe is primarily probed theoretically, with no reasonably direct observable signals expected at the present stage of the Universe. (This is with the present understanding that for very small net baryon densities the quark-hadron transition is a cross-over. Some time back, when the possibility of a first order transition was still open, there were tantalizing possibilities of forming quark nuggets which could be candidates for dark matter [8].) The situation is different with neutron stars, which are directly accessible to observations. Observational data available for the masses and sizes of neutron stars put strong constraints on the equation of state of the matter inside the neutron star. Detailed information about phases in the interior of a pulsar is provided by observations of pulses, in particular by pulsar glitches [28]. A remarkable new probe of the neutron star interior in terms of _tidal deformability_ became available from the direct detection of gravitational waves by LIGO/Virgo coming from binary neutron star (BNS) merger events [18].
Study of QGP phase, and exploration of QCD phase diagram in general, has acquired new dimensions with
the arrival of relativistic heavy-ion collider experiments (RHICE). Heavy ions (e.g. lead, gold, etc.) are accelerated to ultra-relativistic speeds and made to collide, thereby creating a transient ultra-hot and dense system of strongly interacting matter. For high enough center of mass energy, the temperatures are expected to be sufficient for the creation of a thermalized system of quarks and gluons, the so-called quark-gluon plasma (QGP) phase of QCD. These experiments have allowed unprecedented control over the properties of the strongly interacting matter created, reaching temperatures and densities which were previously available only in the cosmos. Controlled experiments have given a wealth of knowledge about the system, from thermodynamic properties to transport coefficients, and even allow the study of QCD matter in strong external electromagnetic fields. The initial goal of these experiments was to find the QGP phase, hence the drive for larger and larger centre of mass energy. Indeed, from the Super Proton Synchrotron (SPS) at CERN to the Relativistic Heavy-Ion Collider (RHIC) at BNL, USA, and then to the Large Hadron Collider (LHC) at CERN, the center of mass energy per nucleon-nucleon pair has increased from a few tens of GeV at SPS to 200 GeV at RHIC, and 5.36 TeV at LHC for nucleus-nucleus collisions. The temperatures reached at such energies have certainly been high enough to convincingly demonstrate creation of the QGP phase. However, increasing collision energy at the same time leads to lower baryon density in the produced system. This is because of the asymptotic freedom of QCD: larger momentum transfer leads to weaker interactions, thereby reducing baryon stopping in the produced system. The QGP system produced at ultra-high energies therefore resembles more closely the QGP phase of the early Universe, which also had very small baryon number density.
With these experiments at ultra high energy, creating a QGP system with very small baryon densities, reaching a level of maturity, the attention is now being focused on the other extreme condition, namely very high baryon density regime of QCD matter. QCD matter is also expected to go to QGP phase in this regime, but with different properties. While high temperature low baryon density QGP teaches us about the early Universe, the regime of very high baryon density QGP directly relates to the interior of neutron stars. In fact, it is now realized that the QCD phase diagram has extremely rich structure precisely in this ultra-high baryon density regime. The situation is quite like the phase diagram of water. While at high temperatures we have liquid and gas phases, at ultra low temperatures and densities there are numerous phases of ice which appear. Possibilities of various phases of QCD are illustrated in the QCD phase diagram in Fig.1. Here we show the phase diagram in terms of two most important variables, temperature \(T\) and baryon chemical potential \(\mu\) (representing baryon density). In different situations, one can use other variables such as strangeness chemical potential etc. For theoretical discussions it is also useful to invoke another axis where quark masses can be varied. This allows for the discussion of various possible phases, and critical points in the phase diagram. We will confine our discussion to the standard 2-d phase diagram as shown in Fig.1, in the \(T,\mu\) plane.
The phase diagram in Fig.1 shows possible phases in different regions of temperature \(T\) and baryon chemical potential \(\mu_{B}\). The figure also shows where different regions of the phase diagram are expected to arise. It is useful to consider two different regimes in the entire phase diagram. One is for relatively low baryon density with baryon chemical potential values less than about 1 GeV, or ultra high temperatures, and the other regime is for much higher values of \(\mu_{B}\) and relatively low temperatures.
### Low baryon density/ultra high temperature regime
Several distinct regions can be identified in this regime, but there are only two phases to discuss here. The hadron gas phase appears at low values of \(\mu_{B}\) and relatively low temperatures. The boundary of this phase is denoted by the white curve, partly solid, joined to a dashed white curve, with the joining point denoted as the _critical point_. The solid white curve denotes a line of first order transition, which ends at the critical end point where the transition is second order (a continuous phase transition). The dashed line denotes a cross-over transition, which is not a proper phase transition (in the sense that the partition function remains analytic across this boundary). The entire region of the phase diagram outside this boundary, and above the solid yellow line, is the second phase, the quark-gluon phase. Importantly, all present and planned heavy-ion collision experiments probe only this part of the phase diagram.
Figure 1: QCD phase diagram in the \(T,\mu\) plane (from ref.[29])
The first system to discuss here is the early Universe, denoted by the solid white arrow (almost) on the \(\mu_{B}=0\) line. The temperatures reached in the Universe in its earliest stages can be close to the Planck temperature (\(\sim 10^{19}\) GeV), depending on the inflationary model. The Universe, expanding and cooling from these earliest stages, reaches the quark-hadron transition temperature (where the dashed white line intersects the \(T\) axis). Lattice calculations are under good control for the \(\mu_{B}=0\) case, and give the value of the quark-hadron transition temperature as \(T_{c}=156\) MeV. This temperature is expected to be reached in the early Universe at an age of a few tens of microseconds. Up to that time, the Universe was filled with a plasma of quarks and gluons, as well as other elementary particles, e.g. leptons and photons. After this time, the Universe undergoes the quark-hadron transition (which lattice calculations indicate to be a smooth cross-over), when quarks and gluons get confined into a system of hadrons. As we mentioned, with a smooth cross-over the earlier much discussed quark nugget scenario [8] is no longer possible, though there have been alternative proposals for the formation of quark nuggets which do not depend on the nature of the quark-hadron phase transition [30; 31].
The ultra-relativistic collisions at RHIC and LHC, with centre of mass energies ranging from 200 GeV to 5.36 TeV (per nucleon-nucleon collision), lead to a fireball with temperatures well above the quark-hadron transition temperature and with relatively low values of \(\mu_{B}\) (\(\sim 50\) MeV for 200 GeV collision energy). As we mentioned, increasing collision energy leads to smaller values of \(\mu_{B}\) due to the asymptotic freedom of QCD. For these large collision energies, confining forces become irrelevant, so essentially the collision is between the quarks and gluons (partons) contained in each colliding nucleus. For very large collision energies, partons scatter little; basically the colliding nuclei go through each other. Even with less scattering at large energies, the energy available for secondary parton production monotonically increases with collision energy, leading to higher temperatures. But with less scattering, baryon stopping is reduced, hence the net baryon number of the produced QGP system is a monotonically decreasing function of collision energy. (Strong interactions conserve baryon number, so baryon density in the central fireball can only come from the stopping of the initial valence quarks of the colliding nuclei.)
It is believed that at these collision energies the expanding QGP system undergoes the quark-hadron transition across the dashed white line in Fig. 1, that is, the cross-over line. This is consistent with several estimates of the location of the critical point. Though it has not been possible to have good control over lattice calculations for non-zero \(\mu_{B}\) (due to the so-called _fermion sign problem_), several techniques have been devised to extend lattice calculations to non-zero \(\mu_{B}\). Recent estimates suggest the location of the critical point to be about (\(T\sim 156\) MeV, \(\mu_{B}\sim 65\) MeV) [32]. It should be mentioned that there are strong theoretical arguments, typically based on various effective field theory models, for the existence of a first order transition line at non-zero values of \(\mu_{B}\). Combined with the solid knowledge of the cross-over transition at \(\mu_{B}=0\), it automatically follows that the first order line has to end at a critical end point.
Physics near the critical point (in the critical regime) is dominated by critical fluctuations showing universal properties. Experimental evidence for such fluctuations will give deep insight into this very important part of the QCD phase diagram. With this aim, the beam energy scan program was designed at RHIC, with collision energies as low as 7.7 GeV (thereby creating the QGP system with much higher values of \(\mu_{B}\), hopefully evolving through the critical regime).
One more region deserves mention in this part of the phase diagram, denoted as _Nuclear matter_ near \(\mu_{B}=900\) MeV. There is a short line of first order transition (not shown in Fig. 1) which corresponds to the liquid-gas transition of nuclear matter. More interesting is that at still larger values of \(\mu_{B}\), but still within the hadronic phase (so not crossing the solid white curve), there is the possibility of a nucleonic superfluid phase. It is precisely this superfluid phase, and the associated superfluid vortices, which are supposed to play a crucial role in neutron stars, especially in relation to the phenomenon of pulsar glitches. We will discuss this phase in detail in the next section.
### Very high baryon density, low temperature regime, color superconductivity
Discussions relating to this part of the phase diagram have intensified relatively recently. There have been insightful exchanges of ideas between conventional condensed matter physics and this area, which is appropriately called the _condensed matter physics of strongly interacting matter_. The phases in this part of the phase diagram are typically termed _color superconducting phases_. Note that for this part of the QCD phase diagram we are not including the so-called _high baryon density QGP phase_, which is the part above the solid yellow line with very large values of \(\mu_{B}\). Though there are important differences in the physical properties of QGP with high baryon density compared to QGP with very low baryon density (and high temperature), they are the same thermodynamic phase, with no phase boundary expected between these two regimes. That is why we included the high \(\mu_{B}\) QGP phase in the preceding subsection. The discussion below is primarily taken from refs. [4; 5; 6; 7], which can be consulted for details.
High \(\mu_{B}\) with low temperature changes the physics qualitatively, giving rise to new thermodynamic phases. The basic physics of these phases lies in the realization that at very high values of \(\mu_{B}\), and relatively low temperatures, the physics will be governed by low energy excitations near the Fermi level.
Standard BCS superconductivity has taught us that any attractive interaction between fermions at the Fermi level destabilizes the Fermi surface, forming Cooper pairs of fermions. First we note that for the relevant values of chemical potential, the only relevant quark flavors are \(u\), \(d\), and \(s\). (We do not discuss heavier quarks here, as there is no known physical system where one could reach a quark chemical potential high enough for heavier quarks to play any important role in these condensed phases of QCD.) At ultra-high chemical potential, of order 500 MeV for quarks, asymptotic freedom makes perturbative calculations reliable, especially for \(u\) and \(d\) quarks. Consider the scattering of light quarks with one gluon exchange as shown in Fig. 2.
Here, \(i,j,k,l=1,2,3\) (or r,g,b) refer to colors of u and d quarks. Now we note that the amplitude for this process is the same as that for the QED process \(e^{-}\mu^{-}\to e^{-}\mu^{-}\) with the replacement of the electromagnetic coupling \(e\) by the strong coupling \(g_{s}\) (i.e. replacing \(\alpha\) by \(\alpha_{s}\), and inclusion of the _color factor_),
\[C_{F}(ik\to jl)=\frac{1}{4}\lambda_{ij}^{a}\lambda_{kl}^{a} \tag{1}\]
Two quarks can combine in two color channels, with \(3\times 3=3^{*}+6\). The color factors for the two representations are \(C_{F}(3^{*})=-2/3\) and \(C_{F}(6)=1/3\).
Since the EM potential between \(e^{-}\) and \(\mu^{-}\) is repulsive, the corresponding quark-quark potential is attractive in the \(3^{*}\) channel (negative color factor) and repulsive in the 6 channel. BCS pairing of quarks in the \(3^{*}\) channel leads to the _color superconducting phase_. Depending on the relative masses of the quarks forming the condensate, different phases are possible.
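The color factors quoted above can be verified with a few lines of linear algebra. The sketch below (an illustrative check, not taken from the references) constructs the Gell-Mann matrices, builds the two-quark color operator \(\frac{1}{4}\sum_{a}\lambda^{a}\otimes\lambda^{a}\), and confirms that it takes the value \(-2/3\) on the antisymmetric \(3^{*}\) subspace and \(+1/3\) on the symmetric 6 subspace.

```python
import numpy as np

# The eight Gell-Mann matrices lambda^a.
l = np.zeros((8, 3, 3), dtype=complex)
l[0][0, 1] = l[0][1, 0] = 1
l[1][0, 1] = -1j; l[1][1, 0] = 1j
l[2][0, 0] = 1; l[2][1, 1] = -1
l[3][0, 2] = l[3][2, 0] = 1
l[4][0, 2] = -1j; l[4][2, 0] = 1j
l[5][1, 2] = l[5][2, 1] = 1
l[6][1, 2] = -1j; l[6][2, 1] = 1j
l[7] = np.diag([1, 1, -2]) / np.sqrt(3)

# Two-quark color operator (1/4) sum_a lambda^a (x) lambda^a on C^3 (x) C^3.
K = sum(0.25 * np.kron(l[a], l[a]) for a in range(8))

# Swap operator P|ij> = |ji>, used to project onto 3* (antisymmetric) and 6 (symmetric).
P = np.zeros((9, 9))
for i in range(3):
    for j in range(3):
        P[i * 3 + j, j * 3 + i] = 1
A = (np.eye(9) - P) / 2   # projector onto the 3* channel (dimension 3)
S = (np.eye(9) + P) / 2   # projector onto the 6 channel  (dimension 6)

# K acts as a constant on each irreducible subspace, so a trace ratio gives it.
print("C_F(3*) =", np.trace(K @ A).real / np.trace(A).real)  # -> -2/3 (attractive)
print("C_F(6)  =", np.trace(K @ S).real / np.trace(S).real)  # -> +1/3 (repulsive)
```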
**Color-Flavor Locked (CFL) phase:**
With the color part antisymmetric (\(3^{*}\) channel) and the spin part antisymmetric (for \({}^{1}S_{0}\) pairing), overall antisymmetry requires the condensate to be antisymmetric in flavor as well. For very high chemical potential it may be reasonable to treat all three quarks (\(u,d,s\)) as massless. The condensate for this most symmetric case (with \({}^{1}S_{0}\) spin pairing) has the following structure
\[<q_{i}^{\alpha}q_{j}^{\beta}>\sim\Delta_{CFL}(\delta_{i}^{\alpha}\delta_{j}^{ \beta}-\delta_{j}^{\alpha}\delta_{i}^{\beta})=\Delta_{CFL}\epsilon^{\alpha\beta n }\epsilon_{ijn} \tag{2}\]
where \(\alpha\beta\) are flavor indices and \(ij\) are color indices. Note that the condensate is invariant under equal and opposite rotations of color and (vector) flavor. Hence, it leads to the following spontaneous symmetry breaking
\[SU(3)_{Color}\times SU(3)_{L}\times SU(3)_{R}\times U(1)_{B}\to SU(3)_{C+L+R} \times Z_{2} \tag{3}\]
Thus the color symmetry \(SU(3)_{C}\) of QCD is spontaneously broken. The three-flavor chiral symmetry is also spontaneously broken. Interestingly, the QGP phase restores chiral symmetry (at least for the \(\mu_{B}=0\) case). At large \(\mu_{B}\) too, we expect chiral symmetry restoration in the _high density QGP phase_, which occurs at higher temperatures. But at low temperatures, the high-\(\mu_{B}\) quark-gluon phase breaks chiral symmetry spontaneously. We will discuss the situation with chiral symmetry further below.
Spontaneous breaking of the SU(3) color symmetry implies that all gluons become massive (so there are no long range color forces even in this deconfined phase), hence the name _color superconductor_. We mention here that the role of the photon in this broken phase is played by the usual photon with a small admixture of gluon; see [4] for details. Also note that the spontaneous breaking of chiral symmetry here is not by a color neutral condensate (as for the \(\mu_{B}=0\) case), but by a colored condensate. This implies multiplets of colored hadrons (and also colored Skyrmions).
**2SC Pairing:**
For Cooper pairing of 2 light quarks, we get the so called \(2SC\) phase. This breaks color group \(SU(3)_{C}\) to color \(SU(2)\). In this phase, 5 out of 8 gluons become massive.
**Crystalline Color Superconducting phase:**
This phase is similar to the so called Larkin-Ovchinnikov-Fulde-Ferrell (LOFF) phase in condensed matter systems: When chemical potential is not very large compared to the strange quark mass, then difference between \(u,d\) quark masses and \(s\) quark mass becomes significant. In that case Cooper pairing of different quarks (having different Fermi momenta), leads to spatial modulation of the order parameter. This implies spontaneous breaking of spatial translations (and rotation) symmetries.
We are giving only brief details of the various phases here, just to give an idea of the richness of the QCD phase diagram in different regimes. Excellent references are available in the literature; we have listed some in refs. [4; 5; 6; 7]. We end this section with a brief discussion of the important subject of chiral symmetry in QCD.
Figure 2: Quark-quark scattering with one-gluon exchange.
### Chiral Symmetry in QCD
We briefly recall here the notion of chiral symmetry in QCD. For details, the literature can be consulted, e.g. ref. [33]. Note that we have been using the term _quark-hadron transition_ in the above discussion. This term can have two meanings. One is the confinement-deconfinement (C-D) transition, where a system of hadrons (with quarks/gluons confined inside hadrons) makes a transition to the deconfined phase of a plasma of quarks and gluons. The other meaning refers to the chiral transition. For 2 massless flavors, the QCD Lagrangian has an exact \(SU(2)_{L}\times SU(2)_{R}\) symmetry, called the 2-flavor chiral symmetry, which corresponds to independent transformations of the left and right handed components of the \(u\) and \(d\) quarks. The hadron spectrum does not show any such doubling of the mass spectrum, but it does show the multiplet structure of \(SU(2)_{isospin}\). This leads to the conclusion that the chiral symmetry \(SU(2)_{L}\times SU(2)_{R}\) is spontaneously broken to the diagonal subgroup \(SU(2)_{isospin}\), with pions as the Goldstone bosons. For three massless flavors, all \(SU(2)\) groups should be replaced by \(SU(3)\) groups, leading to 3-flavor chiral symmetry breaking. Of course, quarks are not massless; this leads to explicit breaking of chiral symmetry, giving small masses to the Goldstone bosons and contributing to mass differences within the multiplets. The explicit symmetry breaking being relatively small, especially for the 2-flavor case, the notion of chiral symmetry in QCD has been immensely useful, especially in constructing effective field theory models which capture the physics at low energy scales. Chiral sigma models, Nambu-Jona-Lasinio (NJL) models etc. are all based on the notion of chiral symmetry breaking and have been the only tools for discussions of the QCD phase diagram at high baryon densities.
It is believed that the chiral symmetry transition and the confinement-deconfinement (C-D) transition coincide. Lattice results support this idea (though at times there are differing results as well). Conceptually, these two transitions are completely different. Indeed, different order parameters characterize them, with the expectation value of the Polyakov loop characterizing the C-D transition, and the \(\bar{\psi}\psi\) condensate characterizing the chiral transition (\(\psi\) being the quark field). We will discuss some of these (especially the Polyakov loop condensate) later in Section VI. The conceptual difference between the two transitions becomes clear when we consider high \(\mu_{B}\) phases, in particular the color superconducting phases. These phases appear in the regime where the inter-quark separation is so small that confining forces are irrelevant. This is like a plasma of deconfined quarks and gluons, but not the usual QGP phase, as here thermal effects are insignificant compared to quantum statistics effects. Thus we have chiral symmetry breaking here (as discussed above for the \(CFL\) phase), even though we have a system of deconfined quarks and gluons. Note that although we have massive gluons here, color singlet objects have no special relevance. Indeed, as mentioned above, we expect colored hadrons here, arising from chiral symmetry breaking via a colored diquark condensate.
## III Neutron stars
This section will discuss the basic physics of neutron stars, including the superfluid phase in the interior as well as the possibility of exotic QCD phases in the inner core. We will also briefly discuss how the recent gravitational wave detections have made it possible to probe NS interior properties.
**Basics of a neutron star :** The existence of neutron stars as one of the possible end states of massive stars was predicted by Walter Baade and Fritz Zwicky in 1933 [34], a long time before the discovery of the pulsating neutron star (_little green man_) in late 1967 [35]. Neutron stars are the remnants of the supernova explosion of supergiant stars of mass in the range \(10~{}\,{\rm M}_{\odot}-20~{}{\rm M}_{\odot}\). During its formation, gravity squeezes the matter to achieve extremely high baryon density. Gravitational collapse is counterbalanced by the neutron degeneracy pressure (along with repulsive nuclear forces) leading to the formation of a stable neutron star. The radius (R) of a neutron star lies in the range (10 - 14) km with mass in the range \(~{}1.1~{}{\rm M}_{\odot}-2.1~{}{\rm M}_{\odot}\). Thus the average mass density is of the order of nuclear matter's saturation density \(\rho_{0}=2.8\times 10^{14}~{}{\rm g~{}cm}^{-3}\). The discovery of the neutron star put the discussions of such compact astrophysical objects on deep footing, with the realization that it provides the opportunity for testing exciting physics at a high baryon density regime that cannot be tested otherwise by terrestrial experiments. However, with the opportunity, new theoretical challenges arise, of understanding the properties of dense interior materials, the inner core in particular, and relating these properties to various testable observables.
There have been continuous efforts towards a theoretical understanding of the internal structure of neutron stars keeping in view its astrophysical implications and observational constraints. These include the mass-radius relations, the moments of inertia (MI) of the star (including the fractional contribution of the solid crust and superfluid/superconductor components), the extent of rigidity of the outer crust, and the deformation from the sphericity etc. The above quantities, in turn, determine the frequency of pulsar's free precession, glitch size in the context of various pulsar glitch models (crustquake, superfluid-vortex model etc.), or the feasibility of GW emission etc.
The standard approach to constructing a model of the neutron star's internal structure is to impose hydrostatic equilibrium of a gravitating fluid system, resulting in the well known Tolman-Oppenheimer-Volkoff (TOV)
equations [36; 37] (in units with \(c=1=G\)),
\[\frac{dP(r)}{dr} = -\frac{(P+\rho)(M+4\pi r^{3}P)}{r(r-2M)}; \tag{4}\] \[\frac{dM(r)}{dr} = 4\pi\rho(r)r^{2}. \tag{5}\]
The solutions of the above set of equations provide the density profile \(\rho(r)\), together with the mass and the radius of the neutron star, provided the equation of state (EOS) \(P=P(\rho)\) and the central density \(\rho(0)\) are supplied. The above procedure thus produces a neutron star structure model for a fixed value of the central density, which finally determines various physical parameters of the star, such as the mass, radius, moment of inertia, etc. However, the main challenge lies in providing the EOS, the most critical input for solving Eqs. (4,5). Although the EOS of the outer crust and, to some extent, of the inner crust is known [38], first-principles calculations of the many-body QCD interactions relevant to the inner core are not available. Thus, one has to resort to phenomenological models for the EOS of the core region. One can then use the experimentally measured values of various physical parameters to test the predictions of the constructed stellar models. For most practical calculations, one uses the set of canonical values \(M=1.4\;\mathrm{M}_{\odot}\), R = 10 km, and \(\mathrm{MI}=10^{45}\;\mathrm{g}\;\mathrm{cm}^{2}\). For the above set of values, and from the observations of various pulsar events, such as glitches (to be discussed later), one broadly takes the neutron star's internal structure as shown in Fig. 3.
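To make the procedure concrete, here is a minimal numerical sketch of the TOV integration: Eqs. (4)-(5) are integrated outward from a chosen central density until the pressure drops to nearly zero. The polytropic EOS and its parameters are purely illustrative assumptions, chosen only so that the integration runs; they should not be read as a realistic neutron star model, which is precisely the point of the EOS uncertainty discussed above.

```python
import numpy as np

G = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
c = 2.998e10        # speed of light [cm s^-1]
Msun = 1.989e33     # solar mass [g]

# Purely illustrative polytropic EOS P = K * rho^gamma (cgs); not a realistic NS EOS.
K_POLY, GAMMA = 1.0e5, 2.0

def pressure(rho):
    return K_POLY * rho**GAMMA

def density(P):
    return (P / K_POLY)**(1.0 / GAMMA)

def tov_solve(rho_c, dr=1.0e3):
    """Crude Euler integration of Eqs. (4)-(5) outward from the centre,
    for central density rho_c [g/cm^3] and step dr [cm], until the pressure drops."""
    r = dr
    m = 4.0 / 3.0 * np.pi * dr**3 * rho_c
    P = pressure(rho_c)
    while P > 1.0e22:                      # crude 'surface' criterion [dyn/cm^2]
        rho = density(P)
        dPdr = -G * (rho + P / c**2) * (m + 4.0 * np.pi * r**3 * P / c**2) \
               / (r * (r - 2.0 * G * m / c**2))
        P += dPdr * dr
        m += 4.0 * np.pi * r**2 * rho * dr
        r += dr
    return m / Msun, r / 1.0e5             # (mass in Msun, radius in km)

M, R = tov_solve(rho_c=1.0e15)
print(f"illustrative model: M ~ {M:.2f} Msun, R ~ {R:.1f} km")
```

Changing the assumed EOS (here encoded in `pressure` and `density`) changes the resulting mass and radius, which is exactly why mass-radius measurements constrain the interior physics.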
Observation of glitches in the rotation of young pulsars indicates a solid crust containing \(\geq 1.4\%\) of the total MI [39], the outer layer of which consists of crystalline solid iron nuclei and a sea of degenerate electrons with mass density \(\rho\simeq 10^{6}\;\mathrm{g}\;\mathrm{cm}^{-3}\). Going deeper into the inner crust region, as the density increases from \(\rho\simeq 10^{11}\;\mathrm{g}\;\mathrm{cm}^{-3}\) onward, protons and electrons start combining to form neutrons, creating neutron-rich nuclei. Eventually, it becomes energetically favorable for neutrons to drip out of the nuclei, forming a sea of free (unbound) neutrons. The few hundred meters thick inner crust region, with density ranging from approximately \(10^{11}\;\mathrm{g}\;\mathrm{cm}^{-3}\) to \(10^{14}\;\mathrm{g}\;\mathrm{cm}^{-3}\), plays a crucial role in the theory of glitches. There is a strong belief, supported by glitch observations, that a significant fraction of the free neutrons in this region (see Fig. 3) are in a \({}^{1}S_{0}\) neutron superfluid state. Deeper still, below the inner crust region, a more favorable \({}^{3}P_{2}\) neutron superfluid state is believed to co-exist with a \({}^{1}S_{0}\) proton superconductor.
The composition of the inner core is highly speculative. At the high pressures of the deep interior, matter can form various hadronic states, such as hyperons, pion condensates, etc. The central region of the neutron star can also accommodate different QCD phases, such as QGP, CFL, 2SC, etc. The central core can be quite exotic for massive neutron stars. One needs well defined, testable signatures to confirm the possibility of any such phases, which remains an important theoretical challenge.
We will later discuss various proposals and observational techniques for probing the physics of the pulsar interior. But it seems most appropriate to close this section with a discussion of a truly exciting recent development towards probing the neutron star interior. This is the direct detection of gravitational waves by LIGO/Virgo which, for GWs originating from a BNS merger, allows one to put observational constraints on the deformation of neutron stars during the last stages of coalescence, thereby directly probing the state of matter inside the neutron star.
**Probing NS interior with direct detection of gravitational waves from BNS merger**
Possibly the most important advance in experimental General Relativity in recent years has been the direct detection of gravitational waves by LIGO and Virgo coming from distant events of binary black hole mergers and, subsequently, binary neutron star mergers. Black hole merger events allow the possibility of directly probing the dynamics of the intense gravity regime of the near-horizon regions of black holes by analysis of the GW waveform corresponding to the near-coalescence regime of the black holes.
Figure 3: The expected internal structure of a neutron star. Credit : Dany P Page ([https://phys.org/news/2015-09-neutron-star.html](https://phys.org/news/2015-09-neutron-star.html)).
Similarly, for binary neutron star merger events, the GW waveform contains information about the strong tidal deformations towards the end of the spiral-in of the neutron stars, when the separation between the two neutron stars approaches the neutron star size. In this regime, the coalescence is accelerated by the quadrupolar deformation of each NS by the tidal field of its companion. Indeed, GW waveform observations have been used [18] to put constraints on the dimensionless tidal deformability \(\Lambda=\frac{2}{3}k_{2}R^{5}/M^{5}\) (in gravitational units with G = c = 1), where \(k_{2}\) is called the second Love number. (\(\Lambda\) is related to the tidal deformability \(\lambda\) discussed later in Sec. IX by \(\Lambda=\lambda/M^{5}\).) The value of \(k_{2}\), and hence the tidal deformability, depends on the equation of state of NS matter. GW observations thus provide a completely independent probe of the equation of state of NS matter, which is usually constrained by the mass-radius relationship of neutron stars. Direct identification of the source of BNS mergers through the resulting electromagnetic radiation (which would be absent for black hole mergers) has started the new chapter of multi-messenger astronomy in our exploration of the cosmos. A range of observations, from gravitational waves to electromagnetic radiation over a range of energies, along with possible neutrino bursts from such BNS merger events, will jointly give a powerful probe of the structure and properties of neutron stars.
## IV Pulsars
This section will be devoted to pulsars, recalling the extreme accuracy of pulsar timing observations. We will also briefly discuss the first (indirect) detection of gravitational waves (GWs) through pulsar observations, as well as the ongoing efforts of pulsar timing arrays to detect ultra-low frequency GWs.
**Pulsar Timing :** The atomic clock-like stability of the pulsar rotation period allows one, through monitoring of the pulsar rotation, to study a rich variety of phenomena affecting the propagation of pulses on their way to earth. Most applications of pulsars involve a powerful technique known as pulsar timing. The measurement of a sequence of times of arrival (ToAs) of pulses over intervals ranging from hours to decades is the basis of pulsar timing [40]. These ToAs are first transferred, normally to the solar-system barycentre, to remove the effects of the rotation and orbital motion of the Earth. The amount of useful information that can be extracted depends critically on the precision with which the pulse arrival times are measured. To understand pulsar timing, we will take the example of isolated pulsars and describe the rotation in the pulsar's comoving frame.
For an isolated pulsar, one can express the spin frequency \(\nu\) (or time period \(P\)) in a Taylor series about some reference epoch \(t_{0}\)[40]
\[\nu(t)=\nu_{0}+\dot{\nu}(t-t_{0})+\ddot{\nu}\frac{(t-t_{0})^{2}}{2}+... \tag{6}\]
where \(\nu_{0}=\nu(t_{0})\) is the pulsar's spin frequency at \(t_{0}\) and \(\dot{\nu},\ddot{\nu},...\) are the higher order time derivatives of \(\nu\) evaluated at \(t_{0}\). These parameters are associated with physical processes, knowledge of which provides valuable information about the underlying process. For a normal (i.e., rotation-powered) pulsar, the period \(P\) (\(\sim 0.3\) s - \(3\) s) and its first derivative \(\dot{P}\) (\(\sim 10^{-15}\) s s\({}^{-1}\)) are measured with high accuracy through timing. These parameters capture the spin-down history of the isolated pulsar. The millisecond pulsars (MSPs), on the other hand, have the most exotic applications, including the detection of low-frequency gravitational waves (see below), because of the extreme stability of their periodicity (\(P\sim 3\) ms with \(\dot{P}\sim 10^{-20}\) s s\({}^{-1}\)) compared to normal pulsars.
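As a toy illustration of Eq. (6), the sketch below evaluates \(\nu(t)\) and the accumulated pulse phase for a hypothetical normal pulsar with parameters of the magnitude quoted above (\(P\sim 1\) s, \(\dot{P}\sim 10^{-15}\) s s\({}^{-1}\)). It is a bare-bones timing model with no barycentric corrections or noise; its only purpose is to show how a tiny \(\dot{\nu}\) accumulates into a large, measurable phase offset over years of monitoring.

```python
# Representative (hypothetical) parameters for a normal pulsar.
P0    = 1.0          # spin period at the reference epoch t0 [s]
Pdot  = 1.0e-15      # period derivative [s/s]
nu0   = 1.0 / P0             # spin frequency nu(t0) [Hz]
nudot = -Pdot / P0**2        # d(nu)/dt = -Pdot / P^2 [s^-2]

def nu(t):
    """Spin frequency from the truncated Taylor series of Eq. (6); t in seconds since t0."""
    return nu0 + nudot * t

def phase(t):
    """Accumulated pulse phase (number of turns) since t0: the integral of nu(t)."""
    return nu0 * t + 0.5 * nudot * t**2

# Over ten years the slow-down accumulates a substantial phase offset relative to a
# model that ignores nudot; this is the kind of quantity that timing fits measure.
ten_years = 10 * 365.25 * 86400.0
print("nu(10 yr) - nu0      :", nu(ten_years) - nu0, "Hz")
print("missed turns in 10 yr:", 0.5 * abs(nudot) * ten_years**2)
```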
**Indirect detection of GWs :** The celebrated discovery of the pulsar PSR B1913+16 in binary star system by Russell Hulse and Joseph Taylor in 1974 [15] using the data from the Arecibo radio telescope opened up the possibilities for the study of relativistic gravity in moderately strong field regime. The above discovery provided the first indirect quantitative confirmation test in favor of the existence of gravitational waves within the framework of Einstein's theory of gravity [41; 42; 16]. The Hulse-Taylor pulsar (PSR B1913+16) is a binary star system composed of a pulsar of mass \(\simeq 1.44\) M\({}_{\odot}\) and the silent companion neutron star of mass \(\simeq 1.39\) M\({}_{\odot}\)[43], moving around in elliptical orbits about their center of mass. As per Einstein's theory of gravity, the orbital period of this binary system is expected to decay with time. The heartening agreement between the observed data with the theoretical prediction (see Fig. 4) not only provided conclusive evidence for the existence of gravitational waves; it laid the foundation for the belief of the possibility of _direct_ detection of GWs. The remarkable first-ever direct detection of GWs in 2015 by LIGO [17], arising from a binary black hole merger, fulfills that belief and opens a new era in gravitational wave astronomy. Since then, there have been quite a few significant detections of GWs. The peak strain amplitude (\(h_{0}\)) for all these detections has been in the range \(10^{-21}~{}-~{}10^{-22}\).
There have been several theoretical studies [44; 45; 46; 19] suggesting that isolated pulsars can be a potential source of GWs. A few other neutron star activities, such as neutron star flaring, the formation of a hyper-massive NS following binary coalescence, etc., are capable of exciting quasi-normal modes of a pulsar, resulting in GW emission. The primary purpose of such studies is to probe the internal structure of pulsars. Hopefully, the more sensitive ground-based advanced detectors, namely aLIGO, Virgo, the Einstein Telescope (ET), etc., will be able to measure the gravitational waves produced by isolated pulsars in the near future and answer a few questions of pulsar physics. In fact, with the above purpose, even before the direct detection of GWs there were a few searches for GWs from
isolated pulsars. The search for GWs associated with the timing glitch of the Vela pulsar in August 2006 [47] is one worth mentioning. The observed timing event, i.e., the glitch, is one such activity that can excite quasi-normal modes in the pulsar and generate GWs. Although the searches during the August 2006 Vela pulsar timing glitch produced no detectable GWs [17], with the improving sensitivity of advanced detectors, continued attempts in this direction may produce more conclusive results in the near future.
**Pulsar Timing Array (PTA) :** Pulsar timing arrays are GW detector setups provided by nature itself, in the form of a population of highly stable millisecond pulsars with timing accuracies of \(\sim 10\) ns over several years and arm lengths of galactic scale. They are sensitive to much lower frequencies than ground-based instruments. With precise pulsar timing, an array of pulsars could detect extremely low-frequency sources with typical frequencies less than \(10^{-6}\) Hz. GWs with such low frequencies can originate from supermassive black hole mergers with masses in the range (\(10^{9}-10^{10}\)) M\({}_{\odot}\), from a stochastic background of GWs arising from cosmological phase transitions during the early stages of the Universe or otherwise, or from more exotic objects such as cosmic strings. The basic principle is that the set of stable MSPs serves as an array of clocks whose _time_, as observed on earth, would be modulated by gravitational waves passing through the space between the MSP and the Earth. With observations of many pulsars, phenomena which affect all pulsar periods in a correlated way can be separated from phenomena which affect different pulsars differently. For example, a stochastic gravitational wave background can be separated from errors in the time standard because of their different dependence on pulsar sky position: clock errors will lead to all pulsars having the same Time of Arrival (TOA) variations (a monopole signature), solar-system ephemeris errors will lead to a dipole signature, whereas gravitational waves will lead to a quadrupole signature. These effects can be separated if one has a sufficient number of widely distributed pulsars. With this purpose, there is a worldwide effort to search for and observe a set of stable millisecond pulsars for detecting gravitational waves through pulse modulation. There are four current efforts in this direction, operating under the joint umbrella of the International Pulsar Timing Array (IPTA [48]): the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) in the USA, the Parkes Pulsar Timing Array (PPTA) in Australia, the European Pulsar Timing Array (EPTA), and the Indian Pulsar Timing Array Project (InPTA). IPTA aims to construct the most sensitive low-frequency gravitational wave detector, which can be achieved by sharing resources among the stakeholders and creating combined pulsar timing data sets. The current sensitivity of the experiments is exciting from the perspective of the potential detection of GWs through PTAs.
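The quadrupolar signature mentioned above is conventionally expressed through the Hellings-Downs correlation curve: the expected correlation of timing residuals between pulsar pairs as a function of their angular separation, for an isotropic stochastic GW background. This standard formula is not written out in the text; the sketch below simply evaluates it for a few separations, as a supplementary illustration.

```python
import numpy as np

def hellings_downs(theta_rad):
    """Expected correlation of timing residuals between two distinct pulsars separated
    by angle theta on the sky, for an isotropic stochastic GW background
    (Hellings-Downs curve; standard result, quoted here for illustration)."""
    x = (1.0 - np.cos(theta_rad)) / 2.0
    with np.errstate(divide="ignore", invalid="ignore"):
        corr = 1.5 * x * np.log(x) - x / 4.0 + 0.5
    # Guard the x -> 0 limit (zero angular separation).
    return np.where(x == 0.0, 0.5, corr)

# Clock errors give a flat (monopole) correlation and ephemeris errors a cos(theta)
# (dipole) shape; a GW background instead follows this quadrupolar pattern.
for deg in (0, 30, 60, 90, 120, 180):
    print(f"{deg:3d} deg : {float(hellings_downs(np.radians(deg))):+.3f}")
```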
## V Observational aspects of neutron star interiors
Here we will discuss various proposals from the literature for possible observational signatures of the various phases in NS interiors. Among these, glitches play a prominent role as well established signals for the existence of the superfluid phase in the NS interior. We will discuss the difficulty this explanation has in accounting for relatively recent observations of anti-glitches. We will also discuss various proposals for the detection of the exotic color superconducting phases of QCD in the NS interior.
**Pulsar glitches :** Pulsars are magnetized rotating neutron stars that emit periodic short pulses of electromagnetic radiation with periods between 1.4 ms - 0.3 s. The misalignment of the rotation axis with the magnetic axis causes the lighthouse-like appearance of the pulses to the observer at Earth. Despite the pulsars' extraordinarily stable rotational frequency, many pulsars show sudden spin-up events (glitches) followed by a period of slow recovery.
Since the discovery of the first glitch in the Vela pulsar [49], many glitches have been observed and reported [50]. A typical glitch pattern is shown in Fig. 5. The fractional change of rotational frequency, i.e., the glitch size (\(\Delta\Omega/\Omega\)), lies in the range \(10^{-11}-10^{-5}\), with an average inter-glitch time of a few months to a few years. The pulsar recovers over weeks to months to a period close to the pre-glitch value. The oldest theoretical model for pulsar glitches [51], namely the crustquake model, assumes the existence of a deformed solid crust of the pulsar. The oblateness can be characterized by the parameter \(\epsilon=\frac{I_{zz}-I_{xx}}{I_{0}}\), where \(I_{zz}\), \(I_{xx}\) and \(I_{0}\) are the moments of inertia about the z-axis (rotation axis), about the x-axis, and of the spherical star, respectively [52]. A sudden decrease of the oblateness by \(\Delta\epsilon\) decreases the MI, resulting in a spin-up event (following angular momentum conservation), i.e., \(\frac{\Delta\Omega}{\Omega}=-\frac{\Delta I}{I_{0}}=\Delta\epsilon\).
Figure 4: The shift in the periastron passage of the binary pulsar PSR B1913+16 with time, caused by gravitational radiation. Figure taken from Ref. [43]
It was immediately realized [52] that, since the inter-glitch time is proportional to \(\Delta\epsilon\), successive large-size glitches need longer waiting periods. Thus, while crustquakes might be responsible for producing smaller Crab-like glitches, \(\Delta\Omega/\Omega\simeq 10^{-8}\), the model is incompatible with Vela-like large-size glitches, \(\Delta\Omega/\Omega\simeq 10^{-6}\).
Anderson & Itoh proposed the most popular model, the superfluid-vortex model, in 1975 [28]. The model assumes the existence of neutron superfluidity in the inner crust, with the core being in a superfluid/superconducting state. Interestingly, superfluidity in a neutron star was hypothesized [53] long before this glitch model was proposed. The basic idea of the vortex model is that the vortices (a spinning superfluid neutron star supports an array of quantized vortices) act as an angular momentum reservoir while being pinned to the nuclear sites [54; 55; 46; 47]. As the neutron star slows down due to radiation loss, a differential angular velocity \(\delta\Omega\) between the inner crust superfluid and the rest of the star (crust+core) builds up until it reaches a critical value \((\delta\Omega)_{c}\). Once the Magnus force overcomes the pinning force, just above \((\delta\Omega)_{c}\), many vortices (\(\sim 10^{18}\)) are released from their pinning sites and transfer the excess angular momentum to the rigidly co-rotating crust-core system, causing the pulsar to spin up. Not only can the vortex model account for large Vela-like glitches (\(\sim 10^{-6}\)); the long relaxation time scale (weeks to months) associated with the post-glitch recovery phase also arises naturally in this picture, thus providing indirect evidence of superfluidity in the interior [57]. In this context we mention that an interesting spin-down event was observed about a decade ago in the magnetar 1E 2259+586. As per the report [58], X-ray timing observations [59] of the magnetar clearly show an anti-glitch. Evidence for anti-glitches in the X-ray pulsar NGC 300 ULX-1 has also been reported recently [60]. Such events challenge the standard glitch theory and suggest the need for rethinking this issue [19].
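The figure of \(\sim 10^{18}\) vortices quoted above follows from the Feynman-Onsager quantization of circulation in the rotating superfluid, with an areal vortex density \(n_{v}=2\Omega/\kappa\) and circulation quantum \(\kappa=h/2m_{n}\) for neutron pairs. The back-of-the-envelope sketch below uses illustrative, Vela-like numbers (our assumptions, not values taken from the references) and reproduces the right order of magnitude.

```python
import numpy as np

h   = 6.626e-34          # Planck constant [J s]
m_n = 1.675e-27          # neutron mass [kg]

kappa = h / (2.0 * m_n)  # quantum of circulation for paired neutrons [m^2/s]

# Illustrative, Vela-like numbers (assumptions for this sketch).
spin_freq = 11.2                  # rotation frequency [Hz]
Omega = 2.0 * np.pi * spin_freq   # angular velocity [rad/s]
R = 1.0e4                         # stellar radius [m]

n_v = 2.0 * Omega / kappa         # areal density of quantized vortices [m^-2]
N_v = n_v * np.pi * R**2          # rough total number threading the star

print(f"vortex areal density ~ {n_v:.2e} per m^2")
print(f"total vortices       ~ {N_v:.2e}   (of order 10^17-10^18)")
```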
**Mesonic condensate phases**
It has been suggested that at the high baryon densities in the core of neutron stars, apart from neutrons and protons, various mesonic condensates may form. The formation of a pion condensate has been suggested (see, e.g. [61; 62; 63]), and it has been argued that it may lead to a significant modification of the equation of state. Kaon condensates have also been considered, and their effect in softening the equation of state, hence constraining the maximum mass of neutron stars, has been discussed [64; 65]. These condensates may also affect the cooling rate of the neutron star. Along with such condensates, a dominant presence of hyperons in the NS interior has been discussed, and constraints on the maximum mass of the NS have been derived [66]. The effect of hyperons on the possibility of pion condensates has been discussed in [67], where it is argued that with hyperons it is unlikely that an s-wave pion condensate could form in the NS interior.
**Quark matter core:**
At high baryon densities, the core of the neutron star is likely to convert to quark matter. With the outer regions being in the hadronic phase, there will be a phase boundary separating the quark core from the hadronic region. With a quark matter equation of state for the core, the mass-radius relationship of the neutron star is affected [68], which can be probed by observations, e.g. using thermonuclear X-ray bursts from matter accretion onto the NS surface in binary systems [69]. The compactness of the neutron star \((M/R)\) is directly probed by the tidal deformability obtained from gravitational wave detections of binary neutron star mergers. The possibility of strange quark matter in the high density core of the neutron star has been discussed, where three-flavor quark matter is assumed to be more stable than nuclear matter at low density [8; 9; 10; 11; 12]. The composition of the core also affects the cooling of the neutron star by neutrino emission.
**Color superconducting phases :** The quark matter core itself can have a very rich phase structure at very high baryon densities, as we discussed in Section II. In the QCD phase diagram, we have seen that QCD allows the appearance of various exotic phases in the low-temperature (\(T\)) and very high baryon chemical potential (\(\mu_{B}\)) regime. The neutron star's core can provide the conditions to accommodate those phases. One such phase, the color superconducting phase, may arise due to spontaneously broken \(SU(3)_{C}\) color and chiral symmetry for three light quark flavors (u, d & s). For two light quarks, the color group \(SU(3)_{C}\) is broken down to color \(SU(2)_{C}\), giving the so-called \(2SC\) phase. Observational signatures of these phases have been discussed in the literature [5; 6]. For example, a central core in the CFL phase leads to suppressed cooling by neutrino emission, and also has a smaller specific heat. Thus the total heat capacity and neutrino emission of a NS with a CFL core will be dominated by the outer layers, which are in the standard nucleonic phase. At somewhat lower baryon densities, crystalline color superconductivity may arise in the NS core. It has been suggested that the rigidity of such a phase (possibly with a magnetic field suitably misaligned from the rotation axis) will lead to a non-zero quadrupole moment [5].
Figure 5: A schematic representation of a typical glitch pattern of a pulsar. Here Q is the recovery fraction, which measures the part of \(\Delta\Omega\) which decays. The post-glitch recovery time scale typically ranges from a few days to weeks. (Taken from: [http://hdl.handle.net/11343/36537](http://hdl.handle.net/11343/36537).)
This will be a significant source of gravitational waves, which can be observationally constrained with the present generation of gravitational wave detectors.
With the realization that many of these possible signatures of the exotic QCD phases are subject to model uncertainties, it was proposed by some of us in [19] that the above symmetry breaking phase transitions may cause density fluctuations in the core through the formation of topological defects, leading to a transient change of the MI tensor components. It was shown in Ref. [19] that the change of the diagonal components of the MI tensor may lead to a change of the spin frequency of the pulsar and may be responsible for glitches and/or anti-glitches. As the fluctuations are random, there is a possibility of the generation of a quadrupole moment leading to the emission of GWs. The development of non-zero off-diagonal components of the MI tensor may also lead to modulations of the pulse profile [21]. Contrary to the above proposal, there was also a view in Ref. [70] that the color superconducting phase associated with the three light flavors might not exist. The authors also claimed that two-flavor color superconductivity (2SC) might be marginally inconsistent with pulsar data.
### Possibility of ultra-high baryon densities in matter collapsing to a black hole
The neutron star core is generally believed to be the place where the highest baryon density can be achieved in nature. Thus the possibility of quark matter and other exotic QCD phases is usually discussed in that context (here we include quark stars, strange stars etc. in the same category). These are supposed to be equilibrium configurations achieved by using a suitable equation of state for the interior. What happens when such compact objects accrete matter? Beyond a limit, the equilibrium is broken and collapse to a black hole occurs. Typically, the collapse is very rapid, occurring in milliseconds for a typical stellar mass object. During this collapse the density keeps increasing, while an event horizon starts forming near the centre, which grows outwards. It is reasonable to expect that during this dynamical evolution the density will become significantly larger than the initial density of the equilibrium configuration. Even before the event horizon forms at the centre, the central density will keep growing, and after the formation of the horizon, the region outside it will also reach large densities before it gets engulfed by the growing horizon. For a naive estimate, consider a stellar-mass black hole of mass, say, 2 M\({}_{\odot}\) (approximately the upper mass limit of a neutron star, though there is still theoretical uncertainty in estimates of this limiting mass). The Schwarzschild radius \(r_{s}=2GM/c^{2}\) of this type of black hole is about 6 km. Thus it is possible that baryon densities in matter \(\rho_{bh}\) collapsing to a black hole may become about \((2/1.4)(10/6)^{3}\simeq 7\) times larger than the average mass density of a canonical neutron star of mass 1.4 M\({}_{\odot}\) and radius 10 km. Thus, such collapsing _proto black holes_ may provide possibly the highest densities available in nature, larger than in any compact equilibrium star. This should be the most optimistic place to look for extreme exotic QCD phases, such as the CFL phase, which require very high baryon densities. Even though such high densities will last for a very short time, less than a millisecond, this is a very long time for QCD processes, which occur on time scales of fm/c. Thus observational signals from phase transitions to different high density QCD phases may be detectable when such a collapse occurs. The possibility of a transition to a quark matter core in core-collapse supernova simulations was discussed in [71]. What we are proposing is that even more exotic QCD phases requiring extreme baryon densities may show up transiently in matter collapsing to a black hole, for suitable masses of the collapsing object. Numerical simulations of neutron stars accreting matter and collapsing to a black hole show that the central density can increase significantly even before the event horizon forms at the centre [72; 73; 74]. The signatures of such a collapse should be very distinctive, as there will be a succession of phase transitions to various high baryon density QCD phases, all occurring within a time span of order milliseconds. Apart from signaling exotic QCD phases, this can provide a unique signature of gravitational collapse to a black hole.
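The naive density estimate above is simple arithmetic; the short sketch below makes it explicit with illustrative numbers (a 2 M\({}_{\odot}\) proto black hole compared against a canonical 1.4 M\({}_{\odot}\), 10 km neutron star, both treated as uniform spheres).

```python
import numpy as np

Msun = 1.989e30                    # kg
G, c = 6.674e-11, 2.998e8          # SI units

def mean_density(M_kg, R_m):
    return M_kg / (4.0 / 3.0 * np.pi * R_m**3)

# Canonical neutron star versus matter squeezed inside the Schwarzschild radius
# of a 2-Msun proto black hole (naive, uniform-density estimate from the text).
M_ns, R_ns = 1.4 * Msun, 1.0e4
M_bh = 2.0 * Msun
r_s = 2.0 * G * M_bh / c**2        # ~ 6 km

ratio = mean_density(M_bh, r_s) / mean_density(M_ns, R_ns)
print(f"r_s ~ {r_s / 1e3:.1f} km, density enhancement ~ {ratio:.1f}x")
```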
## VI Detecting phase transition occurring inside a pulsar
Here we will discuss the observational implications of phase transitions occurring inside a pulsar, in particular for the nature of its pulses. The consequences for the rotational dynamics of a phase transition (for example from nuclear matter to QGP) occurring in the core of a neutron star have been discussed in [24; 25]. The basic physics of their model is that as the pulsar rotation slows down due to radiation braking, the density of the core steadily increases (with the reduction of the centrifugal force). If the central density was initially below the critical density corresponding to a phase transition, then at some stage it crosses the critical value, leading to the phase transition. It is assumed that as the high density core grows in size (slowly, over a time scale of millions of years), it continuously converts to the high density QGP phase (even when the transition is of first order). The dependence of pressure on density will determine the manner in which the phase transition affects the moment of inertia of the neutron star, and hence the angular velocity of the star. It was emphasized in these works that the quantity of special interest is the _braking index_ defined as \(n(\Omega)\equiv\frac{\Omega\ddot{\Omega}}{\dot{\Omega}^{2}}\). It is argued that while usually \(n(\Omega)\) will be equal to the intrinsic index n of the energy-loss mechanism (with the energy loss \(dE/dt=\frac{d}{dt}(\frac{1}{2}I\Omega^{2})=-C\Omega^{n+1}\)), during phase transitions it can differ markedly from this value, possibly even by orders of magnitude. Thus, even if the changes in the moment of inertia, and hence the spin rate
changes are not directly observable (due to the very large time scale), the braking index may provide a more promising signal of phase transitions occurring inside pulsars.
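As a small illustration of how the braking index is used, the sketch below computes \(n=\nu\ddot{\nu}/\dot{\nu}^{2}\) (the same combination as \(\Omega\ddot{\Omega}/\dot{\Omega}^{2}\)) for hypothetical timing parameters. For pure magnetic dipole radiation the intrinsic index is \(n=3\); in the scenario described above, a measured value departing strongly from the intrinsic index would flag a phase transition rearranging the moment of inertia.

```python
def braking_index(nu, nudot, nuddot):
    """Observed braking index n = nu * nuddot / nudot**2 (same form in Omega)."""
    return nu * nuddot / nudot**2

# Hypothetical young-pulsar numbers (illustrative only, not a fit to real data).
nu     = 30.0       # Hz
nudot  = -3.8e-10   # s^-2

# Pure dipole spin-down (n = 3) would imply nuddot = 3 * nudot**2 / nu:
nuddot_dipole = 3.0 * nudot**2 / nu
print("dipole expectation  n =", braking_index(nu, nudot, nuddot_dipole))

# If a phase transition rearranges the moment of inertia, the measured nuddot
# can differ from the dipole value, and n departs from the intrinsic index:
print("perturbed example   n =", braking_index(nu, nudot, 2.0 * nuddot_dipole))
```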
The discussions in refs. [24; 25] primarily focused on the change in the equation of state during the phase transition. The main consequence of the phase transition was thus related to the change in the diagonal components of the moment of inertia tensor affecting the spin rate of the neutron star. However, all phase transitions necessarily produce density fluctuations. In fact, a rich spectrum of physics is encoded in the distribution of density fluctuations, relating to the nature of the phase transition (first order, second order) and in particular to the symmetry breaking pattern (if any) associated with it. Density fluctuations perturb the entire moment of inertia tensor, including its off-diagonal components. This was discussed by some of us in [19; 20] for the situation in which the phase transition occurs rapidly in a large core of the neutron star. It was pointed out that, as phase transition induced density fluctuations modify the entire moment of inertia tensor of the pulsar, the resulting off-diagonal components will lead to a wobbling of the star (in addition to any previously present) which will induce modulations of the pulse profile. Thus, it was argued that the detailed modification of the pulses carries information about the statistical nature of the density fluctuations, and hence about the precise nature of the phase transition occurring in the NS interior. An important aspect of this model is that it predicts that the off-diagonal components of the MI tensor necessarily become non-zero along with its diagonal components, with all perturbations being of similar order, due to the statistical nature of the phase transition induced density fluctuations. Thus, in this model, spin rate changes will necessarily be associated with modulations of the pulse profile. This can be used to test the model against any observed pulse modifications, since if spin rate changes occur due to de-pinning of vortices, then the dominant changes occur only in the diagonal components of the MI tensor (as all vortices are aligned along the rotation axis).
### Effects of density fluctuations
The earlier discussions in refs. [24; 25] related to a scenario of slow transition as applicable for a slowly evolving star (e.g. by accretion), with a transition which is either weakly first order, second order, or a crossover. For a strongly first order transition a different possibility arises, as discussed in [19; 20]. Strong supercooling can lead to a highly suppressed nucleation rate, so that no bubble nucleation occurs for a very long time. The transition can then occur suddenly, possibly due to some inhomogeneity, after the supercritical core becomes macroscopic in size. The phase transition thus occurs rapidly over a macroscopically large core. This scenario would be quite similar to the one discussed by Witten [8], where a very low nucleation rate could lead to macroscopic length scales, of order of meters, for bubble nucleation in the quark-hadron transition whose typical length scale would be of order fm. (The discussion in [8] assumed a first order transition. With lattice results, one now knows that for very low baryonic chemical potential the quark-hadron transition is a crossover.) A rapid phase transition occurring over a large core can occur in other situations also, for example during the early hot stages of the neutron star undergoing rapid cooling. The discussion in refs. [19; 20] related to such a general situation and argued that, along with the expected change in the moment of inertia, and hence the spin rate (which could be directly observable), density fluctuations will be produced in the entire large core region undergoing this rapid phase transition. This will then affect the off-diagonal MI components and hence induce wobbling of the neutron star.
The situation considered in refs. [19; 20] related to the case when the phase-transition-induced spin rate change remains small (say, within the range set by observations of glitches), and determined the effects of density fluctuations on the MI tensor. In a simplified, two-density picture of the phase transition occurring in the NS, it is assumed that the phase transition converts the core of radius \(R_{0}\) to the new phase with density \(\rho_{2}\), while the rest of the NS remains in the old phase with density \(\rho_{1}\). The resulting fractional change in the moment of inertia [25] is
\[\frac{\Delta I}{I}\approx\frac{5}{3}(\frac{\rho_{2}}{\rho_{1}}-1)\frac{R_{0}^{ 3}}{R^{3}} \tag{7}\]
where \(R\) is the radius of the star (taken to be spherical) in the absence of the dense core. If we consider the possibility that phase transitions could have occurred in pulsars which have been regularly monitored, then observations of glitches can be used to constrain the size of the core undergoing the phase transition. One would then constrain the largest fractional change of the moment of inertia to be less than \(10^{-5}\), corresponding to the strongest glitches observed so far. Various phase transition cases can then be considered. For example, a sample value of the change in density due to the phase transition can be taken to be about \(30\%\) (which could be appropriate for the QCD transition, where the density change can be of order one). This constrains the value of \(R_{0}\) to be less than about \(300\) m (taking \(R=10\) km). Another important case is that of the superfluid transition, where the density change can be taken to be of order of the superfluid condensation energy density (\(\approx 0.1\,MeV/fm^{3}\)) [75; 13]. In this case, the glitch constraint implies that \(R_{0}\) can be as large as \(5\) km.
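As a quick numerical illustration of the first constraint quoted above, Eq. (7) can be inverted for \(R_{0}\); the sketch below is a back-of-envelope check in which the \(30\%\) density jump, the \(10^{-5}\) glitch bound and \(R=10\) km are simply the sample values from the text:

```python
# Back-of-envelope check of Eq. (7): largest core radius R0 compatible with
# the glitch-size constraint Delta I / I < 1e-5.
delta_I_over_I_max = 1e-5      # strongest observed glitches (sample constraint)
R = 10e3                       # stellar radius in metres (sample value)

def max_core_radius(density_jump):
    """Invert Eq. (7): Delta I / I ~ (5/3) * (rho2/rho1 - 1) * (R0/R)^3."""
    return R * (delta_I_over_I_max / ((5.0 / 3.0) * density_jump)) ** (1.0 / 3.0)

# QCD-like transition with a ~30% density change in the core
print(f"R0 < {max_core_radius(0.3):.0f} m")   # ~ 270 m, i.e. of order 300 m
```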
With the size of the core undergoing the phase transition being constrained, one can then discuss effects of density fluctuations occurring in this core [19; 20]. It is useful to first get generic estimates which depend simply on the nature of the phase transition, focusing on density fluctuations due to the nucleation of bubbles. As discussed above, for a strong first-order case, a core of size a few hundred meters (or larger) can undergo a rapid phase transition. However, in general, the core region will be expected to have minute nonuniformities, even of purely statistical origin, so that one can consider a situation where many bubbles may nucleate in different parts of the supercritical core. The bubbles, nucleated with a critical size of the order of tens of \(fm\), will expand and coalesce. At the time of coalescence, the supercritical core region will consist of a close packing of bubbles of the new phase, embedded in the old phase. We carried out a simulation of random spherical bubble nucleation of radius \(r_{0}\) (at the coalescence stage) filling up a spherical core of radius \(R_{0}=300\) m. The density change in bubble nucleation is taken to be \(\sim 160\) MeV/fm\({}^{3}\), as appropriate for the QCD transition. We find a fractional change in the moment of inertia \(\Delta I/I\approx 4\times 10^{-8}\) for \(r_{0}=20\) meters, and it remains of the same order when \(r_{0}\) varies from 5 meters to 20 meters. Due to the random nucleation of the bubbles, the off-diagonal components of the moment of inertia, as well as the quadrupole moment, become nonzero. The ratios of both to the initial moment of inertia are found to be of the same order, \(\frac{Q}{I_{0}}\simeq\frac{\delta I_{ij}}{I_{0}}\simeq 10^{-11}-10^{-10}\) (\(i\neq j\)).
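A rough Monte Carlo sketch of this kind of estimate can be assembled by treating each nucleated bubble as a point-like density excess placed at random inside the core. The parameter values below are the sample numbers quoted above, while the point-mass treatment of bubbles and the uniform-sphere moment of inertia \(I_{0}\) are simplifying assumptions of this sketch, not of refs. [19; 20]:

```python
import numpy as np

# Sample parameters quoted in the text (SI units)
R_star, R_core, r_bub = 10e3, 300.0, 20.0       # star, core and bubble radii (m)
drho = 160.0 * 1.78e-30 / 1e-45                 # 160 MeV/fm^3 -> ~2.8e17 kg/m^3
M_star = 1.4 * 1.989e30                         # assumed stellar mass (kg)
I0 = 0.4 * M_star * R_star**2                   # MI of a uniform sphere (assumption)

rng = np.random.default_rng(0)
n_try = int((R_core / r_bub) ** 3)              # enough bubbles for an O(1) filling of the core
dm = drho * (4.0 / 3.0) * np.pi * r_bub**3      # mass excess carried by one bubble

# Uniform random bubble centres inside the spherical core
u = rng.random((n_try, 3)) * 2.0 - 1.0
pos = u * R_core
pos = pos[np.linalg.norm(pos, axis=1) < R_core]

x, y, z = pos.T
dI_xz = -np.sum(dm * x * z)                     # off-diagonal MI perturbation
Q_xz = 3.0 * np.sum(dm * x * z)                 # corresponding quadrupole component
dI_zz = np.sum(dm * (x**2 + y**2))              # diagonal (spin-axis) perturbation

print(f"dI_zz/I0   ~ {dI_zz / I0:.1e}")         # few x 1e-9 in this crude model (cf. ~4e-8 above)
print(f"|dI_xz|/I0 ~ {abs(dI_xz) / I0:.1e}")    # ~ 1e-11 - 1e-10, as quoted above
print(f"|Q_xz|/I0  ~ {abs(Q_xz) / I0:.1e}")     # similar order
```

The off-diagonal perturbation here arises purely from the statistical scatter of the bubble positions, which is the point emphasized in the text.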
### Density Fluctuation From Topological Defects
The formation of topological defects routinely occurs in symmetry-breaking transitions and has been extensively discussed in the literature, from the early Universe to condensed matter systems. Depending on the relevant energy scales these defects can be a source of large density fluctuations. The underlying dominant mechanism for their formation in a phase transition is the so-called Kibble mechanism [76; 77], which predicts a defect density proportional to the number density of correlation domains, with the proportionality constant determined using universal arguments depending only on the specific symmetry breaking pattern and the dimensionality of space under consideration. Thus, the random network of defects arising in any phase transition and the resulting defect distribution can be determined entirely using the symmetry-breaking pattern. For example, a random network of vortices arises from a superfluid transition. A network of domain walls and global strings arises from the spontaneous breaking of \(Z(3)\) center symmetry for the confinement-deconfinement QCD transition [78]. Some QCD transitions (e.g. the color flavor locked (CFL) phase, expected to arise at very large values of baryonic chemical potential) may give rise to only global strings, with the \(SU(3)_{c}\times SU(3)_{L}\times SU(3)_{R}\times U(1)_{B}\) symmetry broken down to the diagonal subgroup \(SU(3)_{c+L+R}\times Z_{2}\) [5]. The evolution of such a defect network shows universal characteristics. Starting with initial defect densities (basically determined using correlation length and topological probability calculations), the later evolution of string defects and domain wall defects shows scaling behavior. This has important implications for predictions of changes in the moment of inertia (hence glitches/antiglitches), the quadrupole moment, and the subsequent relaxation to the original state of rotation in a reasonably model-independent way. An important aspect of topological-defect-sourced density fluctuations relates to the manner in which the density fluctuations evolve in time. Eventually the density fluctuations decay away, leaving a uniform new phase (in the core region undergoing the phase transition). But the manner of this decay, and the time duration, depend crucially on the specific nature of the defect, and hence on the symmetry breaking pattern. For example, while bubble-generated density fluctuations decay away quickly on the time scale of bubble coalescence, the domain wall network and the string network coarsen on much larger time scales (with specific scaling exponents) which are characteristic of the specific type of defect. Thus the precise measurement of the pulsar spin rate and the modification of the pulse profile, and its time evolution, can also provide important information about the specific transition occurring inside the neutron star.
#### VI.2.1 Lattice simulation of string network
Due to the topological nature of these defects, generic features of the defect network can be determined using a simple lattice picture with the lattice size representing the correlation length. An estimate of the change in MI due to string and wall formation can be made by producing a network of defects inside the core of the pulsar, modelling the correlation domain formation on a cubic lattice with lattice spacing \(\xi\) representing the correlation length [79]. To model U(1) global string formation, each lattice site is associated with an angle \(\theta\) (randomly varying between 0 and \(2\pi\)), or with one of two discrete values \(0,1\) when modelling \(Z_{2}\) domain wall formation. (A \(Z_{2}\) domain wall is considered instead of the \(Z_{3}\) walls of QCD just for simplicity.) For the string case, the geodesic rule [76; 77; 79] is used to determine the winding of \(\theta\) on each face of the cube.
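A minimal sketch of the string-counting step of such a lattice estimate is given below (the lattice size and implementation details here are illustrative choices for this sketch, not the parameters used in refs. [19; 20; 79]):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 30                                   # lattice sites per side (illustrative)
theta = rng.uniform(0.0, 2.0 * np.pi, size=(N, N, N))

def geodesic(d):
    """Geodesic rule: map a phase difference into (-pi, pi]."""
    return (d + np.pi) % (2.0 * np.pi) - np.pi

def winding(a, b, c, d):
    """Net winding of theta around one plaquette with corners a->b->c->d->a."""
    total = geodesic(b - a) + geodesic(c - b) + geodesic(d - c) + geodesic(a - d)
    return int(np.rint(total / (2.0 * np.pi)))

# Count string piercings through all z-normal plaquettes (one orientation is
# enough for an order-of-magnitude density estimate).
pierced = 0
for i in range(N - 1):
    for j in range(N - 1):
        for k in range(N):
            w = winding(theta[i, j, k], theta[i + 1, j, k],
                        theta[i + 1, j + 1, k], theta[i, j + 1, k])
            pierced += abs(w)

n_plaq = (N - 1) * (N - 1) * N
print(f"fraction of plaquettes pierced by a string: {pierced / n_plaq:.3f}")
# Comes out close to 1/3 for uncorrelated U(1) phases on square plaquettes.
```

For uncorrelated phases this fraction is close to 1/3, i.e. roughly one string segment per few correlation-area plaquettes; in the MI estimate each such segment then carries a density excess set by the string energy per unit length.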
Starting with a correlation length of order \(fm\), simulating a network of order a few hundred meters is numerically not possible. So the estimates were made in [19; 20] by considering a spherical star of size \(R\) and a spherical core of radius \(R_{c}=\frac{0.3}{10}R\). Then, taking \(\xi=10\,fm\), \(R_{c}\) is increased from \(5\xi\) to \(400\xi\), and it was found that \(\frac{\delta I}{I_{i}}\) appears to stabilize at \(10^{-13}-10^{-14}\). Using these numerical results, it was suggested that the same fractional change in the MI may also be possible for the realistic value of \(R=10\) km, especially when one accounts for statistical fluctuations in the core. For the case of domain wall formation, one finds the fractional change in the off-diagonal components of the MI (as well as the quadrupole moments) to be larger by a factor of 40. For the case of the superfluid transition, a rapid superfluid transition can take place due to transient heating and subsequent cooling of the star. This may occur either due to another transition releasing latent heat, or due to accretion, etc. Taking the vortex energy per unit length to be \(100\,MeV/fm\) and the correlation length for vortex formation of order \(10\,fm\) [75; 13], it is found that the transient fractional change in MI induced by superfluid vortices is of order \(10^{-10}\) (compared to the net fractional change in MI of order \(10^{-5}\) as discussed in Section VI.1). It was also found that the ratios of the quadrupole moment and the off-diagonal components of MI to the net MI of the pulsar are of order \(10^{-10}\).
#### VI.2.2 Field theory simulation of strings and domain walls
The technique proposed in [19; 20] has the potential of using detailed observations of pulse modification to learn about the precise nature of the phase transition occurring inside the pulsar. For this, one would need to know the specific nature of density fluctuations for different cases and their precise time evolution. As the relevant cases here refer to quantum field theory phase transitions, one has to resort to field theory simulations to get such details. Unfortunately, with typical length scales of such QFT phase transitions being microscopic (e.g. of order fm for the QCD transition), one can only hope to do these simulations in very small spatial regions. (This is different from the case of bubble nucleation, where generic arguments of strong supercooling were invoked to determine density fluctuations over macroscopic regions even for the QCD transition. Even for the lattice modelling of defect networks as discussed above, one could consider relatively large lattice sizes.)
This was achieved in [19; 20] by studying string and wall formation in a field theory simulation of the confinement-deconfinement (C-D) QCD transition using the effective Polyakov loop model. The expectation value of the Polyakov loop, \(l(x)\), is the order parameter for the C-D transition [81; 80]. \(l(x)\) vanishes in the confined phase and is non-zero in the deconfined phase, where \(Z(3)\) center symmetry (for the SU(3) color group) is spontaneously broken as \(l(x)\) transforms non-trivially under \(Z(3)\). This gives rise to three different vacua for different values of \(l(x)\) in the QGP phase, leading to topological domain wall defects (which interpolate between different \(Z(3)\) vacua) [82], and also string defects (QGP strings) forming at the junctions of these \(Z(3)\) walls [83; 84; 78]. (Note, we have used the notations \(Z(3)\) and \(Z_{3}\) interchangeably.) Defect formation is studied using a field theory simulation of the evolution of \(l(x)\) from an initial value of zero (appropriate for the confining phase) as the system is assumed to undergo a rapid transition (quench) to the deconfined phase (as in [85]). The use of a quench is not an important point here, as the formation of defects only requires the formation of uncorrelated domains, and the size of the domains in this model has to be treated as a parameter, since it is not possible to cover length scales from km (for the star) to fm (the QCD scale). Hence these simulations are necessarily restricted to system sizes of tens of \(fm\) only. The physical size of the lattice is taken as \((7.5fm)^{3}\) and \((15fm)^{3}\). Another possible phase transition is to the so called color flavor locked (CFL) phase inside the core of a pulsar, where the QCD symmetry for three flavors (for very large baryon chemical potential the mass differences between these quarks become unimportant), \(SU(3)_{c}\times SU(3)_{L}\times SU(3)_{R}\times U(1)_{B}\), is broken down to the diagonal subgroup \(SU(3)_{c+L+R}\times Z_{2}\) [86; 5]. This transition will give rise to global strings. To roughly estimate the resulting change in MI, a simplified case is considered by removing the cubic term from the effective potential. This modification in the potential gives rise to string defects only, without any domain walls, as appropriate for the transition from, say, the QGP phase to the CFL phase, while ensuring that one has the correct energy scale for these string defects.
Fig. 6 shows how the induced fractional off-diagonal MI components and the ratio of the quadrupole moment to the MI change in time. Even though the exact values of these quantities remain to be determined for truly macroscopic core sizes (as for the lattice simulation of defect networks discussed above), it is clear that, in principle, the exact temporal profile of these quantities can be determined. Thus the resulting pulse modifications can, in principle, be predicted in complete detail. One important, completely robust feature of these results is that density-fluctuation-induced contributions to even the diagonal components of the MI tensor can be positive as well as negative (with equal probability). These changes in the diagonal components can then account for glitches as well as anti-glitches in the same unified framework. For this it is important to note that density fluctuations arise during the phase transition to a new phase, which by itself leads to a change in the spin rate. Thus, when the density fluctuations die away, the original spin rate is only partially restored (depending on the density difference between the two phases). This is precisely the qualitative behavior observed for glitches.
Figure 6: Time evolution of fractional changes in the moment of inertia as well as the quadrupole moment during the process of phase transition. (a,b), and (c,d) correspond to lattice sizes \((7.5fm)^{3}\) and \((15fm)^{3}\) respectively. Plots in (a) and (c) correspond to the confinement-deconfinement phase transition which results in the production of Z(3) walls and associated strings, while plots in (b) and (d) correspond to the transition where only string defects form which will be the case for the CFL phase. (Fig. taken from [19].)
## VII Pulse modification for random density fluctuations
As we discussed above, in principle, a detailed analysis of perturbed pulses should be able to reveal a wealth of information about the underlying density fluctuations occurring inside a pulsar. The order of the phase transition, the symmetry breaking pattern, the time scale of the phase transition, etc. will leave distinctive, characteristic perturbations on the pulses. For this, one should consider a specific source of density fluctuations, with a well defined distribution, and calculate in detail the resulting pulse modification. However, certain aspects of generic density fluctuations, in particular those resulting from any phase transition, will have qualitative implications for pulse modifications. Such density fluctuations will be statistical in nature, and will lead to random contributions to the components of the MI tensor. A general study of such fluctuations was carried out by some of us in [21], where the density fluctuations were modeled in terms of Gaussian distributed random components of the MI tensor added to the unperturbed diagonal MI tensor of the neutron star. With the form of the perturbed MI tensor prescribed, the resulting effects on pulse timings as well as the specific nature of the pulse modulations were then calculated. An important finding in [21] was that even for very tiny density fluctuations, with the resulting changes in pulsar timings being extremely small, the pulse profile modifications were found to become relatively large due to modulations resulting from the wobbling of the NS. The main reason for this was that while pulse timing changes remain proportional to the typical density fluctuation magnitude \(\epsilon\), the pulse profile modifications were found to be proportional to \(\epsilon/\eta^{2}\), where \(\eta\) is the NS deformation parameter. (\(\eta\) is typically very small, \(\sim 10^{-8}-10^{-4}\). This observation will play an important role later when we discuss possible detection of external GWs using NS deformations.) We first discuss this case of random MI components from [21]. Subsequently, we will also briefly mention how this technique can be applied to cases where the modified MI tensor is precisely known, for example, the case of specific density fluctuations occurring in a neutron star during a collision with an asteroid.
The basic physics of the calculations in [21] is to consider a reasonably symmetric initial configuration of the neutron star with homogeneous density, and then incorporate the effects of random density fluctuations on its rotational dynamics. The shape of the unperturbed pulsar was taken to be an oblate spheroid, rotating about the \(z\)-axis with angular frequency \(\omega\) and angular momentum \(L_{z}=L\) (\(L_{x}=L_{y}=0\)). The principal moment of inertia components were taken as \(I_{11}^{0}\equiv I_{1}^{0}=I_{22}^{0}\equiv I_{2}^{0}\) and \(I_{33}^{0}\equiv I_{3}^{0}=I_{0}\) with \(I_{0}>I_{1}^{0},I_{2}^{0}\). The oblateness is parameterized through \(\eta=(I_{0}-I_{1}^{0})/I_{0}\), the value of which depends on the neutron star's mass, the rigidity of the crust, the magnetic field, etc. Theoretical studies [87; 88] put the upper bound of \(\eta\) at \(\simeq 10^{-6}\), whereas the work of [89] puts the upper bound at \(10^{-4}\). Observational studies put the upper bound close to \(10^{-5}-10^{-4}\) [90], with some pulsars having \(\eta\sim 10^{-2}-10^{-3}\) [91].
Immediately after a phase transition (at \(t=0\)), density fluctuation will alter the MI tensor of the pulsar. The \(S_{0}\) frame in Fig. 7 shows the set of principal axes immediately after the phase transition. The subsequent dynamics of the perturbed pulsar are governed by the set of Euler equations, [92; 93]
\[I_{1}\dot{\omega}_{1}-(I_{2}-I_{3})\omega_{2}\omega_{3}=0 \tag{8}\] \[I_{2}\dot{\omega}_{2}-(I_{3}-I_{1})\omega_{1}\omega_{3}=0\] (9) \[I_{3}\dot{\omega}_{3}-(I_{1}-I_{2})\omega_{1}\omega_{2}=0. \tag{10}\]
where \(I_{i}\) (\(i=1,2,3\)) denotes the principal MI tensor components relative to the body fixed frame (\(S^{\prime}\) in Fig. 7) and \(\omega_{1}(t),\omega_{2}(t)\), and \(\omega_{3}(t)\) are the angular frequencies of the star with respect to the space fixed frame (which momentarily coincides with \(S^{\prime}\)). As \(\omega_{1}\) and \(\omega_{2}\) are expected to be very small compared to \(\omega_{3}\simeq\omega\), one can write the equation of motion for \(\omega_{1}\) as
\[\ddot{\omega_{1}}+\Omega^{2}\ \omega_{1}=0\,, \tag{11}\] \[\text{where},\qquad\Omega=\omega_{3}\left[\frac{(I_{3}-I_{1})(I_ {3}-I_{2})}{(I_{1}I_{2})}\right]^{1/2}\,. \tag{12}\]
Here, \(\Omega\) is the precession frequency due to the perturbation. We consider the situation when perturbations are tiny in comparison to the oblateness parameter (\(\epsilon<<\eta\)), hence the condition that \(I_{3}>I_{1},I_{2}\) is still valid (as we will see below). The solution of the above equation is then given by
\[\omega_{1}(t)=A\ \cos(\Omega\ t)+B\ \sin(\Omega\ t). \tag{13}\]
\(A\) and \(B\) are two arbitrary constants determined from the initial conditions. Similarly using Eq. (13) and Eq. (9), one obtains the time evolution of \(\omega_{2}\)
\[\omega_{2}(t)=k[A\sin(\Omega\ t)-B\cos(\Omega\ t)]. \tag{14}\]
where the factor \(k\) is given by
\[k=\left[\frac{I_{1}(I_{3}-I_{1})}{(I_{2}(I_{3}-I_{2}))}\right]^{1/2}. \tag{15}\]
The time \(t=0\) is assumed to be the onset of the pulsar's precession, immediately after the completion of the phase transition. Denoting the angular velocities at \(t=0\) by \(\omega_{1}^{0}\) and \(\omega_{2}^{0}\) respectively, Eqs. (13) and (14) can be rewritten as
\[\omega_{1}(t) =\omega_{1}^{0}\cos(\Omega\ t)-\frac{\omega_{2}^{0}}{k}\sin( \Omega\ t) \tag{16}\] \[\omega_{2}(t) =k\omega_{1}^{0}\sin(\Omega\ t)+\omega_{2}^{0}\cos(\Omega\ t). \tag{17}\]
Here, the arbitrary constants \(\omega_{1}^{0}\) and \(\omega_{2}^{0}\) can be fixed by initial conditions.
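This free-precession solution is straightforward to verify numerically. The sketch below integrates the Euler equations (8)-(10) for one illustrative choice of \(\eta\), \(\epsilon\) and initial transverse angular velocity (these particular numbers are arbitrary choices for the check, not values taken from [21]), and recovers the precession frequency of Eq. (12), which for this near-symmetric configuration is \(\simeq\eta\,\omega\):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters for the check (not taken from [21])
I0, eta, eps = 1.0, 1e-3, 1e-8
omega = 2.0 * np.pi * 1000.0            # millisecond-pulsar spin rate (rad/s)
I1 = I2 = I0 * (1.0 - eta)              # Eq. (21) with the small eps_i terms dropped
I3 = I0

def euler(t, w):
    """Torque-free Euler equations, Eqs. (8)-(10)."""
    w1, w2, w3 = w
    return [(I2 - I3) * w2 * w3 / I1,
            (I3 - I1) * w1 * w3 / I2,
            (I1 - I2) * w1 * w2 / I3]

w10 = np.sqrt(2.0) * (eps / eta) * omega   # small transverse component, of order (eps/eta)*omega
t_end = 5 * 2 * np.pi / (eta * omega)      # a few precession periods
sol = solve_ivp(euler, (0.0, t_end), [w10, 0.0, omega],
                rtol=1e-10, atol=1e-14, dense_output=True)

# Extract the precession frequency from the zero crossings of omega_1(t)
t = np.linspace(0.0, t_end, 200_000)
w1 = sol.sol(t)[0]
crossings = t[np.where(np.diff(np.sign(w1)) != 0)[0]]
Omega_num = np.pi / np.mean(np.diff(crossings))
print(f"Omega (numerical) = {Omega_num:.3f} rad/s,  eta*omega = {eta * omega:.3f} rad/s")
```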
### Initial conditions and analytical estimates of various parameters
Immediately after the phase transition, the orientation of the principal axes (\(S_{0}\) frame) is shown in Fig. 7. The standard polar angle and the azimuthal angle of \(z_{0}\) axis with respect to the \(S\) frame are denoted by \(\theta_{0}\) and \(\phi_{0}\). These angles are obtained by diagonalizing the perturbed moment of inertia tensor and finding the eigenvalues and the corresponding set of eigenvectors (see [21] for details). In the absence of external torque, the angular momentum of the pulsar must be conserved. Thus, after the phase transition, the components of the angular momentum along \(x_{0},y_{0}\), and \(z_{0}\) are given by
\[L_{x_{0}}(t=0) =I_{1}\omega_{1}^{0}=-L\theta_{0}\cos\phi_{0} \tag{18}\] \[L_{y_{0}}(t=0) =I_{2}\omega_{2}^{0}=-L\theta_{0}\sin\phi_{0}\] (19) \[L_{z_{0}}(t=0) =I_{3}\omega_{3}^{0}=L. \tag{20}\]
Because \(\omega_{3}\) is approximately constant and the angle \(\theta_{0}\) is small for tiny density fluctuations, one can use the approximation \(L/I_{1}\simeq L/I_{3}\simeq\omega\). The above set of equations can be used now to express Eqs. (13) and (14) in terms of \(\theta_{0}\) and \(\phi_{0}\). Since the phase transition fluctuations are small, the perturbed MI tensor can be written as
\[I_{1,2} =I_{0}(1-\eta+\epsilon_{1,2})\,, \tag{21}\] \[\text{and}\qquad I_{3} =I_{0}(1+\epsilon_{3})\,, \tag{22}\]
with \(\mathcal{O}(\epsilon_{i})\simeq\mathcal{O}(\epsilon)\) for \(i=1,2,3\). Therefore the precession frequency (Eq. (12)) can be written as
\[\Omega\simeq\left(\frac{\eta+\epsilon}{1-\eta+\epsilon}\right)\omega\simeq \eta\ \omega. \tag{23}\]
Here it is assumed that \(\epsilon,\ \eta\ll 1\) and \(\epsilon\ll\eta\). The initial angle \(\theta_{0}\) is obtained by diagonalizing the perturbed MI tensor \(I_{ij}\). For an order of magnitude estimate, the perturbation \(\epsilon_{ij}\equiv\delta I_{ij}/I_{0}\) can be taken as \(\epsilon_{ij}=\epsilon\), i.e. \(\delta I_{ij}=\epsilon I_{0}\). This results in
\[\cos\theta_{0}=\left(1+2\left(\frac{\epsilon I_{0}}{I_{3}-I_{1}-\epsilon I_{ 0}}\right)^{2}\right)^{-1/2}. \tag{24}\]
Assuming \(\epsilon\ll\eta\), one gets \(\theta_{0}\simeq\sqrt{2}\left(\frac{\epsilon}{\eta}\right)\). By a similar argument, one obtains the factor \(k=1+\epsilon/2\eta\). Thus, Eqs. (16) and (17) can be expressed as
\[\omega_{1}(t) =-\omega\theta_{0}\left[\cos(\Omega t+\phi_{0})+\frac{\epsilon}{ 2\eta}\sin\phi_{0}\sin(\Omega t)\right] \tag{25}\] \[\omega_{2}(t) =-\omega\theta_{0}\left[\sin(\Omega t+\phi_{0})+\frac{\epsilon}{ 2\eta}\cos\phi_{0}\sin(\Omega t)\right]. \tag{26}\]
The corresponding rotational angles \(\theta_{1}(t)\) and \(\theta_{2}(t)\) can also be re-expressed as
\[\theta_{1}(t) =-\frac{\omega\theta_{0}}{\Omega}\left[\sin(\Omega t+\phi_{0})- \frac{\epsilon}{2\eta}\sin\phi_{0}\cos(\Omega t)\right] \tag{27}\] \[\theta_{2}(t) =\frac{\omega\theta_{0}}{\Omega}\left[\cos(\Omega t+\phi_{0})+ \frac{\epsilon}{2\eta}\cos\phi_{0}\cos(\Omega t)\right]. \tag{28}\]
Putting in the values of \(\Omega\) and \(\theta_{0}\), the amplitudes \(\omega_{m}\) (from Eqs. (25) and (26)) and \(\theta_{m}\) (from Eqs. (27) and (28)) are determined as
\[\omega_{m} =\omega\theta_{0}\simeq\sqrt{2}\left(\frac{\epsilon}{\eta}\right) \omega\,, \tag{29}\] \[\theta_{m} =\left(\frac{\omega}{\Omega}\right)\theta_{0}\simeq\sqrt{2}\left( \frac{\epsilon}{\eta^{2}}\right)\,. \tag{30}\]
Thus for \(\eta=10^{-3}\), the oscillation amplitude \(\theta_{m}\) is of order \(10^{6}\ \epsilon\simeq 1^{\circ}\) for \(\epsilon=10^{-8}\). It was shown in ref. [21] that the above analytical estimates approximately match the results obtained from the simulation.
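For reference, Eqs. (29) and (30) can be evaluated directly for the two parameter sets that will be used in the simulations below (pure arithmetic, no additional physics input):

```python
import numpy as np

for eta, eps in [(1e-2, 1e-5), (1e-3, 1e-8)]:
    omega_m_over_omega = np.sqrt(2.0) * eps / eta    # Eq. (29), amplitude relative to omega
    theta_m = np.sqrt(2.0) * eps / eta**2            # Eq. (30), in radians
    print(f"eta={eta:.0e}, eps={eps:.0e}: "
          f"omega_m/omega ~ {omega_m_over_omega:.1e}, "
          f"theta_m ~ {np.degrees(theta_m):.2f} deg")
```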
### Numerical Results
Here we briefly review the simulation results of Ref. [21]. There, two sets of values \((\eta,\epsilon)=(10^{-2},10^{-5})\) and
Figure 7: An oblate shape unperturbed pulsar is initially (i.e. before any phase transition) rotating about the z-axis relative to a space-fixed frame S (solid black lines). The red dotted lines show the principal axes (\(x_{0}\), \(y_{0}\), \(z_{0}\)) of \(S_{0}\) frame immediately after the phase transition (at \(t=0\)). The body-fixed \(S^{\prime}-\)frame at any arbitrary time \(t>0\) is shown with blue dashed lines (taken from [21]).
\((10^{-3},10^{-8})\) were chosen to observe the impact of the perturbation on pulse modulations (see Table 1 in [21] for the values of other parameters). Note that Eqs. (25) and (26) can be approximately written as \(\omega_{1,2}\sim\omega_{m}\cos(\Omega t)\). Since \(\Omega\sim\eta\omega\) (Eq. (23)), the time period \(T_{\Omega}=T_{\omega}/\eta\). Thus, for a millisecond pulsar (\(T_{\omega}=10^{-3}\) s), \(T_{\Omega}=0.1\) sec for \(\eta=10^{-2}\) and 1 sec for \(\eta=10^{-3}\). This matches the numerical simulation results in Ref. [21]. The results are shown in Figs. 8 and 9.
Other than the above (first) modulation, another (second) modulation is also expected, since \(\omega_{1}\) and \(\omega_{2}\) also oscillate about the \(x\) and \(y\) axes, respectively. From the frequency oscillation amplitude \(\omega_{m}\sim(\epsilon/\eta)\omega=(2\pi\epsilon/\eta)1000\) /sec, the second modulation time scale can be approximately determined as \(T_{m}\simeq 10^{-3}\) (\(\eta/\epsilon\)) sec. Thus, the time scale \(T_{m}\) varies from a few seconds [for \((\eta,\epsilon)=(10^{-2},10^{-5})\)] to a few hundred seconds [for \((\eta,\epsilon)=(10^{-3},10^{-8})\)]. Figures 8 and 9 indeed show that there is a second modulation, though the numerical results show that its time scale is larger than the analytical estimate \(T_{m}\). (For clarity, in Fig. 9, only the top part of the pulses is shown compared to Fig. 8.) Of course, considering the complexity of the rigid body dynamics, and the order of magnitude estimates used, one can expect uncertainty in the analytical estimates of the concerned quantities.
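The two modulation time scales quoted above amount to simple arithmetic, shown here for a millisecond pulsar with \(T_{\omega}=10^{-3}\) s (an assumption consistent with Fig. 8):

```python
T_omega = 1e-3                                  # spin period of a millisecond pulsar (s)
for eta, eps in [(1e-2, 1e-5), (1e-3, 1e-8)]:
    T_Omega = T_omega / eta                     # first (precession) modulation period
    T_m = 1e-3 * eta / eps                      # second modulation time scale estimate
    print(f"eta={eta:.0e}, eps={eps:.0e}: T_Omega ~ {T_Omega:.1f} s, T_m ~ {T_m:.0f} s")
```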
An important feature of these results can be termed _the memory effect_. This relates to the fact that, even after the density fluctuations fade away, leading to vanishing off-diagonal components and thereby restoring the original rotation axes, and hence the original pulse profile, there will in general be a net shift of the angular position of the pulse. (This is apart from the effect of any possible net change in the spin rate. So, even if the net spin rate change remains unobservable, this net shift of the angular position may still be observable, as it originates from the transient pulse modification during the intermediate stage of wobbling of the star.) Therefore, a net, residual shift of the angular position of the pulse could signal a missed phase transition.
### Pulse modification due to asteroid impact on neutron star
Impacts of asteroids, comets, etc. on astrophysical bodies frequently occur, and with the intense gravity of neutron stars, such impacts have dramatic effects. For example, it has been proposed that certain specific gamma ray burst events may have their origin in the impact of a solid body (comet or asteroid) with mass of order \(10^{18}\) g colliding with the NS surface [94]. The tidal distortion of the body during the last stages of the impact, together with the intense magnetic field of the neutron star, leads to strong compression between magnetic longitudes. The interaction of this with the NS surface material, with exploding material falling back at magnetic conjugate points, was studied in detail in [94], and it was proposed that it could explain specific gamma ray burst events. In view of the discussions in previous sections, it becomes natural to expect that such a collision occurring on the surface of a pulsar should lead to a perturbation of the MI tensor of the pulsar, and hence should leave imprints on the pulses. This interesting possibility is explored in [95]. A special feature of this case is that one can determine the exact modification of the MI tensor of the pulsar. This is done by following the detailed impact dynamics of the body on the NS surface as calculated in [94]. With the perturbed MI tensor known, it is then straightforward to apply the technique discussed above, and determine the detailed nature of the perturbed pulses. One important unknown in this case is the NS deformation parameter \(\eta\), which determines the pulse modulations (as discussed above). As the resulting change in the MI tensor will have a very small magnitude in this case (with \(\epsilon\) of order \(10^{-15}\) for a \(10^{18}\) g body impacting on a solar mass NS), observation of pulse modulations will only be
Figure 9: (taken from [21]) Same plot as Fig. 8, but for \((\eta,\epsilon)=(10^{-3},10^{-8})\). An apparent ‘kink’ in the top figure just below 500 s is smoothed out with improved resolution (bottom-left). For clarity, here only the top part of the pulses is shown compared to Fig. 8.
Figure 8: The figure shows the time evolution of the pulse profile \(I(\theta_{p})/I_{0}\) of a millisecond pulsar for \((\eta,\epsilon)=(10^{-2},10^{-5})\). The top plot shows the two different modulations at different time scales. Bottom-left shows the same plot with better resolution. Bottom-right shows a few pulses for a typical millisecond pulsar. The figures are taken from [21].
possible for NSs with very small values of \(\eta\). Still, this suggests an interesting technique to probe the impact of bodies on pulsar surfaces by observing pulse modifications. In particular, for any proposed explanation of gamma ray bursts etc. in terms of such impacts, one would expect an accompanying observation of pulse modification (depending on the value of \(\eta\)).
## VIII Gravitational waves due to phase transition induced density fluctuations
Here we discuss how phase transitions occurring inside neutron stars may provide a new _high frequency_ source of gravitational waves through density fluctuation induced rapidly changing quadrupole moment of the star.
Neutron stars are considered to be among the most sought after sources of gravitational waves. As the emission mechanism from such a source is governed by its internal structure, searches for gravitational waves from such a source can provide valuable information. For example, through a series of works, Thorne et al. [96; 97; 98; 99] developed a non-radial pulsation theory for a general relativistic static neutron star model. The authors suggested that various mechanisms can excite quasinormal modes of the parent neutron star and may result in gravitational wave emission. The mechanisms capable of exciting quasinormal modes include flaring activity, the formation of hypermassive neutron stars following the coalescence of binary neutron stars, etc. The mechanism associated with a timing glitch can also excite quasinormal modes in the parent pulsar. With this motivation, searches for the emission of GWs from isolated pulsars began even before the first ever direct detection of GWs from a binary black hole merger. In this context, an attempt at a GW search associated with the timing glitch in the Vela pulsar in August 2006 [47] is worth mentioning. Although the above search produced no detectable GWs, with the improving sensitivity of the detectors, continued attempts in this direction are believed to yield fruitful results soon.
In the literature, there are a few other theoretical studies [44; 45; 46; 87; 100] which have explored the feasibility of emission of gravitational waves from isolated pulsars (see the review [101] for more details). These studies considered various mechanisms for GW emission. These include the existence of deformations of neutron stars in the form of crustal mountains on the surface [44], or a permanent triaxiality [45] of the star. This mechanism, as mentioned above, produces monochromatic GWs, the frequency of which is determined by the spin frequency of the star. There was also a suggestion that crustquakes (see Section V) may have a role in generating GWs [46; 102]. In reference [102], the authors considered the crustquake as a trigger which can excite various oscillatory modes of the pulsar, causing emission of GWs. The role of crustquakes as a possible source of GWs was also suggested in [46], where the authors noticed that the sudden change of the oblateness corresponds to a change of the quadrupole moment within a very small time scale, and a crustquake could indeed cause bursts of GWs. In the context of bursts of GWs from an isolated pulsar, we will discuss the case considered in [19] of density fluctuations induced during phase transitions. These density fluctuations were found to perturb the entire MI tensor of the neutron star. Along with that, they also necessarily produce a non-zero quadrupole moment. With the very short time scale associated with the phase transition, e.g. for a QCD phase transition, it was suggested in [19] that the rapidly evolving quadrupole moment can provide a new source of gravitational waves.
In [19], the authors have discussed how density perturbations due to bubble nucleation or formation of topological defects during phase transitions change the moment of inertia (MI) tensor of the star, in general with the addition of off diagonal components in the MI tensor. In other words, the rotation axis of the star no longer remains aligned with one of its principal axes. As the shape of the star diverges from sphericity during phase transition, it develops a quadrupole moment \(Q\) also. The quadrupole moment tensor \(Q_{ij}\) (\(i,j=1,2,3\)) varies rapidly during phase transition, thereby giving rise to quadrupolar gravitational radiation. In the weak field limit, the power emitted by the star through gravitational radiation is given by [103]
\[\frac{dE}{dt} = -\frac{32G}{5c^{5}}(\Delta Q)^{2}\omega^{6} \tag{31}\] \[\approx -(10^{33}J/s)\left(\frac{\Delta Q/I_{0}}{10^{-6}}\right)^{2} \left(\frac{10^{-3}s}{\Delta t}\right)^{6}\,.\]
Here, we use \(\Delta Q\) to represent a generic value of \(\Delta Q_{ij}\), which is the change in the \(ij^{\rm th}\) component of the quadrupole moment tensor during a time interval \(\Delta t\), and \(I_{0}\) is the initial moment of inertia of the spherical NS. (The above expressions were derived in [103] for a periodically varying quadrupole moment with angular frequency \(\omega\). As our aim here is only to point out the possibility of phase-transition-induced density fluctuations being a new source of GWs, we use these expressions with the typical time scale of the phase transition playing the role of the period associated with \(\omega\). The important factors to be noted above are \((\Delta Q)^{2}\) and \(\omega^{6}\), leading to a \((\Delta t)^{-6}\) dependence for the GW power, with \(\Delta t\) being the time scale of the phase transition.) According to [19], \(\frac{\Delta Q_{ij}}{I_{0}}\) lies within the range \(10^{-11}-10^{-10}\) for a phase transition through bubble nucleation and within the range \(10^{-14}-10^{-10}\) when inhomogeneities in the density of the star arise due to the formation of topological defects (see for example Table 1 of [19]). Though this value is much lower than the typical value of \(10^{-6}\) used for deformed neutron stars, the fact that the change in \(Q\) occurs over a time scale of microseconds (as a conservative estimate) makes the power radiated by gravitational waves in this situation significant. An estimate of the strain amplitude \(h\) arising from a pulsar at a distance \(r\) from the observer is given by
\[h = \frac{4\pi^{2}G\Delta Qf^{2}}{c^{4}r} \tag{32}\] \[\approx 10^{-24}\left(\frac{\Delta Q/I_{0}}{10^{-6}}\right)\left(\frac{10^{-3}s}{\Delta t}\right)^{2}\frac{1kpc}{r}.\]
Taking \(\frac{\Delta Q}{I_{0}}\approx 10^{-10}\), \(\Delta t\approx 10^{-6}-10^{-5}\) s, and \(r=1kpc\), they get \(h\approx 10^{-24}-10^{-22}\). In reality, \(\Delta t\) could be smaller, thereby enhancing \(h\) and \(dE/dt\). Note that as the gravitational wave emission is extremely short lived, net energy lost by the star is negligible compared to its mass.
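As a cross-check of these numbers, the strain can be evaluated directly from the first line of Eq. (32); the moment of inertia of a uniform \(1.4\,M_{\odot}\), 10 km star used below is an assumption of this sketch, not a value quoted in [19]:

```python
import numpy as np

G, c, kpc = 6.674e-11, 2.998e8, 3.086e19            # SI units
I0 = 0.4 * 1.4 * 1.989e30 * (10e3) ** 2             # ~1e38 kg m^2, uniform-sphere assumption
r = 1.0 * kpc

for dQ_over_I0, dt in [(1e-10, 1e-5), (1e-10, 1e-6)]:
    dQ = dQ_over_I0 * I0
    f = 1.0 / dt                                     # phase-transition time scale as frequency
    h = 4.0 * np.pi**2 * G * dQ * f**2 / (c**4 * r)  # first line of Eq. (32)
    print(f"dQ/I0={dQ_over_I0:.0e}, dt={dt:.0e} s  ->  h ~ {h:.1e}")
# Recovers h ~ 1e-24 - 1e-22, as quoted above.
```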
In conclusion, transient changes in the MI of a pulsar due to phase-transition-induced density perturbations give rise to a non-vanishing, rapidly varying quadrupole moment and hence to quadrupolar gravitational radiation. In general, the MI tensor in this case gets off-diagonal contributions that cause wobbling of the pulsar and a consequent modification of the peak pulse intensity. This is a special feature of the model of [19]. Through observation of the modulation of the peak pulse intensity and of gravitational waves, it is possible to identify the specific nature of the phase transition occurring within the core of the star.
## IX Pulsar as a Weber detector of gravitational waves
So far we have discussed the effects of the internal dynamics of a neutron star leading to density fluctuations and changes in the nature of pulses from the pulsar. In this section, and the next, we will discuss the effects of an external gravitational wave on the neutron star configuration. It is natural to be skeptical, as the expected deformations of the NS will be extremely tiny. However, at the same time we also recall the impressive accuracy of pulsar timing observations, better than 1 part in \(10^{15}\). Further, as discussed above in Section VII, possible changes in the pulse profile from the induced wobbling of the pulsar may be large, even if pulse timing changes remain very small. We will put forward arguments here that pulsars can effectively act as _remotely stationed_ resonant Weber detectors whose GW perturbed signals may be observable on earth. Such a possibility is worth exploring, even if it requires challenging observations. Many gravitational wave detectors (like LIGO/Virgo) are being set up around the globe, in order to be able to detect gravitational waves with good localization of the source in the sky. This will be complemented by future space-based detectors searching for gravitational wave sources with a very wide range of wavelengths and strengths. However, even all these near-earth detectors will be limited in their scope, as most of the powerful gravitational wave sources occur very far away, and the ability to triangulate the location of the sources will also be limited. Clearly, if one could have a family of detectors placed far away in space, so that their signals could be monitored on earth, that would immensely boost our ability to detect and identify GW sources.
The discussion in this section is taken primarily from ref. [22]. The basic physics proposed here is, to put it simply, to take the entire neutron star as a _resonant Weber detector_. While for conventional Weber detectors [104; 105; 106; 107] GW induced deformations are detected and converted into electrical signals, for neutron stars (pulsars) the deformations induced by external GWs will be imprinted on the detailed nature of the pulses, which can be monitored on earth. As for the conventional Weber detector, resonance will play a crucial role for pulsar Weber detectors also, leading to amplitude enhancement and, more importantly, the _ringing effect_ [107], which will allow folding of a large number of pulses to tremendously improve the signal to noise ratio.
It is important to make a distinction between the proposal here of using the pulsar itself as a Weber detector, and the conventional technique of the pulsar timing array (PTA) for GW detection [108]. For a PTA, one monitors the pulse arrival times from a network of pulsars, and looks for gradual changes due to the passage of very low frequency gravitational waves in the intervening region. By its very nature, the PTA technique is limited to extremely low frequency sources with frequencies of order \(10^{-6}\) Hz or less. Such GWs are expected to arise from supermassive black hole mergers or exotic objects such as cosmic strings. In contrast, for the technique discussed here, a single pulsar acts as a GW detector. In principle, even a single pulsar Weber detector can give some information about the GW source direction through changes in its spin rate and pulse profile. In the context of the proposal made here, we note that it was proposed long ago [109] that, with an array of seismometers, the whole earth may be used as a gravitational wave detector for low frequency GWs. Proposals have also been made that gravitational waves may be detected due to their effects on nearby stars, in particular on the solar acoustic modes (helioseismology and astro-seismology) [110; 111; 112]. There is some similarity in spirit between these proposals and the pulsar Weber detector proposed here. However, to our knowledge, detection of GWs using their effects on the pulse modifications of a pulsar [22] had not been discussed earlier.
Consider a monochromatic gravitational wave with wavelength \(\lambda\) from a far away GW source, reaching a neutron star. Passing of GW means periodic changes in the Riemann curvature tensor \(R_{\mu\nu\lambda\rho}\) which will induce deformations of any body in its path. For a neutron star, one can calculate changes in its quadrupole moment \(Q_{ij}\) induced by a GW in the static limit (essentially, large wavelength limit for the GW compared to the neutron star size). It can be written in the following form [113]:
\[Q_{ij}=-\lambda_{d}E_{ij}. \tag{33}\]
Here, \(E_{ij}=R_{i0j0}\) is the external tidal field. \(\lambda_{d}\) is called the tidal deformability,
\[\lambda_{d}=\frac{2}{3}k_{2}\frac{R^{5}}{G}. \tag{34}\]
Here R denotes the equilibrium (undisturbed) radius of the neutron star and \(k_{2}\) is called the second Love number. There have been theoretical estimates of this. For polytropic pressure-density relation \(P=K\rho^{1+\frac{1}{n}}\), where \(K\) is a constant and \(n\) is the polytropic index, numerical results (for \(0.5\leq n\leq 1.0\), and \(0.1\leq(M/R)\leq 0.24\)) can be fitted by the formula [113].
\[k_{2}\simeq\frac{3}{2}(-0.41+\frac{0.56}{n^{0.33}})(\frac{M}{R})^{-0.003} \tag{35}\]
Remarkably, the BNS merger event detected by LIGO/Virgo [18] has been used to put a direct observational constraint on the value of \(k_{2}\) to lie within the range, \(k_{2}\simeq 0.05-0.15\), and we will be using values of \(k_{2}\) within this allowed range.
For a GW travelling along z direction, the tidal field \(E_{ij}\) can be calculated for the two polarizations (\({}^{\prime}+^{\prime}\), and \({}^{\prime}\times^{\prime}\) polarizations) in the transverse traceless (TT) gauge. We consider the \({}^{\prime}+^{\prime}\) polarization and denote the gravitational wave strain amplitude by \(h\) for this polarization. The resulting tidal field amplitude is given by [114],
\[E_{xx}=-E_{yy}=\frac{2\pi^{2}hc^{2}}{\lambda^{2}}, \tag{36}\]
We take the neutron star to have a spherical shape and mass \(M\). This tidal field of the gravitational wave will then induce a quadrupole moment tensor as given above in Eq.(33). Taking this deformation to be of an ellipsoidal shape, one can estimate the resulting change in the moment of inertia tensor of the neutron star.
\[\frac{\Delta I_{xx}}{I}=-\frac{\Delta I_{yy}}{I}\simeq\frac{k_{2}}{3}\frac{R^ {3}c^{2}}{GM\lambda^{2}}20h \tag{37}\]
For sample values \(M=1.0M_{\odot}\), \(R=10\) km, and \(\lambda\) corresponding to a gravitational wave of 1 kHz frequency (typical for a GW from an astrophysical source, as detected by LIGO/Virgo), Eq. (37) gives
\[\frac{\Delta I_{xx}}{I}\simeq 10^{-2}h. \tag{38}\]
Here we have used a sample value \(k_{2}=0.1\), within the range allowed by the BNS merger event [18]. This event had the GW source about 130 million light years away from earth, with a peak signal strength \(h\simeq 10^{-19}\). The advantage of the proposed pulsar Weber detector is that it could be anywhere in space. Most optimistically, one can even imagine that a pulsar was about 1 light year away from the BNS undergoing merger. (Such a possibility can be considered since many neutron stars and pulsars arise in globular clusters in our galaxy, so one could imagine the same to be true for other galaxies, i.e. for this particular BNS event, though extragalactic pulsar observations are not very frequent.) The value of \(h\) at the location of that pulsar will be about \(h\simeq 10^{-11}\). As the spin rate change for the pulsar is directly related to the change in its MI, the change in the spin rate of the pulsar will be
\[\frac{\Delta\nu}{\nu}=\frac{\Delta I}{I}\simeq 10^{-13}. \tag{39}\]
where \(\Delta I\) represents the change in the relevant component of the MI. Given the extreme levels of accuracy of measurements of pulsar signals, this magnitude of spin rate change is well within observational reach. It is important to note that for a generic direction of propagation of the GW, the MI tensor will develop non-zero off-diagonal components also. This will induce wobbling of the pulsar, leading to the modulation of the pulse intensity profile. Exactly this type of modulation was discussed above in Section VII. One important result from that section is that even for very tiny changes in the spin rate due to changes in the diagonal components of the MI tensor, the induced wobbling (and hence changes in the pulse profile) may be much larger, especially for neutron stars with small deformation parameter \(\eta\). Thus profile changes may become more important as a signal of a GW passing through a pulsar. However, in this section we will keep focusing on the spin rate change, as the discussion of profile changes is much more involved. The discussion of Section VII can be straightforwardly applied to the present case of GW induced changes in the MI tensor.
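The chain of estimates above can be reproduced in a few lines; the stellar mass, radius and \(k_{2}=0.1\) are the sample numbers used in the text, and the 1 light year pulsar-source distance is the optimistic assumption discussed above:

```python
import numpy as np

G, c = 6.674e-11, 2.998e8
ly = 9.461e15                                   # light year in metres
M, R, k2 = 1.0 * 1.989e30, 10e3, 0.1            # sample NS parameters from the text
f_gw = 1.0e3                                    # GW frequency (Hz)
lam = c / f_gw                                  # GW wavelength

# Strain at the pulsar: scale the BNS-merger peak strain (~1e-19 at ~130 Mly)
# down to a pulsar assumed to sit 1 ly from the merger (h ~ 1/r).
h_earth, r_earth, r_pulsar = 1e-19, 130e6 * ly, 1.0 * ly
h_pulsar = h_earth * r_earth / r_pulsar
print(f"h at the pulsar   ~ {h_pulsar:.0e}")    # ~ 1e-11

# Eq. (37): fractional MI change per unit strain, and the induced spin-rate change
coeff = (k2 / 3.0) * R**3 * c**2 / (G * M * lam**2) * 20.0
print(f"dI/I per unit h   ~ {coeff:.0e}")             # ~ 0.5e-2, same order as Eq. (38)
print(f"dnu/nu            ~ {coeff * h_pulsar:.0e}")  # a few x 1e-14, i.e. of order Eq. (39)
```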
A crucial requirement for the Weber detector is that it needs to work at resonance. The resulting increase in amplitude is not too large, as GW signals are short pulses, limiting the resonant enhancement of the amplitude. However, at resonance, the solid detector exhibits the so called _ringing effect_. The ringing effect refers to the continued vibration of the detector in the resonant mode for a long time even after the GW pulse has completely passed through the detector. This happens because at resonance the energy absorption from the wave is highly efficient. That energy has to be dissipated in sound and heat, and until then the detector will keep vibrating. For example, for a GW pulse of duration of a few ms, at resonance, the vibrations of the Weber bar can continue for a time of order 10 min [106; 107]. With the particular template for the pulse, a large number of pulses, getting repeated during this ringing, can be folded. This leads to a tremendous increase in the signal to noise ratio. In the same way, for the pulsar Weber detector working at resonance, one would expect the pulsar to continue _ringing_ for a long time after the passing of the GW. This should allow folding of many pulses to separate out this _ringing_ signal. In fact, this is the standard way in which a large number of regular pulses from the pulsar are folded, leading to the impressive accuracy of pulse timings. The difference is that regular pulses from the pulsar are periodic, and hardly change on relevant time scales (apart from occasional glitches), so pulse folding is standard. The ringing of the pulsar Weber detector shows that even for a short GW pulse, a similar folding of a large number of pulses may be possible.
There is a wide range of resonant frequencies for neutron stars, with frequencies from a few Hz all the way up to 20 kHz [115]. For specific modes, the resonant frequencies of the NS can be in the range of 100 Hz to 1 kHz, which is precisely the range relevant for a typical BNS merger GW source, and also for typical black hole mergers (with masses within tens of solar masses).
### The quality factor \(\mathbf{Q}\) for the neutron star matter
The effectiveness of a Weber detector crucially depends on the quality factor \(\mathbf{Q}\) of the detector material. Very high values of \(\mathbf{Q}\) will lead to a better enhancement of the vibration amplitude. More importantly, the energy dissipated per vibration cycle will be low, prolonging the ringing effect, thereby allowing folding of a much larger number of pulses for a better signal to noise ratio. It has been argued that various viscous effects may not be very important even on time scales of the orbital decay time [116]. Thus it is entirely possible that the quality factor \(\mathbf{Q}\) for the neutron star interior may be very large, possibly much larger than for the materials available on earth for conventional Weber detectors. Thus, one needs to know the \(\mathbf{Q}\) factor for the NS interior. This is a new challenge for QCD calculations: apart from the well known problems of determining the equation of state and transport coefficients like shear viscosity, the quality factor \(\mathbf{Q}\) for different phases of QCD (at least for the hadronic and the QGP phases) needs to be calculated. It is illuminating to quote here from the review article on _Detection of gravitational waves_ [117]. In the section on antenna materials for the resonant-mass (Weber) detectors, it is stated,
_"An ideal resonant bar would consist of a piece of nuclear matter, with high density and a velocity of sound comparable to the velocity of light! Since this is not available except in neutron stars, we must find a form of molecular matter which, to maximize coupling to gravitational waves, combines high velocity of sound \(v_{s}\), and high density \(\rho\). To reduce the thermal noise we require a low acoustic loss \(Q^{-1}\)"._
Going with the spirit of the above quotation, the arguments presented in this section propose that neutron star material may indeed be available for use in a Weber detector, by using the entire neutron star itself as a pulsar Weber detector.
## X Re-visiting past gravitational waves via pulsars as Weber detectors
An interesting outcome of the possibility of using pulsars as Weber gravitational wave detectors spread out in the cosmos is that they can be used to revisit past GW events, like collisions of black holes, collisions of neutron stars, supernova explosions, etc., again and again. These past events are taken to be such that their signals have already passed through Earth. They might have been detected by LIGO/Virgo or their signals might have been missed. Supernovae within our galaxy whose existence has been deduced from astronomical data collected around the world fall within the latter category. This novel idea has been proposed in [23]. Figure 10 schematically represents the situation we are talking about. Gravitational waves from the source travel directly to Earth via path C, but they also travel to a pulsar via path B, transiently inducing a quadrupole moment in the pulsar and changing its moment of inertia and hence its pulse frequency and profile. The modified pulses reach earth via path A. If \(r_{A,B,C}\) are the distances along paths A, B, C respectively, then the path length difference between the direct path C and the path A+B via a pulsar is
\[\Delta r = (r_{A}+r_{B})-r_{C}, \tag{40}\] \[= r_{A}+(r_{A}^{2}+r_{C}^{2}-2r_{A}r_{C}\cos\alpha)^{1/2}-r_{C}. \tag{41}\]
Thus, the indirect signal will be detected on Earth a time \(t_{0}\) after the arrival of the direct signal, where
\[t_{0}=\frac{\Delta r}{c}. \tag{42}\]
The angle \(\alpha\) between the directions of the GW source and the pulsar is given by \(\alpha=\cos^{-1}(\sin\theta_{p}\sin\theta_{s}+\cos\theta_{p}\cos\theta_{s}\cos(\phi_{p}-\phi_{s}))\), \(\theta_{p,s}\) and \(\phi_{p,s}\) being respectively the 'Declination' and 'Right Ascension' angles of the pulsar and the GW source. Signals from pulsars that lie far away from Earth (i.e., \(r_{A}\) is relatively large) will arrive on Earth within a reasonably short interval after the direct detection of the event only if \(\alpha\) is small. For \(\alpha\) close to zero, \(r_{A}\) does not matter in Eq. (41) and there is almost no time delay between the direct and indirect signals. (Thus, errors in pulsar distances do not affect the values of \(t_{0}\) for small \(\alpha\).) In [23], the authors have presented an elaborate list of GW events, the pulsars through whose signals the events may be revisited, and the range of computed arrival times of the affected signals from the pulsars, accounting for known errors wherever possible. Their data is available for past events whose earliest signal arrival dates lie within the 100 years after 1967 and whose uncertainty in signal arrival dates is limited to within 100 years. A notable example among many interesting cases is the earliest recorded supernova event SN185, which may become observable again via pulsars J0900-3144 and J1858-2216 between 2016-2049. A few other interesting cases are the supernovae SN1885, whose perturbed signal via pulsar B2310+42 is expected to reach Earth between
Figure 10: Schematic diagram showing the relative positions of Earth, a distant GW source and a pulsar whose change in pulse profile and spin rate we can measure on Earth. (Fig. taken from [23].)
2022-2044, and SN1604, whose signal perturbed by pulsar J1813-1246 should reach Earth between 1971-2052.
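The arrival-time offset of Eqs. (40)-(42) is easy to evaluate for any pulsar/source pair. The sketch below uses made-up coordinates purely to illustrate the geometry (they are not the catalogue values for any of the systems mentioned above):

```python
import numpy as np

def delay_years(r_A_ly, r_C_ly, dec_p, ra_p, dec_s, ra_s):
    """Extra light-travel time (in years) of the pulsar-scattered GW signal,
    Eqs. (40)-(42); distances in light years, angles in degrees."""
    dec_p, ra_p, dec_s, ra_s = np.radians([dec_p, ra_p, dec_s, ra_s])
    cos_alpha = (np.sin(dec_p) * np.sin(dec_s)
                 + np.cos(dec_p) * np.cos(dec_s) * np.cos(ra_p - ra_s))
    r_B = np.sqrt(r_A_ly**2 + r_C_ly**2 - 2.0 * r_A_ly * r_C_ly * cos_alpha)
    return r_A_ly + r_B - r_C_ly       # distances in ly, so the delay is directly in years

# Illustrative only: a pulsar 5 kly away, a source 30 kly away, about 2 degrees apart on the sky
print(f"t0 ~ {delay_years(5e3, 30e3, -20.0, 100.0, -18.0, 100.5):.1f} yr")
```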
Thus, gravitational waves from past events, having travelled through pulsars strewn across the sky, can be observed again, not only once but multiple times, through the observation of perturbed pulses from these pulsars. The nature of the modification of the pulsar timing and pulse profile also depends on the relative directions of the GW propagation and the pulsar spin. So, even a single indirect pulsar-detector observation can be used to get information about the source direction. However, an important question that may arise here is how one would determine whether the observed change in pulse profile/timing is indeed due to the passage of gravitational waves and not some other phenomenon like glitches or phase-transition-induced density perturbations in the core of the star. Of course, a careful analysis using detailed characteristics of the incoming GW signal (its profile and direction), the internal structure of the pulsar, and the direction of observation on Earth is needed to confirm this. Yet, even without that, one can keep in mind the simple physical fact that any change in pulse profile due to passing gravitational waves will be temporary, and the spin rate/pulse profile of the pulsar will be restored to the original value, unlike cases where glitches or density perturbations arising from phase transitions are the source of pulse modification. There is one important exception: there will in general be a net shift of the angular position of the pulse if there was any such transient GW event. This is exactly the same effect which was discussed in Section VII for any transient effect of phase-transition-induced density fluctuations. Thus, a net, residual shift of the angular position of the pulse could signal a missed phase transition or a missed GW event. To further distinguish between these two possibilities, we note that any phase transition will necessarily also lead to free energy changes which are permanent, thereby leading to a permanent change in the spin rate. In contrast, a GW event is genuinely transient, without any permanent change of the NS structure. Thus the original spin rate will be completely restored.
## XI Conclusions and Future Directions
The main focus of this brief review is on certain specific properties of pulsars, namely the extreme accuracy of observations of their pulses, and the very sensitive dependence of the detailed nature of the pulses, as observed on earth, on the internal structure of the pulsar. Probing the internal structure of neutron stars is of paramount importance in astrophysics. This has acquired special importance in view of the exciting detection of GWs from BNS merger events, which have allowed a direct probe of the internal structure of neutron star matter due to the tidal deformation of the neutron stars during the last stages of coalescence. Probing the neutron star core structure is of great importance for the understanding of the rich spectrum of various exotic high baryon density phases of QCD. This part of the QCD phase diagram has so far eluded experimental observations in terrestrial experiments (relativistic heavy-ion collision experiments). Various estimates suggest that the extreme baryon densities needed for several such phases may only become accessible inside neutron star cores. The detailed nature of the pulses is extremely sensitive to any changes occurring in the rotational dynamics of the pulsar, hence to any changes occurring in the configuration of the pulsar. This makes pulsar observations a powerful probe of dynamical processes affecting pulsar structure. We have discussed how internal changes in a pulsar due to various phase transitions occurring in the pulsar core can leave detailed imprints on the pulses. Changes in the equation of state directly affect the spin rate by affecting the diagonal components of the moment of inertia tensor. At the same time, phase-transition-induced random density fluctuations affect the entire MI tensor, including its off-diagonal components. This induces wobbling of the pulsar, leading to modulation of the pulses as observed on earth. It may thus be possible to infer the detailed nature of specific phase transitions occurring in the pulsar core by a detailed analysis of pulsar observations. Random density fluctuations during phase transitions also induce a rapidly changing quadrupole moment of the neutron star, providing one more possible source of GW emission from neutron stars. A particularly interesting possibility arises when one considers pulsars in the presence of external gravitational waves. While acknowledging the fact that GWs induce extremely tiny configurational changes in bodies, it has been proposed that the extreme accuracy of pulsar observations may allow detection of such GWs, especially for GW frequencies in the resonant bands of the neutron star oscillations. This is precisely the physics of Weber detectors of gravitational waves. Pulsars can thus act as remotely stationed resonant Weber detectors of GWs, with the GW signal getting transmitted to earth in the form of perturbations of the pulses. This also allows for the very exciting possibility of detecting gravitational wave events of the past, that is, cases when GWs from distant sources passed through the earth in the past. The same GWs also reach pulsars, affecting their pulses. Those perturbed pulses are detected on earth much later, allowing us to re-visit past gravitational wave events.
Admittedly, the feasibility of this proposal of using pulsars as Weber detectors leaves many questions unanswered. Even with the ringing effect, which allows for the folding of a very large number of pulses, it is not very clear what level of GW strain amplitude will be observable through the analysis of pulsar signals. The extreme accuracy of pulsar signals is also achieved only by folding a huge number of pulses. One thing that helps here is that a pulsar is probably the most perfect Weber detector possible, formed of extreme-density QCD matter (recall the quotation in Section IX.A from ref. [117]). Further, the ideal situation would be a BNS system in which at least one partner is a pulsar. If one neutron star emits GWs, then the partner pulsar will carry large imprints of that signal to Earth. In this sense, the pulsar becomes a permanent probe of any micro changes happening in the interior of the partner neutron star.
The most important aspect of a pulsar acting as a Weber detector is that the detector can be very close to the GW source. Clearly, the proposal will be most effective if extra-galactic pulsars can be monitored with great accuracy. The importance of extra-galactic pulsars may thus also be viewed in this light: with these _pulsar Weber detectors_ out there, the chance of a powerful GW source being nearby may be significant, allowing us to see the imprints of those GWs on the signals of that pulsar.
## Acknowledgment
This article is dedicated to the loving memory of Abira Sarkar. We also deeply acknowledge her immense support during the writing of this article.
|
2308.01308 | Masked and Swapped Sequence Modeling for Next Novel Basket
Recommendation in Grocery Shopping | Next basket recommendation (NBR) is the task of predicting the next set of
items based on a sequence of already purchased baskets. It is a recommendation
task that has been widely studied, especially in the context of grocery
shopping. In next basket recommendation (NBR), it is useful to distinguish
between repeat items, i.e., items that a user has consumed before, and explore
items, i.e., items that a user has not consumed before. Most NBR work either
ignores this distinction or focuses on repeat items. We formulate the next
novel basket recommendation (NNBR) task, i.e., the task of recommending a
basket that only consists of novel items, which is valuable for both real-world
application and NBR evaluation. We evaluate how existing NBR methods perform on
the NNBR task and find that, so far, limited progress has been made w.r.t. the
NNBR task. To address the NNBR task, we propose a simple bi-directional
transformer basket recommendation model (BTBR), which is focused on directly
modeling item-to-item correlations within and across baskets instead of
learning complex basket representations. To properly train BTBR, we propose and
investigate several masking strategies and training objectives: (i) item-level
random masking, (ii) item-level select masking, (iii) basket-level all masking,
(iv) basket-level explore masking, and (v) joint masking. In addition, an
item-basket swapping strategy is proposed to enrich the item interactions
within the same baskets. We conduct extensive experiments on three open
datasets with various characteristics. The results demonstrate the
effectiveness of BTBR and our masking and swapping strategies for the NNBR
task. BTBR with a properly selected masking and swapping strategy can
substantially improve NNBR performance. | Ming Li, Mozhdeh Ariannezhad, Andrew Yates, Maarten de Rijke | 2023-08-02T17:52:37Z | http://arxiv.org/abs/2308.01308v1 | # Masked and Swapped Sequence Modeling for Next Novel Basket Recommendation in Grocery Shopping
###### Abstract.
Next basket recommendation (NBR) is the task of predicting the next set of items based on a sequence of already purchased baskets. It is a recommendation task that has been widely studied, especially in the context of grocery shopping. In NBR, it is useful to distinguish between repeat items, i.e., items that a user has consumed before, and explore items, i.e., items that a user has not consumed before. Most NBR work either ignores this distinction or focuses on repeat items. We formulate the _next novel basket recommendation_ (NNBR) task, i.e., the task of recommending a basket that only consists of _novel items_, which is valuable for both real-world application and NBR evaluation. We evaluate how existing NBR methods perform on the NNBR task and find that, so far, limited progress has been made w.r.t. the NNBR task. To address the NNBR task, we propose a simple **bi**-directional transformer **b**asket recommendation model (BTBR), which is focused on directly modeling item-to-item correlations within and across baskets instead of learning complex basket representations. To properly train BTBR, we propose and investigate several masking strategies and training objectives: (i) item-level random masking, (ii) item-level select masking, (iii) basket-level all masking, (iv) basket-level explore masking, and (v) joint masking. In addition, an item-basket swapping strategy is proposed to enrich the item interactions within the same baskets. We conduct extensive experiments on three open datasets with various characteristics. The results demonstrate the effectiveness of BTBR and our masking and swapping strategies for the NNBR task. BTBR with a properly selected masking and swapping strategy can substantially improve NNBR performance.
**Information systems \(\rightarrow\) Recommender systems**: _Retrieval models and ranking_.
**ACM Reference Format**:
M. Li, M. Ariannezhad, A. Yates, and M. de Rijke (2023)Masked and Swapped Sequence Modeling for Next Novel Basket Recommendation in Grocery Shopping. In _Seventeenth ACM Conference on Recommender Systems (RecSys '23), September 18-22, 2023, Singapore, Singapore_. ACM, New York, NY, USA, 18 pages. [https://doi.org/10.1145/3604915.3608803](https://doi.org/10.1145/3604915.3608803)
## 1. Introduction
Next basket recommendation is a type of sequential recommendation that aims to recommend the next basket, i.e., set of items, to users given their historical basket sequences. Recommendation in a grocery shopping scenario is one of the main use cases of the NBR task, where users usually purchase a set of items instead of a single item to satisfy their diverse needs. Many methods, based on a broad range of underlying techniques (i.e., RNNs (Beng et al., 2016; Wang et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018), self-attention (Beng et al., 2016; Wang et al., 2018; Wang et al., 2018), and denoising via contrastive learning (Wang et al., 2018)), have been proposed for, and achieve good performance on, the NBR task.
**Next novel basket recommendation.** A recent study [24] offers a new evaluation perspective on the next basket recommendation (NBR) task by distinguishing between _repetition_ (i.e., recommending items that users have purchased before) and _exploration_ (i.e., recommending items that are new to the user) tasks in NBR and points out the imbalance in difficulty between the two tasks. According to the analysis of existing methods in [24], the performance of many existing NBR methods mainly comes from being biased towards (i.e., giving more resources to) the repetition task and sacrificing the ability of exploration. Building on these insights, recent work on NBR has seen a specific focus on the pure repetition task [e.g., 20] as well the introduction of specific methods for the repetition task [1; 20].
Novelty and serendipity are two important objectives when evaluating recommendation performance [13; 18]. People might simply get tired of repurchasing the same set of items. Even when they engage in a considerable amount of repetition, there is still a large proportion of users who would like to try something new when shopping for groceries [24]. This phenomenon is especially noticeable for users with fewer transactions in their purchase history [1]. Therefore, one of the key roles of recommender systems is to assist users in discovering potential novel items that align with their interests. However, in contrast to the pure repetition task, the pure exploration task in NBR remains under-explored. Besides, due to the difference in difficulty between the two tasks, many online e-commerce and grocery shopping platforms have started to design a "buy it again" service to isolate repeat items from the general recommendation.1; 2
Footnote 1: After login, users may see a “buy it again” page on e-commerce platforms (see, e.g., Amazon [https://amazon.com](https://amazon.com) and grocery shopping platforms (see, e.g., Picnic [https://picnic.app](https://picnic.app)), where the platform collects repeat items. Similarly, in the grocery shopping scenario, “Try Something New” services also exist, where only novel items are recommended to the user.
Motivated by the research gaps and real-world demands, we formulate the _next novel basket recommendation_ (NNBR) task, which focuses on recommending a novel basket, i.e., a set of items that are new to the user, given the user's historical basket sequence. Different from the repetition task, which predicts the probability of repurchase from a relatively small set of items, the NNBR task needs to predict possible items from many thousands of candidates by modeling item-item correlations, which is more complex and difficult [24]. NNBR is especially relevant to the "Try Something New" concept in the grocery shopping scenario. Table 1 shows differences between three types of basket recommendation and positions our work.
**From NBR to NNBR.** The NNBR task can be seen as a sub-task of the conventional NBR task, in which NBR methods are designed to find all possible items (both _repeat items_ and _novel items_) in the next basket. Therefore, it is possible to generate a novel basket by selecting only the top-\(k\) novel items predicted by NBR methods. To modify NBR methods for the NNBR task, an intuitive solution is to remove the repeat items from the ground-truth labels and train models only depending on the novel items in the ground-truth labels. Given this obvious strategy and given that many methods have already been proposed for NBR, an important question is: _If we already have an NBR model, do we need to train another model specifically for the NNBR task?_ Surprisingly, we find that training specifically for exploration does not always lead to better performance in the NNBR task, and might even reduce performance in some cases.
| Task | Target items | Recommended basket | Related work |
| --- | --- | --- | --- |
| NBR | Repeat items & novel items | Repeat items & novel items | [3; 4; 16; 21; 27; 32; 43; 45] |
| NBRR | Repeat items | Only repeat items | [1; 20] |
| NNBR | Novel items | Only novel items | This paper |

Table 1. Three types of basket recommendation.
**BTBR: A bi-directional transformer basket recommendation method.** In NNBR, item-to-item correlations are especially important, since we need to infer the utility of new items based on previously purchased items. Besides, a single basket is likely to address diverse needs of a user [37]. E.g., what a user would like to drink is more likely to depend on what he or she drank before rather than on the tooth paste they previously purchased. However, most existing NBR approaches [16; 21; 43; 45] are two-stage methods, which first generate a basket-level representation [35], and then learn a temporal model based on basket-level representations, which will lead to information loss w.r.t. item-to-item correlations [21; 32; 45]. Some methods [21; 32; 45] learn partial item-to-item correlations based on the co-occurrence within the same or adjacent basket as auxiliary information beyond basket-level correlation learning. Instead of learning or exploiting complex basket representations, we learn item-to-item correlations from direct interactions among different items across different baskets. To do so, we propose a bi-directional transformer basket recommendation model (BTBR) that adopts a bi-directional transformer [36] and uses the shared basket position embedding to indicate items' temporal information.
**Masking and training.** To properly train BTBR, we propose and investigate several masking strategies and training objectives at different levels and tasks, as follows: (i) item-level random masking: a cloze-task loss [8; 34], in which we randomly mask the historical sequence at the item level; (ii) item-level select masking: a cloze-task loss designed for exploration, in which we first select the items we need to mask and then mask all the occurrences of the selected item; (iii) basket-level all masking: a general basket recommendation task loss, in which we mask and predict the complete last basket at the end of the historical sequence; (iv) basket-level explore masking: an explore-specific basket recommendation task loss, in which we remove the repeat items and only mask the novel items in the last basket of the historical sequence; and (v) joint masking: a loss that follows the pre-train-then-fine-tune paradigm, in which we first adopt item-level masking for the cloze task, then fine-tune the model using basket-level masking.
In addition, conventional sequential item recommendation usually assumes that the items in a sequence are strictly ordered and sequentially dependent. However, recent work [e.g., 5; 26; 39; 42] argues that the items may occur in any order, i.e., the order is flexible, and ignoring flexible orders might lead to information loss. Similarly, it is unclear whether the items that are being purchased across baskets have a strict order in the grocery shopping scenario. Thus, we propose an item swapping strategy that allows us to randomly move an item to another basket according to a certain ratio, which can enrich item interactions within the same basket.
We conduct extensive experiments on three publicly available grocery datasets to understand the effectiveness of the BTBR model and the proposed strategies on datasets with various repeat ratios and characteristics.
**Our contributions.** The main contributions of this paper are:
* To the best of our knowledge, we are the first to formulate and investigate the next novel basket recommendation (NNBR) task, which aims to recommend a set of novel items that meets a user's preferences in the next basket.
* We investigate the performance of several representative NBR methods w.r.t. the NNBR task and find (i) that training specifically for the exploration task does not always lead to better performance, and (ii) that limited progress has been made w.r.t. the NNBR task.
* We propose a simple bi-directional transformer basket recommendation (BTBR) model that learns item-to-item correlations across baskets.
* We propose several types of masking and item swapping strategies for optimizing BTBR for the NNBR task. Extensive experiments are done on three open grocery shopping datasets to assess the effectiveness of the proposed strategies. BTBR with a proper masking and swapping strategy is the new state-of-the-art method w.r.t. the NNBR task.
## 2. Related Work
In this section, we describe two lines of research in the recommender systems literature that are related to our work: sequential recommendation and next basket recommendation.
**Sequential recommendation.** Sequential item recommendation has been widely studied for many years, and models (Han et al., 2017; Chen et al., 2017; Chen et al., 2018; Chen et al., 2019; Li et al., 2020; Li et al., 2021; Li et al., 2020) with various deep learning techniques, e.g., RNN (Han et al., 2017; Chen et al., 2018), CNN (Li et al., 2020), GNN (Li et al., 2020; Li et al., 2021), contrastive learning (Li et al., 2020), attention (Li et al., 2020; Li et al., 2021) and self-attention (Chen et al., 2017; Li et al., 2020; Li et al., 2021) mechanism have been proposed. The self-attention (transformer) model (Li et al., 2020) with multi-head attention shows strong performance in natural language processing, and SASRec (Chen et al., 2017) is the first sequential recommendation model that employs the self-attention mechanism. BERT4Rec (Li et al., 2020) upgrades the left-to-right training scheme in SASRec and uses a bi-direction transformer with a Cloze task (Li et al., 2020), which is the closest sequential recommendation method to this paper. Motivated by the success of BERT4Rec, some follow-up work has applied masked-item-prediction training to more specific scenarios (Li et al., 2020).
However, BERT4Rec and follow-up work only focus on sequential item recommendation with only random masking during training (Li et al., 2020). We extend BERT4Rec to the basket sequence setting and propose several types of masking strategies and training objectives that are specifically designed for the NNBR task. Furthermore, in this work we study the next novel basket recommendation task, where both historical interactions and the predicted target are baskets (sets of items). None of the sequential recommendation models listed above have been designed to handle a sequence of baskets.
**Next basket recommendation.** Next basket recommendation is a sequential recommendation task that addresses the sequence of baskets in the grocery shopping scenario. Existing methods can be classified into three types: frequency neighbor-based methods (Chen et al., 2017; Chen et al., 2017), Markov chain (MC)-based methods (Li et al., 2020), and deep learning-based methods (Chen et al., 2017; Chen et al., 2017; Chen et al., 2017; Li et al., 2020; Li et al., 2021; Li et al., 2020; Li et al., 2021; Li et al., 2020; Li et al., 2021; Li et al., 2021; Li et al., 2021). A recent study [24] revisits existing NBR methods from a repetition and exploration perspective; it finds that the repetition task, i.e., recommending repeat items, is much easier than the exploration task, i.e., recommending explore items (a.k.a. novel items in this paper), and that the improvements of many recent methods come from the performance of the repetition task rather than from better capturing correlations among items. Inspired by this finding, an NBR method (Chen et al., 2017) that only models the repetition behavior has been proposed, and an NBRR task (Li et al., 2020) that only focuses on recommending repeat items has been formulated.
In this paper, we propose and formulate the next novel basket recommendation task that focuses on recommending novel items to the user, whereas all of the NBR methods mentioned above focus on the conventional NBR task, and their performance when generalized to the NNBR task remains unknown.
## 3. Task Formulation
In this section, we describe and formalize the next novel basket recommendation task which is the focus of this paper.
Formally, given a set of users \(U=\{u_{1},\,u_{2},\,\ldots,\,u_{n}\}\) and a set of items \(I=\{i_{1},i_{2},\ldots,i_{m}\}\), \(S_{u}=[B_{u}^{1},B_{u}^{2},\ldots,B_{u}^{t}]\) represents the historical interaction sequence for user \(u\), where \(B_{u}^{t}\) represents a set of items \(i\in I\) that user \(u\) purchased at time step \(t\). For a user \(u\), a _repeat item_ \(i_{u}^{rep}\) is an item that user \(u\) has purchased before, which is defined as \(i_{u}^{rep}\in I_{u}^{rep}=B_{u}^{1}\cup B_{u}^{2}\cup\ldots\cup B_{u}^{t}\), and a _novel item_ \(i_{u}^{novel}\) is an item that user \(u\) has not purchased before, i.e., \(i_{u}^{novel}\in I_{u}^{novel}=I-I_{u}^{rep}\).
The goal of the _next novel basket recommendation_ task is to predict the following novel basket which only consists of novel items \(i_{u}^{novel}\) that the user would probably like, based on the user's past interactions \(S_{u}\), that is,
\[P_{u}=\hat{B}_{u}^{t+1}=f(S_{u}) \tag{1}\]
where \(P_{u}\) denotes a recommended item list that _only consists of the novel items \(i_{u}^{\text{novel}}\) of user \(u\)_.
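To make this split concrete, the following minimal Python sketch (the helper names are ours, chosen only for illustration) derives \(I_{u}^{rep}\) and \(I_{u}^{novel}\) from a user's basket history and filters a ranked candidate list down to a novel basket.

```python
from typing import List, Sequence, Set

def repeat_and_novel_items(history: List[Set[int]], catalog: Set[int]):
    """Split the catalog into repeat items (purchased in any past basket)
    and novel items (never purchased by this user)."""
    repeat_items = set().union(*history) if history else set()
    return repeat_items, catalog - repeat_items

def novel_basket(ranked_items: Sequence[int], history: List[Set[int]],
                 catalog: Set[int], k: int = 10) -> List[int]:
    """Keep only novel items from a ranked candidate list and return the top-k."""
    _, novel = repeat_and_novel_items(history, catalog)
    return [i for i in ranked_items if i in novel][:k]

# Toy example: items 1-5 were purchased before, so only 6, 7, 8 survive.
history = [{1, 2}, {1, 3, 4}, {4, 5}]
print(novel_basket([1, 6, 4, 7, 8], history, catalog=set(range(10)), k=3))  # [6, 7, 8]
```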
## 4. Our method
In this section, we first describe the base bi-directional transformer basket recommendation model (BTBR) we use, then introduce several types of masking strategies for the NNBR task, and finally describe an item swapping strategy.
### Bi-directional transformer basket recommendation model
Learning basket representations (Kang et al., 2019) and modeling temporal dependencies across baskets are two key components in almost all neural-based NBR methods. Many NBR methods introduce complex architectures to learn representations for baskets in grocery shopping (Beng et al., 2019; Chen et al., 2020; Li et al., 2021; Li et al., 2022; Kang et al., 2020). Instead of proposing more complex architectures to learn better basket representations and temporal dependencies, we want to simplify the model and only focus on the item-level correlations across different baskets, which helps us to infer novel items from users' historical items.
As a widely used method to model temporal dependencies, a recurrent neural network (RNN) (Chen et al., 2020; Chen et al., 2020) requires passing information sequentially according to the temporal order, whereas there is no temporal order for items within the same basket, and basket-level representations at each timestamp are required (Chen et al., 2020; Li et al., 2021; Li et al., 2022; Kang et al., 2020). Another alternative method is the self-attention mechanism (a.k.a. transformer) (Kang et al., 2020), which is capable of learning the representations of every position by exchanging the information across all positions. Therefore, we adopt the bi-directional transformer (Chen et al., 2020; Kang et al., 2020) as the backbone of our BTBR model, which not only allows us to learn item-to-item correlations from the direct interactions among items across different baskets but is also able to handle basket sequence information in grocery shopping. The overall architecture of BTBR is shown in Figure 1.
Figure 1. The overall architecture of the BTBR model.

**Embedding layer.** In order to use transformers (Kang et al., 2020) for NNBR, we first transfer a basket sequence to an item sequence via a "flatten" operation, e.g., \([\{i_{1},i_{2}\},\{i_{1},i_{3},i_{4}\}]\rightarrow[i_{1},i_{2},i_{1},i_{3},i_{4}]\). It has been shown that the positions of items are informative in the sequential recommendation scenario (Chen et al., 2020; Li et al., 2022). Different from solutions in sequential recommendation, where each item is combined with its unique position embedding w.r.t. its position in the item sequence, we use a learnable position embedding for every basket, and items within the same basket share the same position embedding. For example, given a basket sequence \(S=[\{i_{1},i_{2}\},\{i_{1},i_{3},i_{4}\},\{i_{4},i_{5}\}]\), we first flatten \(S\) and get a sequence of item embeddings \(E_{i}=[e_{1}^{i},e_{2}^{i},e_{1}^{i},e_{3}^{i},e_{4}^{i},e_{4}^{i},e_{5}^{i}]\) and a position embedding sequence \(E_{p}=[e_{1}^{p},e_{2}^{p},e_{3}^{p}]\). Finally, the input sequence of the transformer layer will be \(E_{i,p}=[e_{1}^{i}+e_{1}^{p},e_{2}^{i}+e_{1}^{p},e_{1}^{i}+e_{2}^{p},e_{3}^{i}+e_{2}^{p},e_{4}^{i}+e_{2}^{p},e_{4}^{i}+e_{3}^{p},e_{5}^{i}+e_{3}^{p}]\). Note that padding and truncating operations are also employed to handle sequences of various lengths.
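As a small sketch of this input construction (the tensor layout, left-padding scheme, and function name are our assumptions, not the paper's released code), the flattening and shared basket position indices can be built as follows.

```python
import torch

def build_inputs(baskets, max_len=20, pad_id=0):
    """Flatten a basket sequence into item ids plus basket-level position ids.
    Items in the same basket share one position id; 0 is reserved for padding."""
    item_ids, pos_ids = [], []
    for pos, basket in enumerate(baskets, start=1):
        for item in basket:
            item_ids.append(item)
            pos_ids.append(pos)
    # keep the most recent max_len items, then left-pad to a fixed length
    item_ids, pos_ids = item_ids[-max_len:], pos_ids[-max_len:]
    pad = max_len - len(item_ids)
    return (torch.tensor([pad_id] * pad + item_ids),
            torch.tensor([0] * pad + pos_ids))

items, positions = build_inputs([[1, 2], [1, 3, 4], [4, 5]], max_len=10)
# items     -> tensor([0, 0, 0, 1, 2, 1, 3, 4, 4, 5])
# positions -> tensor([0, 0, 0, 1, 1, 2, 2, 2, 3, 3])
```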
**Bi-directional transformer layer.** The transformer architecture contains two sub-layers:
1. _Multi-head attention layer_, which adopts the popular attention mechanism (Kang et al., 2019) and aggregates all items' embeddings across different baskets with adaptive weights.
2. _Point-wise feed-forward layer_, which aims to endow nonlinearity and interactions between different latent dimensions.
We use stacked transformer layers to learn more complex item-to-item correlations, that is:
\[H^{1}=\text{Trm}(E_{i,p}),\dots,H^{L}=\text{Trm}(H^{L-1}), \tag{2}\]
where \(\text{Trm}\) denotes the bi-directional transformer layer, \(H^{L}=[h_{1}^{L},h_{2}^{L},\dots,h_{N}^{L}]\) denotes the representation sequence derived from the last transformer layer, and \(N\) denotes the maximum length of the input sequence \(E_{i,p}\). Besides, residual connections (Kang et al., 2019), dropout (Kang et al., 2019), layer normalization (Kang et al., 2019), and GELU activation (Gelman et al., 2019) are adopted to enhance the ability of representation learning. For more details about the bi-directional transformer layer, we refer to (Kang et al., 2019; Kang et al., 2019; Kang et al., 2019).
**Prediction layer.** After hierarchically exchanging information among all items across baskets using the transformer, we get \(H^{L}\in\mathbb{R}^{N\times d}\), which contains the corresponding representation \(h^{L}\) for every item in the input sequence, where \(d\) is the embedding dimension. Following (Kang et al., 2019; Kang et al., 2019), we use the same item embedding matrix \(E_{I}\in\mathbb{R}^{m\times d}\) as in the input layer to reduce the model size and alleviate the overfitting problem. For a masked position (item), we take its learned representation \(h\in\mathbb{R}^{d}\) and compute the interaction probability distribution \(p\) over candidate items by:
\[p=\text{Softmax}(hE^{\text{T}}+b), \tag{3}\]
where \(E\) is the embedding matrix for candidate items and \(b\) denotes a bias term.
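A compact PyTorch sketch of this architecture is shown below. The hyper-parameter values and the use of `nn.TransformerEncoder` as the bi-directional transformer stack are illustrative assumptions on our part, not the authors' exact implementation; a softmax over the last dimension of the returned scores yields the distribution in Eq. 3.

```python
import torch
import torch.nn as nn

class BTBRSketch(nn.Module):
    """Bi-directional transformer over flattened baskets with shared basket
    position embeddings and a tied output projection (illustrative sketch)."""

    def __init__(self, num_items, max_baskets=50, d=64, heads=8, layers=2):
        super().__init__()
        # ids: 0 = padding, 1..num_items = items, num_items + 1 = mask token
        self.item_emb = nn.Embedding(num_items + 2, d, padding_idx=0)
        self.pos_emb = nn.Embedding(max_baskets + 1, d, padding_idx=0)
        enc_layer = nn.TransformerEncoderLayer(
            d_model=d, nhead=heads, dim_feedforward=4 * d,
            dropout=0.1, activation="gelu", batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.bias = nn.Parameter(torch.zeros(num_items + 2))

    def forward(self, item_ids, pos_ids):
        # item_ids, pos_ids: (batch, seq_len); pos_ids are basket indices
        x = self.item_emb(item_ids) + self.pos_emb(pos_ids)
        h = self.encoder(x, src_key_padding_mask=item_ids.eq(0))
        # tied weights: score every vocabulary item at every position
        return h @ self.item_emb.weight.T + self.bias
```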
### Masking strategy
Since there are repetition signals in the basket sequence, it is unclear whether these signals are merely noise/shortcuts or contain valuable information for the task of recommending novel items. After constructing the base model (BTBR), the challenging problem that needs to be addressed is how to properly train the model to improve its ability of finding novel items that meet users' interests. In this section, we propose four types of alternative masking strategies for the next novel basket recommendation task by considering different tasks and levels, as well as the repetition-exploration signals. Figure 2 shows examples of four types of masking strategies and Table 2 shows the characteristics of different training strategies.
**Cloze task.** The first type of training objective is a cloze task (Kang et al., 2019), i.e., "masked language model" in (Cheng et al., 2019). Specifically, we mask a proportion of items in the input sequence, i.e., replace each of them with a "mask token," and then try to predict the original items based on their contexts. We call this masking "item-level." Two main advantages of this item-level masking & training strategy are (i) it allows us to generate more item-level training samples by breaking the definition of "basket," and (ii) it learns both sides' information via the bi-directional transformer, which might allow the model to better capture item-to-item correlations. We first introduce two item-level masking strategies as follows:
1. _Random:_ This is a conventional masking strategy, which has been adopted in BERT4Rec [31]. Specifically, given a flattened item sequence, we randomly select several positions of the sequence and mask the corresponding items of the selected position according to mask ratio \(\alpha\) as input.
2. _Select_: One potential issue w.r.t. the above _Random_ masking is that the masked items (prediction target) might still exist in the non-masked positions, so the model might mainly predict the masked item via its repetition information rather than inferring new items based on item-to-item correlations. Therefore, we propose the select masking strategy, which is specifically targeted at the exploration demand of the NNBR task. Specifically, given a flattened item sequence, we first derive the item set \(I\) in this sequence, then randomly select several items \(i_{m}\in I\) according to mask ratio \(\alpha\), and finally mask all the occurrences of \(i_{m}\) in the sequence. Since there is no repetition information available, the model can only infer the targeted items, i.e., novel items, via learning the item-to-item correlations.
**Basket recommendation task.** Using the cloze task as the learning objective has limitations: (i) it is not able to fully respect the temporal dependencies of a sequence, since we can only use the historical information (left-side context) when we make the recommendation; and (ii) it is not specifically designed for the basket recommendation task and a mismatch might exist. Therefore, the second type of training objective we consider is the basket recommendation task, which masks the input sequence at the basket-level instead of item-level. Specifically, we mask the last basket and try to predict the items in this basket only based on the historical items (left-side information). Similarly, we propose another two basket-level masking strategies as follows:
3. _All_: This masking strategy can be regarded as optimizing the model for the conventional NBR task. Given a flattened item sequence, we find and mask all items, i.e., both novel items and repeat items, in the last basket.
4. _Explore_: This is an NNBR-specific masking strategy. Given a flattened item sequence, we find the items in the last basket; instead of masking all items, we only mask the novel items \(i\in I^{\textit{novel}}\) and remove the _repeat items_ \(i\in I^{\textit{rep}}\). The model is then only optimized for finding all novel items in the future based on the historical basket sequence (a simplified code sketch of all four strategies follows Figure 2).

Figure 2. The original basket sequence (at the top) and four types of masking strategies.
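The four strategies can be illustrated with the following simplified Python sketch; the `MASK` placeholder and function names are ours, and the actual implementation operates on padded tensors rather than Python lists. Here `basket_mask(..., explore_only=False)` corresponds to basket-all masking and `explore_only=True` to basket-explore masking.

```python
import random

MASK = -1  # placeholder id for the mask token

def item_random_mask(items, ratio):
    """Item-level random masking: mask individual positions uniformly at random."""
    idx = set(random.sample(range(len(items)), max(1, int(ratio * len(items)))))
    masked = [MASK if i in idx else x for i, x in enumerate(items)]
    return masked, [items[i] for i in sorted(idx)]

def item_select_mask(items, ratio):
    """Item-level select masking: pick items, then mask *all* their occurrences,
    so no copy of a masked item stays visible in the sequence."""
    uniq = list(set(items))
    chosen = set(random.sample(uniq, max(1, int(ratio * len(uniq)))))
    return [MASK if x in chosen else x for x in items], [x for x in items if x in chosen]

def basket_mask(baskets, explore_only=False):
    """Basket-level masking: mask the last basket. With explore_only=True, repeat
    items are dropped and only the novel items of the last basket become targets."""
    history, last = baskets[:-1], list(baskets[-1])
    seen = set().union(*history) if history else set()
    targets = [x for x in last if x not in seen] if explore_only else last
    masked_seq = [x for b in history for x in b] + [MASK] * len(targets)
    return masked_seq, targets
```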
**Joint task.** The pretrain-then-finetune paradigm has been widely adopted in NLP tasks. Item-level masking (the cloze task) and basket-level masking (the basket recommendation task) can also be combined into a joint masking strategy that employs the pretrain-then-finetune paradigm in NNBR: the model is first pre-trained with an item-level masking strategy (i.e., a self-supervised task) to capture item correlations, and then fine-tuned with a basket-level masking strategy (i.e., a supervised task) for the basket recommendation task.
**Loss.** Following (Kipip et al., 2019), we select minimizing the negative log-likelihood loss as the training objective:
\[\mathcal{L}=\frac{1}{|I^{m}|}\sum_{i\in I^{m}}-\log p(i\mid S_{u}), \tag{4}\]
where \(I^{m}\) is the set of masked items and \(p(i\mid S_{u})\) is the predicted probability of the masked item \(i\) given the masked sequence \(S_{u}\).
**Test and prediction.** To predict a future basket (a set of items), we only need to add one masked token at the end of the user's item sequence, since items within the same basket share the same position embedding. In the NNBR task, the candidate items are the novel items \(I_{u}^{novel}\) that the user has not bought before; thus, we use the embedding matrix w.r.t. the novel items of every user to compute the probabilities according to Eq. 3. Finally, we select the top-\(K\) novel items with the highest scores as the recommendation list for the next novel basket.
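Reusing the `BTBRSketch` and input builders sketched above (again, our own illustration rather than the released code), inference for one user could look as follows: repeat items are simply excluded from the candidate scores before taking the top-\(K\).

```python
import torch

@torch.no_grad()
def predict_novel_basket(model, item_ids, pos_ids, repeat_items, k=10, mask_id=1):
    """Append one mask token as the next-basket position, score the vocabulary
    at that position, drop repeat items, and return the top-k novel item ids."""
    items = torch.cat([item_ids, torch.tensor([mask_id])]).unsqueeze(0)
    next_pos = (pos_ids.max() + 1).reshape(1)
    pos = torch.cat([pos_ids, next_pos]).unsqueeze(0)
    scores = model(items, pos)[0, -1]              # logits at the masked position
    if repeat_items:                               # candidates are novel items only
        scores[list(repeat_items)] = float("-inf")
    return torch.topk(scores, k).indices.tolist()
```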
### Swapping strategy
In sequential recommendation, some work (Han et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2020) argues that the items in a sequence may not be sequentially dependent and different item orders may actually correspond to the same user intent. Ignoring flexible orders in sequential recommendation might lead to less accurate recommendations for scenarios where many items are not sequentially dependent (Wang et al., 2018; Wang et al., 2019; Wang et al., 2020). In grocery shopping, the items purchased within the different baskets might not have rigid orders. To further understand if considering the flexible orders among items could further improve the performance w.r.t. the NNBR task, we propose the item swapping strategy to create augmentations for the BTBR.
Specifically, as illustrated in Figure 3, we randomly select items according to a swap ratio \(\lambda\) and then move them to another basket to enrich the items' interactions within the same basket. Besides, we introduce a hyper-parameter, i.e., swap hop \(\gamma\), to control the basket distance of the swapping strategy. Note that we only perform the local swap strategy when using item-level masking (the cloze task) to train the model, since basket-level masking (the basket recommendation task) is designed to respect the sequential order and predict the future basket based on historical information.
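A minimal sketch of this augmentation (the function and argument names are ours, used only for illustration) is given below; each item is independently moved, with probability equal to the swap ratio, to a basket at most `swap_hop` positions away.

```python
import random

def swap_items(baskets, swap_ratio=0.1, swap_hop=1, seed=None):
    """Randomly reassign a fraction of items to a nearby basket, enriching the
    item interactions within baskets while loosening strict temporal order."""
    rng = random.Random(seed)
    new_baskets = [[] for _ in baskets]
    for bi, basket in enumerate(baskets):
        for item in basket:
            if rng.random() < swap_ratio:
                lo = max(0, bi - swap_hop)
                hi = min(len(baskets) - 1, bi + swap_hop)
                new_baskets[rng.randint(lo, hi)].append(item)
            else:
                new_baskets[bi].append(item)
    return new_baskets

print(swap_items([[1, 2], [3, 4, 5], [6, 7]], swap_ratio=0.5, swap_hop=1, seed=0))
```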
| Strategy | Strict temporal orders | Explore specific | Training signals ranking |
| --- | --- | --- | --- |
| Item-Random | \(\times\) | \(\times\) | 1 |
| Item-Select | \(\times\) | \(\checkmark\) | 1 |
| Basket-All | \(\checkmark\) | \(\times\) | 2 |
| Basket-Explore | \(\checkmark\) | \(\checkmark\) | 3 |

Table 2. Comparison of the four types of masking strategies from three aspects, i.e., temporal orders, exploration specificity, and the amount of training signals.
## 5. Experiments
### Research questions
To understand the next novel basket recommendation task, and evaluate the performance of BTBR with different strategies, we conduct experiments to answer the following questions:
1. How do existing NBR models perform w.r.t. the NNBR task? Does training specifically for the NNBR task lead to better performance?
2. How does BTBR with different masking strategies perform compared to the state-of-the-art models?
3. Does the swapping strategy contribute to the improvements?
4. How do the hyper-parameters influence the models' performance and how different masking strategies affect the training dynamics?
5. Is the joint masking strategy more robust than using the single masking strategy?
### Experimental setup
**Datasets.** We evaluate the NNBR task on three publicly available grocery shopping datasets (TaFeng,4 Dunnhumby,5 and Instacart6), which vary in their repetition and exploration ratios. Following (Tang et al., 2019), we sample users whose basket length is between 3 and 50, and remove the least frequent items in each dataset. We also focus on the fixed size (10 or 20) next novel basket recommendation problem. In our experiments, we split the dataset across users, 80% for training, and 20% for testing, and leave 10% of the training users as the validation set. We repeat the splitting and experiments five times and report the average performance. The statistics of the processed datasets are shown in Table 3.
Footnote 4: [https://www.kaggle.com/chiranjivdas09/ta-feng-grocery-dataset](https://www.kaggle.com/chiranjivdas09/ta-feng-grocery-dataset)
Footnote 5: [https://www.dunnhumby.com/source-files/](https://www.dunnhumby.com/source-files/)
Footnote 6: [https://www.kaggle.com/c/instacart-market-basket-analysis/data](https://www.kaggle.com/c/instacart-market-basket-analysis/data)
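The user filtering and split described above can be sketched as follows (our own helper; we assume that "basket length between 3 and 50" refers to the number of baskets per user, and the exact preprocessing may differ from the released scripts).

```python
import random

def split_users(user_baskets, min_baskets=3, max_baskets=50, seed=0):
    """Keep users with 3-50 baskets and split them into 80% train / 20% test,
    holding out 10% of the training users as a validation set."""
    users = [u for u, seq in user_baskets.items()
             if min_baskets <= len(seq) <= max_baskets]
    rng = random.Random(seed)
    rng.shuffle(users)
    n_test = int(0.2 * len(users))
    test, train = users[:n_test], users[n_test:]
    n_val = int(0.1 * len(train))
    val, train = train[:n_val], train[n_val:]
    return train, val, test
```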
**Baselines.** We investigate the performance of six NBR baselines, which we select based on their performance on our chosen datasets in the analysis performed in [24]. Importantly, for a fair comparison, we do not include methods that leverage additional information [3, 4, 32].
* **G-TopFreq:** A simple and effective method that recommends the top \(k\) most popular items in the dataset as the next basket for users.
* **TIFUKNN:** A state-of-the-art method that models the temporal dynamics of users' past baskets using a KNN-based approach built on personalized item frequency (PIF) information [17].
* **Dream:** A RNN-based method that gets basket representation using pooling strategy and employs RNN to model sequential behavior [43].
* **Beacon:** A RNN-based method that uses RNN to capture sequential behavior and uses correlation-sensitive basket encoder to consider intra-basket item correlations [21].
* **DNNTSP:** A state-of-the-art method that utilizes a graph neural network (GNN) and self-attention mechanisms to encode item-item relations across baskets and capture temporal dependencies [45].
* **CLEA:** A state-of-the-art method that uses contrastive learning and a GRU-based encoder to denoise and automatically extract items relevant to the target item [27].
Note that for the above baseline models (except G-TopFreq), we have two versions with different training methods, i.e., using all items in the last basket as training labels (Train-all), and only using novel items in the last basket as training labels (Train-explore).
**Configurations.** For the training-based baseline methods and TIFUKNN, we strictly follow the hyper-parameter setting and tuning strategy of their respective original papers. The embedding size is tuned on \(\{16,32,64,128\}\) for all training-based methods based on the validation set to achieve their best performance.
We use PyTorch to implement our model and train it using a TITAN X GPU with 12G memory. For BTBR, we set self-attention layers to 2 and their head number to 8, and tune the embedding size on \(\{16,32,64,128\}\). The Adam optimizer with a learning rate of 0.001 is used to update parameters. We set the batch size to 128 for the Tafeng and Dunnhumby datasets, and 64 for the Instacart dataset; we sweep the mask ratio \(\alpha\) in \(\{0.1,0.3,0.5,0.7,0.9\}\), local swap ratio in \(\{0,0.1,0.3,0.5,0.7,0.9\}\) and swap hop \(\gamma\) in \(\{1,3,5,7,9\}\).
**Metrics.** Two widely used metrics for the NBR problem are _Recall@k_ and _nDCG@k_. In the NNBR task, _Recall_ measures the ability to find all novel items that a user will purchase in the next basket; NDCG is a ranking metric that also considers the order of these novel items, i.e.,
\[\text{Recall@}K =\frac{1}{|U|}\sum_{u\in U}\frac{\left|P_{u}\cap T_{u}^{novel} \right|}{\left|T_{u}^{novel}\right|}, \tag{5}\] \[\text{nDCG@}K =\frac{1}{|U|}\sum_{u\in U}\frac{\sum_{k=1}^{K}p_{k}/\log_{2}(k+1 )}{\sum_{k=1}^{\min(K,|T_{u}^{novel}|)}1/\log_{2}(k+1)}, \tag{6}\]
where \(U\) is the set of users who will purchase novel items in their next basket, \(T_{u}^{novel}\) is the set of ground-truth novel items of user \(u\), and \(p_{k}\) equals 1 if \(p_{u}^{k}\in T_{u}^{novel}\) and 0 otherwise, where \(p_{u}^{k}\) denotes the \(k\)-th item in the predicted basket \(P_{u}\). Note that some methods might assign high scores to repeat items [24]; to generate a novel basket and ensure a fair comparison, we fully remove the repeat items and then rank and select the top-\(k\) novel items as the recommended basket \(P_{u}\), i.e., the recommended basket _only consists of the top-\(k\) novel items_.
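Per user, the two metrics can be computed as in this short sketch (our own helpers, assuming the recommended basket and the ground-truth novel items are given as Python lists):

```python
import math

def recall_at_k(recommended, truth_novel, k=10):
    """Fraction of the user's ground-truth novel items that appear in the top-k."""
    if not truth_novel:
        return None  # users without novel items are excluded from the average
    return len(set(recommended[:k]) & set(truth_novel)) / len(truth_novel)

def ndcg_at_k(recommended, truth_novel, k=10):
    """Position-aware variant: hits near the top of the list count more."""
    if not truth_novel:
        return None
    truth = set(truth_novel)
    dcg = sum(1.0 / math.log2(rank + 2)
              for rank, item in enumerate(recommended[:k]) if item in truth)
    ideal = sum(1.0 / math.log2(rank + 2) for rank in range(min(k, len(truth))))
    return dcg / ideal
```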
### Train-all and Train-explore (RQ1)
To answer RQ1, we employ two training strategies for each baseline method: (i) _Train-all_: we keep both repeat items and explore items as part of the ground-truth labels during training, which means that the model is trained to find all possible items in the next basket; and (ii) _Train-explore_: we remove the repeat items and only keep novel items in the ground-truth labels during training, which means the model is specifically trained to find novel items in the next basket. For the NNBR performance evaluation, we assess the models' ability to find novel items, which means the recommended novel basket consists of top-\(k\) novel items and there are no repeat items. We report the experimental results of different baseline methods in Table 4. We have three main findings.
First, we see that no method consistently outperforms all other methods across all datasets. On the Tafeng dataset, several NN-based methods (Dream-all, Dream-explore, Beacon-all, Beacon-explore, DNNTSP-all, CLEA-explore) fall in the top-tier methods group with quite good performance. On the Dunnhumby dataset, Beacon-explore achieves the best performance w.r.t. all metrics. On the Instacart dataset, TIFUKNN-explore is among the best-performing methods,
| Dataset | Metric | Train | G-TopFreq | TIFUKNN | Dream | Beacon | CLEA | DNNTSP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TaFeng | Recall@10 | all | 0.0587 | 0.0714 | 0.0960 | 0.0926 | 0.0870 | **0.1024** |
| TaFeng | Recall@10 | explore | = | 0.0911† | 0.1021† | 0.0967† | 0.1010† | 0.0940‡ |
| TaFeng | nDCG@10 | all | 0.0603 | 0.0662 | 0.0823 | 0.0789 | 0.0755 | 0.0855 |
| TaFeng | nDCG@10 | explore | = | 0.0783† | **0.0859†** | 0.0819† | 0.0857† | 0.0767‡ |
| TaFeng | Recall@20 | all | 0.0874 | 0.0926 | 0.1244 | 0.1252 | 0.1150 | 0.1245 |
| TaFeng | Recall@20 | explore | = | 0.1157† | 0.1244 | **0.1257** | 0.1253† | 0.1168‡ |
| TaFeng | nDCG@20 | all | 0.0703 | 0.0738 | 0.0928 | 0.0909 | 0.0861 | 0.0943 |
| TaFeng | nDCG@20 | explore | = | 0.0876† | 0.0939 | 0.0929 | **0.0952†** | 0.0858‡ |
| Dunnhumby | Recall@10 | all | 0.0468 | 0.0497 | 0.0494 | 0.0499 | 0.0499 | 0.0514 |
| Dunnhumby | Recall@10 | explore | = | 0.0498 | 0.0506 | **0.0529†** | 0.0520† | 0.0472‡ |
| Dunnhumby | nDCG@10 | all | 0.0397 | 0.0409 | 0.0409 | 0.0411 | 0.0376 | 0.0415 |
| Dunnhumby | nDCG@10 | explore | = | 0.0411 | 0.0385 | **0.0428†** | 0.0404† | 0.0378‡ |
| Dunnhumby | Recall@20 | all | 0.0701 | 0.0745 | 0.0744 | 0.0804 | 0.0711 | 0.0782 |
| Dunnhumby | Recall@20 | explore | = | 0.0746 | 0.0791 | **0.0813** | 0.0807† | 0.0739 |
| Dunnhumby | nDCG@20 | all | 0.0491 | 0.0505 | 0.0505 | 0.0532 | 0.0479 | 0.0524 |
| Dunnhumby | nDCG@20 | explore | = | 0.0506 | 0.0502 | **0.0546** | 0.0521† | 0.0484‡ |
| Instacart | Recall@10 | all | 0.0430 | 0.0425 | 0.0440 | 0.0454 | 0.0394 | 0.0414 |
| Instacart | Recall@10 | explore | = | **0.0494†** | 0.0455 | 0.0460 | 0.0469† | 0.0419 |
| Instacart | nDCG@10 | all | 0.0359 | 0.0346 | 0.0356 | 0.0388 | 0.0302 | 0.0335 |
| Instacart | nDCG@10 | explore | = | **0.0400†** | 0.0355 | 0.0387 | 0.0369† | 0.0341 |
| Instacart | Recall@20 | all | 0.0685 | 0.0649 | 0.0690 | 0.0733 | 0.0626 | 0.0635 |
| Instacart | Recall@20 | explore | = | 0.0755† | 0.0719 | 0.0741 | **0.0764†** | 0.0642 |
| Instacart | nDCG@20 | all | 0.0455 | 0.0431 | 0.0452 | 0.0499 | 0.0394 | 0.0424 |
| Instacart | nDCG@20 | explore | = | 0.0500† | 0.0462 | **0.0501** | 0.0484† | 0.0431 |

Table 4. Results of methods trained for finding novel items (Train-explore) compared against methods trained for finding all items (Train-all). Boldface indicates the best performance w.r.t. the NNBR task. Significant improvements and deteriorations of Train-explore over the corresponding Train-all baseline are marked with † and ‡, respectively (paired t-test, p < 0.05). "=" indicates a value identical to the Train-all row, since G-TopFreq does not depend on the training labels.
which means that well-tuned neighbor-based models may outperform complex neural-based methods on some datasets w.r.t. the NNBR task (Nagumov et al., 2019; Wang et al., 2020). The performance of G-TopFreq is obviously the worst on the Tafeng and Dunnhumby dataset, however, its performance is quite competitive on the Instacart dataset, which indicates that the popularity information is very important w.r.t. the NNBR task in the scenario with a high repeat ratio.
Second, the improvements of recent methods achieved in NBR task do not always generalize to the NNBR task. Recent proposed methods (TIFUKNN, CLEA, DNNTSP) have surpassed the previous classic baselines (i.e., G-TopFreq, Dream, Beacon) by a large margin in conventional NBR task (Nagumov et al., 2019; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020), whereas, the improvements are relatively small or even missing on some datasets when handling the NNBR task. This indicates that the recently proposed methods make limited progress on finding novel items for the user and that their improvements mainly come from the repeat recommendation, which is consistent with the findings in (Wang et al., 2020).
Third, the NNBR performance changes diversely for different methods when changing from Train-all to Train-explore. Training and tuning existing NBR methods specifically for the NNBR task lead to significant or mild improvements in most cases, since the models do not need to deal with the repetition task and they are more targeted on finding novel items that meet users' preferences. Surprisingly, we find that DNNTSP-explore's performance is much worse than DNNTSP-all on the Tafeng and Dunnhumby datasets. We suspect that the underlying reason for this deterioration is that the repeat items (labels) contain useful item-to-item correlation signals that can be captured by the DNNTSP.7 Since various NBR methods have distinct architectures, certain methods may gain more from tailored training for exploration, while others can grasp item-item correlations from repeat labels. Consequently, it is unwise to indiscriminately eliminate repeat labels during training.8
Footnote 7: Assume that one user’s historical basket sequence is \([[a,b,c],[c,d],[a,c]]\), and next basket is \([b,e]\). Even though \(b\) is a repeat item, the model might be able to learn the correlation between \(b\) and other items in this historical sequence, which might help with the model’s ability of finding novel items.
Footnote 8: This finding is important as it helps to avoid the potential issue of poor baselines. To ensure a fair comparison, NNBR practitioners should experiment with both strategies to train their baseline models and achieve best performances, instead of using an intuitive solution, i.e., removing repeat labels.
### Effectiveness of BTBR (RQ2)
In this experiment, we evaluate the overall NNBR task performance of BTBR with different masking strategies, i.e., item-level random masking (item-random), item-level select masking (item-select), basket-level all masking (basket-all) and basket-level explore masking (basket-explore). The results of the comparison with the best baseline performances are shown in Table 5.\({}^{9}\) Based on the results, we have several observations. First, BTBR with the basket-all masking strategy (i.e., the conventional next basket recommendation task) can significantly outperform the best baselines on the Tafeng and Instacart datasets, and achieve comparable performance on the Dunnhumby dataset. This result indicates that it may not be necessary to introduce basket representations, because only modeling item-to-item correlations is already effective for the NNBR task.
Footnote 9: To avoid confusion, we only mark the significant differences for comparison with the baselines in this table. More comparison results among different strategies can be found in the experimental analysis.
Second, there is no consistent best masking strategy across all datasets. On the Tafeng dataset, it is clear that basket-level masking outperforms item-level masking: basket-all and basket-explore respectively outperform and match the existing best performance w.r.t. each metric, whereas using item-level masking leads to significant deterioration. On the Dunnhumby and Instacart datasets, BTBR with item-level masking strategies significantly outperforms the best performance achieved by baselines by a large margin, and is superior to BTBR with basket-level masking strategies. The above results suggest that the sequential order of items or baskets on the Tafeng dataset might be stricter than the order on the Dunnhumby and Instacart datasets, so item-level masking, which fails to fully respect the sequential order, performs poorly on the Tafeng dataset.
Third, we can also observe that item-select masking achieves better performance than item-random masking w.r.t. all metrics across all datasets (paired t-test, \(p<0.05\)), with improvements ranging from 4.1% to 9.0%, which demonstrates the effectiveness of our specifically designed item-select masking strategy for the NNBR task. In a sequence with many recurring items, the conventional random masking strategy cannot ensure that no copy of a masked item remains at other positions of the sequence, so the model might learn to predict the masked item based on its remaining occurrences, i.e., item self-relations. In contrast, the proposed item-select masking strategy removes all occurrences of the selected item, which ensures that the masked items are novel w.r.t. the remaining masked sequence, so the model has to infer the masked novel items by learning their relations with other items.
Finally, it can also be seen that basket-explore masking, which is specifically targeted at the NNBR task, does not lead to any improvements on the Tafeng and Dunnhumby datasets, and results in a decrease in performance on the Instacart dataset, compared with basket-all masking. This result again verifies the findings in Section 5.3 and indicates that masking and training BTBR specifically for the NNBR task may be suboptimal, since the repeat item labels may also be helpful with item-to-item correlations modeling.
### Effectiveness of the item swapping strategy (RQ3)
In this section, we conduct experiments to verify the effectiveness of the swapping strategy, and the results are shown in Table 5. We find that adding a swapping strategy on top of item-random and item-select leads to a decrease in performance on the Tafeng dataset. At the same time, we note that adding a swapping strategy on top of item-random and item-select leads to better performance on the Dunnhumby and Instacart datasets (paired t-test, \(p<0.05\)). These results are not surprising, since the swapping strategy will not only enrich the item interactions within the basket, but also has a risk of introducing noise w.r.t. the temporal information. The sequential order is relatively strict on the TaFeng dataset (see Section 5.4), and the model can not benefit from the swap strategy.
\begin{table}
\begin{tabular}{l l l l l l l l l l} \hline \hline \multirow{2}{*}{**Parameter**} & \multirow{2}{*}{Metric} & \multirow{2}{*}{Best} & \multicolumn{3}{c}{Item level} & \multicolumn{3}{c}{Basket level} & \multicolumn{1}{c}{Joint} \\ \cline{3-10} & & & Random & Select & Random & Select & All & Explore & Pretrain-Finetune \\ & & & & swap & swap & & & & \\ \hline \multirow{6}{*}{**Parameter**} & Recall@10 & 0.1024 & 0.07361 & 0.0801 & 0.07171 & 0.07461 & 0.10561 & 0.1032 & **0.10571 **(3.2\%)** \\ & nDCG@10 & 0.0859 & 0.05971 & 0.0651 & 0.05871 & 0.06051 & 0.0869 & 0.0860 \\ & Recall@20 & 0.1257 & 0.09771 & 0.10361 & 0.08951 & 0.0911 & 0.12921 & 0.1271 & **0.13531 **(7.6\%)** \\ & nDCG@20 & 0.0952 & 0.06911 & 0.07391 & 0.06851 & 0.06881 & 0.09701 & 0.0957 & **0.09731 **(2.2\%)** \\ \cline{2-10} & Recall@10 & 0.0529 & 0.05481 & 0.05721 & 0.05531 & 0.05921 & 0.0524 & 0.0521 & **0.05931 **(12.1\%)** \\ & nDCG@10 & 0.0428 & 0.04391 & 0.0461 & 0.04431 & **0.04691 ** & 0.0427 & 0.0424 & 0.04681 (9.3\%)** \\ & Recall@20 & 0.0813 & 0.08471 & 0.0891 & 0.08671 & **0.09241 ** & 0.0815 & 0.0806 & 0.09151 (12.5\%)** \\ & nDCG@20 & 0.0546 & 0.05601 & 0.05871 & **0.0571 ** & **0.05981 ** & 0.0540 & 0.0532 & 0.05961 (9.2\%)** \\ \cline{2-10} & Recall@10 & 0.0494 & 0.05541 & 0.05831 & 0.05721 & **0.06001** & 0.05391 & 0.04551 & 0.05981 (2.1\%)** \\ \cline{2-10} & nDCG@10 & 0.0400 & 0.04451 & 0.04741 & 0.04581 & **0.04861 ** & 0.0426 & 0.03871 & 0.04781 (19.5\%)** \\ & Recall@20 & 0.0764 & 0.08871 & 0.09241 & 0.0898 & **0.09351 ** & 0.08461 ** & 0.07341 & 0.09341 (22.3\%)** \\ \cline{2-10} & nDCG@20 & 0.0501 & 0.05731 & 0.06071 & 0.05831 & **0.061
We further investigate the influence of hyper-parameters of the swapping strategy, i.e., swap ratio and swap hop. Figure 4 shows a heatmap w.r.t. Recall@10 on different datasets when swap ratio ranges within \([0.1,0.3,0.5,0.7,0.9]\) and swap hop ranges within \([1,3,5,7,9]\). We observe that training with both high swap ratio and swap hop (the upper-right of the heatmap) leads to poor performance on the Tafeng and Dunnhumby dataset. When it comes to the Instacart dataset, better performance is achieved via using a high swap-hop. The repeat ratio on Instacart dataset is high, which means that the user's interest is relatively stable and swapping across adjacent baskets will not help, so a higher swap hop is preferred to enrich item interactions within the basket on this dataset.
Given the above findings, there is a trade-off between enriching the item interactions within baskets and respecting the original temporal order information, so it is reasonable to search for the optimal swap hyper-parameters to get the highest performance on different datasets in practice.
### Effect of mask ratio and training dynamics (RQ4)
We investigate the effect of mask ratio and analyze how the performance changes as training goes on to further understand the properties of different masking strategies.
Figure 4. Performance heatmap with different swap hops and swap ratios.
Figure 5. Performance of BTBR with item-random strategy and item-select masking strategy with various mask ratios.
**Mask ratio.** The mask ratio \(\alpha\) used with item-level masking is a hyper-parameter that is worth discussing. Figure 5 shows the Recall@10 when the mask ratio ranges within \([0.1,0.3,0.5,0.7,0.9]\). We can observe that item-select outperforms item-random with the same mask ratio in most cases. We also see that the optimal mask ratio for item-random and item-select is 0.1 on the Tafeng dataset, whereas the optimal mask ratio is much higher (0.5, 0.7) on the Dunnhumby and Instacart datasets. We suspect that a higher mask ratio is preferred in the NNBR task when the dataset has long interaction records for the users.
**Training dynamics.** Figure 6 shows how the Recall@10 evolves as training goes when using different masking strategies. First, it is obvious that basket-level masking achieves its best performances very fast, and then drops much earlier than item-level masking. This is because the training labels of basket-level masking are static, which can easily lead to overfitting, while the training labels of item-level masking are dynamic, which alleviates overfitting. Second, compared to basket-all masking, basket-explore masking further aggravates the overfitting problem via removing the repeat items (labels), which might lead to a performance decrease, especially in the scenario with a high repeat ratio. Finally, the performance of item-random and item-select evolves similarly on the Tafeng dataset, since the repeat ratio on it is small. On the Dunnhumby and Instacart datasets, item-random masking results in overfitting earlier than the item-select masking, since the masked item might still exist in other positions of the masked sequence and the model will rely more on the repeat item prediction instead of inferring novel items, as the repetition prediction task is relatively easier (Krizhevsky et al., 2014).
### Effectiveness of joint masking (RQ5)
So far, we have built a comprehensive understanding of different masking strategies and realize that no single masking strategy is optimal in all cases, due to the diverse characteristics of datasets. Now, we conduct experiments to evaluate the effectiveness of joint masking (training), i.e., pre-train the model using item-select masking, then fine-tune the model using basket-all masking. The results are also shown in Table 5. We find that BTBR with joint masking consistently outperforms the best performance obtained by existing baselines across datasets; the improvements range from 1.3% to 7.6% on Tafeng dataset, from 9.2% to 12.5% on Dunnhumby dataset and from 19.5% to 22.4% on Instacart dataset. Joint masking does not lead to further improvements compared with a single optimal strategy, i.e., basket-all on the Tafeng dataset and item-select with swap on the Dunnhumby and Instacart datasets, in most cases.10 The joint masking strategy under the pretrain-then-finetune paradigm is still valuable due to its robustness w.r.t. NNBR task (i.e., it consistently achieves the best performance) on various datasets with different characteristics.
Footnote 10: The highest and second-highest scores in Table 5 are essentially at the same level and there is no significant difference between the joint training strategy and the single optimal strategy on each dataset in terms of performance.
Figure 6. The training progress w.r.t. Recall@10 of BTBR with different masking strategies on three datasets.
## 6. Conclusion
We have formulated the next novel basket recommendation task, i.e., the task of recommending novel items to users given historical interactions. The task has practical applications, and helps us to evaluate an NBR model's ability to find novel items for a given user. To understand the performance of existing NBR methods on the NNBR task, we have evaluated several NBR models with two training methods, i.e., Train-all and Train-explore. To address the NNBR task, we have proposed a bi-directional transformer basket recommendation model (BTBR), which uses a bi-directional transformer to directly model item-to-item correlations across different baskets. To train BTBR, we have designed five types of masking strategies and training objectives considering different levels: (i) item-level random masking, (ii) item-level select masking, (iii) basket-level all masking, (iv) basket-level explore masking, and (v) joint masking. To further improve the BTBR performance, we also proposed an item swapping strategy to enrich item interactions.
We have conducted extensive experiments on three datasets. Concerning existing NBR methods we have found that: (i) the performance on the NNBR task differs widely between existing NBR methods; (ii) the performance of existing methods on the NNBR task leaves considerable room for improvement, and the top performing methods on the NNBR task are different from the top performers on the NBR task; and (iii) training specifically for the NNBR task by removing repeat items from the ground truth labels does not lead to consistent improvements in performance. Concerning our newly proposed BTBR method, we have found that: (i) BTBR with a properly selected masking and swapping strategy can substantially improve the NNBR performance; (ii) there is no consistent best masking level for BTBR across all datasets; (iii) the proposed item-select masking strategy outperforms the conventional item-random masking strategy on the NNBR task; (iv) the item-basket swapping strategy can further improve NNBR performance; and (v) a joint masking strategy is robust on various datasets but does not lead to further improvements compared to a single level masking strategy.
A broader implication of our work is that blindly training specifically for the proposed recommendation task might lead to sub-optimal performance and it is necessary to consider various training objectives on diverse datasets. Another implication is that it is important to consider the differences between repetition behavior and exploration behavior when designing recommendation models for the grocery shopping scenario.
One limitation of this paper is that we only focus on the grocery shopping scenario. An obvious avenue for future work, therefore, is to extend the proposed item-select masking strategy to sequential item recommendation scenarios, and investigate if it can outperform the widely used item-random masking strategy w.r.t. finding novel items.
## Reproducibility
We share both our processed dataset and the source code used to produce the results in this paper at [https://github.com/liming-7/Mask-Swap-NNBR](https://github.com/liming-7/Mask-Swap-NNBR).
## Acknowledgments
This research was (partially) funded by the China Scholarship Council (grant #20190607154), the Hybrid Intelligence Center, a 10-year program funded by the Dutch Ministry of Education, Culture and Science through the Netherlands Organisation for Scientific Research (NWO), [https://hybrid-intelligence-centre.nl](https://hybrid-intelligence-centre.nl), and project LESSEN with project number NWA.1389.20.183 of the research program NWA ORC 2020/21, which is (partly) financed by the Dutch Research Council (NWO). All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors. |
2310.14229 | Bounds for the kernel of the $(κ, a)$-generalized Fourier transform | In this paper, we study the pointwise bounds for the kernel of the $(\kappa,
a)$-generalized Fourier transform with $\kappa\equiv0$, introduced by Ben
Sa\"id, Kobayashi and Orsted. We present explicit formulas for the case $a=4$,
which show that the kernels can exhibit polynomial growth. Subsequently, we
provide a polynomial bound for the even dimensional kernel for this transform,
focusing on the cases with finite order. Furthermore, by utilizing an
estimation for the Prabhakar function, it is found that the $(0,a)$-generalized
Fourier kernel is bounded by a constant when $a>1$ and $m\ge 2$, except within
an angular domain that diminishes as $a \rightarrow \infty$. As a byproduct, we
prove that the $(0, 2^{\ell}/n)$-generalized Fourier kernel is uniformly
bounded, when $m=2$ and $\ell, n\in \mathbb{N}$. | Hendrik De Bie, Pan Lian, Frederick Maes | 2023-10-22T08:43:38Z | http://arxiv.org/abs/2310.14229v2 | # Bounds for the kernel of the \((\kappa,a)\)-generalized Fourier transform
###### Abstract.
In this paper, we study the pointwise bounds for the kernel of the \((\kappa,a)\)-generalized Fourier transform with \(\kappa\equiv 0\), introduced by Ben Said, Kobayashi and Orsted. We present explicit formulas for the case \(a=4\), which show that the kernels can exhibit polynomial growth. Subsequently, we provide a polynomial bound for the even dimensional kernel for this transform, focusing on the cases with finite order. Furthermore, by utilizing an estimation for the Prabhakar function, it is found that the \((0,a)\)-generalized Fourier kernel is bounded by a constant when \(a>1\) and \(m\geq 2\), except within an angular domain that diminishes as \(a\to\infty\). As a byproduct, we prove that the \((0,2^{\ell}/n)\)-generalized Fourier kernel is uniformly bounded, when \(m=2\) and \(\ell,n\in\mathbb{N}\).
Key words and phrases: generalized Fourier transform, integral kernel, Laplace transform, Prabhakar function. 2020 Mathematics Subject Classification: Primary 42B10; Secondary 33C45, 33C52
## 1. Introduction
The \((\kappa,a)\)-generalized Fourier transform, denoted by \(\mathcal{F}_{\kappa,a}\), is a two-parameter family of integral transforms. It was introduced by Ben Said, Kobayashi and Orsted in [2] and further investigated in detail in [3]. This transform can be considered as an 'interpolation' between the Euclidean Fourier transform and Hankel transform, with additional deformation from Dunkl operators [11]. In particular, it reduces to the Dunkl transform [10], when \(a=2\).
One important question in the study of \(\mathcal{F}_{\kappa,a}\) is to determine the boundedness of its Schwartz distribution kernel, denoted by \(B_{\kappa,a}(x,y)\). In the work [15] of Gorbachev, Ivanov and Tikhonov, the following conjecture was proposed: when \(2\langle\kappa\rangle+m+a\geq 3\), \(B_{\kappa,a}\) is uniformly bounded by \(1\), that is
\[|B_{\kappa,a}(x,y)|\leq B_{\kappa,a}(0,y)=1,\qquad\qquad\forall\,x,y\in \mathbb{R}^{m}. \tag{1.1}\]
Here \(\langle\kappa\rangle\) is a constant arising from the Dunkl deformation. One dimensional kernels are well understood by now, see the recent paper [16, Theorem 1.1]. The bounds for higher dimensional kernels are much less understood, except for two significant cases, i.e. the Dunkl transform (\(a=2\)) and the Hankel transform (\(a=1\)). For \(a=2/n\) with \(n\in\mathbb{N}\) and \(\kappa\equiv 0\), inequality (1.1) was confirmed in [5, Theorem 9]. Recently, the three authors mentioned above presented a negative result in [16, Theorem 1.2] indicating that (1.1) is not valid for some parameter values. More precisely, they prove that
\[\|B_{\kappa,a}(x,y)\|_{\infty}>1,\qquad\qquad x,y\in\mathbb{R}^{m},\]
when \(m\geq 2\), \(a\in(1,2)\cup(2,\infty)\) and \(\langle\kappa\rangle\geq 0\). This is achieved by examining the kernel's behavior when the product of \(|x|\) and \(|y|\) is sufficiently small.
When \(\kappa\equiv 0\), we write \(\mathcal{F}_{a}\) for the resulting radially deformed Fourier transform, which can be expressed as the operator
\[\mathcal{F}_{a}=\exp\left[\frac{i\pi(m+a-2)}{2a}\right]\exp\left[\frac{i\pi}{2a}\left(|x|^{2-a}\Delta-|x|^{a}\right)\right],\]
where \(\Delta\) denotes the Laplace operator on \(\mathbb{R}^{m}\). Its integral kernel, denoted by \(K_{a}^{m}(x,y):=B_{0,a}(x,y)\), admits a series expansion in terms of Gegenbauer polynomials \(C_{k}^{(\lambda)}(\xi)\) and Bessel functions of the first kind, referred to below as (1.3) and Theorem 1.4, where \(\lambda=(m-2)/2\), \(z=|x||y|\) and \(\xi=\langle x,y\rangle/z=\cos\theta\).
A general closed expression for the kernel (1.3) in terms of elementary functions is not yet available. However, a closed expression in the Laplace domain was obtained in [5]. This was achieved by introducing an auxiliary variable \(t\) in the series (1.3): the variable \(2z^{a/2}/a\) of the Bessel functions is replaced by \(2z^{a/2}t/a\), and the resulting series is then Laplace transformed with respect to the new variable \(t\), see [5, Eq.(8)].
**Theorem 1.6**.: _For \(a>0\), \(m\geq 2\) and \(\operatorname{Re}s\) big enough, the kernel of \(\mathcal{F}_{a}\) in the Laplace domain is given by_
\[\mathcal{L}[K_{a}^{m}(x,y,t)](s)=2^{2\lambda/a}\Gamma\left(\frac{2\lambda+a}{ a}\right)\frac{1}{r}\left(\frac{1}{R}\right)^{2\lambda/a}\frac{1-u_{R}^{2}}{(1-2 \xi u_{R}+u_{R}^{2})^{\lambda+1}}, \tag{1.4}\]
_where \(u_{R}=\left(e^{\frac{-i\pi}{2}}z_{a}/R\right)^{2/a}\) with \(r=\sqrt{s^{2}+z_{a}^{2}}\), \(R=s+r\) and \(z_{a}=\frac{2}{a}z^{a/2}\), \(\lambda\), \(z\) and \(\xi\) are defined in Theorem 1.4._
_Remark 1.7_.: When \(a=2/n\) with \(n\in\mathbb{N}\), an explicit formula for the kernel was obtained by performing the inverse Laplace transform and subsequently setting the auxiliary variable \(t\) to \(1\). Moreover, the optimal uniform bound is found to be \(1\) for both even and odd dimensions, see [5, Theorem 9]. An alternative proof for the explicit expression was later given in [9]. The kernel is also closely related to the Dunkl kernel associated with the dihedral groups, see [5].
In this paper, we further investigate the behavior of these generalized Fourier kernels. In view of the orders of the Bessel functions in the kernel series (1.3), particular attention should be given to the case \(a=4\). Indeed, our explicit expressions for \(a=4\) and \(m>2\) in Section 2 show that the kernels are not uniformly bounded, but instead exhibit polynomial growth. This contrasts with the known results for \(a=2/n\) and differs from what many researchers previously expected.
Using its Laplace domain expression (1.4), we provide a polynomial bound for the kernel of the radially deformed Fourier transform of finite order, i.e. \(a=p/q\), with \(p,q\in\mathbb{N}\) and \(m=2n\), see Theorem 3.3 below. This allows us to introduce the function space on which \(\mathcal{F}_{a}\) is well defined. The two dimensional kernels exhibit some differences, as the reproducing kernels of spherical harmonics reduce to \(\cos k\theta\). Studies for the generalized Fourier transforms with polynomial bounded kernels can be found in e.g. [7, 8, 14].
An integral expression and a bound are given for the Prabhakar generalized Mittag-Leffler function in Section 4. Based on the obtained estimate, we show that the \((0,a)\)-generalized Fourier kernel is bounded by a constant when \(a>1\) and \(m\geq 2\), except within an angular domain that diminishes as \(a\to\infty\). This means that the generalized Fourier kernel (1.3) is uniformly bounded on an unbounded domain in \(\mathbb{R}^{m}\times\mathbb{R}^{m}\), but it may exhibit polynomial growth on the remaining region. For \(K_{4}^{2n}(x,y)\), this can be seen from the asymptotic expansion given in Remark 2.7. As a byproduct, we prove in Section 5 that the \((0,2^{\ell}/n)\)-generalized Fourier kernel is uniformly bounded by a constant when \(m=2\) and \(\ell,n\in\mathbb{N}\).
For the readers' convenience, we collect the bounds for the radially deformed Fourier kernel obtained in this paper in Table 1.
The remainder of this paper is organized as follows. In Section 2, we calculate the kernel with \(a=4\), which suggests the polynomial bounds for general cases. Section 3 is devoted to the polynomial bounds for the even dimensional \((0,p/q)\)-generalized
Fourier kernel. In Section 4, we present estimates based on the Prabhakar function. In the last section, we show that this estimate can be used to obtain the uniform boundedness of certain kernels.
## 2. The motivating case \(a=4\)
In this section, we investigate kernels with parameter \(a=4\). It turns out that even in these seemingly simple cases, the behavior of the kernels \(K_{4}^{2n}(x,y)\) differs significantly from the known cases.
We start with the kernel of dimension two, which means \(\lambda=0\). Using the well-known relation [29, Eq. (4.7.8)]
\[\lim_{\lambda\to 0}\lambda^{-1}C_{k}^{(\lambda)}(\xi)=(2/k)\cos k\theta, \qquad\xi=\cos\theta,\,k\geq 1,\]
the generalized Fourier kernel in (1.3) reduces to
\[\begin{split} K_{a}^{2}(z,\xi)=&\lim_{\lambda\to 0 }K_{a}^{m}(z,\xi)\\ =& J_{0}(z_{a})+2\sum_{k=1}^{\infty}e^{-\frac{i\pi k }{a}}J_{\frac{2k}{a}}(z_{a})\cos k\theta,\end{split} \tag{2.1}\]
where \(z_{a}=\frac{2}{a}z^{a/2}\) and \(\xi=\langle x,y\rangle/z=\cos\theta\) (see also [6]). Here and in the sequel, we use \(K_{a}^{m}(z,\xi)\) and \(K_{a}^{m}(x,y)\) with abuse of notation, as the meaning should be clear from the context.
Let \(\mathrm{erf}(w)\) be the error function defined by
\[\mathrm{erf}(w)=\frac{2}{\sqrt{\pi}}\int_{0}^{w}e^{-t^{2}}\,\mathrm{d}t, \tag{2.2}\]
and \(\mathrm{erfc}(w)=1-\mathrm{erf}(w)\) be the complementary error function. Note that
\[\frac{\mathrm{d}^{n+1}}{\mathrm{d}w^{n+1}}\mathrm{erf}(w)=(-1)^{n}\frac{2}{ \sqrt{\pi}}H_{n}(w)e^{-w^{2}},\qquad n\in\mathbb{N}_{0},\]
where \(H_{n}(w)\) is the ordinary Hermite polynomial, see [25]. This property will help to illustrate the polynomial growth of high dimensional kernels.
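For instance, the first two instances of this formula read
\[\frac{\mathrm{d}}{\mathrm{d}w}\mathrm{erf}(w)=\frac{2}{\sqrt{\pi}}e^{-w^{2}},\qquad\frac{\mathrm{d}^{2}}{\mathrm{d}w^{2}}\mathrm{erf}(w)=-\frac{4}{\sqrt{\pi}}we^{-w^{2}},\]
corresponding to \(H_{0}(w)=1\) and \(H_{1}(w)=2w\).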
**Theorem 2.1**.: _The two dimensional radially deformed Fourier kernel in (1.3) with parameter \(a=4\) is given by_
\[\begin{split} K_{4}^{2}(z,\xi)=&\,e^{-iz^{2}(\xi^{ 2}-1/2)}\mathrm{erfc}\left[-e^{-i\frac{\pi}{4}}z\xi\right]\\ =&\,(1-i)\left(\frac{2}{\pi}\right)^{1/2}e^{-\frac{ i}{2}z^{2}\cos 2\theta}\int_{-\infty}^{z\cos\theta}e^{it^{2}}\,\mathrm{d}t.\end{split} \tag{2.3}\]
_Furthermore, it satisfies_
\[\big{|}K_{4}^{2}(z,\xi)\big{|}\leq 1+2\sqrt{\frac{2}{\pi}},\qquad\forall\,(z,\xi) \in\mathbb{R}^{+}\times[-1,1]. \tag{2.4}\]
_In particular,_
\[\lim_{z\to+\infty}\big{|}K_{4}^{2}(z,1)\big{|}=2.\]
Proof.: (1) Recall the following generating function of modified Bessel functions (see [24, SS3.3.1]),
\[e^{w\cos\theta}\mathrm{erfc}\left[(2w)^{\frac{1}{2}}\cos\left(\frac{1}{2} \theta\right)\right]=I_{0}(w)+2\sum_{k=1}^{\infty}(-1)^{k}I_{\frac{k}{2}}(w) \cos\left(\frac{k}{2}\theta\right), \tag{2.5}\]
where \(I_{\alpha}(z)\) is the modified Bessel function satisfying \(I_{\alpha}(z)=i^{-\alpha}J_{\alpha}(iz)\) when the principal value of the phase \(-\pi\leq\arg z\leq\pi/2\). Replacing \(w=-iz_{4}=-iz^{2}/2\) and \(\theta\) by \(2\theta\) in (2.5), it yields
\[e^{-\frac{i}{2}z^{2}\cos 2\theta}\mathrm{erfc}\left[\frac{1-i}{\sqrt{2}}z \cos\theta\right]=J_{0}(z_{4})+2\sum_{k=1}^{\infty}(-1)^{k}e^{-\frac{i\pi k}{ 4}}J_{\frac{k}{2}}(z_{4})\cos\left(k\theta\right). \tag{2.6}\]
Substituting \(\cos\theta\) by \(-\cos\theta\) in (2.6), a new expression for the kernel (2.1) follows,
\[K_{4}^{2}(z,\xi)= J_{0}(z_{4})+2\sum_{k=1}^{\infty}e^{-\frac{i\pi k}{4}}J_{ \frac{k}{2}}(z_{4})\cos k\theta\] \[= e^{-\frac{i}{2}z^{2}\cos 2\theta}\mathrm{erfc}\left[\frac{i-1}{ \sqrt{2}}z\cos\theta\right].\]
It is easy to see that the error function is an odd function from its definition (2.2), i.e. \(\mathrm{erf}(-w)=-\mathrm{erf}(w)\). Using this property, the kernel \(K_{4}^{2}(z,\xi)\) now can be written as
\[K_{4}^{2}(z,\xi)= e^{-iz^{2}(\xi^{2}-1/2)}+e^{-iz^{2}(\xi^{2}-1/2)}\mathrm{erf} \left[\frac{1-i}{\sqrt{2}}z\xi\right]\] \[= e^{-iz^{2}(\xi^{2}-1/2)}+\left(\frac{2}{\pi}\right)^{\frac{1}{2 }}e^{-iz^{2}(\xi^{2}-1/2)}(1-i)\left[C(z\xi)+iS(z\xi)\right]\] \[= e^{-iz^{2}(\xi^{2}-1/2)}+\left(\frac{2}{\pi}\right)^{\frac{1}{2 }}e^{-iz^{2}(\xi^{2}-1/2)}(1-i)\int_{0}^{z\xi}e^{it^{2}}\,\mathrm{d}t\] \[= (1-i)\left(\frac{2}{\pi}\right)^{\frac{1}{2}}e^{-iz^{2}(\xi^{2}- 1/2)}\int_{-\infty}^{z\xi}e^{it^{2}}\,\mathrm{d}t.\]
Here \(S(u)\) and \(C(u)\) are the Fresnel integrals defined by (see e.g. [24, SS9.2.4])
\[S(u)=\int_{0}^{u}\sin(t^{2})\,\mathrm{d}t,\qquad C(u)=\int_{0}^{u}\cos(t^{2}) \,\mathrm{d}t.\]
In the second step, we have used the relation
\[C(u)+iS(u)=\sqrt{\frac{\pi}{2}}\cdot\frac{1+i}{2}\mathrm{erf}\left(\frac{1-i} {\sqrt{2}}u\right)\]
and in the last step
\[\lim_{u\to+\infty}S(u)=\lim_{u\to+\infty}C(u)=\frac{1}{2}\cdot\sqrt{\frac{\pi }{2}}. \tag{2.7}\]
(2) The bound of the kernel in (2.4) follows from the obtained expressions, using (2.7) and the fact that the Fresnel integrals \(S(x)\) and \(C(x)\) are bounded by \(1\). Thus here we only consider the last limit,
\[\begin{split}\lim_{z\to\infty}\big{|}K_{4}^{2}(z,1)\big{|}& =\lim_{z\to\infty}\left|(1-i)\left(\frac{2}{\pi}\right)^{\frac{1} {2}}e^{-iz^{2}(1-1/2)}\int_{-\infty}^{z}e^{it^{2}}\,\mathrm{d}t\right|\\ &=\frac{2}{\sqrt{\pi}}\cdot 2\left|\int_{0}^{\infty}e^{it^{2}}\, \mathrm{d}t\right|\\ &=\frac{2}{\sqrt{\pi}}\cdot 2\left|\lim_{u\to+\infty}\left(C(u)+iS(u) \right)\right|\\ &=2.\end{split}\]
Here we have used the property (2.7) again.
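As a quick numerical sanity check (ours, not part of the paper; it assumes NumPy and SciPy), the closed form (2.3) can be evaluated via the Fresnel integrals. The sketch below verifies that \(|K_{4}^{2}(z,1)|\) stays below \(1+2\sqrt{2/\pi}\approx 2.60\) and tends to \(2\) for large \(z\).

```python
import numpy as np
from scipy.special import fresnel

def fresnel_unnormalized(u):
    # SciPy uses S(x) = int_0^x sin(pi t^2/2) dt; rescale to S(u) = int_0^u sin(t^2) dt.
    s, c = fresnel(np.sqrt(2.0 / np.pi) * u)
    return np.sqrt(np.pi / 2.0) * s, np.sqrt(np.pi / 2.0) * c

def K42(z, xi):
    s, c = fresnel_unnormalized(z * xi)
    tail = 0.5 * np.sqrt(np.pi / 2.0) * (1.0 + 1.0j)   # int_{-inf}^0 e^{it^2} dt
    integral = tail + (c + 1.0j * s)                   # int_{-inf}^{z*xi} e^{it^2} dt
    return (1.0 - 1.0j) * np.sqrt(2.0 / np.pi) * np.exp(-1.0j * z**2 * (xi**2 - 0.5)) * integral

z = np.linspace(0.0, 200.0, 20001)
vals = np.abs(K42(z, 1.0))
print(vals.max(), 1.0 + 2.0 * np.sqrt(2.0 / np.pi))  # maximum stays below the bound (2.4)
print(vals[-1])                                      # |K_4^2(z,1)| approaches 2 as z grows
```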
_Remark 2.2_.: Unlike for the Euclidean Fourier kernel \(e^{-i\langle x,y\rangle}\), it holds that
\[\big{\|}K_{4}^{2}(x,y)\big{\|}_{\infty}>\big{|}K_{4}^{2}(0,y)\big{|}=1,\qquad x,y\in\mathbb{R}^{2}.\]
_Remark 2.3_.: In Cartesian coordinates, the expression for the kernel \(K_{4}^{2}\) is
\[K_{4}^{2}(x,y)=(1-i)\left(\frac{2}{\pi}\right)^{1/2}e^{-i\left(\langle x,y \rangle^{2}-\frac{|x|^{2}|y|^{2}}{2}\right)}\int_{-\infty}^{(x,y)}e^{it^{2}}\, \mathrm{d}t.\]
Direct calculations show that
\[\left\{\begin{array}{l}|x|^{-2}\Delta_{x}K_{4}^{2}(x,y)=-|y|^{4}K_{4}^{2}(x, y),\\ |y|^{-2}\Delta_{y}K_{4}^{2}(x,y)=-|x|^{4}K_{4}^{2}(x,y),\end{array}\right.\]
hold as expected.
The kernel of dimension four exhibits polynomial growth.
**Theorem 2.4**.: _When \(a=4\) and \(m=4\), the kernel of \(\mathcal{F}_{4}\) is given by_
\[\begin{split} K_{4}^{4}(z,\xi)=&\,e^{-iz^{2}(\xi^{2}-1/2) }\left(-2iz\xi\int_{-\infty}^{z\xi}e^{it^{2}}\,\mathrm{d}t+e^{iz^{2}\xi^{2}} \right)\\ =&\,e^{iz^{2}/2}-2iz\cos\theta\,e^{-\frac{i}{2}z^{2} \cos 2\theta}\int_{-\infty}^{z\cos\theta}e^{it^{2}}\,\mathrm{d}t.\end{split} \tag{2.8}\]
_Furthermore, it is seen that_
\[K_{4}^{4}(z,1)=\mathcal{O}(z),\qquad z\to+\infty.\]
Proof.: The expression (2.8) follows from the recursive relation between \(K_{a}^{m}\) and \(K_{a}^{m+2}\) (see [6, Lemma 1]). That is
\[K_{a}^{m+2}(z,\xi)=e^{i\frac{\pi}{a}}a^{\frac{2}{a}}\frac{\Gamma(\frac{2 \lambda+a+2}{a})}{2(\lambda+1)\Gamma(\frac{2\lambda+a}{a})}z^{-1}\partial_{ \xi}K_{a}^{m}(z,\xi), \tag{2.9}\]
with \(\lambda=(m-2)/2\), and the fact
\[\partial_{\xi}\left(\int_{-\infty}^{z\xi}e^{it^{2}}\,\mathrm{d}t\right)=ze^{ iz^{2}\xi^{2}}.\]
The last limit relation follows from (2.8).
Using the recursive relation (2.9) again, it yields that
\[K_{4}^{6}(z,1)=\mathcal{O}(z^{2}),\qquad z\to+\infty.\]
Moreover, we have,
**Theorem 2.5**.: _When the dimension \(m=2n\) is even, and \(a=4\), the kernel of \(\mathcal{F}_{4}^{m}\) is given by_
\[K_{4}^{m}(x,y)= c_{n}e^{\frac{i}{2}z^{2}\sin^{2}\theta}D_{-n}[(i-1)z\cos \theta],\]
_where \(c_{n}=2^{\frac{n}{2}}\Gamma\left((n+1)/2\right)/\sqrt{\pi}\) and \(D_{\nu}(z)\) is the parabolic cylinder function [25, SS12.1]. Furthermore, there exists a constant \(C>0\) such that_
\[|K_{4}^{m}(x,y)|\leq C\left(1+|x||y|\right)^{\frac{m-2}{2}},\qquad x,y\in \mathbb{R}^{m}.\]
_Here the growth order \((m-2)/2\) is optimal._
Proof.: The compact expression follows from the expression of kernel \(K_{4}^{2}\) given in (2.3), the recursive relation (2.9) and the following formula [4, SS1.5.1 (17)]
\[\frac{\mathrm{d}^{n}}{\mathrm{d}w^{n}}\left(e^{a^{2}w^{2}}\mathrm{ erfc}(aw)\right)=\frac{2^{(n+1)/2}}{\sqrt{\pi}}n!(-a)^{n}e^{a^{2}w^{2}/2}D_{-n-1}( \sqrt{2}aw).\]
The optimal bounds can be seen from their expressions in terms of the Fresnel integrals.
_Remark 2.6_.: The notation \(D_{\nu}(z)\) for parabolic cylinder functions is due to Whittaker. These functions are also known as the Weber parabolic cylinder functions \(U(\mu,z)\), see [25, SS12.1]. They are related by \(D_{-\nu-\frac{1}{2}}(z)=U(\nu,z)\).
_Remark 2.7_.: The generalized Fourier kernel's asymptotic expansion for large variables can be obtained by setting \(\nu=n-1/2\) in the following (see [25, SS12.9]), which shows the polynomial growth of the kernel. Let \(\delta\) be an arbitrary small positive constant, we have as \(z\to\infty\),
\[D_{-\nu-\frac{1}{2}}(z)=U(\nu,z)\sim e^{-\frac{1}{4}z^{2}}z^{-\nu-\frac{1}{2}}\sum_{s=0}^{\infty}(-1)^{s} \frac{\left(\frac{1}{2}+\nu\right)_{2s}}{s!(2z^{2})^{s}}\] \[\pm i\frac{\sqrt{2\pi}}{\Gamma\left(\frac{1}{2}+\nu\right)}e^{ \mp i\pi\nu}e^{\frac{1}{4}z^{2}}z^{\nu-\frac{1}{2}}\sum_{s=0}^{\infty}\frac{ \left(\frac{1}{2}-\nu\right)_{2s}}{s!(2z^{2})^{s}},\]
when \(\frac{1}{4}\pi+\delta\leq\pm\arg z\leq\frac{5}{4}\pi-\delta\) and
\[D_{-\nu-\frac{1}{2}}(z)\sim e^{-\frac{1}{4}z^{2}}z^{-\nu-\frac{1}{2}}\sum_{s=0}^{\infty}(-1)^{s} \frac{\left(\frac{1}{2}+\nu\right)_{2s}}{s!(2z^{2})^{s}},\]
when \(|\arg z|\leq\frac{3}{4}\pi-\delta\left(<\frac{3}{4}\pi\right)\).
At the end of this section, we give an integral expression for \(K_{6}^{2}(x,y)\). This method can also be applied to construct other kernels in dimension \(2\) with even integer parameters. By combining with the subsequent Lemma 5.3 and the recursive relation (2.9), it is in principle possible to give explicit expressions for all even dimensional kernels with rational parameters \(a\). Another way to derive these integral expressions is to use the Laplace domain expression (1.4).
**Theorem 2.8**.: _The two dimensional radially deformed Fourier kernel with \(a=6\) is given by_
\[K_{6}^{2}(z,\cos\theta)= \exp(-iz_{6}\cos 3\theta)+2e^{-\frac{i\pi}{6}}J_{\frac{1}{3}}(z_{6})+2e^{-\frac{i\pi}{3}}J_{\frac{2}{3}}(z_{6})\] \[+\sum_{k=1}^{2}e^{ik(\theta-\frac{\pi}{6})}\left(f_{1}\left(\frac{k}{3},z_{6},3\theta-\frac{\pi}{2}\right)+if_{2}\left(\frac{k}{3},z_{6},3\theta-\frac{\pi}{2}\right)\right)\] \[+\sum_{k=1}^{2}e^{-ik(\theta+\frac{\pi}{6})}\left(f_{1}\left(\frac{k}{3},z_{6},3\theta+\frac{\pi}{2}\right)-if_{2}\left(\frac{k}{3},z_{6},3\theta+\frac{\pi}{2}\right)\right),\]
_where \(z_{6}=z^{3}/3\) and the functions \(f_{1}\) and \(f_{2}\) are defined by_
\[f_{1}(\nu,z,\theta)=\frac{1}{4}\mathrm{cosec}\,\theta\int_{0}^{z }\sin[(z-t)\sin\theta][\cos 2\theta J_{\nu}(t)+2\cos\theta J_{\nu}^{\prime}(t)-J_{ \nu+2}(t)]\,\mathrm{d}t,\] \[f_{2}(\nu,z,\theta)=\frac{1}{2}\int_{0}^{z}\left(\frac{\nu}{t}+ \cos\theta\right)\sin[(z-t)\sin\theta]J_{\nu}(t)\,\mathrm{d}t.\]
Proof.: We rewrite the kernel series (2.1) as
\[K_{6}^{2}(z,\xi) = J_{0}(z_{6})+2\sum_{k=1}^{\infty}e^{-\frac{i\pi k}{6}}J_{\frac{k}{3}}(z_{6})\cos k\theta\] \[= J_{0}(z_{6})+2\sum_{k=1}^{\infty}e^{-\frac{i\pi k}{2}}J_{k}(z_{6})\cos 3k\theta\] \[+2e^{-\frac{i\pi}{6}}\left[J_{\frac{1}{3}}(z_{6})+\sum_{k=1}^{\infty}e^{-\frac{i\pi k}{2}}J_{k+\frac{1}{3}}(z_{6})\cos(3k+1)\theta\right]\] \[+2e^{-\frac{i\pi}{3}}\left[J_{\frac{2}{3}}(z_{6})+\sum_{k=1}^{\infty}e^{-\frac{i\pi k}{2}}J_{k+\frac{2}{3}}(z_{6})\cos(3k+2)\theta\right]\] \[=: I+II+III.\]
For the first one \(I=\exp(-iz_{6}\cos 3\theta)\) we refer to [25, SS10.35.2]. In the following, we only compute \(II\),
\[2\sum_{k=1}^{\infty}e^{-\frac{i\pi k}{2}}J_{k+\frac{1}{3}}(z_{6})\cos(3k+1)\theta= \,e^{i\theta}\sum_{k=1}^{\infty}J_{k+\frac{1}{3}}(z_{6})e^{3ik(\theta-\frac{\pi}{6})}\] \[+e^{-i\theta}\sum_{k=1}^{\infty}J_{k+\frac{1}{3}}(z_{6})e^{-3ik(\theta+\frac{\pi}{6})}.\]
Now, using the formulas (see [27, SS5.7.10]) when \(\operatorname{Re}\nu>0\),
\[\sum_{k=1}^{\infty}J_{k+\nu}(z)\sin k\theta=\frac{1}{2}\int_{0}^{z}\left( \frac{\nu}{t}+\cos\theta\right)\sin[(z-t)\sin\theta]J_{\nu}(t)\,\mathrm{d}t,\]
and
\[\sum_{k=1}^{\infty}J_{k+\nu}(z)\cos k\theta= \frac{1}{4}\mathrm{cosec}\,\theta\int_{0}^{z}\sin[(z-t)\sin\theta]\] \[\times[\cos 2\theta J_{\nu}(t)+2\cos\theta J_{\nu}^{\prime}(t)-J_{ \nu+2}(t)]\,\mathrm{d}t,\]
we obtain the explicit expression.
_Remark 2.9_.: Using the relation \(J^{\prime}_{\nu}(x)=\frac{\nu}{x}J_{\nu}(x)-J_{\nu+1}(x)\), it is seen that \(f_{1}\) and \(f_{2}\) can be bounded by a polynomial in \(z\). This suggests a polynomial bound for the kernels with general parameters.
## 3. Bounds for the \((0,p/q)\)-generalized Fourier kernel
In this section, we provide a bound for the even dimensional \((0,p/q)\)-generalized Fourier kernel based on its Laplace domain expression (1.4). These transforms with rational parameters are of particular interest in harmonic analysis (see [16]), as they are the only cases in this family with finite order. Our proof relies on the following estimate, which is a direct generalization of Lemma 2 in [5]. We omit its proof for brevity.
**Lemma 3.1**.: _Let \(a_{j}\in\mathbb{R}\) with \(j=1,\ldots,n\) and let \(\alpha=(\alpha_{1},\alpha_{2},\ldots,\alpha_{n})\in\mathbb{N}^{n}\) be an index vector with length \(|\alpha|=\alpha_{1}+\alpha_{2}+\cdots+\alpha_{n}\). Consider the function_
\[F_{n,\alpha}(s)=\frac{1}{\prod_{j=1}^{n}(s+ia_{j})^{\alpha_{j}}},\]
_whose inverse Laplace transform is denoted by_
\[f_{n,\alpha}(t)=\mathcal{L}^{-1}[F_{n,\alpha}(s)](t).\]
_Then we have the following estimate_
\[|f_{n,\alpha}(t)|\leq\frac{t^{|\alpha|-1}}{\Gamma(|\alpha|)},\qquad\forall t \in]0,\infty[.\]
_Remark 3.2_.: The function \(f_{n,\alpha}(t)\) can be represented as a \(\Phi_{2}\)-Horn confluent hypergeometric function, see [9, 28]. Alternatively, the uniform bound for \(t\in[0,1]\) can be derived from its integral expression over a simplex.
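To see the mechanism behind Lemma 3.1 (the following is a short sketch, not the omitted proof itself), consider first \(n=1\): since \(\mathcal{L}^{-1}[(s+ia_{1})^{-\alpha_{1}}](t)=t^{\alpha_{1}-1}e^{-ia_{1}t}/\Gamma(\alpha_{1})\), one has \(|f_{1,\alpha}(t)|=t^{\alpha_{1}-1}/\Gamma(\alpha_{1})\), so the estimate holds with equality. For general \(n\), \(f_{n,\alpha}\) is an iterated convolution of such factors, and the Beta integral
\[\int_{0}^{t}\frac{u^{\mu-1}}{\Gamma(\mu)}\frac{(t-u)^{\nu-1}}{\Gamma(\nu)}\,\mathrm{d}u=\frac{t^{\mu+\nu-1}}{\Gamma(\mu+\nu)}\]
shows that the moduli convolve exactly to \(t^{|\alpha|-1}/\Gamma(|\alpha|)\), which dominates \(|f_{n,\alpha}(t)|\).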
The main result of this section is the following.
**Theorem 3.3**.: _Let \(a=p/q\) with \(p,q\in\mathbb{N}\), and \(m\geq 2\) be even. Then there exists a constant \(C>0\) such that_
\[\left|K_{p/q}^{m}(x,y)\right|\leq C(1+|x||y|)^{\frac{3mp}{2}-m+2},\]
_for all \(x,y\in\mathbb{R}^{m}\)._
Proof.: We will rewrite the compact formula (1.4) for the kernel in the Laplace domain as a linear combination of several terms. Afterwards, we estimate each term in the time domain.
Recall that \(\xi=\cos\theta\). Then the denominator of \(\mathcal{L}[K_{a}^{m}(x,y,t)](s)\) in (1.4) can be factored as
\[1-2\xi u_{R}+u_{R}^{2}=\left(u_{R}-e^{i\theta}\right)\left(u_{R}-e^{-i\theta} \right),\]
where \(u_{R}=\left(e^{\frac{-i\pi}{2}}z_{a}/R\right)^{2/a}\) with \(R=s+r\), \(r=\sqrt{s^{2}+z_{a}^{2}}\) and \(z_{a}=\frac{2}{a}z^{a/2}\).
Using the elementary identity
\[x^{n}-a^{n}=(x-a)\left(x^{n-1}+ax^{n-2}+a^{2}x^{n-3}+\cdots+a^{n-1}\right),\]
it is seen that
\[u_{R}-e^{i\theta}=\frac{u_{R}^{p}-e^{ip\theta}}{u_{R}^{p-1}+e^{i\theta}u_{R}^{ p-2}+\cdots+e^{i\theta(p-1)}}.\]
It yields
\[\begin{split} 1-2\xi u_{R}+u_{R}^{2}&=\frac{\left(u_{R}^{p}-e^{ ip\theta}\right)\left(u_{R}^{p}-e^{-ip\theta}\right)}{\left(\sum_{k=0}^{p-1}u_{R}^{k}e^{ i(p-1-k)\theta}\right)\left(\sum_{k=0}^{p-1}u_{R}^{k}e^{-i(p-1-k)\theta}\right)}\\ &=\frac{\left(u_{R}^{p}-e^{ip\theta}\right)\left(u_{R}^{p}-e^{- ip\theta}\right)}{\sum_{k=0}^{2(p-1)}\left(\sum_{\ell+j=k}e^{i(\ell-j)\theta} \right)u_{R}^{k}}\\ &=\frac{\left(u_{R}^{p}-e^{ip\theta}\right)\left(u_{R}^{p}-e^{- ip\theta}\right)}{\sum_{k=0}^{2(p-1)}\frac{\sin((1+k)\theta)}{\sin\theta}u_{R}^{k}}. \end{split} \tag{3.1}\]
When \(\theta=0\) or \(\pi\), the denominator in (3.1) is understood in the limit sense. Recalling \(z_{a}=\frac{2}{a}z^{a/2}\), the numerator on the right-hand side of (3.1) can be rewritten as
\[\begin{split}\left(u_{R}^{p}-e^{ip\theta}\right)\left(u_{R}^{p}-e^{-ip\theta}\right)&=1-2\cos(p\theta)u_{R}^{p}+u_{R}^{2p}\\ &=\frac{1}{R^{2q}}\left(R^{2q}-2(-1)^{q}\cos(p\theta)z_{p/q}^{2q}+\left(\frac{z_{p/q}^{2}}{R}\right)^{2q}\right)\\ &=\frac{1}{R^{2q}}\left((r+s)^{2q}-2(-1)^{q}\cos(p\theta)z_{p/q}^{2q}+(r-s)^{2q}\right).\end{split} \tag{3.2}\]
A straightforward calculation shows that the term within the bracket on the right hand side of (3.2) is a polynomial in \(s\) of degree \(2q\). Furthermore, its \(2q\) roots are symmetrically distributed on a circle with radius \(z_{p/q}\), see [5, Lemma 1]. Explicitly, we have:
\[\left(u_{R}^{p}-e^{ip\theta}\right)\left(u_{R}^{p}-e^{-ip\theta}\right)=\frac {2^{2q}}{R^{2q}}\prod_{\ell=0}^{2q-1}\left(s+iz_{p/q}\cos\left(\frac{p\theta+2 \pi\ell}{2q}\right)\right).\]
Taking into account the aforementioned calculations, the kernel \(K_{p/q}^{m}(x,y,t)\) in the Laplace domain can now be expressed as follows:
\[\begin{split}\mathcal{L}\left[K_{p/q}^{m}(x,y,t)\right](s)\propto&\,\frac{1}{r}\left(\frac{1}{R}\right)^{\frac{2\lambda q}{p}}R^{2q(\lambda+1)}\left(\sum_{k=0}^{2(p-1)}\frac{\sin((1+k)\theta)}{\sin\theta}u_{R}^{k}\right)^{\lambda+1}\\ &\times\frac{(1-u_{R}^{2})}{\prod_{\ell=0}^{2q-1}\left(s+iz_{p/q}\cos\left(\frac{p\theta+2\pi\ell}{2q}\right)\right)^{\lambda+1}},\end{split} \tag{3.3}\]
where the notation \(\propto\) means that the equality holds up to a constant factor. Note that when \(m\) is even, \(\lambda\) is an integer. Thus, the right hand side of (3.3) is a linear combination of terms of the form
\[\frac{1}{r}\left(\frac{1}{R}\right)^{\gamma}\frac{z_{p/q}^{\tilde{\gamma}}}{ \prod_{\ell=0}^{2q-1}\left(s+iz_{p/q}\cos\left(\frac{p\theta+2\pi\ell}{2q} \right)\right)^{\lambda+1}}, \tag{3.4}\]
where \(2\lambda q/p-2q(\lambda+1)\leq\gamma\leq 2q(\lambda+1)-2q\lambda/p\) and \(0\leq\tilde{\gamma}\leq 4q(\lambda-\lambda/p+1).\) It is seen that those linear combination coefficients \(c(\theta)\) are bounded by a constant.
In the remainder, we only consider the most complicated term, i.e. the term with the smallest value of \(\gamma\). Other terms can be estimated similarly. For this case,
we expand the power of \(R=r+s\) and subsequently split it into two parts based on the powers of \(r\), distinguishing between even and odd powers,
\[\begin{split}&\frac{1}{r}\left(\frac{1}{R}\right)^{\frac{2\lambda q }{p}}\frac{R^{2q(\lambda+1)}}{\prod_{\ell=0}^{2q-1}\left(s+iz_{p/q}\cos\left( \frac{p\theta+2\pi\ell}{2q}\right)\right)^{\lambda+1}}\\ &=\frac{1}{r}\left(\frac{1}{R}\right)^{\frac{2\lambda q}{p}}\frac {\sum_{k=0}^{2q(\lambda+1)}\binom{2q(\lambda+1)}{k}r^{k}s^{2q(\lambda+1)-k}} {\prod_{\ell=0}^{2q-1}\left(s+iz_{p/q}\cos\left(\frac{p\theta+2\pi\ell}{2q} \right)\right)^{\lambda+1}}\\ &=:I+II\end{split} \tag{3.5}\]
with
\[I=\frac{1}{r}\left(\frac{1}{R}\right)^{\frac{2\lambda q}{p}}\frac{\sum_{n=0}^{ q(\lambda+1)}\binom{2q(\lambda+1)}{2n}r^{2n}s^{2q(\lambda+1)-2n}}{\prod_{\ell=0}^{2q-1 }\left(s+iz_{p/q}\cos\left(\frac{p\theta+2\pi\ell}{2q}\right)\right)^{\lambda +1}}\]
and
\[II=\left(\frac{1}{R}\right)^{\frac{2\lambda q}{p}}\frac{\sum_{n=0}^{q(\lambda+ 1)-1}\binom{2q(\lambda+1)}{2n+1}r^{2n}s^{2q(\lambda+1)-2n-1}}{\prod_{\ell=0}^{2 q-1}\left(s+iz_{p/q}\cos\left(\frac{p\theta+2\pi\ell}{2q}\right)\right)^{ \lambda+1}}.\]
It is seen that both the numerators of the final factor in \(I\) and \(II\) are polynomials in \(s\). Now, we rewrite the rational factors on the right-hand side of \(I\) and \(II\) as follows,
\[\frac{r^{2n}s^{2q(\lambda+1)-2n}}{\prod_{\ell=0}^{2q-1}\left(s+iz _{p/q}\cos\left(\frac{p\theta+2\pi\ell}{2q}\right)\right)^{\lambda+1}} = \frac{\left(s^{2}+z_{p/q}^{2}\right)^{n}s^{2q(\lambda+1)-2n}}{ \prod_{\ell=0}^{2q-1}\left(s+iz_{p/q}\cos\left(\frac{p\theta+2\pi\ell}{2q} \right)\right)^{\lambda+1}}\] \[= \frac{\prod_{\ell=0}^{2q-1}\left[\left(s+iz_{p/q}\cos\left(\frac{ p\theta+2\pi\ell}{2q}\right)\right)+c_{\ell}\right]^{\lambda+1}}{\prod_{\ell=0}^{2q-1 }\left(s+iz_{p/q}\cos\left(\frac{p\theta+2\pi\ell}{2q}\right)\right)^{\lambda +1}}, \tag{3.6}\]
where \(0\leq n\leq q(\lambda+1)\) and each \(c_{\ell}\) satisfies
\[|c_{\ell}|\leq 2z_{p/q},\quad\ell=0,1,\ldots,2q-1.\]
Using the binomial theorem, it is seen that (3.6) is indeed a finite linear combination of terms in the form
\[\frac{d(z)}{\prod_{\ell=\ell_{0}}^{\ell_{1}}\left(s+iz_{p/q}\cos\left(\frac{p \theta+2\pi\ell}{2q}\right)\right)^{k_{\ell}}},\]
where \(0\leq\ell_{0}\leq\ell_{1}\leq 2q-1\) and \(0\leq k_{\ell}\leq\lambda+1\). The numerator \(d(z)\) is a product of at most \(2q(\lambda+1)\) factors \(c_{\ell}\) (independent of \(s\)) and hence satisfies
\[d(z)\leq C\left(1+z_{p/q}^{2q(\lambda+1)}\right)\leq C(1+z)^{p(\lambda+1)},\]
where \(C\) is a constant. Note that this decomposition is not the usual partial fraction decomposition of a rational function. Estimating the constants that arise in the partial fraction decomposition would be inconvenient, as they may not be controlled by polynomials of \(z=|x||y|\). Collecting all we obtained, \(I\) is a finite linear
combination of terms of the form
\[\frac{1}{r}\left(\frac{1}{R}\right)^{\frac{2\lambda q}{p}}\frac{d(z)}{\prod_{ \ell=\ell_{0}}^{\ell_{1}}\left(s+iz_{p/q}\cos\left(\frac{p\theta+2\pi\ell}{2q} \right)\right)^{k_{\ell}}}, \tag{3.7}\]
while \(II\) is a linear combination of terms of the form
\[\left(\frac{1}{R}\right)^{\frac{2\lambda q}{p}}\frac{d(z)}{\prod_{\ell=\ell_{0 }}^{\ell_{1}}\left(s+iz_{p/q}\cos\left(\frac{p\theta+2\pi\ell}{2q}\right) \right)^{k_{\ell}}}. \tag{3.8}\]
Note that the factor \(1/\left(\prod_{\ell=\ell_{0}}^{\ell_{1}}\left(s+iz_{p/q}\cos\left(\frac{p \theta+2\pi\ell}{2q}\right)\right)^{k_{\ell}}\right)\) has been studied in Lemma 3.1.
Now, we estimate the part of the kernel corresponding to each factor in (3.7) for \(I\). Similar calculations can be done for \(II\). By the inverse Laplace transform formula ([28, p.47 (15)])
\[\mathcal{L}^{-1}\left[\left(\frac{1}{R}\right)^{\nu}\right](t)=\frac{\nu}{t} \frac{J_{\nu}(at)}{a^{\nu}},\qquad\operatorname{Re}\nu>0,\,\operatorname{Re}s >|\mathrm{Im}\,a|,\]
and ([28, p.27 (3)])
\[\mathcal{L}^{-1}\left[\frac{1}{r}\right](t)=J_{0}(at),\qquad\operatorname{Re }s>|\mathrm{Im}\,a|,\]
with \(r=\sqrt{s^{2}+a^{2}}\) and \(R=s+r\), as well as the convolution theorem for Laplace transform, we have for \(\lambda\in\mathbb{N}\) (\(m\neq 2\)),
\[\mathcal{L}^{-1}\left[\frac{1}{r}\left(\frac{1}{R}\right)^{\frac{2\lambda q}{ p}}\right](t)=\frac{2\lambda q}{p}\int_{0}^{t}J_{0}(au)\frac{J_{\frac{2\lambda q}{p}} (a(t-u))}{a^{\frac{2\lambda q}{p}}(t-u)}\,\mathrm{d}u.\]
Using the convolution theorem again and evaluating at the point \(t=1\), the corresponding part of the kernel \(K_{p/q}^{m}\) in the time domain is given by
\[\begin{split}&\mathcal{L}^{-1}\left[\frac{1}{r}\left(\frac{1}{R}\right)^{\frac{2\lambda q}{p}}F_{n,\alpha}(s)\right](1)\\ &=\frac{2\lambda q}{p}\int_{0}^{1}\int_{0}^{\tau}J_{0}\left(z_{p/q}u\right)\frac{J_{\frac{2\lambda q}{p}}\left(z_{p/q}(\tau-u)\right)}{z_{p/q}^{\frac{2\lambda q}{p}}(\tau-u)}\mathrm{d}u\,f_{n,\alpha}(1-\tau)\,\mathrm{d}\tau,\end{split} \tag{3.9}\]
where \(F_{n,\alpha}\) is the function defined in Lemma 3.1. Thus, the part of the kernel corresponding to \(I\) is indeed a finite linear combination of terms of the form (3.9) multiplied with some different \(d(z)\).
We claim that for \(\lambda\neq 0\), i.e. \(m>2\), the right-hand side of (3.9) is bounded by a constant independent of \(x\) and \(y\). Indeed, it is smaller than
\[\begin{split}& C\int_{0}^{1}\int_{0}^{\tau}\left|J_{0}(z_{p/q}u)\frac{J_{\frac{2\lambda q}{p}}(z_{p/q}(\tau-u))}{z_{p/q}^{\frac{2\lambda q}{p}}(\tau-u)}\right|\mathrm{d}u\left|f_{n,\alpha}(1-\tau)\right|\mathrm{d}\tau\\ \leq&\, C\int_{0}^{1}\int_{0}^{\tau}\frac{1}{(\tau-u)^{1-\frac{2\lambda q}{p}}}\mathrm{d}u\left|f_{n,\alpha}(1-\tau)\right|\mathrm{d}\tau\\ \leq&\, C\int_{0}^{1}\int_{0}^{\tau}\frac{1}{(\tau-u)^{1-\frac{2\lambda q}{p}}}\mathrm{d}u\,\mathrm{d}\tau\\ =&\, C\int_{0}^{1}\tau^{\frac{2\lambda q}{p}}\int_{0}^{1}\frac{1}{(1-w)^{1-\frac{2\lambda q}{p}}}\,\mathrm{d}w\,\mathrm{d}\tau\\ \leq&\, C,\end{split}\]
where we used the following estimate for the Bessel function of the first kind in the first step, i.e.
\[\left|z^{-\alpha}J_{\alpha}(z)\right|\leq c,\qquad z\in\mathbb{R}, \tag{3.10}\]
where \(c\) is a constant and \(\alpha>-1/2\). Lemma 3.1 is used in the second inequality. The third step is obtained by changing variables. The last step is due to \(1-2\lambda q/p<1\). Inequality (3.10) follows immediately from the following integral representation for the Bessel function,
\[J_{\alpha}(z)=\frac{(z/2)^{\alpha}}{\Gamma(\alpha+\frac{1}{2})\Gamma(\frac{1}{ 2})}\int_{-1}^{1}e^{izu}(1-u^{2})^{\alpha-\frac{1}{2}}\,\mathrm{d}u,\]
where \(\alpha>-1/2\) and \(z\in\mathbb{R}\), see [25].
When \(\lambda=0\), i.e. \(m=2\), it is seen that the left hand side of (3.9) is bounded by a constant using the convolution theorem and Lemma 3.1.
After counting the powers of \(z\), we complete the proof.
_Remark 3.4_.: The polynomial growth order given here is not optimal. Since we are mainly interested in the regime where \(z\) is large, a smaller degree can be obtained if we use the property \(|J_{\nu}(x)|\leq 1\) for \(\nu>0,x\in\mathbb{R}\) and the inverse Laplace transform formula [28, p.48 (21)]
\[\mathcal{L}^{-1}\left[\frac{1}{r}\frac{1}{R^{\nu}}\right](t)=a^{-\nu}J_{\nu}(at),\qquad\mathrm{Re}\,\nu>-1,\]
with \(r=\sqrt{s^{2}+a^{2}}\) and \(R=r+s\).
With the bounds established in Theorem 3.3, we can now specify the domain in the definition of \(\mathcal{F}_{p/q}\). Let us proceed by defining a class of functions as follows,
\[B_{p/q}(\mathbb{R}^{m}):=\left\{f\in L^{1}(\mathbb{R}^{m}):\int_{\mathbb{R}^{m}}\left(1+|x|\right)^{\frac{3mp}{2}-m+2}|f(x)||x|^{p/q-2}\,\mathrm{d}x<\infty\right\}.\]
It immediately follows that,
**Theorem 3.5**.: _Let \(p,q\in\mathbb{N}\), and \(m\geq 2\) be even. The \((0,p/q)\)-generalized Fourier transform \(\mathcal{F}_{p/q}\) is well-defined on \(B_{p/q}(\mathbb{R}^{m})\). In particular, for \(f\in B_{p/q}(\mathbb{R}^{m})\), \(\mathcal{F}_{p/q}f\) is a continuous function._
Proof.: Since \(3mp/2-m+2\geq 3>0\), we have
\[\left|K_{p/q}^{m}(x,y)\right|\leq C\left(1+|x||y|\right)^{\frac{3mp}{2}-m+2}\leq C(1+|x|)^{\frac{3mp}{2}-m+2}(1+|y|)^{\frac{3mp}{2}-m+2}.\]
Thus, \(\mathcal{F}_{p/q}\) is well-defined on \(B_{p/q}(\mathbb{R}^{m})\). The continuity of \(\mathcal{F}_{p/q}f\) follows from the continuity of the kernel and the dominated convergence theorem.
_Remark 3.6_.: With these bounds, several uncertainty inequalities can now be established for \(\mathcal{F}_{p/q}\) by following the methods described in [14]. These include for instance the global uncertainty inequality, Donoho-Stark's uncertainty inequality, and Faris's local uncertainty inequality.
## 4. Prabhakar function and kernel estimates
In this section, we begin by providing an integral expression and an estimate for the Prabhakar generalized Mittag-Leffler function [13]. In case \(\delta=1\) we recover the estimate from [23, Lemma 5] which was derived using a similar technique. Subsequently, utilizing the obtained bounds, we establish an estimate for the \((0,a)\)-generalized Fourier kernel in a certain domain. This helps to locate the domain in which the kernel may grow rapidly. The results are valid for all dimensions \(m\geq 2\).
**Definition 4.1**.: The Prabhakar generalized Mittag-Leffler function is defined by
\[E_{\alpha,\beta}^{\delta}(z):=\sum_{n=0}^{\infty}\frac{(\delta)_{n}}{n!\Gamma (\alpha n+\beta)}z^{n},\qquad\alpha,\beta,\gamma\in\mathbb{C},\quad\operatorname {Re}\alpha>0,\]
where \((\delta)_{n}=\delta(\delta+1)\ldots(\delta+n-1)\).
_Remark 4.2_.: The Laplace transform of the Prabhakar function is given by
\[\mathcal{L}\left(t^{\beta-1}E_{\alpha,\beta}^{\delta}(zt^{\alpha})\right)= \frac{1}{s^{\beta}}\frac{1}{(1-zs^{-\alpha})^{\delta}}, \tag{4.1}\]
where \(\operatorname{Re}\alpha>0\), \(\operatorname{Re}\beta>0\), \(\operatorname{Re}s>0\) and \(s>|z|^{1/(\operatorname{Re}\alpha)}\), see [17, (5.1.6)].
_Remark 4.3_.: When \(\delta=1\), it reduces to the two-parametric Mittag-Leffler function [17]
\[E_{\alpha,\beta}(z):=\sum_{n=0}^{\infty}\frac{z^{n}}{\Gamma(\alpha n+\beta)}.\]
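As an aside (this numerical check is ours and not used in the paper; it assumes NumPy and SciPy), the series in Definition 4.1 can be evaluated by naive truncation for moderate \(|z|\) and compared against classical special cases of the Mittag-Leffler function.

```python
import numpy as np
from scipy.special import gamma, poch

def prabhakar(alpha, beta, delta, z, n_terms=80):
    # Naive truncation of the defining series; adequate for moderate |z| only.
    n = np.arange(n_terms)
    coeff = poch(delta, n) / (gamma(n + 1.0) * gamma(alpha * n + beta))
    return np.sum(coeff * np.power(z, n))

# delta = alpha = beta = 1 recovers the exponential series: E^1_{1,1}(z) = e^z
print(prabhakar(1.0, 1.0, 1.0, 0.7), np.exp(0.7))
# delta = 1, alpha = 2, beta = 1 gives E_{2,1}(-z^2) = cos(z)
print(prabhakar(2.0, 1.0, 1.0, -1.3**2), np.cos(1.3))
```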
For \(\epsilon>0\) and \(0<\mu<\pi\), we consider the contour \(\gamma(\epsilon,\mu)\) which is shown in Figure 1.
It consists of the arc \(\{\zeta\in\mathbb{C}\mid|\zeta|=\epsilon,|\arg(\zeta)|\leq\mu\}\) and the two rays \(\{\zeta\in\mathbb{C}\mid|\zeta|\geq\epsilon,\arg(\zeta)=\pm\mu\}\). The contour \(\gamma(\epsilon,\mu)\) divides the complex plane into two parts: a region \(G^{-}(\epsilon,\mu)\) located to the left of the contour and a region \(G^{+}(\epsilon,\mu)\) located to the right of \(\gamma(\epsilon,\mu)\). The orientation is such that \(\arg\) is non-decreasing.
As was shown in [26, Eq. (1.52)], the reciprocal gamma function has the representation
\[\frac{1}{\Gamma(z)}=\frac{1}{2\pi\alpha i}\int_{\gamma(\epsilon,\mu)}\exp \left(\zeta^{1/\alpha}\right)\zeta^{(1-z-\alpha)/\alpha}\,\mathrm{d}\zeta, \tag{4.2}\]
for \(\alpha<2,\) and \(\pi\alpha/2<\mu<\min\{\pi,\pi\alpha\}.\) The branch cut is taken to be the negative real axis for the function \(z^{\delta}\) where \(\delta>0\) is assumed.
We give the following integral formula for the Prabhakar function.
**Theorem 4.4**.: _Let \(\delta>0\), \(0<\alpha<2\) and \(\beta\in\mathbb{C}\) and let \(\epsilon>0\) and \(\mu\) be given such that_
\[\frac{\pi\alpha}{2}<\mu<\min\{\pi,\alpha\pi\}.\]
_Then for \(z\in G^{-}(\epsilon,\mu)\) we have the formula_
\[E^{\delta}_{\alpha,\beta}(z)=\frac{1}{2\pi\alpha i}\int_{\gamma(\epsilon,\mu )}\frac{\exp\left(\zeta^{1/\alpha}\right)\zeta^{\frac{1-\beta}{\alpha}+\delta -1}}{(\zeta-z)^{\delta}}\,\mathrm{d}\zeta\]
_and for \(z\in G^{+}(\epsilon,\mu)\) in case \(\delta\) is a positive integer_
\[E^{\delta}_{\alpha,\beta}(z)=\frac{1}{2\pi\alpha i}\int_{\gamma(\epsilon,\mu )}\frac{\exp\left(\zeta^{1/\alpha}\right)\zeta^{\frac{1-\beta}{\alpha}+\delta -1}}{(\zeta-z)^{\delta}}\,\mathrm{d}\zeta+\frac{1}{\alpha\Gamma(\delta)}\frac{ \mathrm{d}^{\delta-1}}{\mathrm{d}z^{\delta-1}}g(z),\]
_where \(g(z)=\exp(z^{1/\alpha})z^{(1-\beta)/\alpha+\delta-1}.\)_
Proof.: The proof follows the same lines as for the two-parameter Mittag-Leffler function [26, Section 1.2.7]. Let us first consider the case where \(|z|<\epsilon.\) Then it holds that \(|z/\zeta|<1\) for \(\zeta\in\gamma(\epsilon,\mu).\) Next, we compute by means of (4.2) and the binomial theorem
\[E^{\delta}_{\alpha,\beta}(z) =\sum_{k=0}^{+\infty}\frac{(\delta)_{k}}{k!}\frac{z^{k}}{2\pi \alpha i}\int_{\gamma(\epsilon,\mu)}\exp\left(\zeta^{1/\alpha}\right)\zeta^{ (1-\alpha k-\beta-\alpha)/\alpha}\,\mathrm{d}\zeta\] \[=\frac{1}{2\pi\alpha i}\int_{\gamma(\epsilon,\mu)}\exp\left( \zeta^{1/\alpha}\right)\zeta^{\frac{1-\beta}{\alpha}-1}\sum_{k=0}^{+\infty} \frac{(\delta)_{k}}{k!}\left(\frac{z}{\zeta}\right)^{k}\,\mathrm{d}\zeta\] \[=\frac{1}{2\pi\alpha i}\int_{\gamma(\epsilon,\mu)}\exp\left( \zeta^{1/\alpha}\right)\zeta^{\frac{1-\beta}{\alpha}-1}\left(1-\frac{z}{\zeta }\right)^{-\delta}\,\mathrm{d}\zeta\] \[=\frac{1}{2\pi\alpha i}\int_{\gamma(\epsilon,\mu)}\exp\left( \zeta^{1/\alpha}\right)\zeta^{\frac{1-\beta}{\alpha}+\delta-1}\left(\zeta-z \right)^{-\delta}\,\mathrm{d}\zeta.\]
Now by the condition on \(\mu,\) we get that the integral is absolutely convergent and hence defines a function of the variable \(z,\) which is analytic in the region \(G^{-}(\epsilon,\mu).\) Since the ball \(|z|<\epsilon\) is contained in this region, by analytic continuation the above formula for \(E^{\delta}_{\alpha,\beta}(z)\) holds for \(z\in G^{-}(\epsilon,\mu).\)
In case \(z\in G^{+}(\epsilon,\mu)\) and \(\delta\) is assumed to be an integer, we pick \(\eta>|z|\) so that \(z\in G^{-}(\eta,\mu)\) and therefore
\[E^{\delta}_{\alpha,\beta}(z)=\frac{1}{2\pi\alpha i}\int_{\gamma(\eta,\mu)} \frac{\exp\left(\zeta^{1/\alpha}\right)\zeta^{\frac{1-\beta}{\alpha}+\delta-1} }{(\zeta-z)^{\delta}}\,\mathrm{d}\zeta.\]
The formula now follows by using Cauchy's integral formula for derivatives
\[\frac{1}{\alpha(\delta-1)!}\frac{\mathrm{d}^{\delta-1}}{\mathrm{d}z^{\delta-1}}g (z)=\frac{1}{2\alpha\pi i}\int_{\chi}\frac{1}{(\zeta-z)^{\delta}}\left(\exp\left( \zeta^{1/\alpha}\right)\zeta^{\frac{1-\beta}{\alpha}+\delta-1}\right)\,\mathrm{d}\zeta,\]
where \(\chi\) is the closed clockwise contour consisting of the arc of \(\gamma(\eta,\mu)\), the arc of \(\gamma(\epsilon,\mu)\) and the two line segments joining them.
We now give an estimate for the function \(E_{\alpha,\beta}^{\delta}(z)\) in some part of the complex plane.
**Theorem 4.5**.: _Let \(\delta>0\), \(0<\alpha<2\) and \(\beta>0\). Assume that \(\alpha\pi/2<\mu<\min\{\pi,\alpha\pi\}\) and \(\mu\leq|\arg z|\leq\pi\). Then there exists a constant \(C>0\) only depending on \(\mu,\alpha\) and \(\beta\) such that_
\[\left|E_{\alpha,\beta}^{\delta}(z)\right|\leq\frac{C}{1+|z|^{\delta}}. \tag{4.3}\]
Proof.: Pick \(\theta\) such that \(\alpha\pi/2<\theta<\mu\) and consider the representation from Theorem 4.4,
\[E_{\alpha,\beta}^{\delta}(z)=\frac{1}{2\pi\alpha i}\int_{\gamma(R,\theta)} \frac{\exp\left(\zeta^{1/\alpha}\right)\zeta^{\frac{1-\beta}{\alpha}+\delta-1} }{(\zeta-z)^{\delta}}\,\mathrm{d}\zeta,\]
which is valid (by analytic continuation) for \(z\in G^{-}(R,\theta).\) Here \(R>0\) is free to choose, but is from now on fixed.
In case \(|z|>R\), we note that
\[\min_{\zeta\in\gamma(R,\theta)}\left|(\zeta-z)^{\delta}\right|\geq|z|^{\delta} \sin(\mu-\theta)^{\delta}\]
and hence we estimate
\[\left|E_{\alpha,\beta}^{\delta}(z)\right|\leq\frac{1}{2\alpha\pi|z|^{\delta} \sin(\mu-\theta)^{\delta}}\int_{\gamma(R,\theta)}\left|\exp\left(\zeta^{1/ \alpha}\right)\right|\left|\zeta^{\frac{1-\beta}{\alpha}+\delta-1}\right| \left|\mathrm{d}\zeta\right|.\]
Denote with \(I_{\delta}\) the latter integral. The contribution of the circular arc of \(\gamma(R,\theta)\) to the integral \(I_{\delta}\) is given by
\[\int_{-\theta}^{\theta}\exp\left(R^{1/\alpha}\cos\left(\frac{u}{\alpha}\right) \right)R^{\frac{1-\beta}{\alpha}+\delta-1}R\,\mathrm{d}u<\infty.\]
The part of \(I_{\delta}\) covering the two rays of \(\gamma(R,\theta)\) is finite, as for \(\zeta\in\gamma(R,\theta)\) with \(\arg(\zeta)=\pm\theta\) and \(|\zeta|\geq R\), we have
\[\left|\exp\left(\zeta^{1/\alpha}\right)\right|=\exp\left(|\zeta|^{1/\alpha} \cos\frac{\theta}{\alpha}\right)\]
and \(\cos(\theta/\alpha)<0\) as \(\pi/2<\theta/\alpha<\pi\) by the choice of \(\theta.\) Hence for \(|z|>R\) and \(\mu\leq|\arg(z)|\leq\pi\), we have
\[\left|E_{\alpha,\beta}^{\delta}(z)\right|\leq\frac{C_{1}}{|z|^{\delta}},\]
where \(C_{1}\) is explicitly given by
\[C_{1}=\frac{1}{2\alpha\pi\sin(\mu-\theta)^{\delta}}\int_{\gamma(R,\theta)} \left|\exp\left(\zeta^{1/\alpha}\right)\right|\left|\zeta^{\frac{1-\beta}{ \alpha}+\delta-1}\right|\left|\mathrm{d}\zeta\right|.\]
In the case \(|z|\leq R\), with \(\mu\leq|\arg(z)|\leq\pi\), we find
\[\left|E_{\alpha,\beta}^{\delta}(z)\right|\leq\sum_{n=0}^{\infty}\frac{( \delta)_{n}}{n!}\frac{|z|^{n}}{\Gamma(\beta+\alpha n)}\leq E_{\alpha,\beta}^{ \delta}(R).\]
Summarizing, if we let \(C=(1+R^{\delta})\max\{\frac{C_{1}}{R},E^{\delta}_{\alpha,\beta}(R)\}\) then we obtain the desired bound
\[\big{|}E^{\delta}_{\alpha,\beta}(z)\big{|}\leq\frac{C}{1+|z|^{\delta}},\qquad \mu\leq|\arg z|\leq\pi.\]
_Remark 4.6_.: It is relatively easier to obtain a similar estimate when \(\delta\) is an integer. These cases correspond to the even dimensional generalized Fourier kernels. Indeed, recall the reduction formula in the third parameter for the Prabhakar function (see [13, Eq.(2)])
\[E^{\delta+1}_{\alpha,\beta}(z)=\frac{E^{\delta}_{\alpha,\beta-1}(z)+(1-\beta+ \alpha\delta)E^{\delta}_{\alpha,\beta}(z)}{\alpha\delta}.\]
This means that it is possible to write the Prabhakar function, when \(\delta\in\mathbb{N}\), in terms of the two parametric Mittag-Leffler function. Then a similar estimate (without \(\delta\)) follows from the known results below.
For the two-parametric Mittag-Leffler function, we have (see [26, p.35]),
**Theorem 4.7**.: _If \(\alpha<2\), \(\beta\) is an arbitrary real number, \(\mu\) is such that \(\pi\alpha/2<\mu<\min\{\pi,\pi\alpha\}\) and \(C\) is a real constant, then_
\[|E_{\alpha,\beta}(z)|\leq\frac{C}{1+|z|},\qquad(\mu\leq|\arg z|\leq\pi,\quad 0 \leq|z|).\]
Now, we go back to the estimation of the radially deformed Fourier kernel. An integral representation is obtained by performing an inverse Laplace transform of (1.4) in terms of the Prabhakar function using (4.1), see [5, Theorem 10]. We corrected the exponent of \(z\) outside the integral in the definition of \(h\) there.
**Theorem 4.8**.: _For \(a>0\) and \(m\geq 2\), the kernel of the radially deformed Fourier transform \(\mathcal{F}_{a}\) is given by_
\[\begin{split} K^{m}_{a}(x,y)=&\,c_{a,m}\int_{0}^{1} \left[(1+2\tau)^{-\frac{\lambda}{a}}J_{\frac{2\lambda}{a}}\left(\frac{2}{a}z^{ a/2}\sqrt{1+2\tau}\right)\right.\\ &-\left.e^{-i\frac{2\pi}{a}}(1+2\tau)^{-\frac{\lambda+2}{a}}J_{ \frac{2\lambda+4}{a}}\left(\frac{2}{a}z^{a/2}\sqrt{1+2\tau}\right)\right]h(z, \xi,\tau)d\tau,\end{split} \tag{4.4}\]
_where \(c_{a,m}=2^{2\lambda/a}\Gamma\left(\frac{2\lambda+a}{a}\right)e^{i\frac{2\pi( \lambda+1)}{a}}\left(\frac{2}{a}\right)^{2(\lambda+2)/a},\) and_
\[\begin{split} h(z,\xi,t)=&\,z^{\lambda+2}\int_{0}^{t }\zeta^{\frac{2}{a}(\lambda+1)-1}E^{\lambda+1}_{\frac{2}{a},\frac{2}{a}( \lambda+1)}(b_{+}\zeta^{\frac{2}{a}})(t-\zeta)^{\frac{2}{a}(\lambda+1)-1}\\ &\times E^{\lambda+1}_{\frac{2}{a},\frac{2}{a}(\lambda+1)}(b_{-}( t-\zeta)^{\frac{2}{a}})\,\mathrm{d}\zeta,\end{split} \tag{4.5}\]
_in which \(b_{\pm}=e^{\pm i\theta}e^{i\pi/a}\left(\frac{2}{a}\right)^{2/a}z\), \(\lambda\), \(z\) and \(\xi=\cos\theta\) are defined in Theorem 1.4._
We now estimate \(h(x,y,t)\) using the bound (4.3) for Prabhakar function.
**Lemma 4.9**.: _Let \(a>1\) and \(m\geq 2\). Assume that \(\pi/a<\mu<\min\{\pi,2\pi/a\}\) and \(\mu\leq|\arg e^{i(\pm\theta+\pi/a)}|\leq\pi\). Then there exists a constant \(C>0\) only depending on \(\mu,m\) and \(a\) such that_
\[|h(z,\xi,t)|\leq Cz^{1/6}t^{\frac{2\lambda}{a}+\frac{1}{3a}-1} \tag{4.6}\]
_for \(t\in[0,1]\)._
Proof.: Using the inequality \(x\leq 1+x^{p}\) when \(x\geq 0\) and \(p\geq 1\), we have
\[|b_{+}\zeta^{\frac{2}{a}}|^{\frac{\lambda}{2}+1-\frac{1}{12}}\leq 1+|b_{+} \zeta^{\frac{2}{a}}|^{\lambda+1}. \tag{4.7}\]
By (4.3) and (4.7), there exists \(C\geq 0\) such that
\[\left|E_{\frac{2}{a},\frac{2}{a}(\lambda+1)}^{\lambda+1}(b_{+} \zeta^{\frac{2}{a}})\right|\leq\frac{C}{1+|b_{+}\zeta^{\frac{2}{a}}|^{\lambda+ 1}}\leq\frac{C}{|b_{+}\zeta^{\frac{2}{a}}|^{\frac{\lambda}{2}+1-\frac{1}{12}}} \leq\frac{C}{\left(z\zeta^{\frac{2}{a}}\right)^{\frac{\lambda}{2}+1-\frac{1}{1 2}}},\]
and similarly
\[\left|E_{\frac{2}{a},\frac{2}{a}(\lambda+1)}^{\lambda+1}(b_{-}(t- \zeta)^{\frac{2}{a}})\right|\leq\frac{C}{\left(z(t-\zeta)^{\frac{2}{a}}\right) ^{\frac{\lambda}{2}+1-\frac{1}{12}}},\]
when \(\mu\leq|\arg e^{i(\pm\theta+\pi/a)}|\leq\pi\), and \(0\leq\zeta\leq t\leq 1\).
It follows that
\[|h(z,\xi,t)| = z^{\lambda+2}\left|\int_{0}^{t}\zeta^{\frac{2}{a}(\lambda+1)-1}E _{\frac{2}{a},\frac{2}{a}(\lambda+1)}^{\lambda+1}(b_{+}\zeta^{\frac{2}{a}})(t -\zeta)^{\frac{2}{a}(\lambda+1)-1}\right.\] \[\times\left.E_{\frac{2}{a},\frac{2}{a}(\lambda+1)}^{\lambda+1}(b_ {-}(t-\zeta)^{\frac{2}{a}})\,\mathrm{d}\zeta\right|\] \[\leq C\frac{z^{\lambda+2}}{z^{\lambda+2-1/6}}\left|\int_{0}^{t}\zeta^ {\frac{\lambda}{a}+\frac{1}{6a}-1}(t-\zeta)^{\frac{\lambda}{a}+\frac{1}{6a}-1 }\,\mathrm{d}\zeta\right|\] \[= C\cdot z^{1/6}t^{\frac{2\lambda}{a}+\frac{1}{3a}-1}\int_{0}^{1} u^{\frac{\lambda}{a}+\frac{1}{6a}-1}(1-u)^{\frac{\lambda}{a}+\frac{1}{6a}-1} \mathrm{d}u\] \[\leq Cz^{1/6}t^{\frac{2\lambda}{a}+\frac{1}{3a}-1}.\]
This completes the proof.
The main estimate for \(K_{a}^{m}(x,y)\) of this section is as follows,
**Theorem 4.10**.: _Let \(a>1\) and \(m\geq 2\). Assume that \(\pi/a<\mu<\min\{\pi,2\pi/a\}\), then there exists \(C>0\) only depending on \(\mu,m\) and \(a\) such that_
\[|K_{a}^{m}(x,y)|\leq C, \tag{4.8}\]
_for all \(x,y\in\mathbb{R}^{m}\) satisfying \(\mu\leq|\arg e^{i(\pm\theta+\pi/a)}|\leq\pi\). In particular, when \(a>4\), the inequality (4.8) holds for any \(x,y\in\mathbb{R}^{m}\) such that \(\langle x,y\rangle\leq 0\)._
Proof.: When \(|z|\geq 1\), by (4.4) and (4.7), we have
\[|K_{a}^{m}(z,\xi)| = \left|c_{a,m}\int_{0}^{1}\left[(1+2\tau)^{-\frac{\lambda}{a}}J_{ \frac{\lambda\lambda}{a}}\left(\frac{2}{a}z^{a/2}\sqrt{1+2\tau}\right)\right.\] \[-\left.\left.e^{-i\frac{2\pi}{a}}(1+2\tau)^{-\frac{\lambda+2}{a} }J_{\frac{2\lambda+4}{a}}\left(\frac{2}{a}z^{a/2}\sqrt{1+2\tau}\right)\right]h (z,\xi,\tau)d\tau\right|\] \[\leq C\int_{0}^{1}\left(\left|J_{\frac{2\lambda}{a}}\left(\frac{2}{ a}z^{a/2}\sqrt{1+2\tau}\right)\right|+\left|J_{\frac{2\lambda+4}{a}}\left(\frac{2}{ a}z^{a/2}\sqrt{1+2\tau}\right)\right|\right)\] \[\times\left(z^{a/2}\sqrt{1+2\tau}\right)^{\frac{1}{3}}|h(z,\xi, \tau)|z^{-\frac{\pi}{6}}\,\mathrm{d}\tau\] \[\leq Cz^{\frac{1-a}{6}}\int_{0}^{1}\tau^{\frac{2\lambda}{a}+\frac{1}{3 a}-1}\,\mathrm{d}\tau\] \[\leq C. \tag{4.9}\]
The second step uses the triangle inequality and the fact that \(1+2\tau\geq 1\) for \(\tau\in[0,1]\). In the third step, we have used (4.6) and the following inequality (see [22]):
\[\sup_{x\in\mathbb{R}}|x|^{1/3}|J_{\nu}(x)|\leq\sup_{x\in\mathbb{R}}|x|^{1/3}|J_{ 0}(x)|=0.7857\ldots,\qquad\nu>0.\]
When \(|z|\leq 1\), since the kernel series (1.3) converges absolutely and uniformly on any compact subset of (see [3, Lemma 4.17])
\[U:=\{(z,\xi)\in\mathbb{R}\times[-1,1]\} \tag{4.10}\]
and it is continuous on \(U\), there must exist a constant \(C>0\) such that
\[|K_{a}^{m}(z,\xi)|\leq C. \tag{4.11}\]
Combining (4.9) and (4.11), we obtain (4.8).
The remaining claim follows by checking the argument condition: if \(\langle x,y\rangle\leq 0\) then \(\theta\in[\pi/2,\pi]\), so \(|\arg e^{i(\pm\theta+\pi/a)}|\geq\pi/2-\pi/a\), and when \(a>4\) this lower bound exceeds \(\pi/a\), so an admissible \(\mu\) can be chosen.
_Remark 4.11_.: Notice that when \(a\to\infty\), we can choose \(\mu\to 0\); see Figure 1. This means that the region not yet covered by our estimates shrinks as \(a\) increases.
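As a quick numerical sanity check (outside the argument of the paper), the sup-norm inequality for Bessel functions quoted in the proof of Theorem 4.10 can be verified in a few lines of Python using `scipy.special.jv`; the grid, its cutoff, and the sample orders \(\nu\) below are arbitrary choices made only for illustration.

```python
# Numerical check of sup_x |x|^(1/3) |J_nu(x)| <= sup_x |x|^(1/3) |J_0(x)| = 0.7857...
import numpy as np
from scipy.special import jv

x = np.linspace(1e-8, 100.0, 400_000)        # |x|^(1/3) |J_nu(x)| decays like x^(-1/6)
weight = x ** (1.0 / 3.0)

sup0 = np.max(weight * np.abs(jv(0.0, x)))
print(f"sup |x|^(1/3) |J_0(x)| ~ {sup0:.4f}")        # ~ 0.7857, attained near x ~ 0.8

for nu in (0.25, 0.5, 1.0, 2.5, 10.0):               # a few sample orders nu > 0
    sup_nu = np.max(weight * np.abs(jv(nu, x)))
    print(f"nu = {nu:5.2f}: sup ~ {sup_nu:.4f} (<= {sup0:.4f})")
```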
## 5. Uniformly bounded kernels in dimension \(2\)
The kernels of dimension two behave more nicely than those of higher dimensions. In this section, we derive uniform bounds for certain parameters \(a\). In Theorem 4.10, we have shown that the radially deformed Fourier kernel is bounded when \(a>4\) and \(\langle x,y\rangle\leq 0\). Consequently, to establish a uniform bound, it suffices to consider the remaining domain, i.e. \(\langle x,y\rangle>0\).
First, we give a proof for the boundedness of \(K_{8}^{2}(x,y)\). This proof will be expanded upon to accommodate other parameters.
**Theorem 5.1**.: _When \(a=8\) and \(m=2\), there exists \(C>0\) such that_
\[\left|K_{8}^{2}(x,y)\right|\leq C\]
_for all \(x,y\in\mathbb{R}^{2}\)._
Proof.: We split the kernel series (2.1) into its even and odd parts,
\[K_{8}^{2}(z,\xi)= \underbrace{J_{0}(z_{8})+2\sum_{k=1}^{\infty}e^{-\frac{i\pi k}{4 }}J_{\frac{2k}{4}}(z_{8})\cos 2k\theta}_{K_{4}^{2}\left(\frac{1}{2}z^{2},\cos 2 \theta\right)}\] \[+2\sum_{k=1}^{\infty}e^{-\frac{i\pi(2k-1)}{8}}J_{\frac{2(2k-1)}{ 8}}(z_{8})\cos(2k-1)\theta\] \[=: I+II.\]
The even part can be expressed using the kernel \(K_{4}^{2}\) given in (2.3), which is uniformly bounded by \(3\) for all \(x,y\in\mathbb{R}^{2}\).
By Theorem 4.10, we know \(|K_{8}^{2}(z,\xi)|\leq C\) when \(\xi\leq 0\). As mentioned at the beginning of this section, it therefore only remains to bound \(K_{8}^{2}(z,-\xi)\) for \(\xi\leq 0\). Using the following property of Gegenbauer polynomials (see [29, Eq. (4.7.4)])
\[C_{k}^{(\lambda)}(-\xi)=(-1)^{k}C_{k}^{(\lambda)}(\xi),\]
and the triangle inequality, we have
\[\left|K_{8}^{2}(z,-\xi)\right|= \left|I-II\right|\leq\left|I\right|+\left|II\right|\] \[= \left|I\right|+\left|K_{8}^{2}(z,\xi)-I\right|\] \[\leq 2\left|I\right|+\left|K_{8}^{2}(z,\xi)\right|\] \[\leq C,\]
when \(\xi\leq 0\). This completes the proof.
By induction on \(\ell\), we record the following theorem.
**Theorem 5.2**.: _Let \(m=2\) and \(a=2^{\ell}\) with \(\ell\in\mathbb{N}_{0}\), there exists \(C>0\) depending only on \(\ell\) such that_
\[\left|K_{2^{\ell}}^{2}(x,y)\right|\leq C\]
_for all \(x,y\in\mathbb{R}^{2}\)._
This can be extended further using the following result; see [6, Lemma 2].
**Lemma 5.3**.: _Let_
\[f(\theta)=\sum_{k=0}^{+\infty}a_{k}\cos k\theta,\qquad a_{k}\in\mathbb{C},\]
_be an absolutely convergent Fourier series. Then the series_
\[g(\theta)=\sum_{k=0}^{\infty}a_{nk}\cos k\theta\]
_is given explicitly by_
\[g(\theta)=\frac{1}{n}\sum_{j=0}^{n-1}f\left(\frac{\theta+2\pi j}{n}\right).\]
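As a small numerical illustration of Lemma 5.3 (with a hypothetical geometric choice of coefficients, not one taken from the kernel series), the following Python snippet checks that decimating the coefficients of an absolutely convergent cosine series agrees with averaging the original series over the shifted arguments.

```python
# Numerical illustration of Lemma 5.3 with the hypothetical choice a_k = r^k:
#   f(theta) = sum_k r^k cos(k theta),   g(theta) = sum_k r^(n k) cos(k theta),
# and the lemma gives g(theta) = (1/n) sum_{j=0}^{n-1} f((theta + 2 pi j) / n).
import numpy as np

r, n, K = 0.5, 3, 200              # series parameter, decimation factor, truncation order
k = np.arange(K)

def f(theta):
    return np.sum(r**k * np.cos(k * theta))

def g_direct(theta):
    return np.sum(r**(n * k) * np.cos(k * theta))

def g_folded(theta):
    return np.mean([f((theta + 2 * np.pi * j) / n) for j in range(n)])

for theta in (0.0, 0.7, 2.1, np.pi):
    print(f"{theta:.3f}  {g_direct(theta):.12f}  {g_folded(theta):.12f}")  # the two columns agree
```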
This leads to the following result.
**Theorem 5.4**.: _Let \(m=2\) and \(a=2^{\ell}/n\) with \(\ell\in\mathbb{N}_{0}\) and \(n\in\mathbb{N}\), there exists \(C>0\) depending only on \(a\) such that_
\[\left|K_{\frac{2^{\ell}}{n}}^{2}(x,y)\right|\leq C\]
_for all \(x,y\in\mathbb{R}^{2}\)._
Proof.: By Lemma 5.3, we have
\[K_{\frac{2^{\ell}}{n}}^{2}(z,\cos\theta)=\frac{1}{n}\sum_{j=0}^{n-1}K_{2^{ \ell}}^{2}\left(nz^{1/n},\cos\left(\frac{\theta+2\pi j}{n}\right)\right).\]
Then the bound follows from Theorem 5.2.
Using the above uniform bound and the Plancherel theorem, the \(L^{p}\)-boundedness of \(\mathcal{F}_{2^{\ell}/n}\) now follows from the Riesz-Thorin interpolation theorem.
**Theorem 5.5** (Hausdorff-Young inequality).: _Let \(m=2\) and \(a=2^{\ell}/n\) with \(\ell\in\mathbb{N}_{0}\) and \(n\in\mathbb{N}\). For \(1\leq p\leq 2\) and \(1/p+1/p^{\prime}=1\), there exists a constant \(c(p,a)>0\) such that_
\[\left\|\mathcal{F}_{\frac{2^{\ell}}{n}}(f)\right\|_{L^{p^{\prime}}(\mathbb{R }^{2},\,\mathrm{d}\mu(x))}\leq c(p,a)\|f\|_{L^{p}(\mathbb{R}^{2},\,\mathrm{d} \mu(x))}, \tag{5.1}\]
_for all \(f\in L^{p}\left(\mathbb{R}^{2},\,\mathrm{d}\mu(x)\right)\) with \(\,\mathrm{d}\mu(x)=|x|^{\frac{2^{\ell}}{n}-2}\mathrm{d}x\)._
_Remark 5.6_.: Using these bounds, we can now extend the applicability of the results in [19] and [21]. It would also be interesting to extend the results on the translation operator and on multiplier theorems (see e.g. [1, 12]) to the generalized Fourier transforms with the parameters \(a\) considered in this work.
|
2302.03544 | Causally-Interpretable Random-Effects Meta-Analysis | Recent work has made important contributions in the development of
causally-interpretable meta-analysis. These methods transport treatment effects
estimated in a collection of randomized trials to a target population of
interest. Ideally, estimates targeted toward a specific population are more
interpretable and relevant to policy-makers and clinicians. However,
between-study heterogeneity not arising from differences in the distribution of
treatment effect modifiers can raise difficulties in synthesizing estimates
across trials. The existence of such heterogeneity, including variations in
treatment modality, also complicates the interpretation of transported
estimates as a generic effect in the target population. We propose a conceptual
framework and estimation procedures that attempt to account for such
heterogeneity, and develop inferential techniques that aim to capture the
accompanying excess variability in causal estimates. This framework also seeks
to clarify the kind of treatment effects that are amenable to the techniques of
generalizability and transportability. | Justin M. Clark, Kollin W. Rott, James S. Hodges, Jared D. Huling | 2023-02-07T15:52:52Z | http://arxiv.org/abs/2302.03544v1 | # Causally-Interpretable Random-Effects Meta-Analysis
###### Abstract
Recent work has made important contributions in the development of causally-interpretable meta-analysis. These methods transport treatment effects estimated in a collection of randomized trials to a target population of interest. Ideally, estimates targeted toward a specific population are more interpretable and relevant to policy-makers and clinicians. However, between-study heterogeneity not arising from differences in the distribution of treatment effect modifiers can raise difficulties in synthesizing estimates across trials. The existence of such heterogeneity, including variations in treatment modality, also complicates the interpretation of transported estimates as a generic effect in the target population. We propose a conceptual framework and estimation procedures that attempt to account for such heterogeneity, and develop inferential techniques that aim to capture the accompanying excess variability in causal estimates. This framework also seeks to clarify the kind of treatment effects that are amenable to the techniques of generalizability and transportability.
**Keywords:** causal inference, generalizability, transportability, meta-analysis, evidence synthesis, clinical trials
## 1 Introduction
Given data from a collection of randomized controlled trials (RCTs), an important question faced by clinicians and policy-makers alike is whether such results apply to target populations of interest. Recent work has made important advances in tackling this question by developing causal inference methods designed for meta-analysis (Dahabreh et al. 2022). These techniques are designed to account for differences between the trial and target populations, resulting in effect estimates that have a causal interpretation for the target population.
The impact of such population differences is one example of a challenge in research synthesis long recognized by practitioners of meta-analysis: between-study heterogeneity of many kinds should be taken into account when evaluating results from one trial to the next. Differences in the characteristics of populations represents just one form of between-study heterogeneity. In this paper, we develop causal quantities and estimators that build on the existing causally-interpretable meta-analysis framework to additionally take into account unexplained between-study heterogeneity beyond that induced by covariate differences between the collection of trials and the target population.
Our approach produces effect estimates that both are applicable to a target population of scientific interest and remain interpretable even when between-study heterogeneity prevents data pooling across trials. We also see our work as a conceptual bridge between the developing field of causally-interpretable meta-analysis and random-effects approaches that are well-established in evidence synthesis. This structural resemblance to traditional random-effects meta-analysis allow us to adopt some of the analytical frameworks important to that approach while retaining causal interpretability.
In principle, systematic reviews of randomized trials should have important implications for health policy and clinical practice (Berlin and Golub 2014). For example, the Strength of Recommendation Taxonomy (SORT) assigns meta-analysis to the highest level of study quality (Ebell et al. 2004). However, recent work has suggested that meta-analyses, particularly meta-analyses involving individual patient data (IPD), have had a relatively small impact on organizations developing and publishing clinical guidelines (Vale et al. 2015), even among guidelines for which relevant IPD meta-analyses were readily available.
Analogous work published in 2021 also found limited use of systematic reviews to inform clinical guidelines: in that study, only 34% of analyzed guidelines conducted a systematic review of available evidence to inform clinical practice (Lunny et al. 2021). These authors give many possible explanations for their findings, including ignorance on the part of guideline development groups, a related over-reliance on expert opinion, and the amount of labor and time required to complete a high-quality systematic review. However, it is worth interrogating whether the low uptake of systematic reviews to inform clinical guidelines might arise
from the limited usefulness of the meta-analyses themselves. That is, a review of clinical evidence may be systematic, rigorous, and follow international standards while still being of limited applicability to decision-making in a particular population or setting.
Much of the relevance of a given trial to clinical practice stems from how well the patient population targeted by the practitioner aligns with trial participants. The complex, idiosyncratic process of recruitment can result in studies that, while exhibiting a high level of internal validity, are unlikely to apply to individuals distinct from trial participants, thereby reducing external validity (Degtiar and Rose 2021). Concerns over external validity are especially relevant to health system decision making, where beneficiaries may differ in key ways from participants in the RCTs that are used to justify coverage decisions. For example, a 2008 study suggested that participants in the RCTs underlying a meta-analysis informing CMS policy differed substantially from the Medicare population (Dhruva 2008). This internal-external validity gap may help to explain growing interest in the use of large electronic health records (EHR) databases to inform clinical research (Galson and Simon 2016).
Issues of relevance and external validity apply to an even greater degree when synthesizing evidence across several studies. Many of these challenges relate to possible sources of heterogeneity between the RCTs included in a given meta-analysis. Considering only one such possible difference, that of heterogeneity in the participant population underlying each RCT, it is difficult to imagine how an estimated treatment effect averaged over such populations--as is done in standard meta-analyses--would apply to a given clinician's or health system's population of interest. Moreover, other relevant sources of heterogeneity remain, including treatment modalities and methods of evaluating primary outcomes (Berlin and Golub 2014).
The standard approach to modeling such between-study heterogeneity is random-effects meta-analysis, in which the effects or outcomes of each study are conceived as random draws from some distribution, typically a normal distribution (Higgins, Thompson, and Spiegelhalter 2009). Heterogeneity between trials is therefore viewed as part of the total variance in trial outcomes: one component arising from sampling variation within a trial and another reflecting systematic (though still i.i.d.) variation between the trials.
However, estimates from random-effects meta-analyses still fail to explicitly account for differences between trial participants and a target population of interest to clinicians and policy makers. Recent work in causally interpretable meta-analysis has yielded methods for making population-specific inferences using data from multiple RCTs (Dahabreh et al. (2022), (2020)). Roughly, this work uses a representative sample from the target population of interest to transport treatment effects estimated in the RCT populations to the target population.
A key assumption in this approach is that two individuals from the same target population would have identical average responses to treatment regardless of which clinical trial in the meta-analysis they may have
participated in. Moreover, the treatment response observed for an individual in the target population had they participated in any one of the trials is assumed analogous to the treatment response expected in the population more generally. Systematic heterogeneity in trial conduct and the possibility of trial participation effects threaten both of these assumptions (Dahabreh, Robertson, and Hernan 2022). While some of this heterogeneity might be captured by covariate differences, other site-specific mechanisms may be unrelated to participant characteristics.
### Examples of Between-Study Heterogeneity
Defining such heterogeneity more concretely, consider a scenario where each study in the meta-analysis applied a somewhat different version of treatment. In the context of studies designed to ameliorate lower back pain, for example, providers in one study may apply a slightly different form of spinal manipulation than providers in another study. Such differences could be pre-defined in each study's protocol, but may also arise simply because different chiropractors implement the manipulation in slightly different ways. Moreover, differences in how a pain outcome is measured (e.g., the exact time at which the pain scale is assessed relative to treatment) could also require an expansion of potential outcome notation.
These differences in, say, treatment version, would persist even after accounting for variation in the distribution of treatment effect modifiers both between studies and between the trial and target populations. In the back pain example, if two individuals with identical covariate data enrolled in separate studies, differences in the spinal manipulation type would persist when attempting to compare and define their respective potential outcomes. This suggests indexing such heterogeneity in the potential outcomes themselves, e.g., an individual's potential outcome had they been assigned treatment \(a\) under one provider versus another.
Broader differences in trial conduct also have the potential to induce this heterogeneity. For example, after disruptions to biomedical research caused by the COVID-19 pandemic, some clinical trials opted to deliver self-administered medications to trial participants by mail, rather than distribute them in-person (U.S. Food and Drug Administration 2021). Delivery timings under these conditions may vary significantly between different trials, e.g., trials whose participants reside in more remote, rural locations may have slower delivery times than trials in urban locations. For diseases like COVID-19, where timing of treatment affects subsequent outcomes, these differences in delivery time could induce heterogeneity in the potential outcomes from each trial.
Differences in the timing of treatments administered in-person may also have important implications for research synthesis. A recent preprint examined whether passively administered antibodies altered health outcomes associated with SARS-CoV-2 infection (Stadler et al. 2022). They found that earlier administration
of such treatments may improve their efficacy. Imagine two patients with identical baseline covariates enrolling in two studies investigating the same monoclonal antibody treatment. If the two studies differed in the timing of treatment relative to infection, this study suggests that those individuals may have distinct, study-specific expected potential outcomes.
Another example where differences in trial conduct may induce heterogeneity is when provider-assessed ratings are an outcome. For example, The Positive and Negative Syndrome Scale (PANSS) is routinely used to assess symptom severity in patients with schizophrenia (Kay, Fiszbein, and Opler 1987). For each symptom evaluated in the PANSS, raters assign a score of 1 (absent) to 7 (extreme). Differences between treatment groups might then be assessed by differences in their aggregated PANSS scores. However, individual raters may perform these qualitative evaluations in different ways. Analyses synthesizing outcomes from many sites employing different raters may need to take these differences into account.
Animated by such concerns, our work aims to lay the groundwork for a causally-interpretable meta-analysis that accounts for heterogeneity between studies beyond that induced by differences in the distribution of treatment effect modifiers. Our approach attempts to combine the techniques and intuitions of traditional random-effects meta-analysis with the causal inference-based meta-analytic framework introduced by Dahabreh et al. (2022). Concretely, this involves posing two causal questions. First, what outcomes would we observe, on average, if a member of the target population had participated in a particular study included in the collection of relevant RCTs? Moreover, what would we observe if we conducted a new RCT whose participants are drawn directly from the target population? This latter question is of primary scientific interest in this work, and follows directly from our novel framework. By answering such questions, we hope to expand the toolkit available to health policy makers and clinicians when evaluating treatment efficacy through meta-analysis.
## 2 Estimands for Capturing Between-Study Heterogeneity
Causally-interpretable meta-analysis analyzes outcomes and treatment effects observed in a collection of studies through the lens of transportability and generalizability. We assume the existence of some target population of interest and consider what this collection of studies can tell us about the effect of an intervention in that target population. Suppose we have data from a collection of clinical trials indexed by a set \(\mathcal{S}=\{1,...,m\}\). One causal question of interest is: what treatment effect can we expect if a member of the target population had participated in study \(s\in\mathcal{S}\)? As we will describe more precisely later, we can roughly describe our approach as decomposing such an average treatment effect \(\tau_{s}\) into the sum of an overall grand mean \(\tau\) and a
deviation \(\delta_{s}\) specific to study \(s\):
\[\tau_{s}=\tau+\delta_{s}. \tag{1}\]
In such a formulation, the grand mean \(\tau\) gives the treatment effect in the target population averaged over the collection of studies while the \(\delta_{s}\) reflects between-study heterogeneity in the underlying treatment effect.
Our ultimate goal is to precisely characterize, estimate, and perform inference on \(\tau_{0}\), which is the expected treatment effect if a new trial were conducted in the target population. Such a treatment effect comes about via a new draw \(\delta_{0}\), a new shift about \(\tau\).
The decomposition in (1) is structurally similar to parameters studied in random-effects meta-analysis, wherein study-specific treatment effects are assumed to be i.i.d. draws from a distribution, typically normal. In this work, we define and estimate parameters that maintain the familiar form of (1) while admitting a precise causal interpretation. This is accomplished by incorporating the study-specific heterogeneity reflected in \(\delta_{s}\) within the potential outcomes framework. Specifically, we incorporate the variation represented by \(\delta_{s}\) as an additional argument to the standard potential outcomes notation.
### The Data
Adding notation to the example outlined above, suppose we have IPD from a collection of \(m\) trials, all of which examined the same set of treatments. Again, we index these trials with the set \(\mathcal{S}=\{1,...,m\}\) and treatments with the set \(\mathcal{A}\). For each participant \(i\) in a given trial \(s\in\mathcal{S}\) with \(n_{s}\) participants total, we have outcome data \(Y_{i}\), baseline covariate information \(X_{i}\) and treatment assignment \(A_{i}\), where \(A_{i}\in\mathcal{A}\). We also assume to have baseline covariate information from a random sample of individuals in a target population of interest; we do not require treatment or outcome information in this sample. As in Dahabreh et al. (2022), we let \(S=0\) for individuals in the target population and introduce another variable \(R\) which takes value 1 for participants in the collection of trials and 0 for members of the target population. Thus, the observed data for each individual in the full dataset--that is, combined data from both the collection of trials and the target population--is of the form
\[(R_{i}Y_{i},R_{i}A_{i},X_{i},S_{i}).\]
\(R_{i}Y_{i}\) and \(R_{i}A_{i}\) evaluate to zero for those in the target population, indicating that such data are unavailable. The total sample size including both the sample from the target population and the collection of trials is \(n=n_{0}+n_{1}+\cdots+n_{m}\).
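To fix ideas, here is a minimal sketch of how such a combined dataset might be laid out in code; the column names, trial sizes, and generating distributions are illustrative inventions, not part of the paper's setup.

```python
# Minimal sketch (hypothetical column names) of the combined analysis dataset:
# one row per individual, trials S = 1,...,m with R = 1, and the target sample
# with S = 0 and R = 0, for which only baseline covariates are observed.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def make_trial(s, n=100):
    X = rng.normal(size=(n, 3))
    A = rng.integers(0, 2, size=n)                       # two treatment arms
    Y = 0.5 + X @ np.array([0.5, 0.5, 0.5]) + 0.3 * A + rng.normal(size=n)
    return pd.DataFrame({"S": s, "R": 1, "A": A, "Y": Y,
                         "X1": X[:, 0], "X2": X[:, 1], "X3": X[:, 2]})

def make_target(n=1000):
    X = rng.normal(loc=1.0, size=(n, 3))                 # covariates only, no A or Y
    return pd.DataFrame({"S": 0, "R": 0, "A": np.nan, "Y": np.nan,
                         "X1": X[:, 0], "X2": X[:, 1], "X3": X[:, 2]})

data = pd.concat([make_trial(s) for s in (1, 2, 3)] + [make_target()],
                 ignore_index=True)
print(data.groupby(["S", "R"]).size())                   # n_1, ..., n_m and n_0
```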
### Rationale for New Causal Quantities
In introducing causally-interpretable meta-analysis, Dahabreh et al. (2022) define potential outcomes \(Y(a)\) that depend on the assigned treatment \(a\in\mathcal{A}\). Different causal estimands describe the distribution of these potential outcomes within various populations of interest. For instance, \(E[Y(a)|R=0]\) gives the expected potential outcome under treatment \(a\) in the target population. To identify quantities like \(E[Y(a)|R=0]\), Dahabreh et al. assume exchangeability in potential outcomes across values of \(S\), conditional on baseline covariates. They also show that \(E[Y(a)|R=0]\) is identifiable under the weaker assumption of mean exchangeability, roughly defined as the assumption that:
\[E[Y(a)|X=x,S=0]=E[Y(a)|X=x,S=s] \tag{2}\]
for \(s=1,...,m\), along with additional conditions ensuring that the assumption is only made across values of \(S\) where the covariate pattern \(X=x\) has positive probability of occurring.
Intuitively, the primary risk in assuming (2) when it does not, in fact, hold is that the particular idiosyncrasies of each trial have the potential to muddle a transported treatment effect estimated from pooled data. This is especially problematic if the trials in the meta-analytic collection differ a great deal in sample size. In that setting, violations of (2) may result in causal estimates that are heavily weighted to the particular conditions of the largest trial. Such a weighting scheme has no clinical meaning and is merely an artifact of the mean exchangeability assumption.
The motivation for referencing such between-study heterogeneity stems directly from the logic underlying standard random-effects meta-analysis (Higgins, Thompson, and Spiegelhalter 2009). In that paradigm, a meta-analysis of a collection of trials proceeds under the assumption that the underlying parameter of interest--typically an average treatment effect--differs between trials. More precisely, in the traditional random-effects model, each study's latent true treatment effect is assumed to be randomly sampled from a distribution of effects. The variance of this underlying distribution is often of interest, and serves to quantify the degree of between-study heterogeneity.
Our proposed method models the average effects arising from studies \(s\in\mathcal{S}\) transported to the target population as varying about a grand mean according to draws of a latent random variable. Draws of this latent random variable fix these underlying average effects at different values, where differences between these values reflect between-study heterogeneity beyond that induced by covariate differences. The key distinction between these methods and traditional random-effects models is that heterogeneity in underlying parameter values (e.g., heterogeneity between studies in treatment effects) stems from variation in the potential outcomes of study participants, thereby retaining causal interpretations absent from standard meta-analysis.
Recall that a key causal question motivating this study is: what outcome would we expect if an individual were assigned treatment \(a\in\mathcal{A}\) in study \(s\in\mathcal{S}\)? This question suggests a need to distinguish potential outcomes arising from one trial setting versus another. Using the notation of VanderWeele and Hernan (2013), we write the potential outcome of a subject receiving treatment \(a\in\mathcal{A}\) under the conditions present in study \(s\in\mathcal{S}\) as having two arguments: one specifying the treatment assignment and another fixing the conditions present in a given setting, e.g., treatment group \(a\) in study \(s\):
\[Y(a,k_{s}^{a}).\]
Here, \(k_{s}^{a}\) constitutes a realization of a random variable \(K_{s}^{a}(a)\). Because we conceptualize values for \(K_{s}^{a}(a)\) even for individuals who did not receive treatment \(a\) in study \(s\), its notation mimics that of a standard potential outcome, as one's experience in a particular study could be a function of the treatment received. We suppose that each combination of a study and treatment arm is associated with one such random variable, the collection of which, across studies for a given treatment \(a\), is an i.i.d. sample from some common distribution:
\[K_{1}^{a}(a),...,K_{m}^{a}(a)\overset{i.i.d.}{\sim}F_{K^{a}(a)}.\]
We do not impose a particular interpretation on \(K_{s}^{a}(a)\) except that it reflects the conditions an individual would have experienced had they been in treatment group \(a\) of study \(s\). That is, two individuals with common values of this random variable can be understood to have been assigned treatment \(a\) under equivalent conditions. Again, we treat \(K_{s}^{a}(a)\) as a potentially counterfactual variable which fixes the heterogeneity in setting/treatment group combinations even for individuals who were not assigned treatment \(a\) or did not participate in study \(s\).
We suppose these random variables are an i.i.d. sample from some common distribution not because we think this gives the most accurate approximation to the data generating process that induces between-study heterogeneity. Rather, such an assumption constitutes our best guess as to the nature of this heterogeneity in the absence of additional information. Equipped with only outcome, baseline covariate, and treatment arm information, residual between-study heterogeneity is as good as i.i.d. noise from the perspective of the (meta) analyst, playing a role similar to residual errors in a simple linear regression model. If more information were available about the nature of between-trial heterogeneity, this assumption could be relaxed or refined.
The impact of this heterogeneity is related to but distinct from the effects of trial participation per se, as studied in Dahabreh et al. (2019). Here, we develop a conceptual framework for understanding the effects of participating in one trial versus another rather than participation in and of itself. In part, this effort can be
interpreted as clarifying the interpretation of causal estimands defined in earlier work on causally-interpretable meta-analysis. Recognizing the impact of between-trial heterogeneity, we study the effects of participation in one of the trials under study, or trials similar to those under study, rather than a more generic effect of treatment assignment in the target population.
### Defining Causal Estimands
As in Dahabreh et al. (2022), our interest is in transporting inferences from the collection of trials to a target population. Since the realized draw of \(K_{s}^{a}(a)=k_{s}^{a}\) indexes the heterogeneity in potential outcomes arising from trial \(s\), a causal quantity relevant to evaluating the kind of outcomes we would observe had members of the target population participated in trial \(s\) is
\[\mu_{a,0}(k_{s}^{a})=E[Y(a,k_{s}^{a})|R=0]. \tag{3}\]
This is a fixed quantity which can be interpreted as the average potential outcome in the target population where the heterogeneity in application of treatment \(a\) is fixed at the value associated with trial \(s\).
We recognize that policy makers and clinicians are often most interested in treatment _effects_ within their target population, rather than mean potential outcomes alone. In this paper, we focus our attention on mean potential outcomes for simplicity of presentation. However, contrasts of causal quantities like (3), e.g.
\[\tau_{a,a^{\prime}}^{s}=\mu_{a,0}(k_{s}^{a})-\mu_{a^{\prime},0}(k_{s}^{a^{ \prime}})=E[Y(a,k_{s}^{a})|R=0]-E[Y(a^{\prime},k_{s}^{a^{\prime}})|R=0]\]
can be defined corresponding to average treatment effects in the target population under the conditions of trial \(s\). Focus on such effects also allows relaxation of mean exchangeability assumptions on mean potential outcomes to be replaced by mean effect exchangeability.
Returning our focus to (3), note that the random variables whose \(m\) realized values give these transported potential outcomes for each study are a function of the latent random sample \(K_{1}^{a}(a),...,K_{m}^{a}(a)\):
\[\mu_{a,0}(a,K_{1}^{a}(a)),...,\mu_{a,0}(a,K_{m}^{a}(a)) \tag{4}\]
As such, they also constitute a simple random sample from some distribution.
We assume this random sample \(\mu_{a,0}\left(K_{1}^{a}(a)\right),...,\mu_{a,0}\left(K_{m}^{a}(a)\right)\) arises from a distribution with mean \(\mu_{a,0}\) and finite variance. Without any further modeling assumptions, we can decompose each such random variable
in the sample as:
\[\mu_{a,0}\left(K_{s}^{a}(a)\right)=\mu_{a,0}+\Delta_{0,s}^{a}, \tag{5}\]
where \(\Delta_{0,s}^{a}\) is a random variable with mean zero and finite variance. Note that (5) does not impose any particular modeling assumption on \(\mu_{a,0}\left(K_{s}^{a}(a)\right)\); it simply labels random variation about its expectation over \(K_{s}^{a}(a)\) as \(\Delta_{0,s}^{a}\). Rearranging (5), we have
\[\Delta_{0,s}^{a} =\mu_{a,0}\left(K_{s}^{a}(a)\right)-\mu_{a,0}\] \[=\mu_{a,0}\left(K_{s}^{a}(a)\right)-E_{K^{a}(a)}\left[\mu_{a,0} \left(K_{s}^{a}(a)\right)\right].\]
The decomposition in (5) allows us to better understand the role that the latent random variables \(K_{s}^{a}(a)\) play in driving systematic differences between studies in causal quantities. Namely, a draw of \(K_{s}^{a}(a)=k_{s}^{a}\) corresponds to a realization of \(\Delta_{0,s}^{a}=\delta_{0,s}^{a}\) which in turn shifts the average potential outcome under \(a\) transported from study \(s\) to the target population.
Recalling the stylized version of our approach in Equation (1), our model for the treatment effect we would observe if a member of the target population had been assigned to treatment \(a\) vs. \(a^{\prime}\) in study \(s\) is therefore
\[\mu_{a,0}(k_{s}^{a})-\mu_{a^{\prime},0}(k_{s}^{a^{\prime}}) =\mu_{a,0}+\delta_{0,s}^{a}-\left(\mu_{a^{\prime},0}+\delta_{0,s}^{a^{\prime}}\right)\] \[=\left(\mu_{a,0}-\mu_{a^{\prime},0}\right)+\left(\delta_{0,s}^{a}-\delta_{0,s}^{a^{\prime}}\right). \tag{6}\]
We can conceptualize \(\tau_{a,a^{\prime},0}=\mu_{a,0}-\mu_{a^{\prime},0}\) as the "grand mean effect" of treatment \(a\) vs. \(a^{\prime}\) in the target population and \(\delta_{0,s}^{a}-\delta_{0,s}^{a^{\prime}}\) as the heterogeneity of that overall effect when transporting from the setting of study \(s\). These two quantities stand in for \(\tau\) and \(\delta_{s}\), respectively in Equation (1).
Besides considering the expected potential outcome if an individual from the target population participated in one of the trials in our collection, we might also ask: what potential outcomes would we expect if a new trial were conducted that recruited a simple random sample from the target population? We make an important assumption here that the same random process which induces heterogeneity in expected potential outcomes within the collection of trials would apply to a new trial of the same collection of treatments.
We can represent this assumption by adding \(K_{0}^{a}(a)\) to our original random sample:
\[K_{1}^{a}(a),...,K_{m}^{a}(a),K_{0}^{a}(a)\overset{i.i.d.}{\sim}F_{K^{a}(a)}\]
and defining \(\mu_{a,0}\left(k_{0}^{a}\right)\) and \(\mu_{a,0}\left(K_{0}^{a}(a)\right)\) as previously.
This framing is analogous to the idea in random-effects meta-analysis that we can make inferences for the treatment effects of studies that are not included in the meta-analysis (Higgins, Thompson, and Spiegelhalter 2009). Again, our i.i.d. assumption regarding \(K_{0}^{a}(a)\) reflects our best guess as to the random process governing between-study heterogeneity. In the absence of additional information, we posit both that the same sources of heterogeneity that induce systematic differences between RCTs in the trial sample would apply to a hypothetical RCT in the target population and that this additional variation stems from i.i.d. draws from a common distribution. As explained below, we use this assumption when producing prediction intervals that contain \(\mu_{a,0}\left(k_{0}^{a}\right)\) with some specified probability.
There are several possible motivations for estimating the outcomes in an unobserved trial. One is as an input to planning and preparation for new clinical studies. Clinicians may desire to estimate treatment effects they might expect in a planned study with participants similar to members of the target population. Alternatively, transported estimates from a clinical trial might serve as a kind of bound for effects we are likely to observe in practice. Estimating outcomes in an unobserved trial also produces such a bound, albeit one not overly influenced by the idiosyncrasies present in any particular study in the collection of trials.
## 3 Identification and Estimation of Causal Quantities
### Identification
We first consider identification of \(\mu_{a,0}(k_{s}^{a})\), which gives the expected outcome that would have been observed if a member of the target population had been assigned treatment \(a\) in study \(s\). To express this as a functional of the observed data, we make the following identifying assumptions, many of which are similar to those in Dahabreh et al. (2022) and VanderWeele and Hernan (2013):
1. **Exchangeability in mean between trials:** \[E[Y(a,k_{s}^{a})|X=x,S=s_{1}]=E[Y(a,k_{s}^{a})|X=x,S=s_{2}]\] for all \(s_{1},s_{2}\in\{0,1,...,m\}\) and \(x\in\mathcal{X}\) such that \(f(x,S=s_{1})\neq 0\) and \(f(x,S=s_{2})\neq 0.\) Note here that the heterogeneity in trial \(s_{1}\) and \(s_{2}\) is fixed at \(k_{s}^{a}\) in both settings.
2. **Exchangeability over treatment groups within a trial:** \[Y(a,k_{s}^{a})\perp\!\!\!\perp A|(X,S=s)\]
for all \(a\in\mathcal{A}\), \(k_{s}^{a}\in\mathcal{K}^{a}\), and \(s\in\mathcal{S}\).
3. **Consistency:** If \(A_{i}=a\) and \(K_{i}^{a}(a)=k^{a}\) for individual \(i\), then \(Y_{i}(a,k^{a})=Y_{i}\).
4. **Positivity:** For all \(s\in\mathcal{S}\), if \(f(x,R=0)\neq 0\), then \(P(S=s|X=x)>0\).
5. **Distribution of \(K^{a}(a)\):** For \(s=1,...,m\) and \(a\in\mathcal{A}\), each random variable \(K_{s}^{a}(a)\) constitutes an i.i.d. draw from a distribution \(F_{K^{a}(a)}\).
6. **Constancy of \(K^{a}(a)\) by treatment group/study:** Within each treatment group \(a\) and study \(s\), the value of \(K^{a}(a)\) for each participant is fixed at the realized value of \(K_{s}^{a}(a)=k_{s}^{a}\). That is, if \(A=a\) and \(S=s\), then \[K_{i}^{a}(a)=K_{s}^{a}(a)=k_{s}^{a}\] for \(i\in 1,...,n_{s}\).
Assumptions 3 and 5 involve a slight abuse of notation, wherein we define a random variable \(K_{i}^{a}(a)\) giving the version of treatment for individual \(i\). The key point here is that such a random variable takes on the same realized value of \(K_{s}^{a}(a)\) for all participants in trial \(s\). Alternatively, one might assume a hierarchical model where each individual in treatment arm \(a\) and study \(s\) has an associated random variable \(\left(K_{s}^{a}(a)\right)_{i}\), \(i=1,...,n_{s}\), centered at the realized value of \(K_{s}^{a}(a)=k_{s}^{a}\). In this paper, we focus on trial/treatment group-wide heterogeneity to greatly simplify the mathematical presentation. However, a hierarchical model of the kind proposed above can, under certain assumptions, lead to the same identification results presented here.
Under these assumptions, we identify \(\mu_{a,0}(k_{s}^{a})\) as
\[\psi_{s,0}(a)=E[E[Y|X,S=s,A=a]|R=0]. \tag{7}\]
That is, we average a regression function relating outcomes under treatment \(a\) in study \(s\) to covariates \(X\) over the distribution of such covariates in the observed target population. (A full proof of this result is given in the Appendix.) This observed data functional is analogous to that identified in Equation (6) of Theorem 1 in Dahabreh et al. (2022), with \(R=1\) in their case replaced by \(S=s\) in ours.
### Estimation
Letting \(g_{a}^{s}(X)=E[Y|X,S=s,A=a]\), we could apply an outcome model/standardization approach that averages an estimate \(\hat{g}_{a}^{s}(X)\) of \(E[Y|X,S=s,A=a]\) over the distribution of covariates in the target
population:
\[\hat{\psi}_{s,0}(a)=\left\{\sum_{i=1}^{n}I(S_{i}=0)\right\}^{-1}\sum_{i=1}^{n}I(S_ {i}=0)\left\{\hat{g}_{a}^{s}(X_{i})\right\}\]
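A minimal sketch of this estimator, assuming the illustrative combined data layout from Section 2.1 above and a linear working model for \(\hat{g}_{a}^{s}\) (neither of which is prescribed by the paper), might look as follows.

```python
# Sketch of the standardization estimator psi_hat_{s,0}(a): fit E[Y | X, S = s, A = a]
# on arm a of trial s, then average the fitted values over the target-population sample.
import numpy as np
from sklearn.linear_model import LinearRegression

COVARIATES = ["X1", "X2", "X3"]            # illustrative covariate names

def standardization_estimate(data, s, a):
    arm = data[(data["S"] == s) & (data["A"] == a)]
    target = data[data["R"] == 0]
    g_hat = LinearRegression().fit(arm[COVARIATES], arm["Y"])
    return g_hat.predict(target[COVARIATES]).mean()

# Averaging over studies gives the plug-in estimate for a new trial drawn from the
# target population, as discussed in Section 3.2:
# mu_hat_new = np.mean([standardization_estimate(data, s, a=1) for s in (1, 2, 3)])
```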
Another option involves inverse probability weighting, in which the outcomes observed among participants in study \(s\) are weighted by the similarity of each participant to members of the target population. The estimator applying this approach is given by
\[\hat{\psi}_{s,0}^{ipw}(a)=\left\{\sum_{i=1}^{n}I(S_{i}=0)\right\}^{-1}\sum_{i=1 }^{n}\left(\frac{I(A_{i}=a)}{\hat{e}_{a}^{s}(X_{i})}\right)I(S_{i}=s)\frac{ \hat{p}_{0}(X_{i})}{\hat{p}_{s}(X_{i})}Y_{i}\]
where \(\hat{p}_{0}(X_{i})\) estimates \(P(R=0|X_{i})\), \(\hat{e}_{a}^{s}(X_{i})\) estimates \(P(A_{i}=a|S=s,X_{i})\), and \(\hat{p}_{s}(X_{i})\) estimates \(P(S=s|X_{i})\). Identification of this result, proceeding from (7), is given in the Appendix.
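A corresponding sketch of the inverse probability weighting estimator, again under the illustrative data layout, with the nuisance probabilities fit by logistic regression (one simple modelling choice among many):

```python
# Sketch of the IPW estimator psi_hat^ipw_{s,0}(a): reweight outcomes observed in arm a
# of trial s by p0_hat / (ps_hat * ea_hat) and normalize by the target sample size.
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_estimate(data, s, a, covariates=("X1", "X2", "X3")):
    X = data[list(covariates)].to_numpy()
    y = data["Y"].fillna(0.0).to_numpy()
    in_target = (data["R"] == 0).to_numpy()
    in_trial = (data["S"] == s).to_numpy()
    in_arm = in_trial & (data["A"] == a).to_numpy()

    p0 = LogisticRegression().fit(X, in_target).predict_proba(X)[:, 1]  # P(R = 0 | X)
    ps = LogisticRegression().fit(X, in_trial).predict_proba(X)[:, 1]   # P(S = s | X)
    ea = LogisticRegression().fit(X[in_trial], in_arm[in_trial]).predict_proba(X)[:, 1]

    return np.sum(in_arm * (p0 / ps) / ea * y) / in_target.sum()
```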
A final option is the so-called augmented inverse probability weighting (AIPW) estimator, which combines the two approaches introduced above:
\[\hat{\psi}_{s,0}^{aipw}(a)=\left\{\sum_{i=1}^{n}I(S_{i}=0)\right\}^{-1}\sum_{i=1}^{n}\left[I(S_{i}=0)\left\{\hat{g}_{a}^{s}(X_{i})\right\}+\left(\frac{I(A_{i}=a)}{\hat{e}_{a}^{s}(X_{i})}\right)I(S_{i}=s)\frac{\hat{p}(X_{i})}{1-\hat{p}(X_{i})}(Y_{i}-\hat{g}_{a}^{s}(X_{i}))\right]\]
This approach augments the standardization estimator with an inverse-probability-weighted correction based on the residuals \(Y_{i}-\hat{g}_{a}^{s}(X_{i})\) from the outcome model for arm \(a\) of study \(s\).
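A sketch of the AIPW estimator follows; here the density-ratio weight is implemented as \(\hat{p}_{0}(X_{i})/\hat{p}_{s}(X_{i})\), which coincides with the odds \(\hat{p}(X_{i})/(1-\hat{p}(X_{i}))\) when \(\hat{p}\) is fit on the target sample pooled with trial \(s\); as before, the data layout and nuisance models are illustrative.

```python
# Sketch of the AIPW estimator: the standardization term plus an inverse-probability-
# weighted correction based on the residuals Y - g_hat within arm a of trial s.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def aipw_estimate(data, s, a, covariates=("X1", "X2", "X3")):
    X = data[list(covariates)].to_numpy()
    y = data["Y"].fillna(0.0).to_numpy()
    in_target = (data["R"] == 0).to_numpy()
    in_trial = (data["S"] == s).to_numpy()
    in_arm = in_trial & (data["A"] == a).to_numpy()

    g = LinearRegression().fit(X[in_arm], y[in_arm]).predict(X)         # outcome model
    p0 = LogisticRegression().fit(X, in_target).predict_proba(X)[:, 1]  # P(R = 0 | X)
    ps = LogisticRegression().fit(X, in_trial).predict_proba(X)[:, 1]   # P(S = s | X)
    ea = LogisticRegression().fit(X[in_trial], in_arm[in_trial]).predict_proba(X)[:, 1]

    outcome_term = np.sum(in_target * g)
    correction = np.sum(in_arm * (p0 / ps) / ea * (y - g))
    return (outcome_term + correction) / in_target.sum()
```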
Our second causal quantity of interest is \(E[Y(a,k_{0}^{a})|R=0]\), where \(k_{0}^{a}\) is the realized value of the random variable \(K_{0}^{a}(a)\) described in the previous section. In the absence of additional information concerning the distribution \(F_{K^{a}(a)}\), our estimation strategy for \(E[Y(a,k_{0}^{a})|R=0]\) relies on the following approximation, the full details of which are included in the Appendix:
\[E[Y(a,K_{0}^{a}(a))|R=0] =\sum_{k^{a}\in\mathcal{K}^{a}}E\left\{E[Y(a,K_{0}^{a}(a))|R=0,K_{0}^{a}(a)=k^{a}]|R=0\right\}P(K_{0}^{a}(a)=k^{a})\] \[\approx\sum_{k^{a}\in\mathcal{K}^{a}}E\left\{E[Y(a,K_{0}^{a}(a))|R=0,K_{0}^{a}(a)=k^{a}]|R=0\right\}\left\{\frac{1}{m}\sum_{s=1}^{m}\mathbb{1}\left(k^{a}=k_{s}^{a}\right)\right\}\] \[=\frac{1}{m}\sum_{s=1}^{m}E[Y(a,k_{s}^{a})|R=0]\] \[=\frac{1}{m}\sum_{s=1}^{m}E[E[Y|X,S=s,A=a]|R=0]\qquad\text{by (7)}\] \[=\frac{1}{m}\sum_{s=1}^{m}\psi_{s,0}(a).\]
Again, taking contrasts of the above quantity applied to different treatments implies identification
of treatment effects of interest, e.g., causal quantities of the form given in (6). We can then estimate \(\frac{1}{m}\sum_{s=1}^{m}\psi_{s,0}(a)\) using any of the methods discussed above for estimating each \(\psi_{s,0}(a)\) individually. For instance, we might estimate this quantity using
\[\frac{1}{m}\sum_{s=1}^{m}\hat{\psi}_{s,0}(a)=\frac{1}{m}\sum_{s=1}^{m}\left(\left\{\sum_{i=1}^{n}I(S_{i}=0)\right\}^{-1}\sum_{i=1}^{n}I(S_{i}=0)\left\{\hat{g}_{a}^{s}(X_{i})\right\}\right).\]
### Estimation of Between-Study Variability
A central parameter of interest in traditional random-effects meta-analysis is the between-study variance, which describes the variability of the effects underlying each study. Recalling the decomposition given in (5), the analogous parameter in our work is the variance of \(\Delta_{0,s}^{a}\), which specifies the variability between studies in the transported mean potential outcomes. We propose an estimate of this variance inspired by the method of moments approach derived by Rao et al. (1981) and subsequently applied by DerSimonian and Laird (1986) in their seminal work on random-effects meta-analysis. The key distinction between our estimate and the traditional random-effects estimate is that, in practice, the estimates for \(\mu_{a,0}(k_{1}^{a}),...,\mu_{a,0}(k_{m}^{a})\) are correlated due to their dependence on the same sample from the target population. This correlation complicates the derivation and form of the resulting estimator.
In the derivation below, we consider estimating between-study variability in a highly general setting; the only relationship to our causal framework is that of correlation between study-specific estimates. Operating in a simplified setting, suppose we have \(m\) study-specific means \(\mu_{1},...,\mu_{m}\) drawn from a distribution \(F_{\mu}\) with mean \(\mu\) and variance \(\gamma^{2}\). The analogy to our setting is obtained by letting \(\mu_{s}=\mu_{a,0}(k_{s}^{a})\). We estimate these means with \(\hat{\mu}_{1},...,\hat{\mu}_{m}\) each of which is individually unbiased for its respective study-specific mean. The analogous quantities for us are \(\hat{\mu}_{s}=\hat{\psi}_{s,0}(a)\). We estimate the grand mean \(\mu\) using a weighted sum of the study-specific estimates \(\hat{\mu}=\sum_{s=1}^{m}w_{s}\hat{\mu}_{s}\), where \(\sum_{s=1}^{m}w_{s}=1\). (For instance, we might have \(w_{s}=\frac{1}{m}\) when taking a simple average.) Also, let \(\mbox{Var}\left(\hat{\mu}_{s}|\,S=s\right)=\sigma_{s}^{2}\) denote the sampling variance of each study-specific estimator and \(\mbox{Var}\left(\hat{\mu}_{s}\right)=\sigma_{s}^{2}+\gamma^{2}\), which reflects both within- and between-study variance. Letting \(Q=\sum_{s=1}^{m}\left(\hat{\mu}_{s}-\hat{\mu}\right)^{2}\) and \(C_{s}=-2\sum_{i\neq s}w_{i}(1-w_{s})\sigma_{is}+\sum_{i\neq s}\left[\sum_{j \neq i,j\neq s}w_{i}w_{j}\sigma_{ij}\right]\) we can show that
\[E[Q]=m\sum_{s=1}^{m}w_{s}^{2}(\sigma_{s}^{2}+\gamma^{2})+\sum_{s=1}^{m}\left[ (1-2w_{s})(\sigma_{s}^{2}+\gamma^{2})+C_{s}\right].\]
A moment-based estimator for \(\gamma^{2}\) is then given by the value \(\tilde{\gamma}^{2}\) that satisfies
\[\sum_{s=1}^{m}\left(\hat{\mu}_{s}-\hat{\mu}\right)^{2}=m\sum_{s=1}^{m}w_{s}^{2 }(\sigma_{s}^{2}+\tilde{\gamma}^{2})+\sum_{s=1}^{m}\left[(1-2w_{s})(\sigma_{s }^{2}+\tilde{\gamma}^{2})+C_{s}\right].\]
Solving this equation, we obtain
\[\hat{\gamma}^{2}=\frac{\sum_{s=1}^{m}\left(\hat{\mu}_{s}-\hat{\mu}\right)^{2}- \sum_{s=1}^{m}\sigma_{s}^{2}\left\{mw_{s}^{2}+\left(1-2w_{s}\right)\right\}- \sum_{s=1}^{m}C_{s}}{\sum_{s=1}^{m}\left\{mw_{s}^{2}+\left(1-2w_{s}\right)\right\}}.\]
Our final estimate of the between-study heterogeneity is the truncated value
\[\max\left(0,\hat{\gamma}^{2}\right), \tag{8}\]
which we continue to denote by \(\hat{\gamma}^{2}\).
A full derivation of the above estimator is given in the Appendix. The key distinction between our estimator and the traditional random-effects estimator introduced by Rao et al. (1981) is the inclusion of \(-\sum_{s=1}^{m}C_{s}\) in the numerator of \(\hat{\gamma}^{2}\). This has the effect of "correcting" the traditional estimator to account for the nonzero correlation between study-specific estimates. Beyond our setting, the estimator can be generally applied to any analysis which takes a weighted average of correlated quantities to estimate some underlying grand mean; note, though, that it is subject to bias when individual estimates are themselves biased.
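A sketch of the corrected moment estimator is given below, taking as inputs the study-specific estimates, their weights, and an estimate of the full covariance matrix of the estimates (with \(\sigma_{s}^{2}\) on the diagonal and \(\sigma_{ij}\) off the diagonal, obtained for example from a joint bootstrap); the implementation is illustrative.

```python
# Sketch of the correlation-corrected method-of-moments estimator gamma_hat^2 in (8),
# given estimates mu_hat, weights w (summing to one), and their covariance matrix Sigma.
import numpy as np

def corrected_gamma2(mu_hat, w, Sigma):
    mu_hat, w, Sigma = map(np.asarray, (mu_hat, w, Sigma))
    m = len(mu_hat)
    sigma2 = np.diag(Sigma)                       # within-study sampling variances
    mu_bar = np.sum(w * mu_hat)
    Q = np.sum((mu_hat - mu_bar) ** 2)

    coef = m * w**2 + (1.0 - 2.0 * w)             # {m w_s^2 + (1 - 2 w_s)} for each s
    C = np.zeros(m)
    for s in range(m):
        off = np.arange(m) != s
        C[s] = -2.0 * np.sum(w[off] * (1.0 - w[s]) * Sigma[off, s])
        for i in np.where(off)[0]:
            j = off & (np.arange(m) != i)
            C[s] += np.sum(w[i] * w[j] * Sigma[i, j])

    gamma2 = (Q - np.sum(sigma2 * coef) - np.sum(C)) / np.sum(coef)
    return max(0.0, gamma2)
```

When the estimates are uncorrelated, every \(C_{s}\) is zero and the expression reduces to the uncorrected moment estimator with the same weights.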
### Inference for Outcomes in a New Trial
Inference for the outcomes we would expect if members of the target population participated in an _observed_ trial, e.g., \(\mu_{a,0}(k_{s}^{a})\), \(s\in\mathcal{S}\), can proceed in a variety of ways, including simple bootstrap resampling on data from the target population and study \(s\) or via asymptotic approximations to the sampling distribution of \(\hat{\psi}_{s,0}(a)\). Here, we focus on inference for \(\mu_{a,0}(K_{0}^{a}(a))\). We keep \(K_{0}^{a}(a)\) as a random variable in this estimand since it refers to outcomes in a trial we have yet to observe. As such, a simple bootstrap applied to all observed trials and the target population would fail to capture the excess variability induced by conducting a new trial. That is, our prediction intervals aim to capture both the sampling variability of our estimator \(\frac{1}{m}\sum_{s=1}^{m}\hat{\psi}_{s,0}(a)\) as reflected in the observed data and additional uncertainty about the unobserved trial. We can make progress under the assumption that the unobserved trial is subject to similar heterogeneity as that between the observed trials. Here, we propose three methods for constructing these prediction intervals, and later evaluate them in a simulation experiment.
#### 3.4.1 Inference Based on \(\hat{\gamma}^{2}\)
Above, we derived an estimator \(\hat{\gamma}^{2}\), which we generically interpret as an estimate of between-study heterogeneity beyond that induced by measured covariates. When we replace \(\hat{\mu}_{s}\) with \(\hat{\psi}_{s,0}(a)\) and \(\hat{\mu}\) with \(\frac{1}{m}\sum_{s=1}^{m}\hat{\psi}_{s,0}(a)\), \(\hat{\gamma}^{2}\) then estimates \(\text{Var}\left(\mu_{a,0}\left(K_{s}^{a}(a)\right)\right)\). That is, within our framework, \(\hat{\gamma}^{2}\) quantifies the variability in expected transported potential outcomes from one trial to another, where the variability is driven by setting-specific
heterogeneity. To construct a prediction interval centered at \(\frac{1}{m}\sum_{s=1}^{m}\hat{\psi}_{s,0}(a)\), we employ the same form of the interval considered in Higgins et al. (2009) for inference on "the effect in an unspecified study":
\[\frac{1}{m}\sum_{s=1}^{m}\hat{\psi}_{s,0}(a)\pm t_{m-2}^{\alpha}\sqrt{\hat{ \gamma}^{2}+\widehat{\mathrm{Var}}\left(\frac{1}{m}\sum_{s=1}^{m}\hat{\psi}_{s,0}(a)\right)}, \tag{9}\]
where \(\widehat{\mathrm{Var}}\left(\frac{1}{m}\sum_{s=1}^{m}\hat{\psi}_{s,0}(a)\right)\) is obtained using a simple bootstrap on all of the observed studies and the target population, and \(t_{m-2}^{\alpha}\) is the \((1-\alpha)\)th quantile of the \(t\) distribution with \(m-2\) degrees of freedom.
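Assembling interval (9) is then straightforward once \(\hat{\gamma}^{2}\) and the bootstrap variance of the across-study average are available; a small sketch (following the \((1-\alpha)\)th-quantile convention stated above) is given below.

```python
# Sketch of the prediction interval (9) for the transported mean outcome in a new trial.
import numpy as np
from scipy import stats

def moment_prediction_interval(psi_hat, gamma2_hat, var_mean_hat, alpha=0.05):
    psi_hat = np.asarray(psi_hat)              # transported estimates psi_hat_{s,0}(a)
    m = len(psi_hat)
    centre = psi_hat.mean()
    t_quantile = stats.t.ppf(1 - alpha, df=m - 2)   # (1 - alpha)th quantile, as in (9)
    half_width = t_quantile * np.sqrt(gamma2_hat + var_mean_hat)
    return centre - half_width, centre + half_width
```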
#### 3.4.2 Inference Based on the Bootstrap
One way to approximate the difference between our observed estimates and that of an unobserved trial is to compute \(\frac{1}{m-1}\sum_{s=1}^{m-1}\hat{\psi}_{s,0}(a)\) using data from a subset of \(m-1\) trials and compare this result with the transported estimate from the left-out trial, repeating this with each trial left out in turn. This procedure aims to capture variability related to between-trial heterogeneity. To additionally capture sampling variability in the observed data, we couple the leave-one-out approach with either the simple or the wild bootstrap. Both bootstraps are described below. The result of either procedure is an estimate of the distribution of \(\mu_{a,0}\left(K_{s}^{a}(a)\right)\). For ease of notation in the following, we let \(\hat{\mu}_{0}=\frac{1}{m}\sum_{s=1}^{m}\hat{\psi}_{s,0}(a)\) and \(\hat{\mu}_{s}=\hat{\psi}_{s,0}(a)\).
Let \(D_{s}\) denote all of the data available in our original sample from trial \(s=1,...,m\). Let \(X\) denote the covariate data available from the target population. For \(b=1,...,B\),
1. Randomly choose one of \(m\) studies to treat as the "unobserved" trial on which we're trying to make a prediction. Denote this choice \(s_{b}\in\{1,...,m\}\).
2. Draw a simple bootstrap sample \(D_{s_{b}}^{*}\) from \(D_{s_{b}}\) and \(X_{1}^{*}\) from \(X\) and estimate \(\hat{\mu}_{s_{b}}^{*}\) using these two datasets.
3. Draw simple bootstrap samples \(\{D_{s}^{*}:s\neq s_{b}\}\) from the other trial data. Draw another bootstrap sample \(X_{2}^{*}\) from \(X\). With these samples, compute \(\mu^{*}=\frac{1}{m-1}\sum_{s\neq s_{b}}\hat{\mu}_{s}^{*}\).
4. Compute an estimated residual \(\hat{\delta}_{b}^{*}=\mu^{*}-\hat{\mu}_{s_{b}}^{*}\) that quantifies our prediction error.
5. Finally, construct another estimate \(\hat{\mu}^{*}\) using a new set of bootstrap samples for all of the original data, including the trial that we left out when computing the prediction error. Let \(\hat{\mu}_{b}^{pred}=\hat{\mu}^{*}-\hat{\delta}_{b}^{*}\).
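A sketch of steps 1-5 is given below; it assumes a user-supplied function `estimate(trial_df, target_df)` returning the transported estimate \(\hat{\psi}_{s,0}(a)\) for a single trial (for example, an adaptation of the standardization sketch in Section 3.2), and the data structures are illustrative.

```python
# Sketch of the leave-one-study-out simple-bootstrap prediction distribution (steps 1-5).
import numpy as np

rng = np.random.default_rng(0)

def resample(df):
    return df.iloc[rng.integers(len(df), size=len(df))]   # bootstrap the rows of a DataFrame

def loo_bootstrap_predictions(trials, target, estimate, B=1000):
    """trials: list of per-study DataFrames; target: covariate sample from the target."""
    m = len(trials)
    preds = np.empty(B)
    for b in range(B):
        s_b = rng.integers(m)                                              # step 1
        mu_left_out = estimate(resample(trials[s_b]), resample(target))    # step 2
        target_star = resample(target)
        mu_others = np.mean([estimate(resample(trials[i]), target_star)
                             for i in range(m) if i != s_b])               # step 3
        delta_star = mu_others - mu_left_out                               # step 4
        target_full = resample(target)
        mu_full = np.mean([estimate(resample(t), target_full) for t in trials])
        preds[b] = mu_full - delta_star                                    # step 5
    return preds   # e.g. np.quantile(preds, [0.025, 0.975]) gives a 95% prediction interval
```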
We also consider constructing prediction intervals using the wild bootstrap based on the influence function of the estimator \(\hat{\mu}_{s}\). This procedure follows an outline similar to the simple bootstrap above, except that sampling the data with replacement is replaced by the wild bootstrap approach of Matsouaka et al. (2022). Note that the exact form
of the influence function depends on the estimator. Because we use the outcome model in the simulation study described below, we use its influence function in implementing the wild bootstrap. See, e.g., Tsiatis (2006) for additional detail regarding the definition and derivation of influence functions.
## 4 Simulation Study
To evaluate the performance of the three approaches outlined above, we conduct a simulation study as follows.
### Data Generating Procedure
_Covariates_: We assume we have three covariates \(X_{1}\), \(X_{2}\), \(X_{3}\) for each of the \(m\) trials and the sample from the target population. Each trial has 100 participants total, split into two treatment groups, and the target population sample is of size 1000. The covariates in each setting \(s=0,1...,m\) are generated as follows:
1. Determine the mean covariate vector \(\boldsymbol{\mu}_{s}\in\mathbb{R}^{3}\) in study \(s\) by generating \(m\) equally spaced values in the interval \([0,1.5]\) and setting \(\boldsymbol{\mu}_{s}\) equal to the \(s\)th such value (repeating the value three times for each entry in \(\boldsymbol{\mu}_{s}\)). Thus, if \(m=3\), then \(\boldsymbol{\mu}_{1}=(0,0,0)^{T}\), \(\boldsymbol{\mu}_{2}=(0.75,0.75,0.75)^{T}\), and \(\boldsymbol{\mu}_{3}=(1.5,1.5,1.5)^{T}\). In every scenario, the mean covariate vector in the target population is \(\boldsymbol{\mu}_{0}=(1,1,1)^{T}\).
2. For each of \(n_{s}\) participants in study \(s\), draw a vector \((\mathbf{X}_{s})_{i}\sim\mathcal{N}(\boldsymbol{\mu}_{s},\Sigma)\), where \[\Sigma=\begin{pmatrix}1&0.5&0.5\\ 0.5&1&0.5\\ 0.5&0.5&1\end{pmatrix}.\] Note that this matrix is identical across studies and the target population.
_Potential Outcomes_: Equipped with these covariate values, we generate each participant's potential outcome under treatment \(a\) in study \(s\) as a linear combination of \((X_{s,1},X_{s,2},X_{s,3})_{i}^{T}\). That is,
\[\left(Y(a,k_{s}^{a})|S=s,\mathbf{X=x}\right)_{i}=\beta_{0}+\beta_{1}x_{s,1}+ \beta_{2}x_{s,2}+\beta_{3}x_{s,3}+\delta_{s}^{a}+\epsilon, \tag{10}\]
where \(\beta_{0}=\beta_{1}=\beta_{2}=\beta_{3}=0.5\) and \(\epsilon\sim N(0,1)\). The value \(\delta_{s}^{a}\) is the realized value of random variation in potential outcomes that results from applying treatment \(a\) in study \(s\); it is fixed across all participants in study \(s\). The distribution of this variation is distinct in different simulation scenarios, as detailed below.
Under the specification in (10), the true value for the estimand of interest is
\[E[Y(a,k_{0}^{a})|R=0]=E[E[Y(a,k_{0}^{a})|S=0,\mathbf{X}]] =0.5+0.5E[X_{0,1}]+0.5E[X_{0,2}]+0.5E[X_{0,3}]+\delta_{0}^{a}+E[\epsilon]\] \[=0.5+0.5(1)+0.5(1)+0.5(1)+\delta_{0}^{a}+0\] \[=2+\delta_{0}^{a}. \tag{11}\]
This is the true expected value under treatment \(a\) if members of the target population participated in an "unobserved" trial. Treatment assignment in each observed trial to \(a\) or placebo proceeds under 1:1 randomization.
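A sketch of one replication of this data-generating process appears below; the `delta_dist` argument stands in for the setting-specific distributions varied across the simulation scenarios, and the remaining names are illustrative.

```python
# Sketch of one simulated replication: m trials of 100 participants, a target sample of
# 1000, three correlated covariates, and one shared draw delta_s^a per (study, arm).
import numpy as np

rng = np.random.default_rng(0)
SIGMA = np.array([[1.0, 0.5, 0.5],
                  [0.5, 1.0, 0.5],
                  [0.5, 0.5, 1.0]])
BETA = np.array([0.5, 0.5, 0.5])

def simulate(m, n_trial=100, n_target=1000,
             delta_dist=lambda size: rng.normal(size=size)):
    study_means = np.linspace(0.0, 1.5, m)                 # covariate means per study
    delta = delta_dist(size=(m, 2))                        # delta_s^a for the two arms
    trials = []
    for s in range(m):
        X = rng.multivariate_normal(np.repeat(study_means[s], 3), SIGMA, size=n_trial)
        A = rng.permutation(np.repeat([0, 1], n_trial // 2))   # 1:1 randomization
        Y = 0.5 + X @ BETA + delta[s, A] + rng.normal(size=n_trial)
        trials.append({"X": X, "A": A, "Y": Y})
    X_target = rng.multivariate_normal(np.ones(3), SIGMA, size=n_target)
    # Per (11), the estimand for a new trial is 2 + delta_0^a for a fresh draw delta_0^a.
    return trials, X_target, delta
```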
_Estimation_: Using notation defined earlier, we estimate the average observed outcome under treatment \(a\) in study \(s\) (denoted \(g_{a}^{s}(x)=E[Y|X=x,S=s,A=a]\)) using a correctly specified outcome model
\[\hat{g}_{a}^{s}(x)=\mathbf{x}^{T}\boldsymbol{\hat{\beta}}_{s},\]
where \(\mathbf{x}^{T}=(1,x_{1},x_{2},x_{3})\) and \(\boldsymbol{\hat{\beta}}_{s}=(\hat{\beta}_{s,0},\hat{\beta}_{s,1},\hat{\beta}_{s,2},\hat{\beta}_{s,3})^{T}\). The expected value of this outcome averaged over the distribution of covariates in the target population (denoted \(\psi_{s,0}(a)=E[E[Y|X,S=s,A=a]|R=0]\)) is estimated as
\[\hat{\psi}_{s,0}(a)=\frac{1}{n_{0}}\sum_{i:S_{i}=0}\hat{g}_{a}^{s}(X_{i}).\]
As described above, the average of the \(\psi_{s,0}(a)\)'s over the \(m\) studies is our best guess of the target estimand \(\mu_{a,0}(k_{0}^{a})\):
\[\hat{\mu}_{a,0}(k_{0}^{a})=\frac{1}{m}\sum_{s=1}^{m}\hat{\psi}_{s,0}(a).\]
We then construct a prediction interval centered at \(\hat{\mu}_{a,0}\) which aims to contain \(E[Y(a,k_{0}^{a})|R=0]=2+\delta_{0}^{a}\) with some pre-specified probability, using the three methods described above.
### Simulation Scenarios
The above data generating process is highly simplified. It assumes the true outcome model is identical across scenarios and that we specify this model correctly. Future work will complicate this setup to investigate other issues, e.g., model misspecification. For now, this simple scenario focuses our attention on two main parameters, which we vary across simulations:
1. _The number of trials in the meta-analysis study dataset_. This takes values across \(m=\)5, 10, 15, 30, and 50 studies. While we rarely expect to have 50 studies in a meta-analysis, the goal of that scenario is to
investigate whether a given method over/under covers as we collect data from more and more studies.
2. _The distribution of the setting-specific variation_. Settings include \(\text{Unif}[-2,2]\), \(N(0,1)\), \(\text{Exponential}(1)-1\), and \(\text{Pareto}(1,3)\).
The settings above define \(5\times 4=20\) distinct simulation scenarios. We evaluate each scenario with 1000 artificial datasets simulated from the data generating process described above. Where applicable, any application of the bootstrap in our estimation procedure includes 1000 replications. For a given method of constructing the prediction interval of interest, we estimate the coverage probability as the proportion of prediction intervals across those 1000 iterations that contain the true parameter value.
### Results
Figure 1 displays estimates of coverage for prediction intervals constructed according to the quantiles of the bootstrap estimates. The second set of plots (Figure 2) does so using an approximation to the normal distribution, i.e., the mean of the bootstrapped estimates \(\pm 1.96\) times the standard deviation of the bootstrapped estimates. In the figures, Method of Moments refers to intervals constructed using our estimate \(\hat{\gamma}^{2}\), as in (9). (Note that the Method of Moments results are the same in both figures because they do not depend on such choices.)
In general, the method of moments intervals based on the corrected estimator of between-study variance perform better than bootstrap-based alternatives across different distributions of residual heterogeneity. The normal-based approximations for the bootstrap approaches also outperform prediction intervals based on empirical quantiles, though the normal-based approximations are still outperformed by the method of moments estimator. Although simulations assuming a very large number of meta-analysis studies are helpful for understanding aspects of our approaches' asymptotic behavior, we recognize that 15 or more studies is more than are available in the vast majority of meta-analysis datasets, especially given the need for IPD. Thus, these simulation results suggest applying the method of moments intervals based on the corrected estimator of between-study variance in most circumstances when researchers are interested in estimating effects measured in a new, unobserved trial recruited from the target population.
Although the performance of the two bootstrap alternatives is similar, we note that the wild bootstrap has non-negligible computational advantages. On a standard Windows machine with an Intel Core i7-9700 Processor, the wild bootstrap approach generated prediction intervals more than twice as fast as the simple bootstrap in a scenario with 5 studies and the sample sizes given above. The absolute times in such cases were negligible (16 seconds vs. 6 seconds); however, with more studies and a much larger target population sample, the wild bootstrap becomes far more computationally attractive.
Figure 1: Coverage for prediction intervals constructed according to the quantiles of the bootstrap estimates. Each plot corresponds to a separate distribution for setting-specific variation.
Figure 2: Coverage for prediction intervals constructed according to the quantiles of the normal distribution. Each plot corresponds to a separate distribution for setting-specific variation.
## Discussion
In this work, we extend recent developments in causally-interpretable meta-analysis to account for between-study heterogeneity beyond covariate differences between trials and the target population. Our causal framework attempts to bridge the structure of traditional random-effects meta-analysis with causal inference. To do so, we introduce novel estimands, estimates, and inferential procedures that all explicitly reference the role that a trial's setting can play in driving systematic differences between study-specific parameters.
The impact of between-study heterogeneity can severely limit the interpretability of treatment effect estimates transported from a collection of trials to the target population. Under such heterogeneity, estimates derived from data pooled across distinct trials may reflect a combination of the idiosyncrasies present in each trial setting. In many contexts, such heterogeneity persists despite standardizing the trial covariate distributions to that of a target population. For instance, variation in treatment version could cause differences in the potential outcomes we would expect to observe across studies, even after conditioning on all effect-modifiers, as well as covariates relevant to treatment selection or trial participation.
To address these issues, we begin by defining potential outcomes that explicitly reference setting-specific heterogeneity and, subsequently, construct estimators and inferential procedures that take this additional variability into account. By allowing potential outcomes to depend on both assigned treatment and the setting of that assignment, we can disaggregate the effects of a trial from the trial's population.
Our work is a first step toward clarifying the dual influence of trial setting and participants in the context of causally-interpretable meta-analysis. With additional information about protocol-level differences between trials, our framework might be extended to reference specific trial characteristics of interest. Future work might consider transporting or up-weighting a specific source of heterogeneity (e.g. differing levels of adherence) to produce transported effects most relevant to the clinical setting of interest. Relatedly, it is also important to clarify the distinction between intention-to-treat and per-protocol effects when attempting to account for setting-specific heterogeneity. That is, investigators should distinguish between studying what would happen if a member of the target population had participated in a given trial and was compliant or was simply assigned treatment. The estimands introduced in this work should be interpreted as defining intention-to-treat transported effects, as our observed data contains information on treatment assignment alone. However, these methods may be extended to describe per-protocol effects when such data are available.
In part, our work follows from a simple acknowledgment that the particular context of an RCT has relevance for causal interpretability. In other words, estimates transported from RCTs do not necessarily have a generic interpretation uncoupled from the effects of trial setting. We view our framework as one step toward developing a causal structure that can accommodate between-study heterogeneity. In the future,
we aim to expand on this structure to incorporate trial-specific information relevant to a variety of clinical questions in populations of interest.
|
2301.05465 | Explicit Temporal Embedding in Deep Generative Latent Models for
Longitudinal Medical Image Synthesis | Medical imaging plays a vital role in modern diagnostics and treatment. The
temporal nature of disease or treatment progression often results in
longitudinal data. Due to the cost and potential harm, acquiring large medical
datasets necessary for deep learning can be difficult. Medical image synthesis
could help mitigate this problem. However, until now, the availability of GANs
capable of synthesizing longitudinal volumetric data has been limited. To
address this, we use the recent advances in latent space-based image editing to
propose a novel joint learning scheme to explicitly embed temporal dependencies
in the latent space of GANs. This, in contrast to previous methods, allows us
to synthesize continuous, smooth, and high-quality longitudinal volumetric data
with limited supervision. We show the effectiveness of our approach on three
datasets containing different longitudinal dependencies. Namely, modeling a
simple image transformation, breathing motion, and tumor regression, all while
showing minimal disentanglement. The implementation is made available online at
https://github.com/julschoen/Temp-GAN. | Julian Schön, Raghavendra Selvan, Lotte Nygård, Ivan Richter Vogelius, Jens Petersen | 2023-01-13T10:31:27Z | http://arxiv.org/abs/2301.05465v1 | Explicit Temporal Embedding in Deep Generative Latent Models for Longitudinal Medical Image Synthesis
###### Abstract
Medical imaging plays a vital role in modern diagnostics and treatment. The temporal nature of disease or treatment progression often results in longitudinal data. Due to the cost and potential harm, acquiring large medical datasets necessary for deep learning can be difficult. Medical image synthesis could help mitigate this problem. However, until now, the availability of GANs capable of synthesizing longitudinal volumetric data has been limited. To address this, we use the recent advances in latent space-based image editing to propose a novel joint learning scheme to explicitly embed temporal dependencies in the latent space of GANs. This, in contrast to previous methods, allows us to synthesize continuous, smooth, and high-quality longitudinal volumetric data with limited supervision. We show the effectiveness of our approach on three datasets containing different longitudinal dependencies. Namely, modeling a simple image transformation, breathing motion, and tumor regression, all while showing minimal disentanglement. The implementation is made available online1.
Footnote 1: [https://github.com/julschoen/Temp-GAN](https://github.com/julschoen/Temp-GAN)
Keywords: Generative Adversarial Networks, Temporal Generation, Semantic Editing.
## 1 Introduction
The use of deep learning in the medical domain has increased recently but is limited by the need for large and well-labeled datasets [14]. A potential mitigation to this problem is the use of synthetic data obtained using generative models such as Generative Adversarial Networks (GANs) [7], which has been shown to enhance medical deep learning algorithms [18]. Due to the natural temporal development of, e.g., disease progression or treatment monitoring, temporal
data is gathered frequently in the medical domain. Prominent cases are neurodegenerative diseases such as Alzheimer's or cancer-related longitudinal data collected during radiotherapy. Combining longitudinal medical data and deep learning can allow for earlier and more accurate prognosis [21], as well as disease modeling, such as tumor progression or regression [24]. GANs have successfully been used to generate temporal data. They have shown promising results in video generation [17, 20], precipitation forecasting [16], and also medical temporal data generation [1, 5]. However, all previous approaches have been either on 2D data [1, 16, 20, 17] or have considered image-to-sequence or sequence-to-sequence generation tasks [5, 16, 20]. While temporal data generation can be done with image-to-sequence or sequence-to-sequence models, they do not allow for the generation of new sequences but rely on input data on which the generated data expands.
In recent years, a line of work has focused on the interpretability of GANs by investigating them utilizing linear directions in their latent spaces that result in meaningful and interpretable image transformations [6, 19, 10]. These works show that simple shifts of latent codes along a linear direction can result in powerful image transformations such as increasing memorability [6], rotating the image subject [10], or even background removal [19]. However, these approaches operate on pre-trained GANs and can only discover what is already captured by the learned representation.
The following summarises the main contributions of our work:
* We propose a novel model, jointly training a GAN and a direction in the latent space corresponding to any desired image transformation for which ordered data is available. To the best of our knowledge, our approach is the first to explicitly embed image transformations in the form of linear directions into the GAN latent space during training. Furthermore, the proposed joint training procedure is model agnostic and works with any latent variable-based GAN.
* We use the proposed framework to embed a linear direction corresponding to temporal changes in volumetric medical data. This allows the generation of longitudinal volumetric data for the first time without requiring input to the generator in the form of images or sequences of images. Furthermore, as the temporal sequence generation is based on a simple shift in the latent space, we can generate smooth and continuous sequences processing each time point individually, thereby lessening memory requirements compared to the processing of full sequences.
## 2 Related Work
Our work considers concepts from different lines of prior research in generative modeling. In the following, we summarise this.
#### 2.0.1 Volumetric Data Synthesis
Despite the advances in natural image synthesis, there is a lack of usage and implementations of general-purpose state-of-the-art
volumetric GAN architectures. Existing volumetric GANs either do not utilize current advances in GAN architectures [22], are task-specific [12], trade off image resolution to allow for state-of-the-art model architectures [8], or focus on image-to-image generation tasks [11; 13]. More advanced architectures, such as Self-Attention Generative Adversarial Network (SA-GAN) [23], which can be made more memory efficient by using residual blocks with bottlenecks as suggested with Big Generative Adversarial Network (BigGAN) [3], are well suited to volumetric data synthesis. However, while their use is common in the natural image domain, they are generally not used for volumetric data.
#### 2.0.2 Latent Based Image Editing
Latent-based image edits have been possible in GAN latent spaces since the introduction of Deep Convolutional Generative Adversarial Network (DCGAN) [15]. Currently, most approaches use linear directions, i.e., latent walks, in the latent space of pre-trained generators corresponding to interpretable image edits [19; 6; 10]. The learned representation, however, does not necessarily contain any potentially desired image transformation. InfoGAN [4] mitigates this by jointly training the GAN and an additional latent vector input to disentangle the learned representations. However, a potentially desired image transformation cannot be explicitly enforced. In contrast, thanks to jointly training the generator and the desired embedding, we ensure that the desired edit is encoded in the latent space.
#### 2.0.3 Temporal Synthesis
Prior works on temporal synthesis have focused on video generation with GANs [16; 17; 20]. This work was inspired by the demonstrated viability of temporal latent embedding [17] and the use of two discriminators [16; 17; 20]. However, most approaches use generators conditioned on input images [16; 20], limiting their ability to generate new sequences; the exception is Saito et al. [17], which shows the viability of temporal generation based on latent models, but does so on natural images and expands the latent space for the temporal modeling. In contrast, our approach is the first to operate on volumetric data, and we make this possible by embedding the temporal component into the latent space directly rather than expanding it.
More similar approaches to ours have been introduced with TR-GAN [5] and the work by Abdullah et al. [1]. TR-GAN explores many-to-many predictions of Magnetic Resonance Imaging (MRI) using a single generator. Like the previous methods, TR-GAN utilizes a generator conditioned on input sequences to predict temporal sequences. Thus, future time steps are directly generated from input. In contrast, our approach embeds temporal dependencies in the latent space. Thus, TR-GAN relies on a sequential generator and cannot generate new sequences. Finally, Abdullah et al. [1] propose subdividing the latent space to embed temporal dependencies of medical images. However, they rely on 2D data, crops around the region of interest, and a-priori information on the time-dependent variable (e.g., accelerometer data). In contrast, our proposed method only requires the natural ordering of the temporal sequence, operates on entire volumes, and we choose linear latent embeddings.
## 3 Methods
In this section, we introduce the proposed framework. Figure 1 shows a schematic overview of our proposed model architecture.
Our base architecture takes inspiration from video GANs and uses two discriminators. \(D_{im}\) takes individual volumes, i.e., time steps, and discriminates between real and synthesized data. Thus given an underlying GAN architecture, the discriminator can be used as \(D_{im}\) without further changes. Next, the architecture has a temporal discriminator \(D_{temp}\), which, given three volumes, discriminates whether they are temporally consistent. Again, this is implemented using an underlying GAN architecture and tripling the input channels to allow for the input of temporal sequences. Further, we use two generators. \(G_{im}\) is a traditional generator from a GAN taking some latent code \(z\in\mathbb{R}^{L}\), where \(L\) is the latent space size, and mapping it to an image \(x\) without any further changes. Finally, the temporal generator \(G_{temp}\) takes a latent code \(z\) and shift magnitudes \(\alpha\) and shifts \(z\) by magnitudes \(\alpha\) along a learned linear direction \(d\in\mathbb{R}^{L}\) where \(L\) is the size of the latent space. These shifted latent codes are individually used as input to \(G_{im}\) to generate the sequence of volumes. Thus, rather than directly generating a sequence, we embed a direction in the latent space corresponding to the desired change, and by shifting with increasing \(\alpha\), we can create a set of latent codes \(\vec{z}\) corresponding to consecutive time steps of variable length. These latent codes are individually used as input to \(G_{im}\) to create the desired number of time steps in data space.
Given the design of the proposed model, any GAN architecture consisting of discriminator \(D\) and generator \(G\) can be used by adding the temporal generator \(G_{temp}\) as detailed above and using the discriminator architecture twice as \(D_{im}\) and \(D_{temp}\) to construct an explicitly embedding GAN.
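As an illustration of the latent-shift mechanism described above (not the released implementation; the class and argument names below are ours), \(G_{temp}\) can be written as a thin wrapper around any image generator:

```python
import torch
import torch.nn as nn

class TemporalGenerator(nn.Module):
    """Minimal sketch of G_temp: shift a latent code z along a learned linear
    direction d by magnitudes alpha and decode each shifted code with the
    unchanged image generator G_im."""

    def __init__(self, g_im: nn.Module, latent_dim: int):
        super().__init__()
        self.g_im = g_im
        self.direction = nn.Parameter(torch.randn(latent_dim))

    def forward(self, z: torch.Tensor, alphas: torch.Tensor) -> torch.Tensor:
        d = self.direction / self.direction.norm()   # keep d at unit length
        # z: (B, L), alphas: (T,) -> shifted latent codes of shape (B, T, L)
        z_shift = z.unsqueeze(1) + alphas.view(1, -1, 1) * d
        b, t, l = z_shift.shape
        # decode each time step individually, so memory scales per volume
        # rather than per sequence
        vols = self.g_im(z_shift.reshape(b * t, l))
        return vols.reshape(b, t, *vols.shape[1:])
```

During training, the shift magnitudes can be drawn as `alphas = torch.empty(3).uniform_(-6, 6).sort().values`, mirroring the \(\alpha\sim\mathcal{U}[-6,6]\) choice described below; at inference time, an evenly spaced `torch.linspace(-6, 6, steps)` yields a smooth sequence of arbitrary length.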
We suggest the following implementation details: Based on the work of Voynov
Figure 1: Schematic overview of our proposed architecture for explicit embedding in GAN latent space. \(z\) is a latent vector, \(\mathbf{\alpha}\) is a set of shift magnitudes, \(z_{1,...,t}\) are the shifted latent vectors, \(x_{1,...,t}\) are shifted images corresponding to synthesized or real data, \(x_{y}\) is the time step corresponding to the original latent vector \(z\) if generated or one real image otherwise, and \(x_{i,j,k}\) are three time steps, where \(i,j,k\in\{1,...,t\}\).
and Babenko [19], we use a direction \(d\) of unit length and \(\alpha\sim\mathcal{U}[-6,6]\). As the image discriminator follows standard GAN training, we suggest using the same loss for the image discriminator and generator that the base architecture uses.
For temporal consistency, we optimize using adversarial learning. Let \(p_{true}\) be the distribution of real data correctly ordered w.r.t. transformation magnitude and \(p_{false}\) be incorrectly ordered, and \(p_{z}\) the latent distribution. Further, let \(\boldsymbol{\alpha}=(\alpha_{1},\alpha_{2},\alpha_{3})\) be any \(\alpha_{i}\in\mathbb{R}\) for which \(\alpha_{1}\leq\alpha_{2}\leq\alpha_{3}\), then we define the adversarial loss objective for the temporal discriminator using the hinge loss as:
\[\begin{split}\min_{D_{temp}}\mathcal{L}_{D_{temp}}& =\operatorname*{\mathbb{E}}_{x\sim p_{true}}[\min(0,1-D_{temp}(x))] \\ &+\operatorname*{\mathbb{E}}_{x\sim p_{false}}[\min(0,1+D_{temp }(x))]\\ &+\operatorname*{\mathbb{E}}_{z\sim p_{z}}[\min(0,1+D_{temp}(G_{ im}(G_{temp}(z,\boldsymbol{\alpha}))))]\end{split} \tag{1}\]
Given that we want to force the embedding in the latent space, we add a loss term for both \(G_{im}\) and \(G_{temp}\) so that \(G_{temp}\) learns the direction and \(G_{im}\) learns the latent representation corresponding to the desired transformation. Thus, we get:
\[\min_{G_{im},G_{temp}}\mathcal{L}_{G}=\operatorname*{\mathbb{E}}_{z,z^{\prime} \sim p_{z}}\big{[}-D_{temp}(G_{im}(G_{temp}(z,\boldsymbol{\alpha})))+L_{ GAN}(D_{im}(G_{im}(z^{\prime})))\big{]} \tag{2}\]
where \(L_{GAN}\) is the applicable GAN loss of the base architecture, and \(\boldsymbol{\alpha}\) and \(p_{z}\) are defined as above.
Intuitively, the temporal discriminator learns to discriminate based on the transformation we aim to embed. Therefore, the generators trying to fool the temporal discriminator must generate data that exhibits the correct transformation and does not change the scene (e.g., patient) markedly.
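A minimal sketch of how inputs to \(D_{temp}\) might be assembled is shown below; the precise time-step sampling is an assumption for illustration, and a full implementation would also reject random permutations that happen to remain ordered.

```python
import torch

def temporal_triplet(seq: torch.Tensor, ordered: bool) -> torch.Tensor:
    """Assemble one input for D_temp from a sequence of volumes.

    seq: (B, T, C, D, H, W), assumed ordered by transformation magnitude.
    ordered=True  draws three time indices and sorts them (a sample from p_true);
    ordered=False permutes them randomly (a sample from p_false).
    The three volumes are concatenated along channels, matching the tripled
    input channels of D_temp described above.
    """
    t = seq.shape[1]
    idx = torch.randint(0, t, (3,))
    idx = idx.sort().values if ordered else idx[torch.randperm(3)]
    return torch.cat([seq[:, i] for i in idx], dim=1)
```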
We evaluate image quality using visual inspection and slice-wise Fréchet Inception Distance (FID), and temporal consistency using visual inspection. The shifted images should not be of worse image quality than images resulting from directly sampled latent codes. Thus, the basic image quality measures are also used to assess the shifted images.
## 4 Experiments
#### 4.0.1 Datasets
We evaluate the proposed architecture on three volumetric thoracic Computerized Tomography (CT) datasets.
* The Lung Image Database Consortium image collection (LIDC) [2]. We pre-process LIDC by limiting the intensity range to \([-1000,2000]\) Hounsfield Units (HU) and normalize to a range of \([-1,1]\) using min-max scaling. We resize the data to \(128\times 128\times 128\) voxels to limit computational demands. To test the approach, we introduce a simple image transformation corresponding to shifts along the \(x\)-axis with a shift magnitude of \([-32,32]\) voxels randomly selected.
* Breathing Motion. The dataset, collected at Rigshospitalet, Copenhagen, Denmark, consists of longitudinal thoracic CT scans showing the breathing phases of 499 non-small cell lung cancer patients treated with radiotherapy between 2009 and 2017. Each data point generally consists of 10 scans, where the first five correspond to exhaling and the following five to inhaling. We only consider the scans corresponding to exhaling. We limit the range to \([-1000,500]\) HU and normalize to a range of \([-1,1]\) using min-max scaling. We resize the data to \(128\times 128\times 64\) voxels to limit computational demands.
* Tumor Regression. The final dataset consists of 256 patients, a subset of the patients from the Breathing Motion dataset, for which at least 10 daily treatment thoracic cone beam CT scans were available. It shows tumor regression during radiotherapy. We apply the same preprocessing as with the breathing motion dataset.
For all three datasets, we use 90% of the patients for training and 10% for testing. Figure 2 shows an example of the breathing motion and tumor regression dataset.
#### 4.0.2 Implementation Details
We run all experiments on two Nvidia RTX A6000. We use Python 3.9 and PyTorch 1.11.0 for the implementation. The first author performed all visual inspections without formal education in medical image evaluation. We adapt SA-GAN [23] to volumetric data and use it as our base GAN architecture following the parameters (e.g., learning rate and optimizer) suggested by the authors unless otherwise specified. Throughout all experiments, we sample \(\alpha\sim\mathcal{U}[-6,6]\) with a linear direction \(d\) of unit length based on Voynov and Babenko [19], and we arbitrarily choose to train for 5000 iterations. We use a batch size of 8 for the LIDC dataset and a batch size of 16 for the other two to fit the memory of the used GPUs. For the LIDC dataset, we use a latent space size of \(L=512\) and reduce it to \(L=256\) for the other two, as the resolution of the datasets is halved as well.
Figure 2: Examples of breathing motion and tumor regression datasets. The presented examples are after preprocessing. For breathing motion, the center volume shows the most exhaled while the ones to the left correspond to exhaling and those to the right to inhaling. For tumor regression, we estimate the slice corresponding to the center of the tumor manually. The tumor is marked with the red bounding box.
## 5 Results
We present the final, unbiased estimate of the image quality of the proposed model on all three datasets in Table 1. Note that some image transformations are more readily visible in video format. Therefore, we encourage viewing the generated volumes as videos in the provided GitHub repository.
Examples of generated volumes of the model trained on the full LIDC data set are given in Figure 3.
The resulting volumes are of high quality, and we observe the desired image transformation embedded as a shift in the latent space. The transition between shifted images is smooth with minimal entanglement.
Next, we consider the breathing motion dataset. Figure 4 presents some generated volumes for the breathing motion dataset. We observe good image quality with sufficient detail and realistic anatomy. Considering the embedding, we observe that breathing motion is captured well, and the temporal dependencies of breathing are embedded in the latent space. The clearest change when moving
\begin{table}
\begin{tabular}{c||c|c|c} & FID ax. & FID cor. & FID sag. \\ \hline \hline LIDC & \(93.8\pm 1.0\) & \(54.3\pm 0.5\) & \(30.0\pm 1.2\) \\ Breathing Motion & \(139.2\pm 2.7\) & \(79.8\pm 1.0\) & \(99.6\pm 1.4\) \\ Tumor Regression & \(82.2\pm 2.7\) & \(42.3\pm 1.4\) & \(53.4\pm 0.8\) \\ \end{tabular}
\end{table}
Table 1: Image Quality of the temporal GAN trained on all three datasets. All models are trained on 90% of the scans split patient-wise and evaluated on 10%. The FID scores are calculated using random time steps of real and synthesized data.
Figure 3: Four examples of generated volumes with embedded shift for the proposed model on the LIDC dataset. The center volume corresponds to the original latent vector. All images show the center slice for the sagittal, coronal, and axial view.
along the embedded direction is the diaphragm moving upward while exhaling. This is also the most obvious change observable in the real data. Additionally, we observe the stomach or rib cage contracting while exhaling. Lastly, we observe very high scene consistency. That is, the generated scan of the patient does not change markedly while moving the latent code along the embedded direction. Thus, the patient's anatomy is preserved, and the changes induced by moving along the embedded direction are restricted to breathing-related changes.
Finally, we consider the model trained on the tumor regression dataset. We present examples of generated volumes in Figure 5.
The image quality of the generated volumes is good, showing details in the vessel, tissue, and bone structure. We observe volumes both containing and not containing tumors. Those generated volumes containing tumors have them in varied places, shapes, and sizes.
Figure 4: Four examples of generated volumes with embedded shift for the proposed model on the breathing motion dataset. The center volume corresponds to the original latent vector. All images show the center slice for the sagittal, coronal, and axial view.
Figure 5: Four examples of generated volumes with embedded shift for the proposed model on the tumor regression dataset. The center volume corresponds to the original latent vector. For all images, we try to locate the center of the tumor for the sagittal, coronal, and axial view.
If tumors are present in the generated volumes, traversing the embedded direction results in tumors shrinking in size. That is, the model successfully embeds temporal tumor regression in image space as a linear direction in the latent space. When traversing the embedded direction, there is minimal change to the volumes other than the reduction in tumor size. Further, no clear change exists if no tumor is present in the volume. Thus, the direction models tumor regression in a disentangled manner.
## 6 Discussion
#### 6.0.1 Temporal Generation
We can use a simple learned direction to generate temporal sequences of data using only a single non-temporal generator, which, to the best of our knowledge, has not been shown previously. Our proposed method of jointly training such a direction and the GAN shows distinct benefits over discovering directions in pretrained generators. From visual inspection, our method shows almost no entanglement and ensures that even complex transformations, such as tumor regression, are enforced to be present as a linear direction in the latent space.
Our model shows high-quality synthetic data with controllable enforced image transformations and smooth, continuous generation on three datasets. In particular, on the breathing motion and tumor regression datasets, we observe clearer changes than those visible in the real data (e.g., movement of the diaphragm in the breathing motion dataset). This indicates that the proposed method isolates the signal corresponding to temporal development well. Compared to other temporal generation approaches, our model does not require conditioning of the generator or a sequential generator. As a consequence, we can easily vary the architecture and benefit from advances in GAN architectures that are to come. The results we observe on the tumor regression dataset deserve the most attention. While traditional tumor regression models might be more patient- and therapy-specific [9], the scale we operate on is novel. Furthermore, unlike previous methods, we generate entire volumes and show that we can model tumor regression as part of the image generation process.
#### 6.0.2 Limitations
We use the parameters used for SA-GAN. While this is likely a good choice for the image generation component of our proposed model, the temporal aspects could perform better with different parameters and architecture choices. Moreover, as there is little prior investigation into evaluating temporal GANs, we needed to devise our own evaluation strategy; further development in this area would likely be beneficial.
#### 6.0.3 Impact & Future Work
As our method is trained based only on the order of the transformation magnitude, it is reasonable to assume that it can be applied to any transformation where such an ordering exists. Further, our method offers many practical applications for downstream machine learning tasks by providing a way to synthesize controllable data, e.g., with or without tumor,
in a domain where annotations are costly and difficult to obtain. We provide a proof of concept of unsupervised tumor segmentation using our method in Figure 6.
Further practical applications directly benefiting medical image analysis will likely arise with further investigation of our method. Given the clear and isolated signal we observe, natural applications of our work could be in the visualization and understanding of changes happening in temporal sequences. Additionally, investigating how much the sampled temporal development reflects patient-specific as opposed to therapy-specific aspects would offer valuable insights. Finally, we see the future investigation of embedding non-temporal dependencies as one of the most promising potential applications. Embedding transformations across patients, e.g., disease severity, would allow for further fine-grained control over data synthesis in the medical domain.
## 7 Conclusion
In this work, we investigate the possibility of explicitly embedding temporal dependencies in the latent space of generative models to generate longitudinal volumetric data. We generate controllable longitudinal data with minimal entanglement. Further, linear directions in the latent space are sufficient to generate temporal sequences from a non-temporal generator. Due to the simplicity of the linear latent walk, we can generate continuous and smooth sequences of varying lengths, unlike other suggested temporal GANs.
We show that our framework can generate complex temporal dependencies, e.g., breathing motion or tumor regression, as part of the image synthesis task on medical data. The method could potentially improve unsupervised tumor segmentation, disease-aware image augmentation, and radiotherapy planning. Further, our method can embed any temporal dependency with limited supervision and thus provides further usefulness beyond what we explore in this work.
Figure 6: Proof of concept of unsupervised tumor segmentation using our method. We generate a future time point to a given volume, take the difference image, and threshold it with a threshold of 0.2. Then we apply two erosion followed by two dilation operations to remove noise and are left with the segmentation mask. Next to the direct application to tumor segmentation, the difference image might also visualize treatment effects, such as weight loss, which is a common side effect.
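The post-processing pipeline from Figure 6 can be sketched as follows; whether the signed or absolute difference image is thresholded is an assumption here, and the function name is illustrative.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def rough_tumor_mask(vol_now: np.ndarray, vol_future: np.ndarray,
                     threshold: float = 0.2) -> np.ndarray:
    """Proof-of-concept mask following Figure 6: difference image, threshold,
    then two erosions and two dilations to suppress noise."""
    diff = np.abs(vol_now - vol_future)   # absolute difference is an assumption
    mask = diff > threshold
    mask = binary_erosion(mask, iterations=2)
    mask = binary_dilation(mask, iterations=2)
    return mask
```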
#### Acknowledgements
The authors acknowledge the National Cancer Institute and the Foundation for the National Institutes of Health, and their critical role in the creation of the free publicly available LIDC/IDRI Database used in this study. The authors would like to thank Anna Kirchner for help in preparation of the manuscript. Jens Petersen is partly funded by research grants from the Danish Cancer Society (grant no. R231-A13976) and Varian Medical Systems.
|
2308.09754 | Long-time dynamics for the energy critical heat equation in $R^5$ | We investigate the long-time behavior of global solutions to the energy
critical heat equation in $R^5$ \begin{equation*}
\begin{cases}
\pp_t u=\Delta u+|u|^{\frac{4}{3}} u ~&\mbox{ in }~ R^5 \times (t_0,\infty),
u(\cdot,t_0)=u_0~&\mbox{ in }~ R^5.
\end{cases} \end{equation*} For $t_0$ sufficiently large, we show the
existence of positive solutions for a class of initial value $u_0(x)\sim
|x|^{-\gamma}$ as $|x|\rightarrow \infty$ with $\gamma>\frac32$ such that the
global solutions behave asymptotically \begin{equation*}
\| u(\cdot,t) \|_{L^\infty (\R^5)} \sim
\begin{cases}
t^{-\frac{3(2-\gamma)}{2}} ~&\mbox{ if }~ \frac32<\gamma<2
(\ln t)^{-3} ~&\mbox{ if }~ \gamma=2
1 ~&\mbox{ if }~ \gamma>2
\end{cases}
\mbox{ \ for \ } t >t_0, \end{equation*} which is slower than the
self-similar time decay $t^{-\frac{3}{4}}$. These rates are inspired by
Fila-King \cite[Conjecture 1.1]{FilaKing12}. | Zaizheng Li, Qidi Zhang, Yifu Zhou, Juncheng Wei | 2023-08-18T18:01:57Z | http://arxiv.org/abs/2308.09754v1 | # Long-time dynamics for the energy critical heat equation in \(\mathbb{R}^{5}\)
###### Abstract.
We investigate the long-time behavior of global solutions to the energy critical heat equation in \(\mathbb{R}^{5}\)
\[\begin{cases}\partial_{t}u=\Delta u+|u|^{\frac{4}{3}}u&\text{in}\ \ \mathbb{R}^{5}\times(t_{0},\infty),\\ u(\cdot,t_{0})=u_{0}&\text{in}\ \ \mathbb{R}^{5}.\end{cases}\]
For \(t_{0}\) sufficiently large, we show the existence of positive solutions for a class of initial value \(u_{0}(x)\sim|x|^{-\gamma}\) as \(|x|\to\infty\) with \(\gamma>\frac{3}{2}\) such that the global solutions behave asymptotically
\[\|u(\cdot,t)\|_{L^{\infty}(\mathbb{R}^{5})}\sim\begin{cases}t^{-\frac{3(2- \gamma)}{2}}&\text{if}\ \ \frac{3}{2}<\gamma<2\\ (\ln t)^{-3}&\text{if}\ \ \gamma=2\ \ \ \ \ \ \ \ \ \text{for}\ \ t>t_{0},\\ 1&\text{if}\ \ \gamma>2\end{cases}\]
which is slower than the self-similar time decay \(t^{-\frac{3}{4}}\). These rates are inspired by Fila-King [9, Conjecture 1.1].
## 1. Introduction and main results
Consider the semilinear heat equation
\[\begin{cases}\partial_{t}u=\Delta u+|u|^{p-1}u,&\text{in}\ \ \mathbb{R}^{n}\times(0,\infty),\\ u(\cdot,0)=u_{0},&\text{in}\ \ \mathbb{R}^{n},\end{cases} \tag{1.1}\]
with \(p>1\). It corresponds to the negative \(L^{2}\)-gradient flow of the associated energy functional
\[\frac{1}{2}\int_{\mathbb{R}^{n}}|\nabla u|^{2}-\frac{1}{p+1}\int_{\mathbb{R}^ {n}}|u|^{p+1},\]
which is decreasing along classical solutions.
Equation (1.1) has been widely studied since Fujita's celebrated work [12]. The Fujita equation looks rather simple, but extensively rich and sophisticated phenomena arise, and these are intimately related to the power nonlinearity in a rather precise manner. For instance, the Fujita exponent \(p_{F}\), the Sobolev exponent \(p_{S}\) defined respectively as
\[p_{F}:=1+\frac{2}{n},\qquad p_{S}=\begin{cases}\frac{n+2}{n-2}&\text{for}\ \ n\geq 3\\ \infty&\text{for}\ \ n=1,2\end{cases}\]
play an important role in (1.1) concerning singularity formation, long-time dynamics, and many others, and they have been studied intensively in innumerable literature. It is well known that (1.1) possesses a global nontrivial solution \(u\geq 0\) if and only if \(p>p_{F}\). Whether or not the steady states exist greatly affects the dynamical behavior of (1.1). The stationary equation of (1.1) does not have positive classical solutions if and only if \(p<p_{S}\) (see [15] and [2] for instance). For \(p=p_{S}\), up to translations and dilations, the positive steady state to the Yamabe problem is the well known Aubin-Talenti bubble
\[U(x)=\alpha_{n}(1+|x|^{2})^{-\frac{n-2}{2}},\quad\alpha_{n}=[n(n-2)]^{\frac{n -2}{4}}\,.\]
Such a profile is commonly used when investigating the mechanism of singularity formation for (1.1) with critical exponent \(p=p_{S}\). On the other hand, Liouville type theorems for the heat flow (1.1) and their applications have also been thoroughly investigated. In the subcritical case \(p<p_{S}\), Polacik and Quittner [22] proved the nonexistence of positive radially symmetric bounded entire solution, and they showed that the global nonnegative radial solution of (1.1) decays to \(0\) uniformly as \(t\to\infty\). Polacik, Quittner and Souplet [23] developed a general scheme connecting parabolic Liouville type theorems and universal estimates of solutions. Recently in [27], Quittner proved the optimal Liouville theorems without extra symmetry nor decay assumptions on the solutions for \(1<p<p_{S}\) and showed that the nonnegative global solution of (1.1) must decay to \(0\) as \(t\to\infty\).
This paper aims to understand possible long-time dynamics for global solutions of (1.1) with \(p=p_{S}\) in \(\mathbb{R}^{5}\). Here we call a solution global if its maximal existence time is infinity. The long-time behavior for the solution of (1.1) is partially motivated by the study of threshold solutions. For any nonnegative, smooth function \(\phi(x)\) with \(\phi\not\equiv 0\), let us define
\[\alpha^{*}=\alpha^{*}(\phi):=\sup\{\alpha>0:\ T_{\max}(\alpha\phi)=\infty\},\]
and \(u^{*}:=u(x,t;\alpha^{*}\phi)\) is called the threshold solution associated with \(\phi\). Roughly speaking, the threshold solution lies on the borderline between global solutions and those that blow up in finite time since for \(\alpha\gg\alpha^{*}\), the nonlinearity dominates the Laplacian and vice versa. At the threshold level, the dynamics for \(u^{*}\) in the pointwise sense might be global and bounded, global and unbounded, or blow up in finite time. Any of these might happen depending on the power nonlinearity and the domain. We refer the readers to Ni-Sacks-Tavantzis [20], Lee-Ni [19], Galaktionov-Vazquez [14], Polacik [21], Quittner [26], and the monograph by Quittner and Souplet [28] and their references for comprehensive studies and descriptions of threshold solutions. On the other hand, the global decaying threshold and non-threshold solutions of Fujita equation have been studied extensively, see [10, 11, 16, 17, 18, 19, 23, 24, 25, 30, 31] and the references therein.
In [18], Kawanago gave a complete description of the asymptotic behavior of the positive solution in the case \(p_{F}<p<p_{S}\). Specifically, \(\|u(\cdot,t;\alpha^{*}\phi)\|_{L^{\infty}}\sim t^{-\frac{1}{p-1}}\) for \(t>1\). The spatial decay of the initial value plays an important role in the long-time behavior of solutions and threshold solutions of (1.1). For \(p\geq p_{S}\), under the assumption that the initial value \(u_{0}\) is radial, positive, continuous, and
\[\lim_{|x|\to\infty}u_{0}(x)|x|^{\frac{2}{p-1}}=0,\]
Quittner [25, Theorem 1.2] showed that there are no global positive radial solutions with self-similar time decay \(t^{-\frac{1}{p-1}}\). From this point, for \(p=p_{S}\), Fila and King [9] predicted formally, via matched asymptotics, the possible decaying/growing rate (in time) of threshold solutions to (1.1) with the radial initial value \(u_{0}\) satisfying
\[\lim_{r\to\infty}r^{\gamma}u_{0}(r)=A\ \ \text{for some}\ \ A>0\ \ \text{and}\ \ \gamma>\frac{n-2}{2}. \tag{1.2}\]
They conjectured that the threshold solution \(u\) of (1.1) with initial value \(u_{0}\) should satisfy
\[\lim_{t\to\infty}\frac{\|u(\cdot,t)\|_{L^{\infty}(\mathbb{R}^{n})}}{\varphi(t; n,\gamma)}=C\]
for some positive constant \(C\) depending on \(n\) and \(u_{0}\), where \(\varphi(t;n,\gamma)\) is given as:
The case \(\gamma>1,\ n=3\) was answered affirmatively by del Pino, Musso and Wei [7], where the infinite time blow-up solutions were constructed by the gluing method. The infinite time blow-up solutions are also called grow-up/growing solutions in some literature. The case \(\gamma>2,\ n=4\) was solved in [32] recently. Due to the intimate connection with the critical Fujita equation in \(\mathbb{R}^{4}\), the trichotomy dynamics of the \(1\)-equivariant harmonic map heat flow was studied in [33]. See also Galaktionov-King [13], Cortazar-del Pino-Musso [3], del Pino-Musso-Wei-Zheng [8] (sign-changing solutions), and Ageno-del Pino [1] for their counterparts in the case of the bounded domain, where the Dirichlet boundary plays a significant role in determining the blow-up dynamics.
This paper addresses the case for \(n=5\) in Table 1. We first introduce some notations that we will use throughout the paper.
**Notations:**
* We write \(a\lesssim b\) (respectively \(a\gtrsim b\)) if there exists a constant \(C>0\) independent of \(t_{0}\) such that \(a\leq Cb\) (respectively \(a\geq Cb\)). Set \(a\sim b\) if \(b\lesssim a\lesssim b\). Denote \(f_{1}=O(f_{2})\) if \(|f_{1}|\lesssim f_{2}\).
* For any \(x\in\mathbb{R}^{n}\) with \(|x|=\big{(}\sum\limits_{i=1}^{n}x_{i}^{2}\big{)}^{1/2}\), the Japanese bracket denotes \(\langle x\rangle=\sqrt{|x|^{2}+1}\).
* For any \(c\in\mathbb{R}\), we use the notation \(c-\) (respectively \(c+\)) to denote a constant less (respectively greater) than \(c\) and can be chosen arbitrarily close to \(c\).
* \(\eta(x)\) is a smooth cut-off function satisfying \(\eta(x)=1\) for \(|x|\leq 1\), \(\eta(x)=0\) for \(|x|\geq 2\), and \(0\leq\eta(x)\leq 1\) for all \(x\in\mathbb{R}^{n}\).
The main theorem is stated below.
**Theorem 1.1**.: _Consider_
\[\partial_{t}u=\Delta u+|u|^{\frac{4}{3}}u\ \ \mbox{in}\ \ \mathbb{R}^{5}\times(t_{0}, \infty). \tag{1.3}\]
_Given constants \(\gamma>\frac{3}{2}\) and \(D_{0},\ D_{1}\) satisfying \(0<D_{0}\leq D_{1}<2D_{0}\), for \(t_{0}\) sufficiently large, there exists a positive solution \(u\) of the form_
\[u=15^{\frac{3}{4}}\mu^{-\frac{3}{2}}\left(1+\left|\frac{x-\xi}{\mu}\right|^{2}\right)^{-\frac{3}{2}}\eta\left(\frac{x-\xi}{\sqrt{t}}\right)+O\left(t^{-\frac{\tilde{\gamma}}{2}}R^{5}\ln^{2}R\right),\quad R=\ln\ln t, \tag{1.4}\]
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & \(\frac{n-2}{2}<\gamma<2\) & \(\gamma=2\) & \(\gamma>2\) \\ \hline \(n=3\) & \(t^{\frac{\gamma-1}{2}}\) & \(t^{\frac{1}{2}}(\ln t)^{-1}\) & \(t^{\frac{1}{2}}\) \\ \hline \(n=4\) & \(t^{-\frac{2-\gamma}{2}}\ln t\) & \(1\) & \(\ln t\) \\ \hline \(n=5\) & \(t^{-\frac{3(2-\gamma)}{2}}\) & \((\ln t)^{-3}\) & \(1\) \\ \hline \(n\geq 6\) & \(1\) & \(1\) & \(1\) \\ \hline \end{tabular}
\end{table}
Table 1. Fila-King [9, Conjecture 1.1]
_where \(\tilde{\gamma}=\min\left\{\gamma,\ 3-\right\}\), \(\mu=\mu(t),\ \xi=\xi(t)\in C^{1}[t_{0},\infty)\) satisfy_
\[\mu\sim\begin{cases}t^{2-\gamma},&\gamma<2\\ \ln^{2}t,&\gamma=2\\ 1,&\gamma>2,\end{cases}\qquad|\xi|\lesssim R^{-\frac{7}{4}}\begin{cases}t^{2-\gamma},&\gamma<2\\ \ln^{2}t,&\gamma=2\\ 1,&\gamma>2.\end{cases} \tag{1.5}\]
_In particular, \(\|u(\cdot,t)\|_{L^{\infty}(\mathbb{R}^{5})}=15^{\frac{3}{4}}\mu^{-\frac{3}{2}} \left(1+O\left(t^{\max\left\{3-2\gamma,-\frac{5}{2}\right\}}\ln^{4}t\right)\right)\). Moreover, the initial value satisfies_
\[u(x,t_{0})=(4\pi t_{0})^{-\frac{5}{2}}\int_{\mathbb{R}^{5}}e^{-\frac{|x-z|^{2}}{4t_{0}}}\psi_{0}(z)dz\ \ \ \ \text{for}\ \ |x|>4t_{0}^{\frac{1}{2}}\]
_with an arbitrary function \(\psi_{0}(x)\) satisfying \(D_{0}\langle x\rangle^{-\gamma}\leq\psi_{0}(x)\leq D_{1}\langle x\rangle^{- \gamma}\). Furthermore, if \(D_{0}=D_{1}\), we have_
\[\lim_{|x|\to\infty}\langle x\rangle^{\gamma}u(x,t_{0})=D_{0}. \tag{1.6}\]
**Remark 1.1**.:
* _The restriction_ \(D_{1}<2D_{0}\) _is due to a technical reason in the derivation process of (_5.15_) for the case_ \(\gamma\leq 2\)_._
* _The scaling rate/dynamics_ \(\mu\) _is derived by balancing the heat flow of the initial value and the Aubin-Talenti bubble via the orthogonal condition (_3.1_)._
* _Consider_ \(\partial_{t}u=\partial_{rr}u+\frac{n-1}{r}\partial_{r}u+u^{\frac{n+2}{n-2}}\)_,_ \(r>0,\ t>0\) _with_ \(n\in(4,6)\)_. It is possible to deduce similar results by redoing the construction process._
* _The scaling rate with logarithmic correction_ \(t^{k_{1}}(\ln t)^{k_{2}}(\ln\ln t)^{k_{3}}\cdots\) _for some_ \(k_{i}\in\mathbb{R}\)_,_ \(i\in\mathbb{Z}_{+}\) _with finite multiplicity can be expected when we take the initial value of the form_ \(u_{0}(x)\sim\langle x\rangle^{\gamma_{1}}\langle\ln\langle x\rangle\rangle^{ \gamma_{2}}\langle\ln\langle\ln\langle x\rangle\rangle\rangle^{\gamma_{3}}\cdots\) _for some_ \(\gamma_{i}\in\mathbb{R}\)_,_ \(i\in\mathbb{Z}_{+}\)_._
For \(p>p_{F}\), Lee and Ni [19, Theorem 3.8] gave positive global solutions of (1.1) with the decay rate
\[\|u(\cdot,t)\|_{L^{\infty}(\mathbb{R}^{n})}\sim t^{-k}\ \ \text{for any}\ \ k\in\left[\frac{1}{p-1},\ \frac{n}{2}\right].\]
In particular, for \(n=5\) and \(p=p_{S}\), \(k\in\left[\frac{3}{4},\frac{5}{2}\right]\).
Theorem 1.1 implies a direct consequence that somewhat expands the picture of global dynamics of positive solutions in the critical case \(p=p_{S}\) in \(\mathbb{R}^{5}\) with algebraic decay rate:
**Corollary 1.1**.: _For \(n=5,\ p=\frac{7}{3}\), for all \(k\in[0,\frac{5}{2}]\), there exists a global positive solution of (1.1) with the rate \(\|u(\cdot,t)\|_{L^{\infty}(\mathbb{R}^{5})}\sim t^{-k}\) as \(t\to\infty\)._
The construction of Theorem 1.1 is done by the gluing method recently developed in [3, 6]. It is a rather versatile and systematic tool that can be used to investigate the singularity formation for various evolution PDEs, and we refer to [3, 4, 5, 6, 7, 29] and the references therein.
The rest of this paper is devoted to the proof of Theorem 1.1.
## 2. Approximate solutions and the gluing system
Consider the critical heat equation
\[\partial_{t}u=\Delta u+|u|^{\frac{4}{n-2}}u\ \ \text{in}\ \ \mathbb{R}^{n}\times(t_{0},\infty). \tag{2.1}\]
The unique positive solution (up to translations and dilations) of the stationary equation \(\Delta u+u^{\frac{n+2}{n-2}}=0\) is given by the Aubin-Talenti solution
\[U(x)=\alpha_{n}(1+|x|^{2})^{-\frac{n-2}{2}},\ \ \ \alpha_{n}=[n(n-2)]^{\frac{n-2}{4}}\,.\]
The corresponding linearized operator \(\Delta+\frac{n+2}{n-2}U^{\frac{4}{n-2}}\) has bounded kernels
\[Z_{i}(x)=\partial_{x_{i}}U(x),\quad i=1,\cdots,n,\quad Z_{n+1}(x)=\frac{n-2}{2}U( x)+x\cdot\nabla U(x).\]
The leading term of the solution to (2.1) is taken as the following form
\[u_{1}(x,t)=\mu^{-\frac{n-2}{2}}U\left(y\right)\eta\left(\tilde{y}\right)+\Psi_ {0}(x,t),\ \ \text{where}\ \ y:=\frac{x-\xi}{\mu},\quad\tilde{y}:=\frac{x-\xi}{\sqrt{t}},\]
\(\mu=\mu(t)>0,\ \xi=\xi(t)\in C^{1}[t_{0},\infty)\) will be determined later, and
\[\Psi_{0}(x,t)=\left(4\pi t\right)^{-\frac{n}{2}}\int_{\mathbb{R}^{n}}e^{-\frac {|x-z|^{2}}{4t}}\psi_{0}(z)dz,\]
where \(D_{0}\langle x\rangle^{-\gamma}\leq\psi_{0}(x)\leq D_{1}\langle x\rangle^{-\gamma}\) with some constants \(0<D_{0}\leq D_{1}\). Obviously, \(\Psi_{0}>0\) and
\[\partial_{t}\Psi_{0}=\Delta\Psi_{0},\quad\Psi_{0}(\cdot,0)=\psi_{0}.\]
We first give a lemma concerning a precise estimate related to \(\Psi_{0}\).
**Lemma 2.1**.: _Given \(n>0\), \(\gamma\in\mathbb{R}\), \(t\geq 1\), then_
\[(4\pi t)^{-\frac{n}{2}}\int_{\mathbb{R}^{n}}e^{-\frac{|y|^{2}}{4t}}\langle y\rangle^{-\gamma}dy=v_{n,\gamma}(t)(C_{n,\gamma}+g_{n,\gamma}(t)), \tag{2.2}\]
_where_
\[v_{n,\gamma}(t)=\begin{cases}t^{-\frac{\gamma}{2}},&\gamma<n\\ t^{-\frac{n}{2}}\ln(1+t),&\gamma=n\\ t^{-\frac{n}{2}},&\gamma>n,\end{cases} \tag{2.3}\]
\[C_{n,\gamma}=\begin{cases}(4\pi)^{-\frac{n}{2}}\int_{\mathbb{R}^{n}}e^{-\frac{|z|^{2}}{4}}|z|^{-\gamma}dz,&\gamma<n\\ (4\pi)^{-\frac{n}{2}}\frac{1}{2}\left|S^{n-1}\right|,&\gamma=n\\ (4\pi)^{-\frac{n}{2}}\int_{\mathbb{R}^{n}}\langle y\rangle^{-\gamma}dy,&\gamma>n\end{cases},\qquad g_{n,\gamma}(t)=O\bigg{(}\begin{cases}t^{-1},&\gamma<n-2\\ t^{-1}\langle\ln t\rangle,&\gamma=n-2\\ t^{\frac{\gamma-n}{2}},&n-2<\gamma<n\\ (\ln(1+t))^{-1},&\gamma=n\\ t^{\frac{n-\gamma}{2}},&n<\gamma<n+2\\ t^{-1}\langle\ln t\rangle,&\gamma=n+2\\ t^{-1},&\gamma>n+2\end{cases}\bigg{)}. \tag{2.4}\]
The proof of Lemma 2.1 is postponed to Appendix A.
Hereafter, we always assume \(t_{0}\geq 1\) is sufficiently large and \(t\geq t_{0}\). By Lemma 2.1, we have
\[D_{0}v_{n,\gamma}(t)\left(C_{n,\gamma}+g_{n,\gamma}(t)\right)\leq\Psi_{0}(0,t) \leq D_{1}v_{n,\gamma}(t)\left(C_{n,\gamma}+g_{n,\gamma}(t)\right). \tag{2.5}\]
By similar calculation, we have
\[\|\nabla\Psi_{0}(\cdot,t)\|_{L^{\infty}(\mathbb{R}^{n})}\lesssim t^{-\frac{1}{ 2}}v_{n,\gamma}(t). \tag{2.6}\]
By [32, Lemma A.3],
\[\Psi_{0}(x,t)\lesssim t^{-\frac{\tilde{\gamma}}{2}}\mathbf{1}_{|x|\leq t^{\frac{1}{2}}}+|x|^{-\tilde{\gamma}}\mathbf{1}_{|x|>t^{\frac{1}{2}}}, \tag{2.7}\]
where \(\tilde{\gamma}\) is defined as
\[\tilde{\gamma}:=\min\left\{\gamma,3-\right\}. \tag{2.8}\]
Define the error of \(f\) as
\[E[f]:=-\partial_{t}f+\Delta f+|f|^{\frac{4}{n-2}}f.\]
Straightforward computation implies
\[E[u_{1}]=\mu^{-\frac{n}{2}}\dot{\mu}Z_{n+1}(y)\eta\left(\tilde{y}\right)+\mu^ {-\frac{n}{2}}\dot{\xi}\cdot\left(\nabla U\right)\left(y\right)\eta\left( \tilde{y}\right)+\mathcal{E}_{\eta}+|u_{1}|^{\frac{4}{n-2}}u_{1}-\mu^{-\frac{n+ 2}{2}}U\left(y\right)^{\frac{n+2}{n-2}}\eta\left(\tilde{y}\right),\]
where
\[\mathcal{E}_{\eta}:=\mu^{-\frac{n-2}{2}}U(y)\left(2^{-1}t^{-1}\tilde{y}+t^{-\frac {1}{2}}\dot{\xi}\right)\cdot\left(\nabla\eta\right)\left(\tilde{y}\right)+2\mu^{- \frac{n}{2}}t^{-\frac{1}{2}}\left(\nabla U\right)\left(y\right)\cdot\left( \nabla\eta\right)\left(\tilde{y}\right)+\mu^{-\frac{n-2}{2}}t^{-1}U\left(y \right)\left(\Delta\eta\right)\left(\tilde{y}\right). \tag{2.9}\]
We look for an exact solution \(u\) of (2.1) in the form
\[u=u_{1}+\psi(x,t)+\mu^{-\frac{n-2}{2}}\phi\left(\frac{x-\xi}{\mu},t\right)\eta _{R},\quad\eta_{R}:=\eta\left(\frac{x-\xi}{\mu R}\right),\quad R=R(t)=\ln\ln t. \tag{2.10}\]
We make the ansatz
\[2\mu R\leq\sqrt{t}/9. \tag{2.11}\]
Direct calculation deduces that
\[E[u]= \left(\mu^{-\frac{n}{2}}\dot{\mu}Z_{n+1}(y)+\mu^{-\frac{n}{2}} \dot{\xi}\cdot\left(\nabla U\right)\left(y\right)\right)\eta\left(\tilde{y} \right)+\mathcal{E}_{\eta}+\mu^{-\frac{n+2}{2}}U(y)^{\frac{n+2}{n-2}}\left( \eta(\tilde{y})^{\frac{n+2}{n-2}}-\eta\left(\tilde{y}\right)\right)\] \[+\mathcal{N}\left[\psi,\phi,\mu,\xi\right]+\frac{n+2}{n-2}\mu^{- 2}U(y)^{\frac{4}{n-2}}\eta(\tilde{y})^{\frac{4}{n-2}}\left(\Psi_{0}+\psi+\mu^ {-\frac{n-2}{2}}\phi(y,t)\eta_{R}\right)\] \[-\partial_{t}\psi+\Delta\psi-\mu^{-\frac{n-2}{2}}\partial_{t} \phi(y,t)\eta_{R}+\mu^{-\frac{n+2}{2}}\Delta_{y}\phi(y,t)\eta_{R}+\Lambda_{1} \left[\phi,\mu,\xi\right]+\Lambda_{2}\left[\phi,\mu,\xi\right],\]
where
\[\Lambda_{1}\left[\phi,\mu,\xi\right]:=\mu^{-\frac{n+2}{2}}R^{-2}\phi(y,t) \left(\Delta\eta\right)\left(\frac{y}{R}\right)+2\mu^{-\frac{n+2}{2}}R^{-1} \nabla_{y}\phi(y,t)\cdot\left(\nabla\eta\right)\left(\frac{y}{R}\right) \tag{2.12}\]
\[\Lambda_{2}\left[\phi,\mu,\xi\right]:=\mu^{-\frac{n}{2}}\dot{\mu}\left(\frac{n -2}{2}\phi(y,t)+y\cdot\nabla_{y}\phi(y,t)\right)\eta_{R}+\mu^{-\frac{n}{2}} \dot{\xi}\cdot\nabla_{y}\phi(y,t)\eta_{R}, \tag{2.13}\]
\[\mathcal{N}\left[\psi,\phi,\mu,\xi\right]:=|u|^{\frac{4}{n-2}}u-\mu^{-\frac{n +2}{2}}U(y)^{\frac{n+2}{n-2}}\eta(\tilde{y})^{\frac{n+2}{n-2}}-\frac{n+2}{n-2} \mu^{-2}U(y)^{\frac{4}{n-2}}\eta(\tilde{y})^{\frac{4}{n-2}}\left(\Psi_{0}+ \psi+\mu^{-\frac{n-2}{2}}\phi(y,t)\eta_{R}\right). \tag{2.14}\]
In order to make \(E[u]=0\), it suffices to solve the following gluing system.
**The outer problem:**
\[\partial_{t}\psi=\Delta\psi+\mathcal{G}\left[\psi,\phi,\mu,\xi\right]\ \ \text{in}\ \ \mathbb{R}^{n}\times(t_{0},\infty),\quad\psi(\cdot,t_{0})=0\ \ \text{in}\ \ \mathbb{R}^{n}, \tag{2.15}\]
where
\[\mathcal{G}\left[\psi,\phi,\mu,\xi\right]:=\Lambda_{1}\left[\phi,\mu,\xi \right]+\Lambda_{2}\left[\phi,\mu,\xi\right]+\left(\mu^{-\frac{n}{2}}\dot{\mu} Z_{n+1}(y)+\mu^{-\frac{n}{2}}\dot{\xi}\cdot\left(\nabla U\right)(y)\right)\eta \left(\tilde{y}\right)\left(1-\eta_{R}\right)+\mathcal{E}_{\eta} \tag{2.16}\]
**The inner problem:**
\[\mu^{2}\partial_{t}\phi=\Delta_{y}\phi+\frac{n+2}{n-2}U(y)^{\frac{4}{n-2}}\phi +\mathcal{H}\left[\psi,\mu,\xi\right]\ \ \ \ \ \text{for}\ \ t>t_{0},\quad y\in B_{4R(t)}, \tag{2.17}\]
where
\[\mathcal{H}\left[\psi,\mu,\xi\right]:=\mu\dot{\mu}Z_{n+1}(y)+\mu\dot{\xi} \cdot\left(\nabla U\right)(y)+\frac{n+2}{n-2}\mu^{\frac{n-2}{2}}U(y)^{\frac{4 }{n-2}}\left(\Psi_{0}(\mu y+\xi,t)+\psi(\mu y+\xi,t)\right). \tag{2.18}\]
We introduce the new time variable
\[\tau=\tau(t):=\int_{t_{0}}^{t}\mu^{-2}(s)ds+C_{\tau}t_{0}\mu^{-2}(t_{0}),\quad \tau_{0}:=\tau(t_{0}), \tag{2.19}\]
with a sufficiently large constant \(C_{\tau}\) independent of \(t_{0}\). Then (2.17) can be rewritten as
\[\partial_{\tau}\phi=\Delta_{y}\phi(y,t(\tau))+\frac{n+2}{n-2}U(y)^{\frac{4}{n-2 }}\phi(y,t(\tau))+\mathcal{H}\left[\psi,\mu,\xi\right](y,t(\tau))\ \ \ \ \text{for}\ \ \tau>\tau_{0},\quad y\in B_{4R(t(\tau))}. \tag{2.20}\]
## 3. Formal analysis of \(\mu\) and \(\phi\)
Hereafter, we take \(n=5\). As the leading term of \(\mu\), \(\mu_{0}\) is determined by the orthogonal condition
\[\int_{B_{4R}}\left(\mu_{0}\dot{\mu}_{0}Z_{n+1}(y)+\frac{n+2}{n-2}\mu_{0}^{\frac{ n-2}{2}}U(y)^{\frac{4}{n-2}}\Psi_{0}(0,t)\right)Z_{n+1}(y)dy=0, \tag{3.1}\]
which is equivalent to
\[\dot{\mu}_{0}=A(R)\mu_{0}^{\frac{n-4}{2}}\Psi_{0}(0,t), \tag{3.2}\]
where
\[A(R):=-\frac{n+2}{n-2}\frac{\int_{B_{4R}}U(y)^{\frac{4}{n-2}}Z_{n+1}(y)dy}{ \int_{B_{4R}}Z_{n+1}^{2}(y)dy}=\frac{n-2}{2}\frac{\int_{\mathbb{R}^{n}}U(y)^{ \frac{n+2}{n-2}}dy}{\int_{\mathbb{R}^{n}}Z_{n+1}^{2}(y)dy}\left(1+O\left(R^{ \max\{-2,4-n\}}\right)\right)\sim 1 \tag{3.3}\]
for \(t\geq M\) with \(M\) sufficiently large, and here we have used
\[\int_{\mathbb{R}^{n}}U(y)^{\frac{4}{n-2}}Z_{n+1}(y)dy=-\frac{(n-2)^{2}}{2(n+2 )}\int_{\mathbb{R}^{n}}U(y)^{\frac{n+2}{n-2}}dy.\]
We take a solution of (3.2) as
\[\mu_{0}(t)=\left(\frac{6-n}{2}\int_{M}^{t}A(R(s))\Psi_{0}(0,s)ds\right)^{ \frac{2}{6-n}}. \tag{3.4}\]
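Indeed, differentiating (3.4) directly recovers (3.2):
\[\dot{\mu}_{0}(t)=\frac{2}{6-n}\left(\frac{6-n}{2}\int_{M}^{t}A(R(s))\Psi_{0}(0,s)ds\right)^{\frac{2}{6-n}-1}\cdot\frac{6-n}{2}A(R(t))\Psi_{0}(0,t)=A(R(t))\mu_{0}^{\frac{n-4}{2}}(t)\Psi_{0}(0,t),\]
since \(\frac{2}{6-n}-1=\frac{n-4}{6-n}\) and \(\mu_{0}^{\frac{n-4}{2}}=\left(\frac{6-n}{2}\int_{M}^{t}A(R(s))\Psi_{0}(0,s)ds\right)^{\frac{n-4}{6-n}}\).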
By (2.5), for \(t_{0}\geq 9M\) sufficiently large,
\[0<\mu_{0}(t)\sim\mu_{0*}(t):=\begin{cases}t^{\frac{2-\gamma}{6-n}},&\gamma<2\\ (\ln t)^{\frac{2}{6-n}},&\gamma=2\\ 1,&\gamma>2,\end{cases}\qquad\dot{\mu}_{0}(t)\sim\mu_{0*}(t)^{\frac{n-4}{2}}v_{n,\gamma}(t). \tag{3.5}\]
We make the following ansatz about \(\mu\):
\[\mu=\mu_{0}+\mu_{1},\ \ \text{where}\ \ \mu_{1}=\mu_{1}(t)\in C^{1}[t_{0}, \infty),\quad|\mu_{1}|\leq\mu_{0}/9,\quad|\dot{\mu}_{1}|\leq\mu_{0*}^{\frac{ n-4}{2}}v_{n,\tilde{\gamma}}/9, \tag{3.6}\]
which implies \(\frac{8}{9}\mu_{0}\leq\mu\leq\frac{10}{9}\mu_{0}\) and the ansatz (2.11) for \(\gamma>\frac{3}{2}\) and \(t_{0}\) sufficiently large.
Recall (2.19), then \(\tau(t)\) and \(t\) have the following relation
\[\tau(t)\sim\begin{cases}t^{\frac{2+2\gamma-n}{6-n}},&\frac{n-2}{2}<\gamma<2\\ t(\ln t)^{\frac{4}{n-6}},&\gamma=2\\ t,&\gamma>2.\end{cases} \tag{3.7}\]
By the ansatz (3.6), roughly speaking, the upper bound of \(\mathcal{H}\left[\psi,\mu,\xi\right]\) is determined by
\[|\mu\dot{\mu}Z_{n+1}(y)|+\left|\frac{n+2}{n-2}\mu^{\frac{n-2}{2}}U(y)^{\frac{4}{n-2}}\Psi_{0}(0,t)\right|\lesssim\mu_{0*}^{\frac{n-2}{2}}v_{n,\tilde{\gamma}}\langle y\rangle^{\max\{-4,2-n\}}. \tag{3.8}\]
By \(\gamma>\frac{n-2}{2}\) and (3.7), we have
\[\mu_{0*}^{\frac{n-2}{2}}v_{n,\tilde{\gamma}}=\begin{cases}t^{\frac{n-2-2\gamma}{6-n}},&\gamma<2\\ (\ln t)^{\frac{n-2}{6-n}}\,t^{-1},&\gamma=2\\ t^{-\frac{\tilde{\gamma}}{2}},&2<\tilde{\gamma}<3\end{cases}\sim\tilde{v}_{n,\tilde{\gamma}}(\tau(t)),\ \ \text{where}\ \ \tilde{v}_{n,\tilde{\gamma}}(\tau):=\begin{cases}\tau^{-1},&\frac{n-2}{2}<\gamma<2\\ (\tau\ln\tau)^{-1},&\gamma=2\\ \tau^{-\frac{\tilde{\gamma}}{2}},&2<\tilde{\gamma}<3.\end{cases} \tag{3.9}\]
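For instance, in the case \(\frac{n-2}{2}<\gamma<2\) (so that \(\tilde{\gamma}=\gamma\)), the stated rate follows from (2.3), (3.5) and (3.7):
\[\mu_{0*}^{\frac{n-2}{2}}v_{n,\gamma}\sim t^{\frac{(2-\gamma)(n-2)}{2(6-n)}}\cdot t^{-\frac{\gamma}{2}}=t^{\frac{(2-\gamma)(n-2)-\gamma(6-n)}{2(6-n)}}=t^{\frac{n-2-2\gamma}{6-n}}=\left(t^{\frac{2+2\gamma-n}{6-n}}\right)^{-1}\sim\tau^{-1}.\]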
Taking \(n=5\), we introduce the norm to measure the right hand side of the inner problem
\[\|f\|_{*}:=\sup_{\tau>\tau_{0},\ y\in B_{2R(t(\tau))}}\left[\tilde{v}_{5,\tilde{\gamma}}(\tau)\right]^{-1}\langle y\rangle^{3}|f(y,\tau)|.\]
The linearized operator \(\Delta+\frac{7}{3}U^{\frac{4}{3}}\) has only one positive eigenvalue \(\gamma_{0}>0\) such that
\[\Delta Z_{0}+\frac{7}{3}U^{\frac{4}{3}}Z_{0}=\gamma_{0}Z_{0}, \tag{3.10}\]
where the corresponding eigenfunction \(Z_{0}\in L^{\infty}(\mathbb{R}^{5})\) is radially symmetric and has exponential decay at spatial infinity. The following linear theory of the inner problem in dimension \(5\) is given by [32, Proposition 7.2] and [3, Proposition 7.1].
**Proposition 3.1**.: _Consider_
\[\begin{cases}\partial_{\tau}f=\Delta f+\frac{7}{3}U(y)^{\frac{4}{3}}f+h&\text { for }\ \tau>\tau_{0},\quad y\in B_{4R(t(\tau))},\\ f(y,\tau_{0})=e_{0}Z_{0}(y)&\text{ for }\ y\in B_{4R(t(\tau_{0}))},\end{cases} \tag{3.11}\]
_where \(h\) satisfies \(\|h\|_{*}<\infty\) and_
\[\int_{B_{4R(t(\tau))}}h(y,\tau)Z_{j}(y)dy=0,\quad\forall\tau\in(\tau_{0}, \infty),\quad j=1,2,\cdots,6, \tag{3.12}\]
_then for \(\tau_{0}\) sufficiently large, there exists a solution \((f,e_{0})=(\mathcal{T}_{\rm in}[h],\mathcal{T}_{e_{0}}[h])\), depending linearly on \(h\), which satisfies the estimates_
\[\langle y\rangle|\nabla f|+|f|\lesssim\tilde{v}_{5,\tilde{\gamma}}(\tau)R^{5} \ln R\langle y\rangle^{-6}\|h\|_{*},\quad|e_{0}|\lesssim\tilde{v}_{5,\tilde{ \gamma}}(\tau_{0})R(\tau_{0})\|h\|_{*}.\]
**Remark 3.1**.: _By (2.19), \(4R(t(\tau))\) given here behaves like \(\ln\ln\tau\) but does not satisfy the assumption for \(R(\tau)\) in [32, p.37] accurately. In fact, one can repeat the proof of [32, Proposition 7.2] and [3, Proposition 7.1] to obtain Proposition 3.1._
By Proposition 3.1 and the convenience for applying the Schauder fixed-point theorem for the inner problem (2.20), we define the norm
\[\|g\|_{\rm in}:=\sup_{\tau>\tau_{0},y\in B_{2R(t(\tau))}}\Big{[}\tilde{v}_{5, \tilde{\gamma}}(\tau)R^{5}(t(\tau))\ln^{2}\left(R(t(\tau))\right)\Big{]}^{-1} \langle y\rangle^{6}\big{(}\langle y\rangle|\nabla g(y,\tau)|+|g(y,\tau)| \big{)}, \tag{3.13}\]
and we will solve (2.20) in the space
\[B_{\rm in}:=\left\{g(x,\tau)\ |\ g(\cdot,\tau)\in C^{1}\left(B_{2R(t(\tau))} \right)\ \ \text{for }\ \tau>\tau_{0},\quad\|g\|_{\rm in}\leq 1\right\}. \tag{3.14}\]
## 4. Solving the outer problem
**Proposition 4.1**.: _Given \(\phi\in B_{\rm in}\), \(\mu_{1},\ \xi\in C^{1}[t_{0},\infty)\) satisfying_
\[|\mu_{1}|\leq\mu_{0*}R^{-\frac{1}{2}},\quad|\dot{\mu}_{1}|\leq\mu_{0*}^{\frac {1}{2}}v_{5,\tilde{\gamma}}R^{-\frac{1}{2}},\quad|\xi|\leq\mu_{0*}R^{-\frac{1 }{2}},\quad|\dot{\xi}|\leq\mu_{0*}^{\frac{1}{2}}v_{5,\tilde{\gamma}}R^{-\frac{ 3}{2}}, \tag{4.1}\]
_then for \(t_{0}\) sufficiently large, there exists a unique solution \(\psi=\psi[\phi,\mu_{1},\xi]\) for the outer problem (2.15) with \(n=5\), which satisfies the following estimates:_
\[|\psi|\lesssim v_{5,\tilde{\gamma}}R^{-1}\ln^{2}R\left(\mathbf{1}_{|x|\leq \sqrt{t}}+t|x|^{-2}\mathbf{1}_{|x|>\sqrt{t}}\right), \tag{4.2}\]
\[\|\nabla\psi(\cdot,t)\|_{L^{\infty}(\mathbb{R}^{5})}\lesssim v_{5,\tilde{ \gamma}}\mu_{0*}^{-1}R^{-2}\ln^{2}R. \tag{4.3}\]
Proof.: It suffices to find a fixed point for the following mapping
\[\psi=\mathcal{T}_{5}^{\rm out}\left[\mathcal{G}\left[\psi,\phi,\mu,\xi\right] \right],\]
where \(\mathcal{G}\left[\psi,\phi,\mu,\xi\right]\) is given in (2.16), and
\[\mathcal{T}_{5}^{\rm out}\left[f\right]:=\int_{t_{0}}^{t}\int_{\mathbb{R}^{5} }\left[4\pi(t-s)\right]^{-\frac{5}{2}}e^{-\frac{|x-s|^{2}}{4(t-s)}}f(z,s)dzds.\]
In this proof, we always assume \(t_{0}\) is sufficiently large and \(\int_{t_{2}}^{t_{1}}\cdots ds=0\) if \(t_{1}\leq t_{2}\). Obviously, (4.1) implies the ansatz (3.6) as well as (2.11). Combining these with (3.5), we see that there exists a constant \(C_{\mu}>9\) sufficiently large such that
\[9C_{\mu}^{-1}\mu_{0*}<\mu<C_{\mu}\mu_{0*}/9. \tag{4.4}\]
In what follows, [32, Lemma A.1, Lemma A.2] will be used repetitively to estimate \(\mathcal{T}_{5}^{\mathrm{out}}[\cdot]\).
Recall \(\Lambda_{1}\left[\phi,\mu,\xi\right]\) given in (2.12). Using (2.11), (4.1), and \(\gamma>\frac{3}{2}\), we have
\[\left|\frac{\dot{\xi}}{\mu R}\right|+\left|\frac{\partial_{t}(\mu R)}{\mu R} \right|=\left|\frac{\dot{\xi}}{\mu R}\right|+\left|\frac{\dot{\mu}}{\mu}+\frac {\dot{R}}{R}\right|\lesssim\mu^{-2}R^{-2}.\]
For \(\phi\in B_{\mathrm{in}}\), by (3.9),
\[\langle y\rangle|\nabla_{y}\phi|+|\phi|\lesssim\mu_{0*}^{\frac{3}{2}}v_{5, \tilde{\gamma}}R^{5}\ln^{2}R\langle y\rangle^{-6}.\]
Thus,
\[|\Lambda_{1}\left[\phi,\mu,\xi\right]|\lesssim\mu_{0*}^{-2}R^{-3}\ln^{2}Rv_{5, \tilde{\gamma}}\mathbf{1}_{R\leq|y|\leq 2R}\leq\mu_{0*}^{-2}R^{-3}\ln^{2}Rv_{5, \tilde{\gamma}}\mathbf{1}_{|x|\leq C_{\mu}\mu_{0*}R}. \tag{4.5}\]
Then
\[\mathcal{T}_{5}^{\mathrm{out}}\left[\mu_{0*}^{-2}R^{-3}\ln^{2}Rv_{5,\tilde{ \gamma}}\mathbf{1}_{|x|\leq C_{\mu}\mu_{0*}R}\right]\lesssim t^{-\frac{b}{2}} e^{-\frac{|x|^{2}}{16t}}\int_{t_{0}}^{\frac{t}{2}}\mu_{0*}^{3}(s)\left(R^{2}\ln^{2 }R\right)(s)v_{5,\tilde{\gamma}}(s)ds\]
\[+\mu_{0*}^{-2}R^{-3}\ln^{2}Rv_{5,\tilde{\gamma}}\left[\left(\mu_{0*}R\right)^ {2}\mathbf{1}_{|x|\leq\mu_{0*}R}+|x|^{-3}e^{-\frac{|x|^{2}}{16t}}\left(\mu_{0* }R\right)^{5}\mathbf{1}_{|x|>\mu_{0*}R}\right]\]
\[\lesssim w_{o}(x,t):=v_{5,\tilde{\gamma}}R^{-1}\ln^{2}R\left(\mathbf{1}_{|x| \leq\sqrt{t}}+t|x|^{-2}\mathbf{1}_{|x|>\sqrt{t}}\right),\]
where we have used the properties \(v_{5,\tilde{\gamma}}R^{-1}\ln^{2}R\gtrsim t^{-\frac{5}{2}+c}\) with a small constant \(c>0\) and \(\gamma>\frac{3}{2}\) to get the last inequality. Then
\[\left|\mathcal{T}_{5}^{\mathrm{out}}\left[\Lambda_{1}\left[\phi,\mu,\xi\right] \right]\right|\leq C_{o}w_{o}(x,t)/2\]
with a sufficiently large constant \(C_{o}\geq 2\). For this reason, we define the norm
\[\|f\|_{\mathrm{out}}:=\sup_{t\geq t_{0},\ x\in\mathbb{R}^{5}}\left(w_{o}(x,t)\right)^{-1}|f(x,t)|,\]
and the outer problem (2.15) will be solved in the space
\[B_{\mathrm{out}}:=\{f\ |\ \|f\|_{\mathrm{out}}\leq C_{o}\}. \tag{4.6}\]
Assume \(\epsilon_{1}>0\) is a sufficiently small constant, which can vary from line to line. For \(\Lambda_{2}\left[\phi,\mu,\xi\right]\) given in (2.13), we have
\[|\Lambda_{2}\left[\phi,\mu,\xi\right]|\lesssim\mu_{0*}^{-\frac{1}{2}}v_{5, \tilde{\gamma}}^{2}R^{5}\ln^{2}R\langle y\rangle^{-6}\mathbf{1}_{|x|\leq C_{ \mu}\mu_{0*}R}\lesssim t^{-\epsilon_{1}}\mu_{0*}^{-2}R^{-3}\ln^{2}Rv_{5, \tilde{\gamma}}\mathbf{1}_{|x|\leq C_{\mu}\mu_{0*}R},\]
where we have used \(\gamma>\frac{3}{2}\) and the last term has been handled in (4.5).
By (2.11), (4.1), and \(\gamma>\frac{3}{2}\), one has
\[\left|\left(\mu^{-\frac{5}{2}}\dot{\mu}Z_{6}(y)+\mu^{-\frac{5}{2} }\dot{\xi}\cdot\left(\nabla U\right)(y)\right)\eta\left(\tilde{y}\right)(1- \eta_{R})\right|+|\mathcal{E}_{\eta}|+\left|\mu^{-\frac{7}{2}}U(y)^{\frac{7}{3} }\left(\eta(\tilde{y})^{\frac{7}{3}}-\eta\left(\tilde{y}\right)\right)\right|\] \[+\left|\frac{7}{3}\mu^{-2}U(y)^{\frac{4}{3}}\eta(\tilde{y})^{\frac {4}{3}}\Psi_{0}\left(1-\eta_{R}\right)\right|\] \[\lesssim\mu_{0*}^{-2}v_{5,\tilde{\gamma}}\langle y\rangle^{-3} \mathbf{1}_{\mu R\leq|x-\xi|\leq 2\sqrt{t}}+\mu_{0*}^{-\frac{3}{2}}t^{-1}\langle y \rangle^{-3}\mathbf{1}_{\sqrt{t}\leq|x-\xi|\leq 2\sqrt{t}}\] \[\lesssim v_{5,\tilde{\gamma}}\mu_{0*}|x|^{-3}\mathbf{1}_{C_{\mu} ^{-1}\mu_{0*}R\leq|x|\leq 4\sqrt{t}}+\mu_{0*}^{\frac{3}{2}}t^{-\frac{5}{2}} \mathbf{1}_{\sqrt{t}/2\leq|x|\leq 4\sqrt{t}};\]
and their convolutions can be estimated as
\[\mathcal{T}_{5}^{\mathrm{out}}\left[v_{5,\tilde{\gamma}}\mu_{0*}|x| ^{-3}\mathbf{1}_{C_{\mu}^{-1}\mu_{0*}R\leq|x|\leq 4\sqrt{t}}\right]\lesssim t^{-\frac{5}{2}}e^{-\frac{|x|^{2}}{16t}} \int_{t_{0}}^{\frac{t}{2}}v_{5,\tilde{\gamma}}(s)\mu_{0*}(s)sds\] \[+v_{5,\tilde{\gamma}}\mu_{0*}\left(\left(\mu_{0*}R\right)^{-1} \mathbf{1}_{|x|\leq\mu_{0*}R}+|x|^{-1}\mathbf{1}_{\mu_{0*}R<|x|\leq\sqrt{t}}+t|x |^{-3}e^{-\frac{|x|^{2}}{16t}}\mathbf{1}_{|x|>\sqrt{t}}\right)\lesssim\left( \ln R\right)^{-2}\omega_{o}(x,t);\] \[\mathcal{T}_{5}^{\mathrm{out}}\left[\mu_{0*}^{\frac{3}{2}}t^{-\frac {5}{2}}\mathbf{1}_{\sqrt{t}/2\leq|x|\leq 4\sqrt{t}}\right]\lesssim t^{-\frac{5}{2}}e^{-\frac{|x|^{2}}{16t}} \int_{t_{0}}^{\frac{t}{2}}\mu_{0*}^{\frac{3}{2}}(s)ds+\mu_{0*}^{\frac{3}{2}}t^{- \frac{3}{2}}\left(\mathbf{1}_{|x|\leq\sqrt{t}}+t^{\frac{3}{2}}|x|^{-3}e^{- \frac{|x|^{2}}{16t}}\mathbf{1}_{|x|>\sqrt{t}}\right)\lesssim t^{-\epsilon_{1}} \omega_{o}(x,t),\]
where in the last step, we have used the property \(\tilde{\gamma}<3\) in (2.8) and \(\gamma>\frac{3}{2}\).
For any \(\psi_{1},\psi_{2}\in B_{\rm out}\), we have
\[\left|\mu^{-2}U(y)^{\frac{4}{3}}\eta(\tilde{y})^{\frac{4}{3}}\left( \psi_{1}-\psi_{2}\right)(1-\eta_{R})\right|\lesssim\mu_{0*}^{-2}\langle y\rangle ^{-4}\|\psi_{1}-\psi_{2}\|_{\rm out}w_{o}(x,t)\mathbf{1}_{\mu R\leq|x-\xi| \leq 2\sqrt{t}}\] \[\lesssim v_{5,\tilde{\gamma}}R^{-1}\ln^{2}R\mu_{0*}^{2}|x|^{-4} \mathbf{1}_{C_{\mu}^{-1}\mu_{0*}R\leq|x|\leq 4\sqrt{t}}\|\psi_{1}-\psi_{2}\|_{ \rm out}.\]
Then
\[\mathcal{T}_{5}^{\rm out}\left[v_{5,\tilde{\gamma}}R^{-1}\ln^{2}R \mu_{0*}^{2}|x|^{-4}\mathbf{1}_{C_{\mu}^{-1}\mu_{0*}R\leq|x|\leq 4\sqrt{t}} \right]\lesssim t^{-\frac{5}{2}}e^{-\frac{|x|^{2}}{16t}}\int_{t_{0}}^{\frac{t}{ 2}}(v_{5,\tilde{\gamma}}R^{-1}\ln^{2}R)(s)\mu_{0*}^{2}(s)s^{\frac{1}{2}}ds\] \[+v_{5,\tilde{\gamma}}R^{-1}\ln^{2}R\mu_{0*}^{2}\left(\mu_{0*}^{- 2}R^{-2}\mathbf{1}_{|x|\leq\sqrt{t}}+|x|^{-3}e^{-\frac{|x|^{2}}{16t}}t^{\frac{ 1}{2}}\mathbf{1}_{|x|>\sqrt{t}}\right)\lesssim R^{-2}w_{o}(x,t),\]
where we have used \(\gamma>\frac{3}{2}\) in the last step.
For \(\mathcal{N}\left[\psi,\phi,\mu,\xi\right]\) defined in (2.14), given any \(\psi\in B_{\rm out}\), we estimate
\[\left|\mathcal{N}\left[\psi,\phi,\mu,\xi\right]\right|\lesssim \left(\left|\mu^{-\frac{3}{2}}U(y)\eta(\tilde{y})\right|^{\frac{1}{3}}+\left| \Psi_{0}+\psi+\mu^{-\frac{3}{2}}\phi(y,t)\eta_{R}\right|^{\frac{1}{3}}\right) \left|\Psi_{0}+\psi+\mu^{-\frac{3}{2}}\phi(y,t)\eta_{R}\right|^{2}\] \[\lesssim\mu^{-\frac{1}{2}}U(y)^{\frac{1}{3}}\eta(\tilde{y})^{ \frac{1}{3}}\left(\Psi_{0}^{2}+\psi^{2}+\left|\mu^{-\frac{3}{2}}\phi(y,t)\eta _{R}\right|^{2}\right)+\left|\Psi_{0}\right|^{\frac{7}{3}}+\left|\psi\right|^{ \frac{7}{3}}+\left|\mu^{-\frac{3}{2}}\phi(y,t)\eta_{R}\right|^{\frac{7}{3}}\] \[\lesssim\mu^{-\frac{1}{2}}\langle y\rangle^{-1}\mathbf{1}_{|x-\xi| \leq 2t^{\frac{1}{2}}}\Big{[}t^{-\tilde{\gamma}}\mathbf{1}_{|x|\leq t^{\frac{1}{2 }}}+|x|^{-2\tilde{\gamma}}\mathbf{1}_{|x|>t^{\frac{1}{2}}}+C_{o}^{2}\left(v_{5, \tilde{\gamma}}R^{-1}\ln^{2}R\right)^{2}\left(\mathbf{1}_{|x|\leq t^{\frac{1}{ 2}}}+\left(t|x|^{-2}\right)^{2}\mathbf{1}_{|x|>t^{\frac{1}{2}}}\right)\] \[\quad+\left(v_{5,\tilde{\gamma}}R^{5}\ln^{2}R\langle y\rangle^{-6 }\right)^{2}\mathbf{1}_{|x-\xi|\leq 2\mu R}\Big{]}\] \[\quad+t^{-\frac{7}{3}\frac{7}{2}}\mathbf{1}_{|x|\leq t^{\frac{1}{ 2}}}+|x|^{-\frac{7}{3}\tilde{\gamma}}\mathbf{1}_{|x|>t^{\frac{1}{2}}}+C_{o}^{ \frac{7}{3}}\left(v_{5,\tilde{\gamma}}R^{-1}\ln^{2}R\right)^{\frac{7}{3}} \left(\mathbf{1}_{|x|\leq t^{\frac{1}{2}}}+\left(t|x|^{-2}\right)^{\frac{7}{3} }\mathbf{1}_{|x|>t^{\frac{1}{2}}}\right)\] \[\quad+\left(v_{5,\tilde{\gamma}}R^{5}\ln^{2}R\langle y\rangle^{-6 }\right)^{\frac{7}{3}}\mathbf{1}_{|x-\xi|\leq 2\mu R}\] \[\lesssim\mu^{\frac{1}{2}}_{0*}\left(|x|+\mu_{0*}\right)^{-1} \mathbf{1}_{|x|\leq t^{\frac{1}{2}}}\left[C_{o}^{2}\left(v_{5,\tilde{\gamma}}R ^{5}\ln^{2}R\right)^{2}\mathbf{1}_{|x|\leq t^{\frac{1}{2}}}+|x|^{-2\tilde{ \gamma}}\mathbf{1}_{|x|>t^{\frac{1}{2}}}+C_{o}^{2}\left(v_{5,\tilde{\gamma}}R^{ -1}\ln^{2}R\right)^{2}\left(t|x|^{-2}\right)^{2}\mathbf{1}_{|x|>t^{\frac{1}{2} }}\right]\] \[\quad+C_{o}^{\frac{7}{3}}\left(v_{5,\tilde{\gamma}}R^{5}\ln^{2}R \right)^{\frac{7}{3}}\mathbf{1}_{|x|\leq t^{\frac{1}{2}}}+|x|^{-\frac{7}{3} \tilde{\gamma}}\mathbf{1}_{|x|>t^{\frac{1}{2}}}+C_{o}^{\frac{7}{3}}\left(v_{5, \tilde{\gamma}}R^{-1}\ln^{2}R\right)^{\frac{7}{3}}\left(t|x|^{-2}\right)^{\frac{7 }{3}}\mathbf{1}_{|x|>t^{\frac{1}{2}}}\] \[\lesssim C_{o}^{2}\left(v_{5,\tilde{\gamma}}R^{5}\ln^{2}R\right)^{2} \mu_{0*}^{\frac{1}{2}}\left(|x|+\mu_{0*}\right)^{-1}\mathbf{1}_{|x|\leq t^{ \frac{1}{2}}}+C_{o}^{2}\mu_{0*}^{\frac{1}{2}}t^{-\frac{1}{2}-\tilde{\gamma}} \mathbf{1}_{t^{\frac{1}{2}}<|x|\leq 4t^{\frac{1}{2}}}\] \[\quad+C_{o}^{\frac{7}{3}}\left(v_{5,\tilde{\gamma}}R^{5}\ln^{2}R \right)^{\frac{7}{3}}\mathbf{1}_{|x|\leq t^{\frac{1}{2}}}+|x|^{-\frac{7}{3} \tilde{\gamma}}\mathbf{1}_{|x|>t^{\frac{1}{2}}}+C_{o}^{\frac{7}{3}}\left(v_{5, \tilde{\gamma}}R^{-1}\ln^{2}R\right)^{\frac{7}{3}}\left(t|x|^{-2}\right)^{\frac{7 }{3}}\mathbf{1}_{|x|>t^{\frac{1}{2}}}.\]
Here, by \(\gamma>\frac{3}{2}\), we then estimate their convolutions
\[\mathcal{T}_{5}^{\rm out}\left[\left(v_{5,\tilde{\gamma}}R^{5}\ln^{2 }R\right)^{2}\mu_{0*}^{\frac{1}{2}}\left(|x|+\mu_{0*}\right)^{-1}\mathbf{1}_{|x| \leq t^{\frac{1}{2}}}\right]\lesssim t^{-\frac{5}{2}}e^{-\frac{|x|^{2}}{16t}} \int_{t_{0}}^{\frac{t}{2}}\left(v_{5,\tilde{\gamma}}R^{5}\ln^{2}R\right)^{2}(s) \mu_{0*}^{\frac{1}{2}}(s)s^{2}ds\] \[\quad+\left(v_{5,\tilde{\gamma}}R^{5}\ln^{2}R\right)^{2}\mu_{0*}^{ \frac{1}{2}}\left(t^{\frac{1}{2}}\mathbf{1}_{|x|\leq t^{\frac{1}{2}}}+t^{2}|x|^{- 3}e^{-\frac{|x|^{2}}{16t}}\mathbf{1}_{|x|>t^{\frac{1}{2}}}\right)\lesssim t^{- \epsilon_{1}}w_{o}(x,t);\] \[\mathcal{T}_{5}^{\rm out}\left[\left(v_{5,\tilde{\gamma}}R^{5}\ln^{2 }R\right)^{\frac{7}{3}}\mathbf{1}_{|x|\leq t^{\frac{1}{2}}}\right]\lesssim t^{- \frac{5}{2}}e^{-\frac{|x|^{2}}{16t}}\int_{t_{0}}^{\frac{t}{2}}\left(v_{5, \tilde{\gamma}}R^{5}\ln^{2}R\right)^{\frac{7}{3}}(s)s^{\frac{5}{2}}ds\] \[\quad+\left(v_{5,\tilde{\gamma}}R^{5}\ln^{2}R\right)^{\frac{7}{3}}
where we used the property \(\tilde{\gamma}<3\);
\[\mathcal{T}_{5}^{\mathrm{out}}\left[|x|^{-\frac{7}{3}\tilde{\gamma}} \mathbf{1}_{|x|>t^{\frac{1}{2}}}\right]\lesssim\left[t^{-\frac{5}{2}}\int_{t_{0} }^{\frac{1}{2}}\begin{cases}0,&\text{ if }\ \frac{7}{3}\tilde{\gamma}<5\\ \langle\ln\left(ts^{-1}\right)\rangle,&\text{ if }\ \frac{7}{3}\tilde{\gamma}=5\ ds \ \ +t^{1-\frac{7}{6}\gamma}\right]\mathbf{1}_{|x|\leq t^{\frac{1}{2}}}\\ s^{\frac{5}{2}-\frac{7}{6}\gamma},&\text{ if }\ \frac{7}{3}\tilde{\gamma}>5 \end{cases}\] \[+\left[t|x|^{-\frac{7}{3}\tilde{\gamma}}+t^{-\frac{5}{2}}e^{- \frac{|x|^{2}}{16\tau}}\int_{t_{0}}^{\frac{1}{2}}\begin{cases}0,&\text{ if }\ \frac{7}{3}\tilde{\gamma}<5\\ \langle\ln(|x|s^{-\frac{1}{2}})\rangle,&\text{ if }\ \frac{7}{3}\tilde{\gamma}=5\ ds \end{cases}\mathbf{1}_{|x|>t^{\frac{1}{2}}}\lesssim t^{-\epsilon_{1}}w_{o}(x,t);\] \[\mathcal{T}_{5}^{\mathrm{out}}\left[\left(v_{5,\tilde{\gamma}}R^ {-1}\ln^{2}R\right)^{\frac{7}{3}}\left(t|x|^{-2}\right)^{\frac{7}{3}}\mathbf{1 }_{|x|>t^{\frac{1}{2}}}\right]\lesssim\left[t^{-\frac{7}{3}}\int_{t_{0}}^{ \frac{1}{2}}\left(v_{5,\tilde{\gamma}}R^{-1}\ln^{2}R\right)^{\frac{7}{3}}(s)s ^{\frac{7}{3}}ds+t\left(v_{5,\tilde{\gamma}}R^{-1}\ln^{2}R\right)^{\frac{7}{3} }\right]\mathbf{1}_{|x|\leq t^{\frac{1}{2}}}\] \[+|x|^{-\frac{14}{3}}\left[t^{\frac{10}{3}}\left(v_{5,\tilde{ \gamma}}R^{-1}\ln^{2}R\right)^{\frac{7}{3}}+\int_{t_{0}}^{\frac{7}{2}}\left(v_ {5,\tilde{\gamma}}R^{-1}\ln^{2}R\right)^{\frac{7}{3}}(s)s^{\frac{7}{3}}ds \right]\mathbf{1}_{|x|>t^{\frac{1}{2}}}\lesssim t^{-\epsilon_{1}}w_{o}(x,t).\]
For any \(\psi_{1},\psi_{2}\in B_{\mathrm{out}}\), one has
\[|\mathcal{N}\left[\psi_{1},\phi,\mu,\xi\right]-\mathcal{N}\left[ \psi_{2},\phi,\mu,\xi\right]|\] \[= \left|\frac{7}{3}\left(\psi_{1}-\psi_{2}\right)\left[\left|\mu^{- \frac{3}{2}}U(y)\eta(\tilde{y})+\Psi_{0}+\theta\psi_{1}+\left(1-\theta\right) \psi_{2}+\mu^{-\frac{3}{2}}\phi(y,t)\eta_{R}\right|^{\frac{4}{3}}-\left|\mu^{ -\frac{3}{2}}U(y)\eta(\tilde{y})\right|^{\frac{4}{3}}\right]\right|\] \[\lesssim \|\psi_{1}-\psi_{2}\|_{\mathrm{out}}w_{o}(x,t)\bigg{[}\mu^{- \frac{1}{2}}U(y)^{\frac{1}{3}}\eta(\tilde{y})^{\frac{1}{3}}\left(|\Psi_{0}|+C_{ o}w_{o}(x,t)+\left|\mu^{-\frac{3}{2}}\phi(y,t)\eta_{R}\right|\right)\] \[+|\Psi_{0}|^{\frac{4}{3}}+C_{o}^{\frac{4}{3}}w_{o}(x,t)^{\frac{ 4}{3}}+\left|\mu^{-\frac{3}{2}}\phi(y,t)\eta_{R}\right|^{\frac{4}{3}}\bigg{]}\] \[\lesssim \|\psi_{1}-\psi_{2}\|_{\mathrm{out}}\bigg{[}\mu^{-\frac{1}{2}}U(y )^{\frac{1}{3}}\eta(\tilde{y})^{\frac{1}{3}}\left(|\Psi_{0}|^{2}+C_{o}w_{o}(x,t)^{2}+\left|\mu^{-\frac{9}{2}}\phi(y,t)\eta_{R}\right|^{2}\right)\] \[+|\Psi_{0}|^{\frac{7}{3}}+C_{o}^{\frac{4}{3}}w_{o}(x,t)^{\frac{7}{ 3}}+\left|\mu^{-\frac{3}{2}}\phi(y,t)\eta_{R}\right|^{\frac{7}{3}}\bigg{]},\]
which can be handled by the same way for estimating \(|\mathcal{N}\left[\psi,\phi,\mu,\xi\right]|\).
In sum, for \(t_{0}\) sufficiently large, \(\mathcal{T}_{5}^{\mathrm{out}}\left[\mathcal{G}\left[\psi,\phi,\mu,\xi\right] \right]\in B_{\mathrm{out}}\) and is a contraction mapping about \(\psi\), which implies that there exists a unique solution \(\psi\in B_{\mathrm{out}}\). Moreover, by \(\gamma>\frac{3}{2}\), we have
\[|\mathcal{G}\left[\psi,\phi,\mu,\xi\right]|\lesssim v_{5,\tilde{\gamma}}\mu_{0* }^{-2}R^{-3}\ln^{2}R.\]
By the scaling argument, we get (4.3).
## 5. Solving orthogonal equations about \(\mu_{1},\xi\)
In order to apply Proposition 3.1 to the inner problem, we need to choose suitable \(\mu_{1}\), \(\xi\) such that the orthogonal conditions
\[\int_{B_{4R}}\mathcal{H}[\psi,\mu,\xi](y,t)Z_{i}(y)dy=0,\quad\mu=\mu_{0}+\mu_{1 },\quad i=1,\ldots,n+1,\quad n=5 \tag{5.1}\]
are satisfied, where \(\psi=\psi[\phi,\mu_{1},\xi]\) is solved by Proposition 4.1, and \(\mathcal{H}\left[\psi,\mu,\xi\right]\) is given in (2.18).
**Proposition 5.1**.: _Given \(0<D_{0}\leq D_{1}<2D_{0}\), for \(t_{0}\) sufficiently large there exists a solution \((\mu_{1},\xi)=(\mu_{1}[\phi],\xi[\phi])\) for (5.1) with \(n=5\) satisfying_
\[|\mu_{1}|\lesssim\mu_{0*}R^{-\frac{2}{3}},\quad|\dot{\mu}_{1}|\lesssim\mu_{0*}^ {\frac{1}{2}}v_{5,\tilde{\gamma}}R^{-\frac{2}{3}},\quad|\xi|\lesssim\mu_{0*}R^{- \frac{7}{4}},\quad|\dot{\xi}|\lesssim\mu_{0*}^{\frac{1}{2}}v_{5,\tilde{\gamma}}R ^{-\frac{7}{4}}. \tag{5.2}\]
Proof.: First, let us consider the general dimension \(n\). By (2.18), (5.1) is equivalent to
\[\dot{\mu}=-\frac{n+2}{n-2}\left(\int_{B_{4R}}Z_{n+1}^{2}(y)dy\right) ^{-1}\mu^{\frac{n}{2}-2}\int_{B_{4R}}\left(\Psi_{0}(\mu y+\xi,t)+\psi(\mu y+\xi, t)\right)U(y)^{\frac{4}{n-2}}Z_{n+1}(y)dy, \tag{5.3}\] \[\dot{\xi}=\bar{\mathcal{S}}[\mu_{1},\xi]:=\left(\mathcal{S}_{1}[ \mu_{1},\xi],\ldots,\mathcal{S}_{n}[\mu_{1},\xi]\right),\quad\text{ for }i=1,2,\ldots,n,\] (5.4) \[\mathcal{S}_{i}[\mu_{1},\xi]:=-\frac{n+2}{n-2}\left(\int_{B_{4R}}Z _{i}^{2}(y)dy\right)^{-1}\mu^{\frac{n}{2}-2}\] \[\qquad\times\int_{B_{4R}}\Big{[}\Psi_{0}(\mu y+\xi,t)-\Psi_{0}(0, t)+\psi(\mu y+\xi,t)-\psi(0,t)\Big{]}U(y)^{\frac{4}{n-2}}Z_{i}(y)dy,\]
where we have used the parity of \(Z_{i}(y)\). By \(\mu=\mu_{0}+\mu_{1}\) and \(\mu_{0}\) satisfying (3.2), we rewrite (5.3) as
\[\dot{\mu}_{1}+\beta(t)\mu_{1}=\mathcal{F}[\mu_{1},\xi](t), \tag{5.5}\]
where
\[\beta(t):=\frac{n+2}{n-2}\left(\int_{B_{4R}}Z_{n+1}^{2}(y)dy \right)^{-1}\frac{n-4}{2}\mu_{0}^{\frac{n}{2}-3}\Psi_{0}(0,t)\int_{B_{4R}}U(y) ^{\frac{4}{n-2}}Z_{n+1}(y)dy \tag{5.6}\] \[\qquad=\frac{n-4}{n-6}\frac{A(R)\Psi_{0}(0,t)}{\int_{M}^{t}A(R(s ))\Psi_{0}(0,s)ds}\]
with the application of (3.4) in the last step;
\[\mathcal{F}[\mu_{1},\xi](t):=-\frac{n+2}{n-2}\left(\int_{B_{4R}}Z _{n+1}^{2}(y)dy\right)^{-1}\bigg{[}\mu^{\frac{n}{2}-2}\int_{B_{4R}}\psi(\mu y +\xi,t)U(y)^{\frac{4}{n-2}}Z_{n+1}(y)dy \tag{5.7}\] \[\qquad\quad+\mu^{\frac{n}{2}-2}\int_{B_{4R}}\left(\Psi_{0}(\mu y+ \xi,t)-\Psi_{0}(0,t)\right)U(y)^{\frac{4}{n-2}}Z_{n+1}(y)dy\] \[\qquad\quad+\left(\mu^{\frac{n}{2}-2}-\mu_{0}^{\frac{n}{2}-2}- \frac{n-4}{2}\mu_{0}^{\frac{n}{2}-3}\mu_{1}\right)\Psi_{0}(0,t)\int_{B_{4R}}U( y)^{\frac{4}{n-2}}Z_{n+1}(y)dy\bigg{]}.\]
In order to find a solution \((\mu_{1},\xi)\) for the system (5.4)-(5.5), it suffices to solve the following fixed point problem about \(\dot{\mu}_{1},\dot{\xi}\),
\[\dot{\mu}_{1}=\mathcal{S}_{n+1}[\mu_{1},\xi]:=\frac{d}{dt}\left( \int_{\tilde{t}_{0}}^{t}\mathcal{F}[\mu_{1},\xi](s)e^{\int_{t}^{s}\beta(a)da} ds\right)=-\beta(t)\int_{\tilde{t}_{0}}^{t}\mathcal{F}[\mu_{1},\xi](s)e^{ \int_{t}^{s}\beta(a)da}ds+\mathcal{F}[\mu_{1},\xi](t), \tag{5.8}\] \[\mu_{1}=\mu_{1}[\dot{\mu}_{1}](t):=\int_{\tilde{t}_{0}}^{t}\dot{ \mu}_{1}(a)da,\quad\dot{\xi}=\bar{\mathcal{S}}[\mu_{1},\xi],\quad\xi=\xi[ \dot{\xi}](t):=\int_{\tilde{t}_{0}}^{t}\dot{\xi}(a)da\ \ \text{with}\ \ \tilde{t}_{0}:=\begin{cases}t_{0},&\gamma\leq 2\\ \infty,&\gamma>2\end{cases}\]
if these integrals are well-defined.
Hereafter, we take \(n=5\). By (2.5) and (3.3), we have
\[\left\{\begin{aligned} &-\frac{D_{1}}{D_{0}}\left(1-\frac{ \gamma}{2}\right)t^{-1}\left(1+O(R^{-\frac{1}{2}})\right)\leq\beta(t)\leq-\frac {D_{0}}{D_{1}}\left(1-\frac{\gamma}{2}\right)t^{-1}\left(1+O(R^{-\frac{1}{2}} )\right)&\text{if}\ \ \gamma<2,\\ &-\frac{D_{1}}{D_{0}}(t\ln t)^{-1}\left(1+O(R^{-\frac{1}{2}}) \right)\leq\beta(t)\leq-\frac{D_{0}}{D_{1}}(t\ln t)^{-1}\left(1+O(R^{-\frac{1 }{2}})\right)&\text{if}\ \ \gamma=2,\\ &\beta(t)\sim-v_{5,\gamma}&\text{if}\ \ \gamma>2.\end{aligned}\right. \tag{5.9}\]
We will solve the system (5.8) in the space
\[B_{\dot{\mu}_{1}}:=\left\{f\in C[t_{0},\infty)\ |\ \|f\|_{\dot{\mu}_{1}}\leq 1 \right\},\quad B_{\dot{\xi}}=\left\{\vec{f}=(f_{1},\ldots,f_{5})\in C[t_{0}, \infty)\ |\ \|\vec{f}\|_{\dot{\xi}}\leq 1\right\} \tag{5.10}\]
with the norm
\[\|f\|_{\dot{\mu}_{1}}:=\sup_{t\geq t_{0}}\left(\mu_{0*}^{\frac{1}{2}}v_{5, \tilde{\gamma}}R^{-\frac{2}{3}}\right)^{-1}(t)\left|f(t)\right|,\quad\|\vec{f} \|_{\dot{\xi}}:=\sup_{t\geq t_{0}}\left(\mu_{0*}^{\frac{1}{2}}v_{5,\tilde{ \gamma}}R^{-\frac{7}{4}}\right)^{-1}(t)|\vec{f}(t)|, \tag{5.11}\]
where
\[\mu_{0*}^{\frac{1}{2}}v_{5,\tilde{\gamma}}=\begin{cases}t^{1-\gamma},&\gamma<2\\ t^{-1}\ln t,&\gamma=2\\ t^{-\frac{\tilde{\gamma}}{2}},&\gamma>2\end{cases}.\]
For any \((\dot{\mu}_{1},\dot{\xi})\in B_{\dot{\mu}_{1}}\times B_{\dot{\xi}}\), it is easy to see that \(\int_{\tilde{t}_{0}}^{t}\dot{\mu}_{1}(a)da\) and \(\int_{\tilde{t}_{0}}^{t}\dot{\xi}(a)da\) in (5.8) are well-defined, and
\[|\mu_{1}|\lesssim\begin{cases}t^{2-\gamma}R^{-\frac{2}{3}},&\gamma<2\\ \left(\ln t\right)^{2}R^{-\frac{2}{3}},&\gamma=2\end{cases}\lesssim\mu_{0*}R^{-\frac{2}{3}},\quad|\xi|\lesssim\mu_{0*}R^{-\frac{7}{4}}. \tag{5.12}\]
Thus, \(\mu_{1}\), \(\dot{\mu}_{1}\), \(\xi\), \(\dot{\xi}\) satisfy the assumption (4.1) in Proposition 4.1. By (2.6), (4.3), and \(\gamma>\frac{3}{2}\), we get
\[\left|\vec{\mathcal{S}}[\mu_{1},\xi]\right|\lesssim\mu^{\frac{1}{2}}\left(| \nabla_{x}\Psi_{0}(\cdot,t)\|_{L^{\infty}(\mathbb{R}^{5})}+\|\nabla_{x}\psi( \cdot,t)\|_{L^{\infty}(\mathbb{R}^{5})}\right)(|\mu|+|\xi|)\lesssim\mu_{0*}^{ \frac{1}{2}}v_{5,\tilde{\gamma}}R^{-2}\ln^{2}R. \tag{5.13}\]
Using (4.2), (2.6), (2.5) in order, we have
\[\left|\mu^{\frac{1}{2}}\int_{B_{4R}}\psi(\mu y+\xi,t)U(y)^{\frac{ 4}{3}}Z_{6}(y)dy\right|\lesssim\mu_{0*}^{\frac{1}{2}}v_{5,\tilde{\gamma}}R^{- 1}\ln^{2}R,\] \[\left|\mu^{\frac{1}{2}}\int_{B_{4R}}\left(\Psi_{0}(\mu y+\xi,t)- \Psi_{0}(0,t)\right)U(y)^{\frac{4}{3}}Z_{6}(y)dy\right|\lesssim\mu^{\frac{1}{ 2}}\left(\mu+|\xi|\right)t^{-\frac{1}{2}}v_{5,\gamma}\sim\mu_{0*}^{\frac{3}{2 }}t^{-\frac{1}{2}}v_{5,\gamma},\] \[\left|\left(\mu^{\frac{1}{2}}-\mu_{0}^{\frac{1}{2}}-\frac{1}{2} \mu_{0}^{-\frac{1}{2}}\mu_{1}\right)\Psi_{0}(0,t)\int_{B_{4R}}U(y)^{\frac{4}{ 3}}Z_{6}(y)dy\right|\lesssim\mu_{0*}^{\frac{1}{2}}v_{5,\gamma}R^{-\frac{4}{3}},\]
which implies
\[|\mathcal{F}[\mu_{1},\xi](t)|\lesssim\mu_{0*}^{\frac{1}{2}}v_{5,\tilde{\gamma} }R^{-\frac{3}{4}}. \tag{5.14}\]
Since \(0<D_{0}\leq D_{1}<2D_{0}\), there exists \(\epsilon_{1}>0\) sufficiently small so that \(\frac{D_{1}}{D_{0}}(1+\epsilon_{1})<2\). By taking \(t_{0}\) sufficiently large, which can depend on \(\gamma\), and using (5.9), we obtain that for \(\gamma<2\),
\[\left|\int_{\tilde{t}_{0}}^{t}\mathcal{F}[\mu_{1},\xi](s)e^{\int_{t}^{s}\beta(a)da}ds\right|\lesssim\int_{t_{0}}^{t}s^{1-\gamma}(\ln\ln s)^{-\frac{3}{4}}e^{\frac{D_{1}}{D_{0}}\left(1-\frac{\gamma}{2}\right)(1+\epsilon_{1})\int_{s}^{t}a^{-1}da}ds\]
\[=t^{\frac{D_{1}}{D_{0}}\left(1-\frac{\gamma}{2}\right)(1+\epsilon_{1})}\int_{t_{0}}^{t}s^{1-\gamma-\frac{D_{1}}{D_{0}}\left(1-\frac{\gamma}{2}\right)(1+\epsilon_{1})}(\ln\ln s)^{-\frac{3}{4}}ds\lesssim t^{2-\gamma}R^{-\frac{3}{4}};\]
for \(\gamma=2\),
\[\left|\int_{\tilde{t}_{0}}^{t}\mathcal{F}[\mu_{1},\xi](s)e^{\int_{t}^{s}\beta(a)da}ds\right|\lesssim\int_{t_{0}}^{t}s^{-1}\ln s(\ln\ln s)^{-\frac{3}{4}}e^{\frac{D_{1}}{D_{0}}(1+\epsilon_{1})\int_{s}^{t}(a\ln a)^{-1}da}ds\]
\[=(\ln t)^{\frac{D_{1}}{D_{0}}(1+\epsilon_{1})}\int_{t_{0}}^{t}s^{-1}(\ln s)^{1-\frac{D_{1}}{D_{0}}(1+\epsilon_{1})}(\ln\ln s)^{-\frac{3}{4}}ds\]
\[=(\ln t)^{\frac{D_{1}}{D_{0}}(1+\epsilon_{1})}\int_{\ln t_{0}}^{\ln t}z^{1-\frac{D_{1}}{D_{0}}(1+\epsilon_{1})}(\ln z)^{-\frac{3}{4}}dz\lesssim(\ln t)^{2}R^{-\frac{3}{4}};\]
for \(\gamma>2\),
\[\left|\int_{\tilde{t}_{0}}^{t}\mathcal{F}[\mu_{1},\xi](s)e^{\int_{t}^{s}\beta(a)da}ds\right|\lesssim\int_{t}^{\infty}s^{-\frac{\tilde{\gamma}}{2}}\left(\ln\ln s\right)^{-\frac{3}{4}}ds\lesssim t^{1-\frac{\tilde{\gamma}}{2}}R^{-\frac{3}{4}}.\]
Thus, \(\int_{\tilde{t}_{0}}^{t}\mathcal{F}[\mu_{1},\xi](s)e^{\int_{t}^{s}\beta(a)da}ds\) is well-defined in (5.8), and
\[\left|\beta(t)\int_{\tilde{t}_{0}}^{t}\mathcal{F}[\mu_{1},\xi](s)e^{\int_{t}^{s}\beta(a)da}ds\right|\lesssim\mu_{0*}^{\frac{1}{2}}v_{5,\tilde{\gamma}}R^{-\frac{3}{4}}. \tag{5.15}\]
Combining (5.13), (5.14) and (5.15), we have
\[|\mathcal{S}_{6}[\mu_{1},\xi]|\lesssim\mu_{0*}^{\frac{1}{2}}v_{5,\tilde{\gamma}} R^{-\frac{3}{4}},\quad\left|\vec{\mathcal{S}}[\mu_{1},\xi]\right|\lesssim\mu_{0*}^{ \frac{1}{2}}v_{5,\tilde{\gamma}}R^{-2}\ln^{2}R, \tag{5.16}\]
which implies \(\big{(}\mathcal{S}_{6},\vec{\mathcal{S}}\big{)}[\mu_{1},\xi]\in B_{\dot{\mu}_{1}} \times B_{\dot{\xi}}\).
For any sequence \((\dot{\mu}_{1}^{[j]},\dot{\xi}^{[j]})_{j\geq 1}\subset B_{\dot{\mu}_{1}}\times B_{\dot{\xi}}\), denote \(\mu_{1}^{[j]}=\int_{t_{0}}^{t}\dot{\mu}_{1}^{[j]}(a)da\), \(\xi^{[j]}=\int_{t_{0}}^{t}\dot{\xi}^{[j]}(a)da\). We set \(\tilde{\mu}_{1}^{[j]}:=\mathcal{S}_{6}[\mu_{1}^{[j]},\xi^{[j]}]\), \(\tilde{\xi}^{[j]}:=\vec{\mathcal{S}}[\mu_{1}^{[j]},\xi^{[j]}]\). By the same method for deducing (5.16), we have
\[|\tilde{\mu}_{1}^{[j]}|\leq C_{1}\mu_{0*}^{\frac{1}{2}}v_{5,\tilde{\gamma}}R^{- \frac{3}{4}},\quad|\tilde{\xi}^{[j]}|\leq C_{1}\mu_{0*}^{\frac{1}{2}}v_{5, \tilde{\gamma}}R^{-2}\ln^{2}R\quad\text{ for all }\ j\geq 1 \tag{5.17}\]
with a constant \(C_{1}>0\) independent of \(j\).
For any compact subset \(K\subset\subset[t_{0},\infty)\), by the equation (5.8) and the space-time regularity of the outer solution \(\psi\), the functions \(\tilde{\mu}_{1}^{[j]}\) and \(\tilde{\xi}^{[j]}\), \(j\geq 1\), are uniformly Hölder continuous in \(K\). Since \([t_{0},\infty)\) can be exhausted by countably many compact sets, up to a subsequence, for any compact set \(K\subset\subset[t_{0},\infty)\),
\[\tilde{\mu}_{1}^{[j]}\to g,\quad\tilde{\xi}^{[j]}\to\vec{g}\ \ \text{in}\ \ L^{\infty}(K)\ \ \text{as}\ \ j\to\infty\]
for some \(g,\ \vec{g}\in C[t_{0},\infty)\). By (5.17), we have
\[|g|\leq C_{1}\mu_{0*}^{\frac{1}{2}}v_{5,\tilde{\gamma}}R^{-\frac{3}{4}},\quad |\vec{g}|\leq C_{1}\mu_{0*}^{\frac{1}{2}}v_{5,\tilde{\gamma}}R^{-2}\ln^{2}R.\]
Thus, for any \(\epsilon_{1}>0\), there exists \(t_{1}\) sufficiently large such that for all \(j\geq 1\),
\[\sup_{t\geq t_{1}}\left(\mu_{0*}^{\frac{1}{2}}v_{5,\tilde{\gamma}}R^{-\frac{2}{3}}\right)^{-1}(t)\left|\left(\tilde{\mu}_{1}^{[j]}-g\right)(t)\right|+\sup_{t\geq t_{1}}\left(\mu_{0*}^{\frac{1}{2}}v_{5,\tilde{\gamma}}R^{-\frac{7}{4}}\right)^{-1}(t)\left|\left(\tilde{\xi}^{[j]}-\vec{g}\right)(t)\right|<\epsilon_{1}.\]
Additionally,
\[\lim_{j\to\infty}\left[\sup_{t_{0}\leq t\leq t_{1}}\left(\mu_{0*}^{\frac{1}{2}}v_{5,\tilde{\gamma}}R^{-\frac{2}{3}}\right)^{-1}(t)\left|\left(\tilde{\mu}_{1}^{[j]}-g\right)(t)\right|+\sup_{t_{0}\leq t\leq t_{1}}\left(\mu_{0*}^{\frac{1}{2}}v_{5,\tilde{\gamma}}R^{-\frac{7}{4}}\right)^{-1}(t)\left|\left(\tilde{\xi}^{[j]}-\vec{g}\right)(t)\right|\right]=0.\]
Consequently, \(\lim_{j\to\infty}\left(\|\tilde{\mu}_{1}^{[j]}-g\|_{\dot{\mu}_{1}}+\|\tilde{ \xi}^{[j]}-\vec{g}\|_{\dot{\xi}}\right)=0\), which implies \(\big{(}\mathcal{S}_{6},\vec{\mathcal{S}}\big{)}[\mu_{1},\xi]\) is a compact mapping on \(B_{\dot{\mu}_{1}}\times B_{\dot{\xi}}\).
By the Schauder fixed-point theorem, there exists a solution \((\dot{\mu}_{1},\dot{\xi})\in B_{\dot{\mu}_{1}}\times B_{\dot{\xi}}\) for the system (5.8).
## 6. Solving the inner problem
By (5.2), (2.6), (4.2), and (3.9), for \(|y|\leq 4R\),
\[\left|\mu\dot{\xi}\cdot(\nabla U)\,(y)+\frac{7}{3}\mu^{\frac{3}{2}}U(y)^{\frac {5}{3}}\Big{(}\Psi_{0}(\mu y+\xi,t)-\Psi_{0}(0,t)+\psi(\mu y+\xi,t)\Big{)}\right|\]
\[\lesssim\mu_{0*}^{\frac{3}{2}}v_{5,\tilde{\gamma}}R^{-1}\ln^{2}R\langle y \rangle^{-4}\sim\tilde{v}_{5,\tilde{\gamma}}(\tau(t))R^{-1}\ln^{2}R\langle y \rangle^{-4}.\]
For brevity, denote \(\tilde{H}[\phi]:=\mathcal{H}\left[\psi\big{[}\phi,\mu_{1}[\phi],\xi[\phi] \big{]},\mu_{0}+\mu_{1}[\phi],\xi[\phi]\right]\). From (3.8), we have
\[|\tilde{H}[\phi]|\lesssim\tilde{v}_{5,\tilde{\gamma}}(\tau(t))\langle y \rangle^{-3}. \tag{6.1}\]
By Proposition 5.1, we can apply Proposition 3.1 to the inner problem (2.20), and it suffices to solve the following fixed-point problem
\[\phi=\mathcal{T}_{\text{in}}\big{[}\tilde{H}[\phi]\big{]}.\]
Indeed, for any \(\phi\in B_{\text{in}}\), given \(t_{0}\) (i.e. \(\tau_{0}\)) sufficiently large, by Proposition 3.1, we have
\[\langle y\rangle\left|\nabla_{y}\mathcal{T}_{\text{in}}\big{[}\tilde{H}[\phi] \big{]}\right|+\left|\mathcal{T}_{\text{in}}\big{[}\tilde{H}[\phi]\big{]} \right|\lesssim\tilde{v}_{5,\tilde{\gamma}}(\tau)R^{5}\ln R\langle y\rangle^{-6}, \quad\left|\mathcal{T}_{e_{0}}\big{[}\tilde{H}[\phi]\big{]}\right|\lesssim\tilde{v }_{5,\tilde{\gamma}}(\tau_{0})R(\tau_{0}), \tag{6.2}\]
which implies \(\mathcal{T}_{\text{in}}\big{[}\tilde{H}[\phi]\big{]}\in B_{\text{in}}\) in particular.
For any sequence \((\phi_{j})_{j\geq 1}\in B_{\rm in}\), denote \(\tilde{\phi}_{j}:=\mathcal{T}_{\rm in}\big{[}\tilde{H}[\phi_{j}]\big{]}\), \(\tilde{e}_{j}:=\mathcal{T}_{e_{0}}\big{[}\tilde{H}[\phi_{j}]\big{]}\), which satisfies
\[\begin{cases}\partial_{\tau}\tilde{\phi}_{j}=\Delta_{y}\tilde{\phi}_{j}+\frac{ 7}{3}U(y)^{\frac{4}{3}}\tilde{\phi}_{j}+\tilde{H}[\phi_{j}]&\text{ in }\ \mathcal{D}_{4R}:=\big{\{}(y,\tau)\ |\ \tau\in(\tau_{0}, \infty),\quad y\in B_{4R(t(\tau))}\big{\}}\\ \tilde{\phi}_{j}(\cdot,\tau_{0})=\tilde{e}_{j}Z_{0}&\text{ in }\ B_{4R(t_{0})}.\end{cases}\]
Repeating the process for deducing (6.1) and (6.2), one sees that there exists a constant \(C_{1}\) independent of \(j\) such that
\[|\tilde{H}[\phi_{j}]|\leq C_{1}\tilde{v}_{5,\tilde{\gamma}}(\tau)\langle y \rangle^{-3},\quad\langle y\rangle\left|\nabla_{y}\tilde{\phi}_{j}\right|+ \left|\tilde{\phi}_{j}\right|\leq C_{1}\tilde{v}_{5,\tilde{\gamma}}(\tau)R^{5 }\ln R\langle y\rangle^{-6},\quad|\tilde{e}_{j}|\leq C_{1}\tilde{v}_{5,\tilde{ \gamma}}(\tau_{0})R(t_{0}). \tag{6.3}\]
By parabolic regularity theory, for any compact set \(K\subset\subset\mathcal{D}_{3R}\cup(B_{3R(t_{0})}\times\{\tau_{0}\})\), it holds that \(\left\|\tilde{\phi}_{j}\right\|_{C^{1+\ell,\frac{1+\ell}{2}}(K)}\leq C_{2}\) with a constant \(C_{2}\) independent of \(j\) and a constant \(\ell\in(0,1)\). By the Arzelà-Ascoli theorem, up to a subsequence, there exists a function \(g\), which is \(C^{1}\) in space, such that
\[\tilde{\phi}_{j}\to g,\quad\nabla_{y}\tilde{\phi}_{j}\to\nabla_{y}g\ \text{ in }\ L^{\infty}(K)\ \text{ as }\ j\to\infty.\]
By (6.3), we have
\[\langle y\rangle\left|\nabla_{y}g\right|+|g|\leq C_{1}\tilde{v}_{5,\tilde{ \gamma}}(\tau)R^{5}\ln R\langle y\rangle^{-6}\ \text{ in }\ \mathcal{D}_{3R}.\]
For any \(\epsilon_{1}>0\), there exists \(\tau_{1}\) sufficiently large such that for all \(j\geq 1\),
\[\sup_{\tau\geq\tau_{1},\ y\in B_{2R(t(\tau))}}\big{(}\tilde{v}_{5,\tilde{\gamma}}(\tau)R^{5}(t(\tau))\ln^{2}\left(R(t(\tau))\right)\big{)}^{-1}\langle y\rangle^{6}\big{(}\langle y\rangle\left|\nabla_{y}(\tilde{\phi}_{j}-g)(y,\tau)\right|+\left|(\tilde{\phi}_{j}-g)(y,\tau)\right|\big{)}<\epsilon_{1},\]
and
\[\lim_{j\to\infty}\sup_{\tau_{0}\leq\tau\leq\tau_{1},\ y\in B_{2R(t(\tau))}} \big{(}\tilde{v}_{5,\tilde{\gamma}}(\tau)R^{5}(t(\tau))\ln^{2}\left(R(t(\tau) )\right)\big{)}^{-1}\left\langle y\right\rangle^{6}\big{(}\langle y\rangle \left|\nabla_{y}(\tilde{\phi}_{j}-g)(y,\tau)\right|+\left|(\tilde{\phi}_{j}-g) (y,\tau)\right|\big{)}=0.\]
Thus \(\|\tilde{\phi}_{j}-g\|_{\rm in}\to 0\), which implies \(\mathcal{T}_{\rm in}\big{[}\tilde{H}[\phi]\big{]}\) is a compact mapping on \(B_{\rm in}\). By the Schauder fixed-point theorem, there exists a solution \(\phi\in B_{\rm in}\) and thus the construction is complete.
## 7. Properties of the solution \(u\)
Recall \(u\) given in (2.10). By (2.7), with \(\psi\) given by Proposition 4.1, \(\phi\) solved in \(B_{\rm in}\) (see (3.14)), and \(\mu_{1},\xi\) given in Proposition 5.1, the validity of (1.4) follows. The initial value
\[u(x,t_{0})=\mu^{-\frac{3}{2}}(t_{0})U\left(\frac{x-\xi(t_{0})}{\mu(t_{0})} \right)\eta\left(\frac{x-\xi(t_{0})}{\sqrt{t_{0}}}\right)+\Psi_{0}(x,t_{0})+ \mu^{-\frac{3}{2}}(t_{0})\phi\left(\frac{x-\xi(t_{0})}{\mu(t_{0})},t_{0} \right)\eta\left(\frac{x-\xi(t_{0})}{\mu(t_{0})R(t_{0})}\right).\]
Here \(\Psi_{0}>0\). Denote \(y(t_{0})=\frac{x-\xi(t_{0})}{\mu(t_{0})}\), then
\[U(y(t_{0}))-\phi(y(t_{0}),t_{0})\geq 15^{\frac{3}{4}}\langle y(t_{0})\rangle^{-3}-C \langle y(t_{0})\rangle^{-6}R^{5}(t_{0})\ln^{2}R(t_{0})\begin{cases}t_{0}^{3-2 \gamma},&\frac{3}{2}<\gamma<2\\ t_{0}^{-1}(\ln t_{0})^{3},&\gamma=2\\ t_{0}^{-\frac{5}{2}},&\gamma>2\end{cases}>0\]
for \(t_{0}\) large enough. Therefore, \(u(x,t_{0})>0\), which implies \(u>0\) by the maximum principle. In addition, by Lemma B.1, we get (1.6). Finally, we conclude the proof of Theorem 1.1.
## Appendix A Proof of Lemma 2.1
Proof of Lemma 2.1.: For \(\gamma<n\), \(t\geq 1\),
\[(4\pi t)^{-\frac{n}{2}}\int_{\mathbb{R}^{n}}e^{-\frac{|y|^{2}}{4t}}\langle y \rangle^{-\gamma}dy=(4\pi)^{-\frac{n}{2}}t^{-\frac{\gamma}{2}}\int_{\mathbb{R} ^{n}}e^{-\frac{|z|^{2}}{4}}\left(|z|^{2}+t^{-1}\right)^{-\frac{\gamma}{2}}dz\]
\[=t^{-\frac{\gamma}{2}}(4\pi)^{-\frac{n}{2}}\int_{\mathbb{R}^{n}}e^{-\frac{|z|^ {2}}{4}}|z|^{-\gamma}dz+O\Big{(}t^{-\frac{\gamma}{2}}\begin{cases}t^{-1},& \gamma<n-2\\ t^{-1}\langle\ln t\rangle,&\gamma=n-2\\ t^{\frac{\gamma-n}{2}},&n-2<\gamma<n\end{cases}\Big{)}\]
since
\[\left|\left(\int_{|z|\leq t^{-\frac{1}{2}}}+\int_{|z|>t^{-\frac{1}{2}}}\right)e^{-\frac{|z|^{2}}{4}}\left[\left(|z|^{2}+t^{-1}\right)^{-\frac{\gamma}{2}}-|z|^{-\gamma}\right]dz\right|\]
\[\lesssim t^{\frac{\gamma-n}{2}}+\begin{cases}t^{-1},&\gamma<n-2\\ t^{-1}\langle\ln t\rangle,&\gamma=n-2\\ t^{\frac{\gamma-n}{2}},&n-2<\gamma<n\end{cases}\lesssim\begin{cases}t^{-1},&\gamma<n-2\\ t^{-1}\langle\ln t\rangle,&\gamma=n-2\\ t^{\frac{\gamma-n}{2}},&n-2<\gamma<n.\end{cases}\]
For \(\gamma=n\), \(t\geq 1\),
\[(4\pi t)^{-\frac{n}{2}}\int_{\mathbb{R}^{n}}e^{-\frac{|y|^{2}}{4t}}\langle y \rangle^{-n}dy=t^{-\frac{n}{2}}\ln(1+t)\left(4\pi\right)^{-\frac{n}{2}}\frac{ 1}{2}\left|S^{n-1}\right|\left(1+O\left((\ln(1+t))^{-1}\right)\right)\]
since
\[(4\pi t)^{-\frac{n}{2}}\int_{|y|\geq t^{\frac{1}{2}}}e^{-\frac{|y|^{2}}{4t}} \langle y\rangle^{-n}dy\sim t^{-\frac{n}{2}}\int_{\frac{1}{4}}^{\infty}e^{-z}z ^{-1}dz,\]
\[\left|\left(4\pi t\right)^{-\frac{n}{2}}\int_{|y|<t^{\frac{1}{2}}}\left(e^{- \frac{|y|^{2}}{4t}}-1\right)\langle y\rangle^{-n}dy\right|\lesssim t^{-\frac{n }{2}}\int_{|y|<t^{\frac{1}{2}}}t^{-1}|y|^{2}\langle y\rangle^{-n}dy\sim t^{- \frac{n}{2}},\]
\[(4\pi t)^{-\frac{n}{2}}\int_{|y|<t^{\frac{1}{2}}}\langle y\rangle^{-n}dy=t^{- \frac{n}{2}}\left(4\pi\right)^{-\frac{n}{2}}\frac{1}{2}\left|S^{n-1}\right| \int_{0}^{t}\left[\frac{z^{\frac{n-2}{2}}}{(1+z)^{\frac{n}{2}}}-\frac{1}{1+z}+ \frac{1}{1+z}\right]dz\]
\[=t^{-\frac{n}{2}}\left(4\pi\right)^{-\frac{n}{2}}\frac{1}{2}\left|S^{n-1} \right|\left(\ln(1+t)+O(1)\right).\]
For \(\gamma>n\), \(t\geq 1\),
\[(4\pi t)^{-\frac{n}{2}}\int_{\mathbb{R}^{n}}e^{-\frac{|y|^{2}}{4t}}\langle y \rangle^{-\gamma}dy=t^{-\frac{n}{2}}\left(4\pi\right)^{-\frac{n}{2}}\int_{ \mathbb{R}^{n}}\langle y\rangle^{-\gamma}dy+O\Big{(}t^{-\frac{n}{2}}\begin{cases} t^{\frac{n-\gamma}{2}},&n<\gamma<n+2\\ t^{-1}\langle\ln t\rangle,&\gamma=n+2\\ t^{-1},&\gamma>n+2\end{cases}\Big{)}\]
since
\[\left|\left(4\pi t\right)^{-\frac{n}{2}}\left(\int_{|y|\leq t^{\frac{1}{2}}}+ \int_{|y|>t^{\frac{1}{2}}}\right)\left(e^{-\frac{|y|^{2}}{4t}}-1\right)\langle y \rangle^{-\gamma}dy\right|\]
\[\lesssim t^{-\frac{n}{2}}\left(\int_{|y|\leq t^{\frac{1}{2}}}t^{-1}|y|^{2} \langle y\rangle^{-\gamma}dy+\int_{|y|>t^{\frac{1}{2}}}\langle y\rangle^{- \gamma}dy\right)\lesssim t^{-\frac{n}{2}}\begin{cases}t^{\frac{n-\gamma}{2}},&n <\gamma<n+2\\ t^{-1}\langle\ln t\rangle,&\gamma=n+2\\ t^{-1},&\gamma>n+2.\end{cases}\]
## Appendix B The estimate of \(\int_{\mathbb{R}^{n}}e^{-A|x-y|^{2}}\langle y\rangle^{-b}dy\)
**Lemma B.1**.: _For \(n>0\), \(A>0\), \(b\in\mathbb{R}\), we have_
\[\int_{\mathbb{R}^{n}}e^{-A|x-y|^{2}}\langle y\rangle^{-b}dy= \langle x\rangle^{-b}\int_{\mathbb{R}^{n}}e^{-A|z|^{2}}dz\] \[\qquad+O\bigg{(}|x|\langle x\rangle^{-b-2}\left(|x|^{n+1}\mathbf{ 1}_{|x|\leq 1}+\mathbf{1}_{|x|>1}\right)+\int_{|z|>\frac{|x|}{2}}e^{-A|z|^{2}} \left(|z|+\langle x\rangle\right)^{\max\{0,-b\}}dz\bigg{)}.\]
Proof.: \[\int_{\mathbb{R}^{n}}e^{-A|x-y|^{2}}\langle y\rangle^{-b}dy=\int_{ \mathbb{R}^{n}}e^{-A|z|^{2}}\left[\left(1+|x|^{2}\right)^{-\frac{b}{2}}+\left( 1+|x-z|^{2}\right)^{-\frac{b}{2}}-\left(1+|x|^{2}\right)^{-\frac{b}{2}}\right]dz.\]
Here, we estimate
\[\left|\int_{|z|\leq\frac{|x|}{2}}e^{-A|z|^{2}}\left[\left(1+|x-z|^ {2}\right)^{-\frac{b}{2}}-\left(1+|x|^{2}\right)^{-\frac{b}{2}}\right]dz\right|\] \[= \left|-\frac{b}{2}\int_{|z|\leq\frac{|x|}{2}}e^{-A|z|^{2}}\left[ 1+\theta|x-z|^{2}+(1-\theta)|x|^{2}\right]^{-\frac{b}{2}-1}\left(|x-z|-|x| \right)\left(|x-z|+|x|\right)dz\right|\] \[\lesssim |x|\langle x\rangle^{-b-2}\int_{|z|\leq\frac{|x|}{2}}e^{-A|z|^{2} }|z|dz\sim|x|\langle x\rangle^{-b-2}\left(|x|^{n+1}\mathbf{1}_{|x|\leq 1}+ \mathbf{1}_{|x|>1}\right)\]
with a parameter \(\theta\in[0,1]\);
\[\left|\int_{|z|>\frac{|x|}{2}}e^{-A|z|^{2}}\left[\left(1+|x-z|^{ 2}\right)^{-\frac{b}{2}}-\left(1+|x|^{2}\right)^{-\frac{b}{2}}\right]dz\right|\] \[\lesssim \begin{cases}\int_{|z|>\frac{|x|}{2}}e^{-A|z|^{2}}dz,&b\geq 0 \\ \int_{|z|>\frac{|x|}{2}}e^{-A|z|^{2}}\left(|z|^{-b}+\langle x\rangle^{-b} \right)dz,&b<0\end{cases}\sim\int_{|z|>\frac{|x|}{2}}e^{-A|z|^{2}}\left(|z|+ \langle x\rangle\right)^{\max\{0,-b\}}dz.\]
## Acknowledgements
Z. Li is funded by Natural Science Foundation of Hebei Province, No. A2022205007 and by Science and Technology Project of Hebei Education Department, No. QN2022047. J. Wei is partially supported by NSERC of Canada.
|
2302.02882 | Jacobian-free implicit MDRK methods for stiff systems of ODEs | In this work, an approximate family of implicit multiderivative Runge-Kutta
(MDRK) time integrators for stiff initial value problems is presented. The
approximation procedure is based on the recent Approximate Implicit Taylor
method (Baeza et al. in Comput. Appl. Math. 39:304, 2020). As a Taylor method
can be written in MDRK format, the novel family constitutes a multistage
generalization. Two different alternatives are investigated for the computation
of the higher order derivatives: either directly as part of the stage equation,
or either as a separate formula for each derivative added on top of the stage
equation itself. From linearizing through Newton's method, it turns out that
the conditioning of the Newton matrix behaves significantly different for both
cases. We show that direct computation results in a matrix with a conditioning
that is highly dependent on the stiffness, increasing exponentially in the
stiffness parameter with the amount of derivatives. Adding separate formulas
has a more favorable behavior, the matrix conditioning being linearly dependent
on the stiffness, regardless of the amount of derivatives. Despite increasing
the Newton system significantly in size, through several numerical results it
is demonstrated that doing so can be considerably beneficial. | Jeremy Chouchoulis, Jochen Schütz | 2023-02-06T15:51:04Z | http://arxiv.org/abs/2302.02882v1 | # Jacobian-free implicit MDRK methods for stiff systems of ODEs
###### Abstract
In this work, an approximate family of implicit multiderivative Runge-Kutta (MDRK) time integrators for stiff initial value problems is presented. The approximation procedure is based on the recent Approximate Implicit Taylor method (Baeza et al. in Comput. Appl. Math. 39:304, 2020). As a Taylor method can be written in MDRK format, the novel family constitutes a multistage generalization. Two different alternatives are investigated for the computation of the higher order derivatives: either directly as part of the stage equation, or either as a separate formula for each derivative added on top of the stage equation itself. From linearizing through Newton's method, it turns out that the conditioning of the Newton matrix behaves significantly different for both cases. We show that direct computation results in a matrix with a conditioning that is highly dependent on the stiffness, increasing exponentially in the stiffness parameter with the amount of derivatives. Adding separate formulas has a more favorable behavior, the matrix conditioning being linearly dependent on the stiffness, regardless of the amount of derivatives. Despite increasing the Newton system significantly in size, through several numerical results it is demonstrated that doing so can be considerably beneficial.
keywords: Multiderivative Runge-Kutta, Jacobian-free, ODE integrator Msc: [2020] 65F35, 65L04, 65L05, 65L06, 65L12, 65L20
## 1 Introduction
We are interested in developing stable and efficient _implicit_ multiderivative time integrators, see, e.g., [1; 2; 3; 4; 5; 6; 7; 8; 9] and the references therein, for stiff ordinary differential equations (ODEs)
\[y^{\prime}(t)=\Phi(y), \tag{1}\]
where \(\Phi\colon\mathbb{R}^{M}\to\mathbb{R}^{M}\) is the flux and \(y:\mathbb{R}^{+}\to\mathbb{R}^{M}\) the unknown solution variable. In our case, stiffness is introduced through a variable \(\varepsilon\ll 1\) into the flux, which is given by
\[\Phi_{i}(y)=f_{i}(y_{1},\ldots,y_{M})+\frac{g_{i}(y_{1},\ldots,y_{M})}{ \varepsilon},\quad 1\leq i\leq M, \tag{2}\]
for smooth functions \(f_{i}\) and \(g_{i}\) that do not explicitly depend on \(\varepsilon\). Multiderivative methods not only take into account the first derivative \(y^{\prime}(t)\), but also higher-order time derivatives
\[y^{(k)}(t):=\frac{\mathrm{d}^{k}}{\mathrm{d}t^{k}}y(t)\,.\]
By repeatedly making use of the ODE system (1), and ignoring the \(t-\)dependency of \(y^{(k)}\) for the ease of presentation, this leads to the formulas
\[y^{(2)} =\Phi^{\prime}(y)y^{(1)}\,, \tag{3a}\] \[y^{(3)} =\Phi^{\prime\prime}(y)\bullet\left[y^{(1)}|y^{(1)}\right]+\Phi^ {\prime}(y)y^{(2)}\,,\] (3b) \[y^{(4)} =\Phi^{\prime\prime\prime}(y)\bullet\left[y^{(1)}|y^{(1)}|y^{(1)} \right]+3\Phi^{\prime\prime}(y)\bullet\left[y^{(1)}|y^{(2)}\right]+\Phi^{ \prime}(y)y^{(3)}\,, \tag{3c}\]
and so forth. The bullet operator is the tensor action, i.e.,
\[\Phi^{\prime\prime\prime}\bullet[u|v|w]:=\sum_{j,k,l=1}^{M}\frac{\partial^{3} \Phi}{\partial y_{j}\partial y_{k}\partial y_{l}}u_{j}v_{k}w_{l}\,, \tag{4}\]
where \(u,v,w\in\mathbb{R}^{M}\). Already at this introductory level, it can be seen that it is quite cumbersome to explicitly put all the terms used in (3) into an algorithm. Furthermore, plugging \(\Phi_{i}(y)\) into (3) reveals that \(y^{(k)}=\mathcal{O}(\varepsilon^{-k})\). As a result, the derivatives \(y^{(k)}\) quickly tend to become extremely large with each added order of the derivative, potentially leading to a huge disparity in the values handled by a multiderivative solver. Therefore, one can expect the typical limitations associated with floating-point arithmetic. In particular, the algebraic system of equations that results from the nonlinear time scheme is strongly influenced. It is shown numerically in this work that the conditioning of the linearized equation system behaves as \(\mathcal{O}(\varepsilon^{-k})\).
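To make the bookkeeping concrete, a small sketch (in Python/NumPy, with the arrays `J1` and `J2` standing in for \(\Phi^{\prime}(y)\) and \(\Phi^{\prime\prime}(y)\), assumed to be assembled elsewhere) of the first two relations in (3) reads:

```python
import numpy as np

# Minimal sketch of (3a)-(3b) for a system of size M, assuming the Jacobian
# J1 = Phi'(y) (shape (M, M)) and the Hessian tensor J2 = Phi''(y)
# (shape (M, M, M)) are available for the flux at hand.
def first_time_derivatives(Phi_of_y, J1, J2):
    y1 = Phi_of_y                                          # y^(1) = Phi(y)
    y2 = J1 @ y1                                           # (3a)
    y3 = np.einsum('ijk,j,k->i', J2, y1, y1) + J1 @ y2     # (3b), tensor action (4)
    return y1, y2, y3
```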
In 2018, Baeza et al. [6] have constructed a recursive algorithm on the basis of centered finite differences that approximates the derivatives \(y^{(k)}\) for a Taylor expansion, accordingly named the Approximate Taylor (AT) method. This approach directly stems from a recursive finite difference scheme that was designed for the circumvention of the Cauchy-Kovalevskaya procedure in the context of hyperbolic conservation laws [10]. In order to deal with stiffness and strict timestepping restrictions, more recently Baeza et al. [7] extended the AT method with an implicit variant, named the Approximate Implicit Taylor method. To simplify the computation of the Newton Jacobian, [7] suggests including additional equations into the ODE system for the calculation of the derivatives \(y^{(k)}\). We show that this as well can improve the conditioning of the Jacobian compared to the \(\mathcal{O}(\varepsilon^{-k})\) behavior that is achieved by directly incorporating (3).
In this work, we extend the approximate implicit Taylor method to more general multiderivative Runge-Kutta (MDRK) schemes. This improves the solution quality significantly. While a Taylor method has order of convergence \(\mathcal{O}(\Delta t^{k})\), with \(k\) denoting the maximally used derivative, MDRK schemes can achieve the same order with fewer derivatives by incorporating more stages. Furthermore, we thoroughly investigate multiple methods to solve the resulting algebraic system of equations. Although all methods are equivalent in _infinite_ machine precision, we observe that numerically, the methods differ quite significantly.
First, in Sect. 2 traditional MDRK time integrators for ODEs are introduced, highlighting the variety of ways to compute the time derivatives \(y^{(k)}\), among which a review of the AT procedure is given. Next, Sect. 3 is devoted to understanding the stability of the linear system obtained from applying Newton's method. In settings with timesteps large compared to the stiffness parameter \(\varepsilon\), we show that the Newton Jacobian has a condition number that grows exponentially with the amount of derivatives. As an alternative for the traditional MDRK approach, in Sect. 4, along the lines of the approximate implicit Taylor method, we introduce the MDRK-DerSol approach, where the derivatives are computed as solution variables via new relations in a larger ODE system. We verify numerically that the Newton Jacobian of this bigger system has a more favorable conditioning asymptotically for \(\varepsilon\) going to \(0\). Finally, our conclusions are summarized and future endeavors are explored in Sect. 5.
## 2 Implicit multiderivative Runge-Kutta solvers
In order to apply a time-marching scheme to Eq. (1), we discretize the temporal domain with a constant1 timestep \(\Delta t\) and iterate \(N\) steps such that \(\Delta t=T_{\text{end}}/N\). Consequently, we define the time levels by
Footnote 1: A fixed \(\Delta t\) is used for solving any ODE system described within this work. Nevertheless, all presented methods can readily be applied with a variable timestep \(\Delta t^{n}\) if needed.
\[t^{n}:=n\Delta t\qquad 0\leq n\leq N.\]
The central class of time integrators in this work are _implicit_ MDRK methods. By adding extra temporal derivatives of \(\Phi(y)\), these form a natural generalization of classical implicit Runge-Kutta methods. To present our ideas, let us formally define the MDRK scheme as follows:
**Definition 1** (Kastlunger, Wanner [1, Section 1]).: A \(q\)-th order implicit \(\mathbf{r}\)-derivative Runge-Kutta scheme using \(\mathbf{s}\) stages (\(\mathbf{r}\)DRK\(q\)-\(\mathbf{s}\)) is any method which can, for given coefficients \(a_{l\nu}^{(k)}\), \(b_{l}^{(k)}\) and \(c_{l}\), be formalized as
\[y^{n,l}:=y^{n}+\sum_{k=1}^{\mathbf{r}}\Delta t^{k}\sum_{\nu=1}^{\mathbf{s}}a_{l\nu}^{(k)}\frac{\mathrm{d}^{k-1}}{\mathrm{d}t^{k-1}}\Phi\left(y^{n,\nu}\right),\quad l=1,\ldots,\mathbf{s}, \tag{5a}\]
where \(y^{n,l}\) is a stage approximation of \(y\) at time \(t^{n,l}:=t^{n}+c_{l}\Delta t\). The update is given by
\[y^{n+1}:=y^{n}+\sum_{k=1}^{\mathbf{r}}\Delta t^{k}\sum_{l=1}^{\mathbf{s}}b_{l}^{(k)}\frac{\mathrm{d}^{k-1}}{\mathrm{d}t^{k-1}}\Phi(y^{n,l})\,. \tag{5b}\]
Typically, the values of \(a_{l\nu}^{(k)}\), \(b_{l}^{(k)}\) and \(c_{l}\) are summarized in an extended Butcher tableau, see A for some examples.
As can be seen from Eq. (3), at least the \(k\)-th order Jacobian tensor
\[\Phi^{k}(y)=\frac{\partial^{k}\Phi}{\partial y^{k}}(y) \tag{6}\]
is needed for the derivation of \(\frac{\mathrm{d}^{k}}{\mathrm{d}t^{k}}\Phi\). For systems of ODEs, \(\Phi^{k}\) is an \(M\times\ldots\times M\) (\(k\)-times) tensor. The generalization of (3b), named Faa Di Bruno's formula (see [7, Prop. 1]), can therefore be very expensive. A more sensible way to obtain the time derivatives of \(\Phi\) is from the recursive relation
\[\frac{\mathrm{d}^{k}}{\mathrm{d}t^{k}}\Phi(y)=\left[\frac{\mathrm{d}^{k-1} \Phi(y)}{\mathrm{d}t^{k-1}}\right]^{\prime}y^{(1)}\,. \tag{7}\]
The prime symbol here denotes the Jacobian derivative with respect to \(y\). Here, the quantities \(\left[\frac{\mathrm{d}^{k-1}\Phi}{\mathrm{d}t^{k-1}}\right]^{\prime}\) are \(M\times M\) matrices, regardless of \(k\).
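For moderate system sizes, the recursion (7) can be generated with a computer algebra system; the sketch below (using SymPy and, merely as an example, the Pareschi-Russo flux of Eq. (18) introduced later) is one possible symbolic realization of (7). The approximate, symbolic-free strategy of Sect. 2.1 is designed precisely to avoid this kind of computation.

```python
import sympy as sp

# Illustrative symbolic evaluation of the recursion (7): d^k/dt^k Phi(y) is
# obtained by taking the Jacobian of the previous derivative (as a function of
# y) and multiplying with y^(1) = Phi(y).  The flux is the Pareschi-Russo
# example (18); eps denotes the stiffness parameter.
y1, y2 = sp.symbols('y1 y2')
eps = sp.symbols('epsilon', positive=True)
yvec = sp.Matrix([y1, y2])
Phi = sp.Matrix([-y2, y1 + (sp.sin(y1) - y2) / eps])

dPhi = [Phi]                                   # dPhi[k] holds d^k/dt^k Phi(y)
for k in range(1, 3):
    dPhi.append(sp.simplify(dPhi[-1].jacobian(yvec) * Phi))

print(dPhi[1])   # = Phi'(y) Phi(y), i.e. y^(2)
print(dPhi[2])   # = y^(3)
```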
**Remark 1**.: Although Faa Di Bruno's formula is mathematically equivalent to the recursive relation (7), numerical results do in actuality differ. Due to the many tensor actions with \(\Phi^{k}\) in Faa Di Bruno's formula, numerical computations are much more prone to round-off errors. To illustrate this, we will apply both Faa Di Bruno's formula, as in (3), and the recursive relation (7). We refer to the tensors \(\Phi^{k}\) as "Exact Jacobians" (EJ) from here on.
### Approximating the time derivatives
Despite the availability of the recursive relation (7), the Jacobian derivatives w.r.t. \(y\) within this relation can nevertheless be quite intricate to deal with. As such, avoiding Jacobian derivation by hand often leads to the use of symbolic computing software, allowing the user to focus directly on the numerical procedure. There are two major downsides here: first, symbolic software is computationally expensive, and second, it is not always feasible to apply. For large numerical packages, for example, it is generally not desirable to significantly alter vital portions of code. To overcome symbolic procedures completely, a high-order centered differences approximation strategy has recently been developed by Baeza et al. [6; 7] to obtain values
\[\widetilde{y}^{(k)}=y^{(k)}+\mathcal{O}(\Delta t^{\mathbf{r}-k+1})=\frac{ \mathrm{d}^{k-1}}{\mathrm{d}t^{k-1}}\Phi(y)+\mathcal{O}(\Delta t^{\mathbf{r}-k +1})\,, \tag{8}\]
for \(k=2,\ldots,\mathbf{r}\). An overview of the method is given here; first, necessary notation is introduced.
**Definition 2**.: For any number \(p\in\mathbb{N}\), define the locally centered stencil function having \(2p+1\) nodes by means of angled brackets
\[\langle\cdot\rangle\colon\mathbb{Z}\to\mathbb{Z}^{2p+1}\colon z\mapsto\left(z -p,\ \ \ldots,\ \ z+p\right)^{T}\,. \tag{9}\]
In this manner it is possible to write the vectors
\[\mathbf{y}^{\langle n\rangle}:=\begin{pmatrix}y^{n-p}\\ \vdots\\ y^{n+p}\end{pmatrix}\quad\text{and}\quad y(\mathbf{t}^{\langle n\rangle}):= \begin{pmatrix}y(t^{n-p})\\ \vdots\\ y(t^{n+p})\end{pmatrix}\,. \tag{10}\]
Such a representation allows us to concisely write down approximations \(\widetilde{y}^{(k)}\) to \(y^{(k)}\). Let \(k=1,\ldots,\mathtt{r}\) be the derivative order of interest which we would like to approximate.
**Lemma 1** (Carrillo, Pares [11, Proposition 4], Zorio et al. [10, Proposition 2]).: _For \(k\geq 1\) and \(p\geq\lfloor\frac{k+1}{2}\rfloor\) (\(p\in\mathbb{N}\)), there exist \(2p+1\) quantities \(\delta_{p,j}^{k}\in\mathbb{R}\) for \(j=-p,\ldots,p\), such that the linear operator_
\[P^{(k)}\colon\mathbb{R}^{2p+1}\to\mathbb{R},\qquad\mathbf{v}\mapsto\frac{1}{ \Delta t^{k}}\sum_{j=-p}^{p}\delta_{p,j}^{k}v_{j} \tag{11}\]
_approximates the \(k\)-th derivative up to order \(\omega:=2p-2\lfloor\frac{k-1}{2}\rfloor\), i.e._
\[P^{(k)}y(\mathbf{t}^{\langle n\rangle})=y^{(k)}(t^{n})+\mathcal{O}(\Delta t^ {\omega})\,. \tag{12}\]
The linear operator \(P^{(k)}\), however, is difficult to apply in practice, since it introduces additional unknown values \(y(t^{n+1}),\ldots,y(t^{n+p})\) into the stencil. In order to bypass the issue of creating more unknowns, a recursive strategy is incorporated in [6, 7, 10] that feeds Taylor approximations into the centered difference operator. This gives the following computations of the values \(\widetilde{y}^{(k)}\):
\[\widetilde{y}^{(1)} :=\Phi(y^{n}), \tag{13}\] \[\widetilde{y}^{(k)} :=P^{(k-1)}\mathbf{\Phi}_{T}^{k-1,\langle n\rangle},\quad 2\leq k \leq\mathtt{r}\,,\]
in which
\[\Phi_{T}^{k-1,n+j}:=\Phi\left(y^{n}+\sum_{m=1}^{k-1}\frac{(j\Delta t)^{m}}{m!} \widetilde{y}^{(m)}\right) \tag{14}\]
is an approximation to \(\Phi\big{(}y(t^{n+j})\big{)}\). By adopting the recursive Taylor approach (13)-(14) into Def. 1, we acquire a novel family of time-marching schemes:
**Definition 3** (AMDRK method).: The rDRK\(q\)-s scheme (Def. 1) in which the time derivatives \(\frac{\mathrm{d}^{k-1}}{\mathrm{d}t^{k-1}}\Phi(y)\) are approximated by using the formulas (13)-(14) is called the _Approximate_ MDRK method, denoted by the short-hand notation \(A\)rDRK\(q\)-s.
Without proof - it is very similar to related cases, see for example [6] in the context of explicit Taylor schemes for ODEs and [12] for explicit MDRK schemes applied to hyperbolic PDEs - we state the order of convergence:
**Theorem 2**.: _The consistency order of an \(A\)rDRK\(q\)-s method is \(\min(2p+1,q)\), the variable \(q\) being the consistency order of the underlying MDRK method, and \(p\) denoting the use of the \(2p+1\) points \(\{t^{n-p},\ldots,t^{n+p}\}\) in Eq. (13)._
**Remark 2**.: Note that the variable \(p\) is not defined in the terminology "\(A\)rDRK\(q\)-s". Since the consistency order is \(\min(2p+1,q)\), the optimal choice w.r.t. computational efficiency is to set \(p=\lfloor q/2\rfloor\). Throughout this paper \(p\) is chosen along this line of reasoning for all numerical results.
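The recursion (13)-(14) requires nothing but flux evaluations. A compact sketch of one possible implementation (all names are ours; the weight routine is the illustrative construction given after Lemma 1) could read:

```python
import numpy as np
from math import factorial

def centered_weights(k, p):
    # same Vandermonde-type construction as sketched after Lemma 1
    nodes = np.arange(-p, p + 1)
    A = np.vander(nodes, N=2 * p + 1, increasing=True).T
    rhs = np.zeros(2 * p + 1); rhs[k] = factorial(k)
    return np.linalg.solve(A, rhs)

# Sketch of the Jacobian-free approximations (13)-(14): ytil[k-1] approximates
# y^(k)(t^n) = d^(k-1)/dt^(k-1) Phi(y(t^n)) using flux evaluations only.
def approximate_derivatives(Phi, y_n, dt, r, p):
    ytil = [Phi(y_n)]                                  # ytilde^(1), Eq. (13)
    for k in range(2, r + 1):
        delta = centered_weights(k - 1, p)             # weights of P^(k-1)
        acc = np.zeros(np.shape(y_n))
        for idx, j in enumerate(range(-p, p + 1)):
            y_pred = np.array(y_n, dtype=float)        # Taylor predictor (14) at t^{n+j}
            for m in range(1, k):
                y_pred += (j * dt) ** m / factorial(m) * ytil[m - 1]
            acc += delta[idx] * Phi(y_pred)
        ytil.append(acc / dt ** (k - 1))               # apply P^(k-1), Eq. (13)
    return ytil                                        # [ytilde^(1), ..., ytilde^(r)]
```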
In the first row of Figure 1 an overview of the thus far presented MDRK methods is given, with differences focused around the computations of the derivatives \(y^{(k)}\). As all of these MDRK methods exclusively solve the Eqs. (5a)-(5b), we refer to them as "Direct" in this work.
### Specific MDRK schemes
As is the case for standard Runge-Kutta methods, the (A)rDRK\(q\)-s method has a lot of flexibility in choosing the coefficients \(a^{(k)}_{l\nu}\) and \(b^{(k)}_{l}\). We put the spotlight on two different implementations of the (A)rDRK\(q\)-s method:
* _Full Storage MDRK (FSMDRK)_: Under the assumption that the Butcher tableau consists of dense matrices, all s stages should be solved for simultaneously. This approach can be applied for any existing MDRK scheme, but might not be efficient as it often leads to large systems of equations.
* _Diagonally Implicit MDRK (DIMDRK)_: If each stage \(l\) only depends on previous stages \(\nu=1,\ldots,l-1\), and is only implicit in itself, it can be more efficient to solve for each stage one at a time.
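A minimal sketch of such a stage-by-stage step, written against (5a)-(5b) for a lower-triangular tableau, is given below; the coefficient layout and the use of a generic black-box root finder (here `scipy.optimize.fsolve`) are purely illustrative choices and not a prescription of the actual implementation.

```python
import numpy as np
from scipy.optimize import fsolve

# Sketch of one DIMDRK step (5a)-(5b) for a lower-triangular tableau.
# derivs[k-1](y) must return d^(k-1)/dt^(k-1) Phi(y), computed exactly or via
# the approximations of Sect. 2.1.
def dimdrk_step(y_n, dt, a, b, derivs):
    r, s = len(a), len(a[0])
    stages = []
    for l in range(s):
        known = np.array(y_n, dtype=float)             # contributions of stages 1..l-1
        for k in range(r):
            for nu in range(l):
                known += dt ** (k + 1) * a[k][l][nu] * derivs[k](stages[nu])

        def stage_eq(Y):                               # implicit only in stage l itself
            res = known - Y
            for k in range(r):
                res = res + dt ** (k + 1) * a[k][l][l] * derivs[k](Y)
            return res

        stages.append(fsolve(stage_eq, y_n))
    y_next = np.array(y_n, dtype=float)
    for k in range(r):
        for l in range(s):
            y_next += dt ** (k + 1) * b[k][l] * derivs[k](stages[l])
    return y_next
```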
In A the extended Butcher tableaux used in this paper are displayed, three families are considered:
* _Taylor schemes (Tables A.3-A.4)_: The explicit and implicit Taylor method can be reformulated as a single-stage MDRK scheme. This respectively leads to the Approximate Explicit Taylor methods in [6] and the Approximate Implicit Taylor methods in [7]. Having only one stage, the FSMDRK and DIMDRK approaches are equivalent.
* _Hermite-Birkhoff (HB) schemes (Tables A.5-A.10)_ : The coefficients are obtained from the Hermite-Birkhoff quadrature rule [8; 13] which integrates a Hermite polynomial that also takes derivative data into account, with possibly a varying amount of derivative data per point. By taking equispaced abscissa \(c_{l}\), the resulting tableau is fully implicit whilst having a fully explicit first stage. Hence, for s \(>2\), stages 2 to s should be solved with an FSMDRK method.
* _Strong-Stability Preserving (SSP) schemes (Tables A.11-A.12)_: In [9], Gottlieb et al. have constructed implicit multiderivative SSP schemes. The tableaux are diagonally implicit, and therefore each stage can be solved for one after another. Both the DIMDRK and FSMDRK approach are thus valid, with the DIMDRK approach likely being more efficient.
Figure 1: An overview of the different MDRK approaches applied in this work. In here \(M\) represents the size of the ODE system being solved, r and s represent the amount of derivatives and the amount of stages, respectively, of the MDRK scheme (Def. 1). For both DIMDRK and FSMDRK schemes (Subsect. 2.2) the size of the linear system resulting from applying Newton’s method is displayed. In comparison to the other Direct approaches, EJ-Direct necessitates the most computations through tensor calculations, making it less fit for efficient time integration. Moreover, we found that using rec-Dersol leads to algebraic systems with extremely large condition numbers in relation to the other DerSol procedures; rec-Dersol hence turned out to be less suited for stiff equations than the others.
### Nonlinear solver
The implicit (A)MDRK scheme (Defs. 1 and 3) requires a nonlinear solver, irrespective of whether the derivatives are calculated exactly through (3), obtained recursively with (7), or approximated by means of (13)-(14). If a single stage \(Y=y^{n,l}\) (with \(l=1,\ldots,\mathbf{s}\)) is considered, as for DIMDRK schemes, or all the unknown stages are combined into a single vector \(Y=\left(y^{n,1},\ldots,y^{n,\mathbf{s}}\right)\) as in the FSMDRK approach, it is possible to write the stage equation(s) (5a) as
and then choose any nonlinear solver of preference. Computationally, solving Eq. (15) is the most expensive portion of the numerical method. Hence, it is vital for the efficiency of the overall method to well understand the behavior of the selected solver. In this paper we use Newton's method, and thus require the Jacobian matrix \(F^{\prime}(Y)\). Given an initial value \(Y^{[0]}\), the linearized system
\[F^{\prime}(Y^{[i]})\Delta Y^{[i]}=-F(Y^{[i]}),\quad Y^{[i+1]}=Y^{[i]}+\Delta Y ^{[i]} \tag{16}\]
is solved for \(i=0,\ldots,N_{\mathrm{iter}}-1\) or until some convergence criteria are satisfied. In this work, criteria are invoked on the residuals,
\[\left\|F(Y^{[i]})\right\|_{2}<10^{-n_{\mathrm{tol}}}\quad\text{or}\quad\frac{ \left\|F(Y^{[i]})\right\|_{2}}{\left\|F(Y^{[0]})\right\|_{2}}<10^{-n_{\mathrm{ tol}0}}\,, \tag{17}\]
where \(n_{\mathrm{tol}},n_{\mathrm{tol}0}\in\mathbb{N}\). Under the assumptions that \(Y^{[0]}\) lies in a sufficiently small neighborhood of the exact solution \(Y\) and that the Jacobian matrix is nonsingular, Newton's method converges quadratically [13, Theorem 7.1].
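A minimal realization of the iteration (16) together with the stopping criteria (17) could look as follows; the finite-difference fallback for \(F^{\prime}(Y)\) is merely an illustrative stand-in (it is unrelated to the derivative approximations of Sect. 2.1), and the parameter defaults are arbitrary.

```python
import numpy as np

# Sketch of Newton's method (16) with the residual-based criteria (17).
def newton_solve(F, Y0, jac=None, n_tol=12, n_tol0=8, max_iter=100, fd_eps=1e-7):
    Y = np.array(Y0, dtype=float)
    res0 = np.linalg.norm(F(Y))
    for _ in range(max_iter):
        FY = F(Y)
        res = np.linalg.norm(FY)
        if res < 10.0 ** (-n_tol) or (res0 > 0 and res / res0 < 10.0 ** (-n_tol0)):
            break
        if jac is not None:
            J = jac(Y)
        else:
            # forward-difference approximation of F'(Y), column by column
            m = Y.size
            J = np.empty((m, m))
            for j in range(m):
                e = np.zeros(m)
                e[j] = fd_eps * max(1.0, abs(Y[j]))
                J[:, j] = (F(Y + e) - FY) / e[j]
        Y = Y + np.linalg.solve(J, -FY)                 # Newton update, Eq. (16)
    return Y
```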
**Remark 3**.: In what follows, we avoid the superscript index \(i\) whenever possible, and instead write \(F^{\prime}(Y)\) or even \(F^{\prime}\).
## 3 Newton stability of direct (A)MDRK methods
In order to investigate the conditioning of the Newton Jacobian \(F^{\prime}(Y)\) in the linearized Newton system (16), we consider the Pareschi-Russo (PR) problem [14], given by
\[y_{1}^{\prime}(t)=-y_{2},\qquad y_{2}^{\prime}(t)=y_{1}+\frac{\sin(y_{1})-y_{ 2}}{\varepsilon},\qquad y(0)=\left(\frac{\pi}{2},1\right). \tag{18}\]
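For reference, the flux of (18) and its Jacobian in the split form (2) are easily coded up; as a quick, purely illustrative probe (not an experiment from this work), one can already inspect how the Newton matrix of a single backward Euler step, \(I-\Delta t\,\Phi^{\prime}(y)\), behaves as \(\varepsilon\) decreases.

```python
import numpy as np

# Pareschi-Russo flux (18) with stiffness parameter eps.
def phi_pr(y, eps):
    return np.array([-y[1], y[0] + (np.sin(y[0]) - y[1]) / eps])

def phi_pr_jac(y, eps):
    return np.array([[0.0, -1.0],
                     [1.0 + np.cos(y[0]) / eps, -1.0 / eps]])

y0, dt = np.array([np.pi / 2, 1.0]), 1.0
for eps in (1e-1, 1e-3, 1e-5):
    J = np.eye(2) - dt * phi_pr_jac(y0, eps)       # Newton matrix of a backward Euler step
    print(eps, np.linalg.cond(J, 1))
```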
Let us first verify that the consistency order given in Theorem 2 is achieved and compare it with the exact MDRK method as in Def. 1 (using relation (7)). In Figure 2, convergence plots are shown for five different MDRK schemes (three Hermite-Birkhoff and two SSP, see A). The final time is set to \(T_{\mathrm{end}}=5\); the coarsest computation uses \(N=4\) timesteps. To separate convergence order from stiffness, \(\varepsilon\) is set to \(1\). We can clearly see that the AMDRK method achieves the appropriate convergence orders for all the considered schemes. Also, when compared to their exact MDRK counterparts, differences are barely visible. Only for large values of \(\Delta t\) and high orders of consistency are differences visible.
When stiffness is increased by decreasing \(\varepsilon\) to \(\varepsilon=10^{-3}\), see Figure 3, the same methods still show convergence for \(\Delta t\to 0\). However, due to order reduction phenomena, it is more difficult to observe the appropriate order here. In particular, for values \(\varepsilon\ll\Delta t\), the scheme HB-I4DRK8-2s (Table 10) does not properly converge in the Newton iterations; for \(N=4\) the AMDRK method diverges immediately, hence no node is shown in the left plot of Figure 3, whereas the exact MDRK method shows a large error.
### Numerical observations of the Newton conditioning
So, even though all three approaches (3), (7) and (13)-(14) for calculating the derivatives \(y^{(k)}\) yield valid high-order algorithms, numerically we observe stability issues for stiff problems \(\varepsilon\ll\Delta t\). More specifically, when \(\varepsilon\ll\Delta t\), the Newton Jacobian \(F^{\prime}(Y)\) is badly conditioned. In Table 1 we display the arithmetic mean
Figure 3: Pareschi-Russo problem: Convergence of the AMDRK scheme (Def. 3) and the MDRK scheme (Def. 1). The final time \(T_{\rm end}=5\), timesteps start from \(N=4\); \(\varepsilon=10^{-3}\). Three Hermite-Birkhoff schemes and two SSP schemes are considered, see A. For \(N=4\) the HB-I4DRK8-2s scheme diverges, hence no node is shown.
Figure 2: Pareschi-Russo problem: Convergence of the AMDRK scheme (Def. 3) and the MDRK scheme (Def. 1). The final time is set to \(T_{\rm end}=5\), timesteps start from \(N=4\); \(\varepsilon=1\). Three Hermite-Birkhoff schemes and two SSP schemes are considered, see A.
of the condition numbers in the 1-norm w.r.t. the Newton iterations,
\[\mu(\operatorname{cond}(F^{\prime})):=\frac{\sum\limits_{i=1}^{N_{\text{iter}}} \operatorname{cond}(F^{\prime}(Y^{[i]}))}{N_{\text{iter}}}\,,\]
which we have obtained from solving Eq. (18) with the approximate implicit Taylor method of order \(\mathtt{r}=3\) for different values of \(\varepsilon\). To account for large timesteps, only a single step \(N=1\) of size \(\Delta t=1\) was applied. Newton tolerances, Eq. (17), were set to \(10^{-12}\) under a maximum of 10000 iterations.
In order to put the obtained condition numbers into perspective, the empirical orders w.r.t. \(\varepsilon\)
\[\operatorname{EO}_{\varepsilon}:=\frac{\log\!\left(\frac{\mu\!\left(\operatorname{cond}(F^{\prime}_{\varepsilon})\right)}{\mu\!\left(\operatorname{cond}(F^{\prime}_{10\varepsilon})\right)}\right)}{\log(10)} \tag{19}\]
are additionally computed (where \(F^{\prime}_{\varepsilon}\) denotes \(F^{\prime}\) for a particular value of \(\varepsilon\)). In this case, the experimental order appears to equal the order (\(\mathtt{r}=3\) here) of the implicit Taylor method. In fact, the same behavior is observed for any number of derivatives \(\mathtt{r}\) used. That is, we numerically observe the asymptotic behavior
\[\operatorname{cond}(F^{\prime})=\mathcal{O}(\varepsilon^{-\mathtt{r}}) \tag{20}\]
to hold true for any order of the implicit Taylor scheme. As a result of this bad conditioning, we observe in Table 1 that the (A)MDRK methods do not converge for the Pareschi-Russo equation (18) when \(\varepsilon=10^{-5}\). In general, when considering any DIMDRK scheme, the same derivative-dependent behavior holds true for any implicit stage. In Figure 4 we plot the condition number for the final stage of different DIMDRK schemes.
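As a small post-processing illustration (our own helper names; it assumes, as in Table 1, that consecutive entries correspond to \(\varepsilon\) decreasing by a factor of 10), the mean condition number and the empirical orders of Eq. (19) can be extracted as follows:

```python
import numpy as np

def mean_condition_number(cond_history):
    """Arithmetic mean mu(cond(F')) of the per-iteration 1-norm condition numbers."""
    return float(np.mean(cond_history))

def empirical_orders(mu_conds):
    """Empirical orders EO_eps, Eq. (19), for a sweep eps = 1, 1e-1, 1e-2, ...
    (each entry of mu_conds belongs to an eps ten times smaller than the previous one)."""
    return [float(np.log10(mu_conds[k] / mu_conds[k - 1]))
            for k in range(1, len(mu_conds))]

# e.g. the A-Direct column of Table 1:
print(empirical_orders([4.45e0, 2.89e2, 2.69e5, 2.71e8, 2.51e11, 8.20e14]))
# -> roughly [1.81, 2.97, 3.00, 2.97, 3.51]
```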
The FSMDRK implementation shows a behavior similar to the DIMDRK implementation, see Figure 5. An intuitive reasoning can be found in the block-matrix structure of the Newton Jacobian. Due to the stages all being solved for at once, the Jacobian reads as
\[F^{\prime}(Y)=\frac{\partial}{\partial Y}\left[\begin{array}{c}F_{1}\\ \vdots\\ F_{\mathtt{s}}\end{array}\right]=\left[\begin{array}{c|c|c}\partial_{Y_{1}} F_{1}&\ldots&\partial_{Y_{\mathtt{s}}}F_{1}\\ \hline\vdots&\ddots&\vdots\\ \hline\partial_{Y_{1}}F_{\mathtt{s}}&\ldots&\partial_{Y_{\mathtt{s}}}F_{\mathtt{ s}}\end{array}\right]\,, \tag{21}\]
with \(\partial_{Y_{\mathtt{\nu}}}F_{\mathtt{l}}\) the partial derivative of the \(\mathtt{l}\)-th stage equation w.r.t. the \(\mathtt{\nu}\)-th stage variable \(Y_{\mathtt{\nu}}\). This block-structure ensures a dependency of \(\operatorname{cond}(F^{\prime})\) on the conditioning \(\operatorname{cond}(\partial_{Y_{\mathtt{\nu}}}F_{\mathtt{l}})\) of the separate blocks. Hence, if \(\operatorname{cond}(\partial_{Y_{\mathtt{\nu}}}F_{\mathtt{l}})=\mathcal{O}(\varepsilon^{-\mathtt{r}})\), as is often observed in the DIMDRK implementation for an \(\mathtt{r}\)-derivative scheme, the complete Jacobian likely also behaves at least as \(\mathcal{O}(\varepsilon^{-\mathtt{r}})\).
**Remark 4**.: If the first stage is explicit, s.t. \(y^{n,1}=y^{n}\), we make the assumption that the FSMDRK approach instead solves for \(Y=\left(y^{n,2},\ldots,y^{n,\mathtt{s}}\right)\). The Hermite-Birkhoff schemes (Tables A.5-A.10) are good examples of RK-schemes with an explicit first stage.
Moreover, when we consider other problems with a similar dependency on a small non-dimensional value \(\varepsilon\) as for the PR problem (18), the same \(\mathcal{O}(\varepsilon^{-\mathtt{r}})\) behavior is observed. Condition number plots similar to the ones in Figs. 4 and 5 have been obtained for the van der Pol and Kaps problems described in [15].
**Remark 5**.: Although mathematically equivalent, Table 1 shows numerically different results between using exact Jacobians (in the sense that Faà di Bruno's formula is applied) and using recursive formulas for calculating the derivatives \(y^{(k)}\).
### Conditioning of a two-variable ODE system
Eq. (18), but also the van der Pol and Kaps equations, can be put into the form
\[y_{1}^{\prime}(t) =f_{1}(y_{1},y_{2}) \tag{22}\] \[y_{2}^{\prime}(t) =f_{2}(y_{1},y_{2})+\frac{g(y_{1},y_{2})}{\varepsilon}\,, \tag{23}\]
in which \(f_{1},f_{2}\) and \(g\) are smooth functions. In order to get a basic understanding of how the condition number of the Newton Jacobian behaves in terms of \(\varepsilon\), we consider the simplified system
\[y_{1}^{\prime}(t)=y_{2},\quad y_{2}^{\prime}(t)=\alpha y_{1}+\frac{g(y_{1},y_ {2})}{\varepsilon},\quad 0\leq t\leq T\,, \tag{24}\]
where \(\alpha\in\mathbb{R}\) and \(g:\mathbb{R}^{2}\to\mathbb{R}\) is smooth. We are interested in the analytical form of the Newton Jacobian obtained from the (A)MDRK method in the case that \(\varepsilon\ll\Delta t\).
**Example 1**.: _Applying implicit Taylor order \(\mathtt{r}=2\) to the system of ODEs (24) yields a system \(F=(y_{1}^{n},y_{2}^{n})^{T}\) with_
\[F=\left[\begin{array}{l}y_{1}^{n+1}-\Delta ty_{2}^{n+1}+\frac{\Delta t^{2}} {2}\left(\alpha y_{1}^{n+1}+\frac{g^{n+1}}{\varepsilon}\right)\\ y_{2}^{n+1}-\Delta t\left(\alpha y_{1}^{n+1}+\frac{g^{n+1}}{\varepsilon}\right)+ \frac{\Delta t^{2}}{2}\left(\alpha y_{2}^{n+1}+\frac{\partial_{y_{1}}g^{n+1} }{\varepsilon}y_{2}^{n+1}+\frac{\partial_{y_{2}}g^{n+1}}{\varepsilon}(\alpha y _{1}^{n+1}+\frac{g^{n+1}}{\varepsilon})\right)\end{array}\right]. \tag{25}\]
_Note that \(g^{n+1}\) has been defined as \(g(y_{1}^{n+1},y_{2}^{n+1})\)._
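To make the conditioning discussion concrete, the map \(F\) of Eq. (25) can be coded directly. The following NumPy sketch uses the PR choice \(\alpha=1\), \(g(y_{1},y_{2})=\sin(y_{1})-y_{2}\), a finite-difference Newton Jacobian and the PR initial data as evaluation point; step sizes and the loop over \(\varepsilon\) are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

def taylor2_F(Y, dt, eps, alpha=1.0):
    """The map F of Eq. (25) for system (24) with g(y1, y2) = sin(y1) - y2,
    so that the stage equation reads F(Y) = (y1^n, y2^n)^T."""
    y1, y2 = Y
    g, g1, g2 = np.sin(y1) - y2, np.cos(y1), -1.0    # g and its partial derivatives
    yp2 = alpha * y1 + g / eps                       # y2' (which also equals y1'')
    F1 = y1 - dt * y2 + 0.5 * dt**2 * yp2
    F2 = y2 - dt * yp2 + 0.5 * dt**2 * (alpha * y2 + (g1 * y2 + g2 * yp2) / eps)
    return np.array([F1, F2])

def fd_jacobian(Y, dt, eps, h=1e-6):
    """Central finite-difference approximation of the Newton Jacobian F'(Y)."""
    J = np.zeros((2, 2))
    for j in range(2):
        e = np.zeros(2); e[j] = h
        J[:, j] = (taylor2_F(Y + e, dt, eps) - taylor2_F(Y - e, dt, eps)) / (2 * h)
    return J

Y = np.array([np.pi / 2, 1.0])                       # PR initial data
for eps in (1e-1, 1e-2, 1e-3):
    print(eps, np.linalg.cond(fd_jacobian(Y, dt=1.0, eps=eps), p=1))
```

Since \(Dg=0\) for this choice of \(g\), the printed condition numbers should grow roughly like \(\varepsilon^{-2}\), consistent with the \(\mathcal{O}(\varepsilon^{-2})\) behavior for two-derivative schemes discussed in Remark 6 below.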
\begin{table}
\begin{tabular}{c|c|c|c|c}
Method & \(\varepsilon\) & \(N_{\rm iter}\) & \(\mu(\operatorname{cond}(F^{\prime}))\) & EO\({}_{\varepsilon}\) \\ \hline
A-Direct & \(1\) & \(5\) & \(4.45\cdot 10^{0}\) & \\
 & \(10^{-1}\) & \(6\) & \(2.89\cdot 10^{2}\) & \(1.81\) \\
 & \(10^{-2}\) & \(34\) & \(2.69\cdot 10^{5}\) & \(2.97\) \\
 & \(10^{-3}\) & \(75\) & \(2.71\cdot 10^{8}\) & \(3.00\) \\
 & \(10^{-4}\) & \(226\) & \(2.51\cdot 10^{11}\) & \(2.97\) \\
 & \(10^{-5}\) & \(702\) & \(8.20\cdot 10^{14}\) & \(3.51\) \\ \hline
EJ-Direct & \(1\) & \(5\) & \(3.44\cdot 10^{0}\) & \\
 & \(10^{-1}\) & \(7\) & \(2.78\cdot 10^{2}\) & \(1.91\) \\
 & \(10^{-2}\) & \(124\) & \(3.00\cdot 10^{5}\) & \(3.03\) \\
 & \(10^{-3}\) & \(45\) & \(5.57\cdot 10^{8}\) & \(3.27\) \\
 & \(10^{-4}\) & \(10000\) & \(6.58\cdot 10^{10}\) & \(2.07\) \\
 & \(10^{-5}\) & \(10000\) & \(2.70\cdot 10^{13}\) & \(2.61\) \\ \hline
rec-Direct & \(1\) & \(5\) & \(3.44\cdot 10^{0}\) & \\
 & \(10^{-1}\) & \(7\) & \(2.78\cdot 10^{2}\) & \(1.91\) \\
 & \(10^{-2}\) & \(124\) & \(3.00\cdot 10^{5}\) & \(3.03\) \\
 & \(10^{-3}\) & \(45\) & \(5.57\cdot 10^{8}\) & \(3.27\) \\
 & \(10^{-4}\) & \(10000\) & \(6.32\cdot 10^{10}\) & \(2.05\) \\
 & \(10^{-5}\) & \(10000\) & \(3.31\cdot 10^{13}\) & \(2.72\) \\
\end{tabular}
\end{table}
Table 1: Newton statistics of the implicit Taylor method of order 3 applied for a single timestep (\(N=1\)) of size \(T_{\rm end}=1\) (\(\Delta t=1\)) to the PR problem (18). Tolerances, Eq. (17), were set to \(10^{-12}\) under a maximum of \(10000\) iterations. Left: The amount of iterations \(N_{\rm iter}\) and the average condition number in the 1-norm of the Newton Jacobian \(\mu(\mbox{\rm cond}(F^{\prime}))\). EO\({}_{\varepsilon}\) is the experimental order of the average w.r.t. \(\varepsilon\) according to Eq. (19). For \(\varepsilon=10^{-5}\), none of the methods converged, with A-Direct diverging at \(702\) iterations. Right: The first \(330\) iterations of A-Direct. We can observe that for \(\varepsilon=10^{-5}\) the scheme becomes unstable and diverges eventually at iteration \(702\).
**Proposition 3**.: _Assume that \(g\) and all its partial derivatives are \(\mathcal{O}(1)\), and assume that \(\varepsilon\ll\Delta t\). Then, the Newton Jacobian \(F^{\prime}\) obtained from solving the system of ODEs (24) with the implicit Taylor method of order \(\mathbf{r}=2\) behaves in the 1-norm as_
\[\left\|F^{\prime}\right\|_{1}=\mathcal{O}\!\left(\frac{\Delta t^{2}}{ \varepsilon^{2}}\right)\,,\quad\text{and}\quad\operatorname{cond}(F^{\prime}) =\mathcal{O}\left(\varepsilon^{-1}\right)\,.\]
**Remark 6** (part 1).: The behavior shown in Prop. 3 is not what we observe from the numerical experiments in Figs. 4 and 5, where we obtained \(\operatorname{cond}(F^{\prime})=\mathcal{O}(\varepsilon^{-2})\) for two-derivative schemes. There is no contradiction here though. We reason in part 2 of this remark that, often, an order of \(\varepsilon\) is gained through the determinant \(\det(F^{\prime})\).
Figure 4: DIMDRK schemes applied as a Direct method (Figure 1) to the PR problem (18). The average condition number in the 1-norm of the Newton Jacobian obtained from the last RK-stage is shown for different values of \(\varepsilon\). The behavior \(\operatorname{cond}(F^{\prime})=\mathcal{O}(\varepsilon^{-\mathbf{r}})\) is observed, \(\mathbf{r}\) being the amount of derivatives. A single timestep (\(N=1\)) of size \(T_{\operatorname{end}}=1.25\) (\(\Delta t=1.25\)) has been considered with tolerances, Eq. (17), set to \(10^{-12}\) under a maximum of 1000 iterations.
Figure 5: FSMDRK schemes applied as a Direct method (Figure 1) to the PR problem (18). The average condition number in the 1-norm of the Newton Jacobian is shown for different values of \(\varepsilon\). The behavior \(\operatorname{cond}(F^{\prime})=\mathcal{O}(\varepsilon^{-\mathbf{r}})\) is observed, \(\mathbf{r}\) being the amount of derivatives. A single timestep (\(N=1\)) of size \(T_{\operatorname{end}}=1.25\) (\(\Delta t=1.25\)) has been considered with tolerances, Eq. (17), set to \(10^{-12}\) under a maximum of 1000 iterations.
Proof of Proposition 3.: For simplicity, the notation \((u,v)=(y_{1},y_{2})\) will be used in what follows. From the construction of \(F\) as given in Eq. (25), it is apparent that the Newton Jacobian satisfies
\[\left\|F^{\prime}\right\|_{1}=\mathcal{O}\!\left(\frac{\Delta t^{2}}{\varepsilon ^{2}}\right)\,,\]
under the assumption that \(\varepsilon\ll\Delta t\). For the behavior of the inverse matrix \({F^{\prime}}^{-1}\) we make use of the identity \(A^{-1}=\frac{1}{\det(A)}\operatorname{adj}(A)\). As \(F^{\prime}\) is a \(2\times 2\) matrix, its adjugate is obtained from simply shuffling terms and possibly adding a minus sign. Consequently, the behavior of its norm remains unaffected w.r.t \(\varepsilon\) and \(\Delta t\). The determinant can be explicitly computed as
\[\det(F^{\prime})=1+\frac{1}{4}\frac{\Delta t^{4}}{\varepsilon^{3}}\underbrace {(\partial_{u}g\partial_{vv}g-\partial_{v}g\partial_{uv}g)}_{Dg}g+\mathcal{O }(\varepsilon^{-2})\,. \tag{26}\]
So in total:
\[\operatorname{cond}(F^{\prime})=\left\|F^{\prime}\right\|_{1}\cdot\left\|{F^ {\prime}}^{-1}\right\|_{1}=\frac{\left\|F^{\prime}\right\|_{1}\left\| \operatorname{adj}(F^{\prime})\right\|_{1}}{\left|\det(F^{\prime})\right|}= \mathcal{O}\!\left(\frac{\Delta t^{2}}{\varepsilon^{2}}\right)\mathcal{O}\! \left(\frac{\varepsilon^{3}}{\Delta t^{4}}\right)\mathcal{O}\!\left(\frac{ \Delta t^{2}}{\varepsilon^{2}}\right)=\mathcal{O}(\varepsilon^{-1})\,,\]
under the assumption that \(\varepsilon\ll\Delta t\).
**Remark 6** (part 2).: In equation (26) we observe that \(\det(F^{\prime})=\mathcal{O}\!\left(\frac{\Delta t^{4}}{\varepsilon^{3}}\right)\) under the assumption that \(\varepsilon\ll\Delta t\). In many cases we nonetheless observe \(\det(F^{\prime})=\mathcal{O}(\varepsilon^{-2})\):
1. The values are mainly determined by \(g(y_{1},y_{2})\) and a function of partial derivatives which we have denoted \(Dg(y_{1},y_{2})\). In case of the PR-problem (18), \(\alpha=1\) and \(g(y_{1},y_{2})=\sin(y_{1})-y_{2}\). Therefore, any mixed partial derivative of \(g\), as well as the second partial derivative of \(g\) w.r.t. \(y_{2}\), equals \(0\). So for the PR-problem \(Dg=0\).
2. In general, it does not need to hold true that \(Dg=0\). The van der Pol problem (as in [15]) for instance has \(g(y_{1},y_{2})=(1-y_{1}^{2})y_{2}-y_{1}\), and therefore yields \(Dg=2y_{1}(1-y_{1}^{2})\). Here, a clarification can be given by the (very) harsh restriction set in Prop. 3 that \(g\) and all its partial derivatives are \(\mathcal{O}(1)\), which typically is not true. For well-prepared initial conditions and an asymptotically consistent algorithm, \(g=\mathcal{O}(\varepsilon)\)[16].
A similar type of effect takes place for a higher amount of derivatives \(\mathtt{r}\); the resulting conditioning is \(\mathcal{O}(\varepsilon^{-\mathtt{r}})\).
## 4 Derivatives as members of the solution
One of the main issues for the \(\mathcal{O}(\varepsilon^{-\mathtt{r}})\) conditioning of the direct (A)MDRK method is the fact that with each higher derivative \(y^{(k)}\), the order of \(\varepsilon\) increases simultaneously. Such behavior is to be expected due to a built-in dependency on the lower order derivatives, _i.e._
\[\begin{split} y^{(1)}&=\Phi(y),\\ y^{(k)}&=\Psi_{k}(y,y^{(1)},\ldots,y^{(k-1)}),\quad 2 \leq k\leq\mathtt{r}\,.\end{split} \tag{27}\]
The operator \(\Psi_{k}\) is then either the relation that uses the Exact Jacobians (EJ) as in (3), so that
\[\Psi_{2} =\Phi^{\prime}(y)y^{(1)}\,, \tag{28a}\] \[\Psi_{3} =\Phi^{\prime\prime}(y)\bullet\left[y^{(1)}|y^{(1)}\right]+\Phi^{ \prime}(y)y^{(2)}\,,\] (28b) \[\Psi_{4} =\Phi^{\prime\prime\prime}(y)\bullet\left[y^{(1)}|y^{(1)}|y^{(1)} \right]+3\Phi^{\prime\prime}(y)\bullet\left[y^{(1)}|y^{(2)}\right]+\Phi^{ \prime}(y)y^{(3)}\,, \tag{28c}\]
and so forth, or is given recursively from (7), so that
\[\Psi_{k+1}=\left[\frac{\mathrm{d}^{k-1}\Phi(y)}{\mathrm{d}t^{k-1}}\right]^{\prime }y^{(1)}\,, \tag{29}\]
for \(k=1,\ldots,\mathtt{r}-1\).
**Example 2** (part 1).: _Consider the implicit Taylor scheme of order \(\mathtt{r}=3\), then there is only a single stage \(Y=y\) to solve for. In terms of the relations (27), the Newton system \(F(y)=0\) simply writes as_
\[y-\Delta t\Phi(y)+\frac{\Delta t^{2}}{2}\Psi_{2}-\frac{\Delta t^{3}}{6}\Psi_{3 }-y^{n}=0\,. \tag{30}\]
From the above example it is clear that computing \(F^{\prime}(Y)\) necessitates deriving the formulas \(\Psi_{k}\) with respect to \(y\),
\[\frac{\partial y^{(k)}}{\partial y}=\partial_{y}\Psi_{k}+\sum_{m=1}^{k-1} \partial_{y^{(m)}}\Psi_{k}\cdot\frac{\partial y^{(m)}}{\partial y}\,. \tag{31}\]
It is exactly because of this recursive dependency on lower order derivatives that the order of \(\varepsilon\) increases in \(\mathrm{cond}(F^{\prime})\). A similar recursion holds true when calculating the approximate values \(\widetilde{y}^{(k)}\) with the recursive formulas (13)-(14).
In order to better understand the \(\varepsilon\)-behavior, we investigate a linear problem in the sequel. To reduce the complexity of involved formulas, we only consider scalar problems (\(m=1\)) in this section.
### \(\varepsilon\)-scaled Dahlquist test equation
We consider an \(\varepsilon\)-scaled Dahlquist test problem
\[y^{\prime}=\frac{\lambda}{\varepsilon}y,\qquad y(0)=1, \tag{32}\]
with the exact solution \(y(t)=\mathrm{e}^{(\lambda/\varepsilon)t}\). As the equation is linear, the AMDRK method (A-Direct) coincides with the MDRK method that uses EJ (EJ-Direct), see for example [17, Proposition 1]2. The rationale behind the observed behavior follows immediately from the next lemma.
Footnote 2: The AMDRK method approximates the derivatives \(y^{(k)}\) on the basis of finite differences. For linear problems finite differences are exact.
**Lemma 4**.: _The derivatives \(y^{(k)}\) and their Jacobians \(\partial_{y}y^{(k)}\) of the \(\varepsilon\)-scaled Dahlquist test are \(\mathcal{O}(\varepsilon^{-k})\), i.e._
\[y^{(k)}=\frac{\mathrm{d}^{k-1}}{\mathrm{d}t^{k-1}}\Phi=\left(\frac{\lambda}{ \varepsilon}\right)^{k}y=\mathcal{O}(\varepsilon^{-k}),\qquad\frac{\partial y ^{(k)}}{\partial y}\!\!=\!\!\left[\frac{\mathrm{d}^{k-1}\Phi}{\mathrm{d}t^{k- 1}}\right]^{\prime}=\left(\frac{\lambda}{\varepsilon}\right)^{k}=\mathcal{O}( \varepsilon^{-k})\,.\]
For the conditioning of the Jacobian, it would be better to unfold the \(\varepsilon\)-dependency through the recursion given by \(\Psi_{k}\) in Eqs. (27). When applying EJ (and thus also for AMDRK) we have
\[\Psi_{k}=\frac{\lambda}{\varepsilon}y^{(k-1)}\,, \tag{33}\]
whereas recursion (rec-Direct) gives the relation
\[\Psi_{k}=\left(\frac{\lambda}{\varepsilon}\right)^{k-1}y^{(1)}\,, \tag{34}\]
for \(k=1,\ldots,\mathtt{r}\). Already here we can notice that the first out of these two is more favorable, as it unfolds the \(\varepsilon\)-dependency more thoroughly.
### Recursive dependencies as additional system equations
In order to achieve such an unfolding of the \(\varepsilon\)-dependency, Baeza et al. [7] suggest taking the derivatives as members of the solution. Instead of solving directly for \(Y\) alone, the additional independent unknowns
\[z_{k}\approx y^{(k)}\,,\qquad 1\leq k\leq\mathtt{r}, \tag{35}\]
are sought for using the same recursive dependencies
\[\begin{split} z_{1}&=\Phi(z_{0}),\\ z_{k}&=\Psi_{k}(z_{0},z_{1},\ldots,z_{k-1}),\quad 2 \leq k\leq\mathtt{r},\end{split} \tag{36}\]
where we have defined \(z_{0}:=Y\). In contrast to the single relation \(F(Y)=0\), we now solve the \(\mathtt{r}+1\) relations as a bigger system \(\mathcal{F}(z)=0\), with \(z:=(z_{0},z_{1},\ldots,z_{\mathtt{r}})\). In summary, the recursive dependency in one single formula is traded off for a larger system containing the \(\mathtt{r}\) additional relations given by (36).
**Example 2 (part 2).**_For the third order Taylor scheme (30),_
\[z_{0}-\Delta tz_{1}+\frac{\Delta t^{2}}{2}z_{2}-\frac{\Delta t^{3}}{6}z_{3}-y ^{n}=0\,, \tag{37}\]
_and_
\[\mathcal{F}(z)=\begin{bmatrix}z_{0}-\Delta tz_{1}+\frac{\Delta t^{2}}{2}z_{2 }-\frac{\Delta t^{3}}{6}z_{3}-y^{n}\\ \Phi(z_{0})-z_{1}\\ \Psi_{2}(z_{0},z_{1})-z_{2}\\ \Psi_{3}(z_{0},z_{1},z_{2})-z_{3}\end{bmatrix}\,. \tag{38}\]
_The Jacobian is now less clustered, in our example_
\[\mathcal{F}^{\prime}(z)=\begin{bmatrix}1&-\Delta t&\frac{\Delta t^{2}}{2}&- \frac{\Delta t^{3}}{6}\\ \Phi^{\prime}(z_{0})&-1&0&0\\ \partial z_{0}\Psi_{2}&\partial z_{1}\Psi_{2}&-1&0\\ \partial z_{0}\Psi_{3}&\partial z_{1}\Psi_{3}&\partial z_{2}\Psi_{3}&-1\end{bmatrix}\,. \tag{39}\]
_In the case of the \(\varepsilon\)-scaled Dahlquist test (32), the relations (33) and (34) respectively yield_
\[\mathcal{F}^{\prime}_{\rm EJ}(z)=\begin{bmatrix}1&-\Delta t&\frac{\Delta t^{2 }}{2}&-\frac{\Delta t^{3}}{6}\\ -\frac{1}{\varepsilon}&-1&0&0\\ 0&-\frac{1}{\varepsilon}&-1&0\\ 0&0&-\frac{1}{\varepsilon}&-1\end{bmatrix}\quad\text{and}\quad\mathcal{F}^{ \prime}_{\rm rec}(z)=\begin{bmatrix}1&-\Delta t&\frac{\Delta t^{2}}{2}&-\frac {\Delta t^{3}}{6}\\ -\frac{1}{\varepsilon}&-1&0&0\\ 0&-\frac{1}{\varepsilon}&-1&0\\ 0&\frac{1}{\varepsilon^{2}}&0&-1\end{bmatrix}. \tag{40}\]
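The different structure of these two Jacobians is easy to probe numerically. The sketch below builds the matrices of Eq. (40), taking \(\lambda=-1\) for concreteness since (40) is written without an explicit \(\lambda\), and compares their 1-norm condition numbers for decreasing \(\varepsilon\).

```python
import numpy as np

def dersol_jacobians(eps, dt=1.0):
    """The 4x4 DerSol Jacobians of Eq. (40) for the eps-scaled Dahlquist test (32)."""
    top = [1.0, -dt, dt**2 / 2.0, -dt**3 / 6.0]
    F_EJ = np.array([top,
                     [-1.0 / eps, -1.0, 0.0, 0.0],
                     [0.0, -1.0 / eps, -1.0, 0.0],
                     [0.0, 0.0, -1.0 / eps, -1.0]])
    F_rec = np.array([top,
                      [-1.0 / eps, -1.0, 0.0, 0.0],
                      [0.0, -1.0 / eps, -1.0, 0.0],
                      [0.0, 1.0 / eps**2, 0.0, -1.0]])
    return F_EJ, F_rec

for eps in (1e-1, 1e-2, 1e-3, 1e-4):
    F_EJ, F_rec = dersol_jacobians(eps)
    print(eps, np.linalg.cond(F_EJ, p=1), np.linalg.cond(F_rec, p=1))
```

For this scalar linear test one should observe \(\operatorname{cond}(\mathcal{F}^{\prime}_{\rm EJ})\) growing roughly like \(\varepsilon^{-1}\) while \(\operatorname{cond}(\mathcal{F}^{\prime}_{\rm rec})\) grows like \(\varepsilon^{-2}\), reflecting the more thorough unfolding of the \(\varepsilon\)-dependency by the EJ relations.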
Regarding AMDRK schemes, Baeza et al. [7] introduce the scaled unknowns \(z_{k}\approx\Delta t^{k-1}\widetilde{y}^{(k)}\), \(1\leq k\leq\mathtt{r}\). With this choice, analogous relations
\[\begin{split} z_{1}&=\Phi(z_{0}),\\ z_{k}&=\widetilde{\Psi}_{k}(z_{0},z_{1},\ldots,z_{k-1}), \quad 2\leq k\leq\mathtt{r},\end{split} \tag{41}\]
are found on the basis of the formulas (13)-(14), namely
\[\widetilde{\Psi}_{k}:=\Delta t^{k-1}P^{(k-1)}\boldsymbol{\Phi}_{T}^{k-1, \langle n\rangle}\,. \tag{42}\]
In here,
\[\Phi_{T}^{k-1,n+j}:=\Phi\left(z_{0}+\Delta t\sum_{m=1}^{k-1}\frac{j^{m}}{m!}z_ {m}\right)\,, \tag{43}\]
for \(j=-p,\ldots,p\). Note the slight redefinition of \(\Phi_{T}^{k-1,n+j}\) in contrast to Eq. (14) to account for the \(\Delta t\) dependency of the \(\widetilde{\Psi}_{k}\). For a specific example of the AMDRK method, and its Jacobian \(\widetilde{\mathcal{F}}^{\prime}(z)\), we refer the reader to [7, Subsection 4.2].
As a counterpart to the "Direct" MDRK methods in Section 2, we denote the MDRK approach in which the derivatives are taken as members of the solution by "DerSol". A summary of the six different MDRK approaches is presented in Figure 1. From the specific Taylor example that we have investigated in this section, there are two important observations to be made:
* Most importantly, compared to \(\mathcal{F}^{\prime}_{\text{EJ}}\) and \(\mathcal{F}^{\prime}_{\text{rec}}\), no second-order Jacobian \(\Phi^{\prime\prime}\) occurs for the approximate procedure. From (42)-(43) it can be observed that the AMDRK method relies solely on finite-difference computations of \(\Phi\). Hence, \(\Phi^{\prime}\) is sufficient for retrieving the partial derivatives of \(\widetilde{\Psi}_{k}\). If the problem is no longer scalar (\(m>1\)), no tensor calculations are needed, whereas such calculations cannot be avoided for an exact MDRK scheme.
* Starting from three derivatives, the matrices \(\mathcal{F}^{\prime}_{\text{EJ}}\) and \(\mathcal{F}^{\prime}_{\text{rec}}\) are not the same anymore, i.e. \(\mathcal{F}^{\prime}_{\text{rec}}\) will only fill up the first two columns (and the diagonal), whereas \(\mathcal{F}^{\prime}_{\text{EJ}}\) has a full lower-triangular submatrix. In the numerical results below it will be demonstrated that there is significantly different behavior in the conditioning of these Jacobians.
When effectively applying the DerSol approach to several DIMDRK schemes, different orders of \(\varepsilon\) can be observed in the condition numbers, see Figure 6. In comparison to the Direct MDRK approach (see Figure 4), many schemes behave as
\[\text{cond}(\mathcal{F}^{\prime})=\mathcal{O}(\varepsilon^{-1})\,, \tag{44}\]
confirming the successful unfolding of the \(\varepsilon\)-dependency through the \(\mathbf{r}\) additional equations in the A-DerSol and (partially) in the EJ-DerSol approach. The same cannot be said for the rec-DerSol approach, where the order seems to behave as \(\mathcal{O}(\varepsilon^{-\mathbf{r}+1})\). This behavior was foreshadowed in relation (34): exactly one recursion order is resolved, thereby unfolding exactly one order of the \(\varepsilon\)-dependency. For that reason, applying the rec-DerSol approach for practical purposes is strongly discouraged.
The EJ-DerSol approach as well does not seem to be flawless when we consider the scheme HB-I4DRK8-2s (Table 10). Instead, the order \(\mathcal{O}(\varepsilon^{-3})\) seems to be achieved. In fact, numerically we observe that the
Figure 6: DIMDRK schemes applied as a DerSol method (Figure 1) to the PR problem (18). The average condition number in the 1-norm of the Newton Jacobian obtained from the last RK-stage is shown for different values of \(\varepsilon\). The behavior \(\text{cond}(\mathcal{F}^{\prime})=\mathcal{O}(\varepsilon^{-1})\) is observed for the A-DerSol and EJ-DerSol methods, the rec-DerSol methods seem to behave as \(\mathcal{O}(\varepsilon^{-\mathbf{r}+1})\), \(\mathbf{r}\) being the amount of derivatives. A single timestep (\(N=1\)) of size \(T_{\text{end}}=1.25\) (\(\Delta t=1.25\)) has been considered with tolerances, Eq. (17), set to \(10^{-12}\) under a maximum of 1000 iterations.
scheme tends toward \(\mathcal{O}(\varepsilon^{-2})\) for \(\varepsilon\) down to \(10^{-8}\). From running all schemes in Figure 6 down to \(\varepsilon=10^{-8}\), this behavior appears to be unique among the applied DIMDRK schemes. Moreover, when considering different problems (van der Pol and Kaps, see [15]), all the same schemes show \(\mathcal{O}(\varepsilon^{-1})\) down to \(\varepsilon=10^{-8}\), except for HB-I4DRK8-2s applied to van der Pol. For both the A-DerSol and the EJ-DerSol approach, around \(\varepsilon\approx 10^{-6}\) there is a sudden change from \(\mathcal{O}(\varepsilon^{-1})\) to \(\mathcal{O}(\varepsilon^{-4})\) and worse.
This leads us to believe that the observed phenomena of the HB-I4DRK8-2s scheme are a result of floating-point arithmetic. The double-precision format in MATLAB has a machine precision of \(2^{-52}\approx 2.22\cdot 10^{-16}\). Given a value \(\varepsilon=10^{-4}\), a four-derivative Runge-Kutta method yields values \(\varepsilon^{4}=10^{-16}\) in the denominator of \(z_{4}=\Psi_{4}(z_{0},z_{1},z_{2},z_{3})\). Although the implicit Taylor method of order 4 gives the requested behavior for the condition number, its Butcher coefficients are larger than those of the HB-I4DRK8-2s scheme (see Tables 4 and 10). The application of many-derivative schemes to stiff problems with very small values of \(\varepsilon\) should therefore be approached with sufficient awareness of the machine accuracy being used.
### The (A)MDRK scheme for a general amount of stages
In the most general case, it is not possible to solve for the stages one at a time; an FSMDRK approach is therefore a necessity. Thus, there is a need to solve for \(Y=\left(y^{n,1},\ldots,y^{n,\mathtt{s}}\right)\) at once. This entails that for each stage \(\mathtt{r}+1\) separate equations have to be solved, leading to a Jacobian matrix \(\mathcal{F}^{\prime}(z)\) of size \(\left((\mathtt{r}+1)\mathtt{s}M\right)^{2}\).
When using the DerSol approach, there are two options for ordering the unknown variables: either all the variables of the same stage are grouped together, or the variables are collected by degree of the derivatives. In this work we have chosen to do the ordering in a _stage-based_ manner
\[z=\left(z^{n,1},\ldots,z^{n,\mathtt{s}}\right)\,, \tag{45}\]
with for each stage \(z^{n,l}:=(z_{0}^{n,l},z_{1}^{n,l},\ldots,z_{\mathtt{r}}^{n,l})\). This allows us to obtain a block-structure analogous to (21) in the Direct implementation:
\[\mathcal{F}^{\prime}(z)=\left[\begin{array}{c|c|c}\partial_{z^{n,1}}\mathcal{F}_{1}&\ldots&\partial_{z^{n,\mathtt{s}}}\mathcal{F}_{1}\\ \hline\vdots&\ddots&\vdots\\ \hline\partial_{z^{n,1}}\mathcal{F}_{\mathtt{s}}&\ldots&\partial_{z^{n,\mathtt{s}}}\mathcal{F}_{\mathtt{s}}\end{array}\right]\,, \tag{46}\]
where each block-matrix \(\partial_{z^{n,\mathtt{\nu}}}\mathcal{F}_{\mathtt{l}}\) is of size \(\left((\mathtt{r}+1)M\right)^{2}\), with a construction similar to the matrix (39) in Example 2 (part 2).
Figure 7 displays the average condition numbers \(\mu(\text{cond}(\mathcal{F}^{\prime}))\) for different MDRK schemes. The results are very similar to those of the DIMDRK implementation in Figure 6, so the previous remarks remain valid for the FSMDRK implementation. It is clear that \(\mathcal{F}^{\prime}(z)\) quickly grows large for an increasing number of derivatives \(\mathtt{r}\) and stages \(\mathtt{s}\), which consequently has an impact on the performance of the MDRK method. Still, it might be beneficial to introduce the additional derivative relations for the overall efficiency of the method. As highlighted before w.r.t. the conditioning of the block-Jacobian (21), here as well \(\text{cond}(\mathcal{F}^{\prime}(z))\) is strongly dependent on the conditioning of the separate blocks. If \(\text{cond}(\partial_{z^{n,\mathtt{\nu}}}\mathcal{F}_{\mathtt{l}})=\mathcal{O}(\varepsilon^{-1})\) can be guaranteed, there might be a significant difference in the total number of Newton iterations compared to the Direct counterpart. Furthermore, there is more certainty that the method itself will converge at all, which, for example, is not always the case for A-Direct methods (see Table 1).
## 5 Conclusion and outlook
We have developed a family of implicit Jacobian-free multiderivative Runge-Kutta (MDRK) solvers for stiff systems of ODEs. These so-called AMDRK methods have been tailored to deal with the unwanted outcomes that come from the inclusion of a higher amount of derivatives: (1) each added \(k\)-th derivative yields a power term \(\varepsilon^{k}\) in the denominator, and (2) the complexity of the formulas for the derivatives increases rapidly with each derivative order.
When adopting Newton's method as a nonlinear solver, these two negatives become noticeable in the Jacobian: the condition number of the Jacobian grows exponentially with each added derivative, as well as the Jacobian having to be obtained from intricate formulas that request tensor calculations. In order to manage these negatives, the AMDRK methods have been established along the lines of the Approximate Implicit Taylor method in [7].
First, by adding an additional equation to the ODE system for each derivative, the derivatives become part of the unknown solution, which we named MDRK-DerSol. In this manner, the \(\varepsilon\)-dependencies are distributed among the newly added relations. Numerically we have shown that this procedure alleviates the exponential growth in the condition number that is typical for direct MDRK methods (correspondingly named MDRK-Direct), in some cases resulting in far fewer Newton iterations per timestep. Second, by recursively approximating the derivatives using centered differences, no complicated formulas or tensor calculations are needed. The desired convergence order \(\min(2p+1,q)\) is achieved, with \(2p+1\) denoting the number of stencil points used for the centered differences and \(q\) the order of the MDRK scheme.
Despite the (A)MDRK-DerSol methods having a more favorable behavior of the condition number for \(\varepsilon\to 0\) in comparison to (A)MDRK-Direct methods, the total system grows in size, and therefore might be less efficient. In order to balance, on the one hand, the number of Newton iterations per timestep and, on the other hand, the computing time needed for solving the linear system, it might be beneficial in the future to establish a threshold value that switches between (A)MDRK-DerSol and (A)MDRK-Direct methods. Such a threshold can play a significant role when transitioning to parabolic PDEs with viscous effects, where the size of the linear systems depends on the spatial resolution. A careful consideration w.r.t. efficiency will be needed in the development of MDRK-DerSol approaches for PDEs with viscous effects.
#### Declarations
_Conflicts of interest_ The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Availability of data and materialThe datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request via [email protected].
Figure 7: FSMDRK schemes applied as a DerSol method (Figure 1) to the PR problem (18). The average condition number in the 1-norm of the Newton Jacobian is shown for different values of \(\varepsilon\). The behavior \(\text{cond}(\mathcal{F}^{\prime})=\mathcal{O}(\varepsilon^{-1})\) is observed for the A-DerSol and EJ-DerSol methods, the rec-DerSol methods seem to behave as \(\mathcal{O}(\varepsilon^{-r+1})\), \(\mathbf{r}\) being the amount of derivatives. A single timestep (\(N=1\)) of size \(T_{\text{end}}=1.25\) (\(\Delta t=1.25\)) has been considered with tolerances, Eq. (17), set to \(10^{-12}\) under a maximum of 1000 iterations.
## Appendix A Butcher tableaux
All multiderivative Runge-Kutta methods used in this paper are displayed in this section. A typical multiderivative Runge-Kutta method can be summarized in an extended Butcher tableau of the form shown in Table 2.
We use the explicit and implicit Taylor method reformulated as RK scheme, two-derivative Hermite-Birkhoff (HB) schemes taken from [8] together with new higher-derivative HB-schemes designed along the same line of reasoning, and Strong-Stability Preserving schemes taken from [9]. The corresponding Butcher tableaux of the HB schemes have been generated using a short MATLAB code which can be downloaded from the personal webpage of Jochen Schutz at www.uhasselt.be/cmat or directly from [http://www.uhasselt.be/Documents/CMAT/Code/generate_HBRK_tables.zip](http://www.uhasselt.be/Documents/CMAT/Code/generate_HBRK_tables.zip). |
2301.12119 | Introduction to the Fifth-rung Density Functional Approximations:
Concept, Formulation, and Applications | The widespread use of (generalized) Kohn-Sham density functional theory
(KS-DFT) lies in the fact that hierarchical sets of approximations of the
exchange-correlation (XC) energy functional can be designed, offering versatile
choices to satisfy different levels of accuracy needs. The XC functionals
standing on the fifth (top) rung of the Jacob's ladder incorporate the
information of unoccupied Kohn-Sham orbitals, and by doing so can describe
seamlessly non-local electron correlations that the lower-rung functionals fail
to capture. The doubly hybrid approximations (DHAs) and random phase
approximation (RPA) based methods are two representative classes of fifth-rung
functionals that have been under active development over the past two decades.
In this review, we recapitulate the basic concepts of DHAs and RPA, derive
their underlying theoretical formulation from the perspective of
adiabatic-connection fluctuation-dissipation theory, and describe the
implementation algorithms based on the resolution-of-identity technique within
an atomic-orbital basis-set framework. Illustrating examples of practical
applications of DHAs and RPA are presented, highlighting the usefulness of
these functionals in resolving challenging problems in computational materials
science. The most recent advances in the realms of these two types of
functionals are briefly discussed. | Igor Ying Zhang, Xinguo Ren | 2023-01-28T08:09:37Z | http://arxiv.org/abs/2301.12119v1 | Introduction to the Fifth-rung Density Functional Approximations: Concept, Formulation, and Applications
###### Abstract
The widespread use of (generalized) Kohn-Sham density functional theory (KS-DFT) lies in the fact that hierarchical sets of approximations of the exchange-correlation (XC) energy functional can be designed, offering versatile choices to satisfy different levels of accuracy needs. The XC functionals standing on the fifth (top) rung of the Jacob's ladder incorporate the information of unoccupied Kohn-Sham orbitals, and by doing so can describe seamlessly non-local electron correlations that the lower-rung functionals fail to capture. The doubly hybrid approximations (DHAs) and random phase approximation (RPA) based methods are two representative classes of fifth-rung functionals that have been under active development over the past two decades. In this review, we recapitulate the basic concepts of DHAs and RPA, derive their underlying theoretical formulation from the perspective of adiabatic-connection fluctuation-dissipation theory, and describe the implementation algorithms based on the resolution-of-identity technique within an atomic-orbital basis-set framework. Illustrating examples of practical applications of DHAs and RPA are presented, highlighting the usefulness of these functionals in resolving challenging problems in computational materials science. The most recent advances in the realms of these two types of functionals are briefly discussed.
## 1 Introduction
It is now an undisputed fact that the most widely used method for the first-principles electronic-structure calculations is Kohn-Sham (KS) density functional theory (DFT) [1, 2]. Although the exact exchange-correlation (XC) energy functional within the KS-DFT framework is unknown, there exist many practical approximations - density functional approximations (DFAs). Due to their excellent trade-off between accuracy and efficiency, these DFAs have been applied with great success in chemistry, physics, |
2303.12760 | Uncertainty Aware Active Learning for Reconfiguration of Pre-trained
Deep Object-Detection Networks for New Target Domains | Object detection is one of the most important and fundamental aspects of
computer vision tasks, which has been broadly utilized in pose estimation,
object tracking and instance segmentation models. To obtain training data for
object detection model efficiently, many datasets opt to obtain their
unannotated data in video format and the annotator needs to draw a bounding box
around each object in the images. Annotating every frame from a video is costly
and inefficient since many frames contain very similar information for the
model to learn from. How to select the most informative frames from a video to
annotate has become a highly practical task to solve but attracted little
attention in research. In this paper, we proposed a novel active learning
algorithm for object detection models to tackle this problem. In the proposed
active learning algorithm, both classification and localization informativeness
of unlabelled data are measured and aggregated. Utilizing the temporal
information from video frames, two novel localization informativeness
measurements are proposed. Furthermore, a weight curve is proposed to avoid
querying adjacent frames. Proposed active learning algorithm with multiple
configurations was evaluated on the MuPoTS dataset and FootballPD dataset. | Jiaming Na, Varuna De-Silva | 2023-03-22T17:14:10Z | http://arxiv.org/abs/2303.12760v1 | Uncertainty Aware Active Learning for Reconfiguration of Pre-trained Deep Object-Detection Networks for New Target Domains
###### Abstract
Object detection is one of the most important and fundamental aspects of computer vision tasks, which has been broadly utilized in pose estimation, object tracking and instance segmentation models. To obtain training data for object detection model efficiently, many datasets opt to obtain their unannotated data in video format and the annotator needs to draw a bounding box around each object in the images. Annotating every frame from a video is costly and inefficient since many frames contain very similar information for the model to learn from. How to select the most informative frames from a video to annotate has become a highly practical task to solve but attracted little attention in research. In this paper, we proposed a novel active learning algorithm for object detection models to tackle this problem. In the proposed active learning algorithm, both classification and localization informativeness of unlabelled data are measured and aggregated. Utilizing the temporal information from video frames, two novel localization informativeness measurements are proposed. Furthermore, a weight curve is proposed to avoid querying adjacent frames. Proposed active learning algorithm with multiple configurations was evaluated on the MuPoTS dataset [1] and FootballPD dataset.
## 1 Introduction
Object detection models have been widely used in monitoring complex traffic scenes [2], [3]; autonomous driving [4], [5]; human face recognition [6], [7] and sports data analytics [8], [9]. The object detection task is to detect all the instances of interested classes from the input image. Since various computer vision tasks are built upon an object detection model, improving the object detection accuracy has always been a crucial computer vision objective.
Most of the object detection models [10], [11] consist of two components: localization and classification. During the localization phase, a set of bounding box candidates are proposed based on the likelihood of them containing an object. Each of the bounding box candidates are classified with a class score in the classification phase. Lastly, the duplicated bounding boxes are removed and only the most likely ones are left as the final detection.
The state-of-the-art object detection models are trained with large datasets such as PASCAL Visual Object Classes dataset [12] and Common Objects in Context dataset [13]. The training process usually takes multiple days on GPUs depending on the model architecture [10], [11]. Some of pre-trained weights of these neural networks are available to public. However, directly using the pre-trained weights on new input data might lead to two practical issues: 1) the input data contains undetectable classes that was not included in the training data and 2) the accuracy is much lower compared to the reported evaluation accuracy, since the feature distribution of the new input data is drastically different to the training dataset [14]. In such situation, user usually adept the architecture of these object detection models and train it with their own dataset to reach certain accuracy.
To train a deep learning model for object detection requires massive amount of training data. The key advantage of deep learning is the capability of extracting features from high-dimensional data. However, high-dimensional data is always more costly to annotate [15]. In the circumstance that there are unlabelled data available and the annotation task for each example is easy to perform, active learning is a labour-efficient solution to train a deep learning model more effectively within a supervised setting [15]. As shown in Figure 1, active learning algorithms let the deep learning model query specific data examples based on query strategy and the human annotator only labels the selected examples for future training. By combining active learning and deep learning, it is possible to achieve a higher performance with lower data annotation cost compared to passive learning.
One common way for unannotated image data collection is through video since shooting a video is less time consuming compared to manually collect images. The occlusion and motion blur induced by the nature of videos can make the object detection task more difficult. However, by properly annotate these data examples, the final trained model can also be more prone to noises [16].
Figure 1: One active learning iteration for object detection model.
In this paper, we propose an active learning algorithm to efficiently query frames from an unannotated video data for annotation, so that the object detection model could reach a desired accuracy with the least number of frames annotated for training.
An object detection model usually contains two components: localization and classification. Multiple query strategies for measuring data informativeness for the classification component in the object detection model has been proposed [17, 18]. However, [19, 20] has shown that the data informativeness for the localization component is also crucial to the performance of an active learning algorithm. Such informativeness is a difficult to measure since object detection models performs the localization task differently. Most previous active learning algorithms measured the data localization informativeness by modifying neural network architecture [19, 20, 21]. As a result, these algorithms only work with certain object detection models. In this work, we propose an active learning algorithm that measures data localization informativeness using the temporal information from unlabelled video data, that works with any object detection model.
With separate informativeness scores for the localization and classification component, the most common aggregation methods are sum, average and maximum [18]. In this work we proposed an aggregation method to measure the inconsistency between the localization and classification informativeness using a dynamically changing weight factor and shows that such inconsistency could be a better informativeness measurement than the maximum aggregation in some cases. Since the consecutive frames are likely to contain similar information, we proposed a weight curve to preferably query frames that are far away from annotated frames. By doing this, the localization measurements can also be better estimated in future active learning iterations.
In summary, our contributions are the following:
* We represent two metrics to measure the localization informativeness of video frames, which is compatible with object detection models with regular output format.
* We propose an aggregation function with a dynamically changing balance parameter to aggregate data classification and localization informativeness based on their inconsistency, which is proven to be effective compared to a baseline aggregation approach.
* We demonstrate that the proposed weight curve can efficiently avoid querying adjacent frames from video data in an active learning setting.
## 2 Related Works
**Object detection models** have seen increasing accuracy in recent years. [22] proposed a two-step object detection model: Region Convolutional Network (R-CNN). It first estimates a set of region proposals from the input image (localization), then detects and classifies objects (classification). Fast R-CNN [23] proposed the Region of Interest (ROI) Pooling algorithm that only crops the feature map of the input image, so that it is faster and less computational power consuming compared to R-CNN models. Faster R-CNN [11] introduced the concept of anchors to crop the image more efficiently and designed a region proposal network (RPN) to further speed up the object detector at a higher accuracy. You Only Look Once (YOLO) group of object detection models [10] were first introduced in 2016 and has been improved multiple times since then. They are in a similar architecture as SSD and adapted the anchor concept from Faster R-CNN. The latest YOLO object detector [25] has surpassed R-CNN based models in accuracy while maintains the speed advantage, it has been used for many real-time object detection tasks.
**Active learning on object detection** has attracted more research interest recently. [21, 26] calculates pixel scores and use them for selecting informative samples. [27] works similarly but approximates the uncertainty via MC-dropout. [19] proposes two different measurements: localization tightness which is the overlapping ratio between the region proposal and the final prediction; and localization stability which is based on the variation of predicted object locations when input images are corrupted by noise. [17] proposed black-box and white-box methods. Whereas the black-box methods do not depend on the underlying network architecture, white-box methods are defined based on the network architecture. [28] outperforms the other single model-based methods. During the training, the method learns to predict the target loss for each sample. During the active learning stage, it chooses to label the samples with the highest predicted loss.
**Video object detection models through post processing** utilizes an image object detection model to detect for each frame from the video, then the detections are finetuned using the temporal information from the video data. [29] performed single-frame object detection and object movements tracking across frames in a multi-task fashion. Then it links the detections across frames to object tubelets using the predicted movements, and re-weights detection scores in tubelets. [30] proposed Seq-NMS to form high score linkages using bounding box IoU across frames and then rescore the boxes associated with each linkage to the average or maximum scores of the linkage. The fundamental difference between our proposed active learning algorithm and above models is that our algorithm enables any image object detection model to learn from video data to achieve higher accuracy on any other single image input with similar feature distribution. Moreover, while applying our actively trained object detection model to video data, all the post-processing approaches are compatible.
## 3 Proposed Method
As shown in Figure 2, proposed active learning algorithm uses three score functions to measure data informativeness, the queried data are selected based on their aggregated and weighted scores. To initialize our active learning algorithm for training object detection model using video data, a set of intermediate frames are annotated as the initial training set. These frames will be annotated and used to train the object detection model as the first active learning iteration. For
example, using a video with 500 frames; frame 1, frame 50,..., frame 500 can be selected as the initial training set. The reason that these intermediate frames are selected as the initial training set is that, without any information from the object detection output, the further the distance between the frames the more likely they contain different information.
For the future iterations, the instance bounding boxes and their categories from unannotated frames will be detected using the object detection model trained in the previous iteration. Based on detection outputs, a set of informative frames are queried using our proposed query strategies. In section 3.1, we introduce the score functions that we proposed to measure the data informativeness for both localization and classification component in an object detection model. To combine these scores, we proposed two aggregation functions in section 3.2 to compute the final informativeness score for querying unannotated frames. Queried frames will then be annotated to train the object detection model for next active learning iteration. The proposed algorithm works for any multi-class object detection model. For each object class, data informativeness is measured for each unannotated frame. The class informativeness of the most informative class is used as the informativeness for each frame as suggested by [19]. For simplicity, the following sections covers the definition and formulations using only one class.
### Informativeness Measurement - Score Functions
While applying an object detection model \(f_{\theta}\) that detects objects in \(k\) classes \(\{C_{1},...,C_{k}\}\) for an \(w*h\) image input \(d\), the object detection output is in the following form:
\[f_{\theta}(d)=\{b_{i}^{i},p_{i}^{i}\}\text{; for }j=1,...,n_{d}^{k},i=1,...k\] \[\text{where }b_{j}^{i}=\{x_{j}^{i},y_{j}^{i},w_{j}^{i},h_{j}^{i}\} \text{; }x_{j}^{i}\in[0,w],y_{j}^{i}\in[0,h]\]
Where \(n_{d}^{k}\) is the total number of detected instances that belong to class \(k\) in image \(d\). \(b_{i}^{i}\) is the set of detected bounding boxes and each bounding boxes is represented by four coordinates (box \(w\), box \(h\) and box centre coordinates). Each bounding box is assigned with a class distribution vector \(p_{j}^{i}\) that represents the estimated probabilities of the bounding box belonging to class 1 to class \(k\). Following above notations, while taking an \(m\)-frame video as input, the object detection output of detected instances in one class is:
\[\{b_{n}^{i},p_{n}^{i}\}\text{, where }i=1,...,m;n=1,...,n_{i} \tag{1}\]
where there are \(n_{i}\) instances detected in frame \(i\). Each \(b_{n}^{i}\) represents the \(n\)th bounding box in frame \(i\). Each \(p_{n}^{i}\) is a class distribution vector shows the probabilities of the bounding box \(b_{n}^{i}\) belonging to class 1 to class \(k\). One single-shot segment of the video is noted as a video piece. In the proposed algorithm, each active learning loop uses one video piece. Using these object detection outputs from video input, three score functions are proposed in the following sections.
#### 3.1.1 Classification Uncertainty
To measure the informativeness in a multi-class object detection setting, we adept the uncertainty sampling paradigm and use entropy to measure the data informativeness score for classification. The informativeness of a data example \(x\) regarding the current model \(P_{\theta}\) can be measured by its entropy [15]:
\[l_{x}=-\sum_{i}p_{\theta}(y_{i}|x)\log p_{\theta}(y_{i}|x) \tag{2}\]
Where \(y_{i}\) covers all the possible annotations. For object detection models, each data example contains multiple object instances and the entropy for each instance detection can be calculated by Equation (2). There are two approaches to aggregate the entropy of all the detected instances: 1) the average aggregation is more prone to the detection number variance meaning that \(n_{i}\) is more likely to influence the aggregated entropy; 2) the maximum aggregation is more prone to the outlier meaning that the aggregated entropy is dependent on the outliers [18]. In practice, more results have shown that the maximum aggregation performs better in active learning algorithms [18, 19]. In this work, the maximum aggregation will be used for instance informativeness score aggregation. Following the notations in Equation (1), the classification uncertainty score function \(C_{l}\) is shown in Equation (3):
\[C_{i}=argmax_{n}-\sum_{k}p_{n}^{i}(k)*logp_{n}^{i}(k) \tag{3}\]
Figure 2: The framework of the proposed algorithm
#### 3.1.2 Localization Uncertainty Based on Discontinuity in Detected Instances
Different from the classification uncertainty, the localization uncertainty of a data example cannot be represented with entropy. Because it is impossible to compute the localization probability distribution without heavily modifying the object detector architecture like [20]. Therefore, we use the temporal discontinuity from video data detection output to estimate the localization uncertainty.
For one video piece, the number of instances of one class detected should be a set of continuous integers if the object detection uncertainty in the localization component is 0. Therefore, whenever there is a discontinuity in the number of instances detected over the frames, there is supposed to be an uncertain localization. The estimated instance curve \(\tilde{n}_{l}\) was proposed to detect such discontinuity.
To define \(\tilde{n}_{l}\), the initial training set that contains the intermediate frames are used as the initial guiding set. Based on the number of instances from the annotation of guiding frames, a curve of estimated number of instances in each frame is fitted, which is noted as \(\tilde{n}_{l}\). In the proposed algorithm, the estimated instance curve is fitted by linearly connecting the nodes of each two adjacent guiding frames (other interpolation method are also applicable). By comparing the number of detected instances \(n_{l}\) from each frame to the estimated instance curve \(\tilde{n}_{l}\), the localization uncertainty for frame \(i\) can be estimated as follows:
\[\Delta n_{i}=max\{1,\left|\frac{n_{i}-\tilde{n}_{l}}{\tilde{n}_{l}}\right|\} \tag{4}\]
Note that the percentage difference of \(n_{l}\) compared to \(\tilde{n}_{l}\) could exceed 1 in some situations. Therefore, \(\Delta n_{i}\) is capped at 1 so that it can be treated as a probability value in the aggregation step. After each active learning query step, a set of new frames will be queried based on \(\Delta n_{i}\) after aggregated with other score functions. The selected frames will be annotated and added to the guiding set, so that the fitted \(\tilde{n}_{i}\) curve will become closer to the ground truth.
#### 3.1.3 Localization Uncertainty Based on Bounding Box
Similar to \(n_{l}\), the location and size of each detected bounding box should not change drastically from frame to frame for each video piece as shown in Figure 3. Assuming the width of each frame is \(w\) and the height is \(h\). The detected bounding boxes \(b_{h}^{i}\) for each frame could be represented by a \(w*h\) matrix \(H\), named as the localization matrix, the value of each entry is set to 1 indicating at least one detected bounding box is covering the corresponding pixel. We use intersection over union (IoU) to measure the different between the localization matrices of two adjacent frames. Given two matrices with binary entry values, their IoU is defined as:
\[IoU(H_{1},H_{2})=\frac{\sum_{p}\min(H_{1},H_{2})_{p}}{\sum_{p}\max(H_{1},H_{2})_{p}}\]
Like Equation (4), by comparing the localization matrix \(H_{i}\) of frame \(i\) to the localization matrices of the adjacent frames, a score can be computed to measure the changes in the size and location of the detected bounding boxes over frames.
\[\Delta H_{i}=[IoU(H_{i},H_{i-1})+IoU(H_{i},H_{i+1})]/2 \tag{5}\]
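The following sketch illustrates one possible way to build the localization matrices and evaluate Equation (5) for a frame and its two neighbours; the pixel-level rasterization of the boxes is an implementation choice, not prescribed by the text.

```python
import numpy as np

def localization_matrix(boxes, h, w):
    """Binary h-by-w matrix H: 1 where at least one detected box covers the pixel."""
    H = np.zeros((h, w), dtype=bool)
    for x1, y1, x2, y2 in boxes:  # boxes in pixel coordinates (x1, y1, x2, y2)
        H[int(y1):int(np.ceil(y2)), int(x1):int(np.ceil(x2))] = True
    return H

def iou(H1, H2):
    """IoU between two binary localization matrices."""
    union = np.logical_or(H1, H2).sum()
    return np.logical_and(H1, H2).sum() / union if union > 0 else 1.0

def delta_H(H_prev, H_cur, H_next):
    """Equation (5): average IoU of frame i with its two neighbouring frames."""
    return 0.5 * (iou(H_cur, H_prev) + iou(H_cur, H_next))
```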
### Score Function Aggregation
When an active learning algorithm uses multiple informativeness scores, there are several ways to aggregate them. The most commonly used aggregation methods are 1) multiplying the scores, 2) summing the scores, and 3) using the highest score. Even though each score function produces a value between 0 and 1 that measures the uncertainty of the detection, the ways in which they are computed are drastically different. The classification uncertainty score is computed from the entropy of each instance detection, while the localization scores are computed from the temporal discontinuity of the detections in a video piece. Taking both localization and classification uncertainty into consideration, [19] suggested that a high inconsistency between localization and classification uncertainty indicates a highly informative example, as shown in Figure 4. Following this concept, an aggregation function \(S_{i}^{1}\) is proposed to aggregate \(C_{i}\), \(\Delta n_{i}\) and \(\Delta H_{i}\):
\[S_{i}^{1}=\left|\max\{\Delta H_{i},\,\Delta n_{i}\}-\mu\,C_{i}\right| \tag{6}\]
Figure 4: Two image examples of inconsistency between classification uncertainty and localization uncertainty for an object detection model. The detection in the middle of the left image is localized close to the ground truth, while the model is uncertain about whether to categorize it as a person. In the right image, the same person is localized poorly, but based on the features captured in the bounding box, the detector is almost certain that a person was detected.
Figure 3: All four images are frames from one video footage. The left two consecutive frames indicate good localization performance from the model, while the right two consecutive frames show a sign of localization uncertainty.
```
Require: Unlabelled \(m\)-frame video \(U\); initial object detection model \(f_{0}\);
         score functions \(C\), \(\Delta n\) and \(\Delta H\); aggregation function \(S\); weight curve \(w\)
Initialize:
    Query: \(G_{0}=\{U_{0},U_{10},U_{20},...\}\)        construct initial guiding set
    Annotation: \(G_{0}^{annot}=(G_{0},Y_{0})\)          label initial guiding set with \(Y_{0}\)
    Train: \(f\gets f_{0}\) trained with \(G_{0}^{annot}\)
    \(U\gets U\backslash G_{0}\)                          update unlabelled set
    \(L\gets G_{0}^{annot}\)                              construct labelled set
For \(k\) iterations:
    For \(i\) from \(1\) to \(m\):                            compute detection output \(D_{i}\)
        If frame \(i\) in \(L\):
            \(D_{i}=y_{i}\)
            Fit \(\tilde{n}_{i}\) curve, update \(w_{i}\)
        If frame \(i\) in \(U\):
            \(D_{i}=f(U_{i})=\{b_{n}^{i},\ p_{n}^{i}\}\)
            \(n_{i}=\) number of detections in frame \(i\)
            \(H_{i}=\) localization matrix for frame \(i\)
    For each frame \(i\) in \(U\):
        Calculate \(C_{i}\), \(\Delta n_{i}\), \(\Delta H_{i}\) and \(S_{i}\)
        \(\tilde{S}_{i}=S_{i}\cdot w_{i}\)
    Query: \(G_{k}=\{U_{i}\}\) s.t. \(\tilde{S}_{i}\) is among the 10 highest scores
    Annotation: \(G_{k}^{annot}=(G_{k},y_{k})\)
    Train: \(f\gets f\) trained with \(G_{k}^{annot}\)
    \(U\gets U\backslash G_{k}\)
    \(L\gets L\cup G_{k}^{annot}\)
```
**Algorithm 1**
Since \(\Delta H_{i}\) and \(\Delta n_{i}\) are computed in a similar spirit, \(\Delta n_{i}\) can be considered a discrete version of the localization uncertainty measurement while \(\Delta H_{i}\) can be considered a continuous one. Therefore, the higher of \(\Delta H_{i}\) and \(\Delta n_{i}\) is used to represent the localization uncertainty. \(\mu\) is a balancing parameter that controls how the frames are queried. A large \(\mu\) means that frames with a high classification uncertainty score are more likely to be queried, whereas a small \(\mu\) means that frames with a high localization uncertainty score are more likely to be queried. In each iteration, \(\mu\) is calculated as the mean localization uncertainty of the unannotated frames divided by their mean classification uncertainty. To validate the effectiveness of the inconsistency aggregation in Equation (6), a summation aggregation method is also used for comparison:
\[S_{i}^{2}=\max\{\Delta H_{i},\Delta n_{i}\}+C_{i} \tag{7}\]
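A small sketch of the two aggregation functions, including the dynamically updated \(\mu\) described above, could look as follows; the guard against division by zero is an added assumption.

```python
import numpy as np

def aggregate_scores(C, dn, dH, mu=None):
    """Frame-level scores S1 (Eq. (6)) and S2 (Eq. (7)) for the unlabelled frames.

    C, dn, dH: 1-D arrays of classification and localization uncertainty scores.
    When mu is None it is set, as described in the text, to the mean localization
    uncertainty divided by the mean classification uncertainty.
    """
    C, dn, dH = map(np.asarray, (C, dn, dH))
    loc = np.maximum(dH, dn)
    if mu is None:
        mu = loc.mean() / max(C.mean(), 1e-12)
    s1 = np.abs(loc - mu * C)  # inconsistency aggregation (Eq. (6))
    s2 = loc + C               # summation aggregation (Eq. (7))
    return s1, s2
```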
For a video piece, adjacent frames are likely to contain similar information. To avoid repeatedly selecting adjacent frames, a weight curve is proposed to control the likelihood of selecting each frame. The weight curve is designed as follows: the weight is set to 0 at each guiding frame and increases to 1 at the middle point between the two closest guiding frames; the weight curve is updated every time a set of new frames is annotated. To calculate the weighted score, the aggregated score is multiplied by its corresponding weight. Another benefit of using the weight curve is that the estimated instance curve \(\tilde{n}_{i}\) is easier to fit, since frames that are far apart are more likely to be selected for annotation. In the next section, the effectiveness of the weight curve is also tested. Putting the aggregation function and the weight curve together, the proposed algorithm is shown in Algorithm 1.
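One possible implementation of the weight curve is sketched below; the exact shape between guiding frames is flexible, as noted above, and this sketch uses the linear ramp described in the text.

```python
import numpy as np

def weight_curve(num_frames, guiding_idx):
    """Weight of each frame: 0 at guiding frames, rising linearly to 1 at the
    midpoint between the two closest guiding frames."""
    guiding_idx = np.sort(np.asarray(guiding_idx))
    w = np.zeros(num_frames)
    for i in range(num_frames):
        d = np.abs(guiding_idx - i).min()       # distance to the nearest guiding frame
        left = guiding_idx[guiding_idx <= i]
        right = guiding_idx[guiding_idx >= i]
        if len(left) and len(right) and right.min() > left.max():
            half_gap = (right.min() - left.max()) / 2.0
        else:
            half_gap = max(d, 1)                # frame outside the guided range
        w[i] = min(1.0, d / max(half_gap, 1e-12))
    return w
```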
## 4 Experiments
To our knowledge, there is no previously proposed active learning algorithm designed specifically for unannotated video data. Following the experimental setup of [19], our proposed active learning algorithm was compared to two baseline query strategies:
* Passive Selection (P): unannotated frames are queried randomly, then added to the labelled set.
* Classification only (C): only use the classification uncertainty as the informativeness measurement.
For all query strategies, ten intermediate frames are annotated as the initial guiding set. This also applies to the passive selection query strategy to ensure that the models are initialized identically. In each active learning query step, ten frames are queried according to the query strategy. Unannotated frames are queried in batches, and training solely on the newly queried frames might cause the model performance to degrade on previously learned examples. Therefore, the incremental learning scheme suggested in [18] was used. After each query step, the object detection model is trained for ten epochs with a mini-batch size of twenty. Each mini-batch is constructed by randomly selecting ten frames from the guiding set together with the ten frames queried in the current query step. Based on the experimental results, the learning rate was set to 0.001 for consistency.
The proposed active learning algorithm was applied to two object detection datasets to evaluate its performance. Both datasets contain one or multiple video pieces, and their details are covered in the following sections. For each video piece, 10% of the frames are randomly selected as the test set. The remaining frames are treated as unannotated frames and are annotated under the different query strategies until 80% of them are annotated.
The proposed algorithm is designed to work with any image object detection model. In this work, the model used for object detection is the Faster R-CNN model with a ResNet-50-FPN backbone. It contains two parts: Region Proposal Network (RPN) part for localization and Fast R-CNN part for classification. The RPN part predicts the bounding boxes for possible objects and the Fast R-CNN part classifies these proposed regions and refine the bounding boxes predicted. The output layer is modified to match the object classes for our dataset. Instead of only finetuning the last few layers, all the layers (except backbone) are trained from randomly initialized weights. In this way, the object detection model can learn the localization and classification information from queried frames more efficiently [15].
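For reference, a minimal PyTorch/torchvision sketch of this setup is given below. The two-class head (background + person), the SGD optimizer and the dummy batch are assumptions for illustration; only the 0.001 learning rate comes from the text, and newer torchvision releases replace the pretrained flags with weights arguments.

```python
import torch
import torchvision

# num_classes includes the background class; 2 (background + person) is an assumption.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    pretrained=False,           # detection heads start from randomly initialized weights
    pretrained_backbone=True,   # keep the ResNet-50-FPN backbone weights
    num_classes=2,
)
model.train()

# The optimizer type is an assumption; the learning rate of 0.001 is from the text.
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=0.001)

images = [torch.rand(3, 480, 640)]
targets = [{"boxes": torch.tensor([[100.0, 120.0, 200.0, 300.0]]),
            "labels": torch.tensor([1])}]
loss_dict = model(images, targets)  # in train mode the model returns the RPN and Fast R-CNN losses
loss = sum(loss_dict.values())
loss.backward()
optimizer.step()
```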
The modern standard for evaluating an object detection model is the mean average precision (mAP). The detection of one instance is considered correct if its IoU with the ground truth is above a threshold. The average precision is calculated by averaging the maximum precision at 11 recall values between 0 and 1. The mAP is computed by setting the IoU threshold to multiple values and averaging the corresponding AP values. To evaluate the performance of the proposed active learning algorithm, the model mAP on the test set after each active learning iteration is reported. All reported mAP values are averages over three separate runs.
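A small sketch of the 11-point interpolated AP used here is shown below; the inputs are assumed to be the precision-recall pairs obtained after ranking the detections of one class.

```python
import numpy as np

def average_precision_11pt(recalls, precisions):
    """11-point interpolated AP: average of the maximum precision attained at
    recall >= 0.0, 0.1, ..., 1.0.  mAP is this value averaged over classes
    (and over IoU thresholds when several are used)."""
    recalls, precisions = np.asarray(recalls), np.asarray(precisions)
    ap = 0.0
    for r in np.linspace(0.0, 1.0, 11):
        mask = recalls >= r
        ap += precisions[mask].max() if mask.any() else 0.0
    return ap / 11.0
```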
### Multi-person Pose Estimation Test Set
The main experiments were carried out on the Multi-person Pose estimation Test Set (MuPoTS) [1]. It contains twenty subsets, where each subset consists of the continuous frames of a video piece. Six subsets were selected for our experiments; the details of the selected subsets are given in the supplementary material. Besides the two baseline query strategies, our proposed algorithm was evaluated in three different configurations. All three configurations take both the localization and classification informativeness of a frame into consideration, using different aggregation functions: 1) S1: the S1 aggregation function with the changing balancing parameter \(\mu\); 2) S1/\(\mu\): the S1 aggregation function with the balancing parameter fixed at \(\mu=1\); and 3) S2: the S2 aggregation function.
As shown in Figure 5, all active learning query strategies, including C, outperformed passive selection as expected. As the active learning loop progresses, the mAP gain per iteration for S1 and S2 becomes closer to that of C for all subsets. In other words, the training speed improvement from the proposed active learning algorithm mainly occurs at the earlier stages. The video pieces in subsets 6 and 11 are the two longest videos in our experiments, and the human body occlusion in these two videos is minimal compared to the other video pieces. As a result, including localization uncertainty in the query strategies provided less performance gain than for the subsets with heavier human body occlusion, since object occlusion in the input image is the main factor causing localization uncertainty for object detection models. Compared to classification-uncertainty-based active learning algorithms, the advantage of the proposed algorithm is less significant for low-occlusion videos.
Comparing the two aggregation functions, S1 outperforms S2 in most subsets. For the more complex video pieces in subsets 7, 13 and 19 (e.g., more objects or heavy occlusion), the improvement from the inconsistency aggregation in S1 is smaller and becomes marginal in the later iterations. In subset 7, one person is covering another person for most of the video, and the inconsistency aggregation consequently did not work well. Comparing the blue curves to the dotted blue curves, the advantage of the dynamically changing balancing parameter in the inconsistency aggregation function can be easily observed. With \(\mu\) fixed, S1 and S2 performed similarly.
Based on the results in Figure 5, the proposed active learning algorithm for object detection models is better at querying highly informative examples than classification-uncertainty-based algorithms, especially for videos with more objects and heavier occlusion. In general, the inconsistency aggregation and its dynamically changing balancing parameter \(\mu\) further boost the performance of the proposed active learning algorithm. However, the improvement is less significant for video pieces in which most frames contain heavy occlusion.
The weight curve effectively improves the performance of all active learning query strategies for video frame annotation (experimental results in the supplementary material). Without the proposed weight curve, all query strategies tend to select frames from particular video segments for consecutive active learning iterations. As a result, the training speed is much slower for
Figure 5: Proposed active learning algorithm performance on six different video pieces. From top left to bottom right: subsets 6, 7, 11, 12, 13 and 19. (P) – Passive Selection; (C) – Classification uncertainty only; (S1) – S1 aggregation; (S1/\(\mu\)) – S1 aggregation with fixed \(\mu\); (S2) – S2 aggregation. The horizontal dotted lines are the mAPs obtained with the pre-trained Faster R-CNN model.
all query strategies, and in some cases they are even outperformed by passive selection. This is because frames containing similar information are queried repeatedly for annotation. We also observed that the localization informativeness measurement \(\Delta n\) is less accurate in the early active learning iterations, because the annotated frames are not yet separated enough to fit the \(\tilde{n}_{i}\) curve accurately. In our experiments, differently shaped weight curves perform similarly as long as the gradients are not overly large.
### Annotation for Football Player Detection Dataset
Object detection is the first step of top-down multi-person 3D pose estimation frameworks [31]. The human detection accuracy significantly affects the accuracy of the localization and estimation of each single-person 3D pose. With football broadcast footage as input, one of the major motivations of this work is to use active learning algorithms to train a better player detection model.
To evaluate the proposed algorithm in a more realistic setup, we selected a 371-frame video piece from football broadcast footage and annotated these frames with the proposed algorithm. Unlike the previous experiments, all queried frames were annotated manually instead of using the ground truth provided in MuPoTS. We wanted to avoid any machine error during annotation and to keep the sizes of the bounding boxes consistent, since the ground-truth bounding boxes of the MuPoTS dataset were extracted from a multi-camera system. Compared to the frames in MuPoTS, the number of instances in each frame of FootballPD is much higher, and the change in instance count from frame to frame is also larger. For each instance, the bounding box is smaller in FootballPD. Besides, FootballPD contains more complex frames, including heavy blurring and occlusion. In general, FootballPD is a harder dataset for object detection, which also makes active learning more challenging.
Figure 6 shows the mAPs of the player detection model after 33 active learning iterations and the annotation cost of the different query strategies. After annotating 80% of the frames and training the model on them, the player detection mAP reached 75%. The performance improvement of the proposed active learning algorithm is consistent with the results on MuPoTS. Both proposed query strategies, using aggregation functions S1 and S2, reached a higher mAP with a lower annotation cost than passive selection (P) and the classification-uncertainty-only strategy (C) after 33 iterations. Both the P and C query strategies failed to train the model to 75% mAP. Moreover, to train the detection model to 70% mAP, our proposed active learning algorithm S1 reduced the annotation cost by 40%.
## 5 Discussion
We tested the proposed active learning algorithm on two object detection datasets. With MuPoTS, we used the ground-truth annotations to mimic the manual labelling process. The results indicated performance improvements over both passive selection and the classification-uncertainty-only query strategy. Besides MuPoTS, a 371-frame football broadcast footage was annotated with the proposed algorithm. For this high-complexity dataset, the experiments showed results consistent with those on MuPoTS. All experimental results showed a significant reduction in annotation cost when applying the proposed algorithm compared to the baselines. In the active learning setting, the weight curve proposed in this paper was also proven effective when querying frames from video data.
The task we want to solve with the proposed algorithm is to efficiently query informative frames from a video for annotation. To evaluate our algorithm on a video piece, the test set is constructed from randomly selected frames of the same video piece. The generalization performance of the actively trained object detection model on other datasets is untested, since it is beyond the scope of this work.
All experiments were designed to extract information from single video pieces. However, in a multi-video setup, the proposed algorithm could be easily adjusted to query frames simultaneously to construct a multi-video frame batch. This could also be done in a federated learning setting [32], where each video piece is utilized to actively train an object detection model and the model weights are aggregated iteratively. The benefit of combining federated learning and active learning is that privacy is preserved, since the data annotation and model training are done separately by multiple annotators and only the model weights are centralized for aggregation.
High compatibility is a key advantage of the proposed algorithm. To measure data informativeness, all the score functions take the regular object detection output as input. Therefore, the proposed algorithm is expected to work with any object detection model with a regular output format.
## 6 Conclusion
To train an object detection model with unannotated video data, labelling every frame is time-consuming and inefficient. To select the most informative frames from a video, we propose an active learning algorithm that utilizes the temporal information of video frames to measure the informativeness of each unlabelled frame in the form of three score functions. The proposed algorithm measures both the classification and localization informativeness of each frame and is compatible with any object detection model with a regular output format. Based on the experimental results, the proposed algorithm noticeably outperforms the baseline active learning algorithms. It was also demonstrated that the proposed
Figure 6: mAP performance evaluation and annotation cost comparison for different query strategies on FootballPD dataset.
inconsistency-based aggregation method together with the weight curve can further reduce the annotation cost. We believe that the proposed active learning algorithm is an efficient approach to reducing the video annotation cost for object detection, with great potential to generalize to reducing video annotation cost for more complex computer vision tasks such as pose estimation and instance segmentation.
|
2307.00693 | A Study of Electronic and Magnetic Properties of Transition Metal
Trihalides | We present the electronic and magnetic structure calculations of VCl3, VBr3,
CrCl3 and CrBr3. The results are obtained by density functional theory with
plane wave basis sets. The trihalides generally optimize either in trigonal or
monoclinic structures. We have focused on the effect of symmetry on the
electronic and magnetic properties of the systems. We have found that magnetic
moments change considerably depending on the symmetry. Both CrX3 have shown a
bandgap around 2eV while the V-based systems have shown half-metallic
properties. | Shrestha Dutta, Sachin Varma U, Payel Bandyopadhyay, Rudra Banerjee | 2023-07-03T00:42:08Z | http://arxiv.org/abs/2307.00693v1 | # A Study of Electronic and Magnetic Properties of Transition Metal Trihalides
###### Abstract
We present the electronic and magnetic structure calculations of VCl\({}_{3}\), VBr\({}_{3}\), CrCl\({}_{3}\) and CrBr\({}_{3}\). The results are obtained by density functional theory with plane wave basis sets. The transition metal trihalides generally optimize either in trigonal or monoclinic structures. We have focused on the effect of symmetry on the electronic and magnetic properties of the systems. We have found that magnetic moments change considerably depending on the symmetry. Both CrX\({}_{3}\) compounds show a bandgap of \(\approx\) 2 eV, while the V-based systems show half-metallic properties.
## 1 Introduction
The family of transition metal trihalides MX\({}_{3}\), where M is a transition metal cation (M=Ti, V, Cr, Fe, Mo, Ru, Rh, Ir) and X is a halogen anion (X=Cl, Br, I), has been known for more than 50 years[1, 2]. These compounds have been in the spotlight for the intriguing electronic and magnetic properties they exhibit in their bulk and monolayer phases, such as intrinsic long-range magnetic order in atomically thin layers[3], as well as for their promising applications in spintronics[4]. At lower dimensionalities of MX\({}_{3}\), the bandgap can be tuned by doping or by changing the stacking orientation. This property of MX\({}_{3}\) opens up opportunities for multi-purpose applications.
Structurally, an X-ray and neutron diffraction study ascertained that the single MX\({}_{3}\) crystal adopts a monoclinic AlCl\({}_{3}\) structure (space group \(C2/m\)), termed the \(\beta\) phase[5]. The \(\beta\)-phase is the lower-symmetry crystallographic phase, which is most likely to be found at higher temperatures. Upon cooling from higher temperature, the MX\({}_{3}\) layers rearrange themselves from the \(\beta\)-phase to the BiI\({}_{3}\)-type trigonal (space group R\(\bar{3}\)) structure known as the \(\alpha\)-phase. The transition has been reported to be a first-order phase transition which does not involve any magnetic changes[6]. This phase transition is believed to result from interlayer interactions among the layers[7, 8]. The interlayer interactions are caused by weak van der Waals (vdW) bonding[7] between the halogen (X) anions; hence these are known as vdW structures. The vdW structures reveal themselves to be truly captivating in the field of materials science due to the presence of intrinsic magnetism and magnetic
anisotropy[9], tunable band gaps and high-temperature magnetic ordering[4]. This tunable band gap enables new-generation spintronic, magnetic and magneto-optic applications[10].
Spintronics is a new-generation information technology in which the spins of electrons are employed as information carriers, which also speeds up data processing. Half metals, spin gapless semiconductors and bipolar magnetic semiconductors were proposed one after another as candidate materials for spintronics[11]. Half metals have proven valuable for spin-current generation and for building spintronic and other nano-electronic devices. Dirac half metals exhibit a large bandgap in one spin channel and metallic character in the other. We attempt to identify half metals among the family of transition metal trihalides in their bulk phases.
For many of the MX\({}_{3}\) compounds downscaling the bulk phase to a stable monolayer is still a challenge. Some studies suggest that monolayers with binding energy smaller than 0.15 eV per atom might be feasible for exfoliation[12].
The physical properties of a few-layer or monolayer structures of such transition metal MX\({}_{3}\) are quite different from that of their bulk structures as a result of the different screening environments they experience due to the differences in their effective dimensionality[13].
Various density functional theory (DFT) based studies of the ground state of bulk and few-layer MX\({}_{3}\) have been carried out[6, 14, 15, 16]. The difference in electronic screening between bulk and few-layer MX\({}_{3}\)[17] results in different electronic bandgaps. These studies cover the interlayer exchange coupling[18] as well as its dependence on the stacking of layers, possible ways to stabilize skyrmions[19], and the electric-field switching of the magnetization[20].
Researchers have detected rich magnetic ground states in the MX\({}_{3}\) family. Magnetism in the ground state of a material is correlated with the interlayer vdW coupling, which originates from distinct stacking orders and the signature Kagome
Figure 1: Bulk structures of MX\({}_{3}\) for (a)\(R\bar{3}\) and (b)\(C2/m\) space groups respectively
lattice[21].
In MX\({}_{3}\), magnetism originates from the angular momentum associated with partially filled \(d\) orbitals. It has been predicted that the in-plane magnetic interaction among the transition metal cations is a consequence of super-exchange through shared coordinating halogen anions. The super-exchange interaction depends upon factors like orbital occupations and the M-X-M angle[22].
Among the MX\({}_{3}\) series, the Cr- and V-based materials are of particular importance. CrCl\({}_{3}\) has recently been shown to have an unusual magnetic easy axis normal to the \(c\)-axis[23]. Further, Ebrahimian _et al._ have recently shown that the magnetism in bilayer CrCl\({}_{3}\) can be easily controlled by strain and an electric field[24]. Ahmad _et al._ have shown that the magnetism of CrCl\({}_{3}\) can also be controlled by pressure[25]. VCl\({}_{3}\) has been shown to exhibit half-metallicity in 2D[26]. Doping VCl\({}_{3}\) with \(3d\) transition metals has very recently been shown to tune its electronic and magnetic properties[27]. A high tunnel magnetoresistance and spin filtering effect have been shown in CrBr\({}_{3}\)[28]. Grzeszczyk _et al._ have recently shown exciton magnetization in CrBr\({}_{3}\)[29].
From the above discussion, it is clear that V- and Cr-based MX\({}_{3}\) systems play a significant role in the development of spintronics. Hence, a detailed and systematic theoretical study of these systems is necessary. In this study, we compare the structural and electronic properties of the possible space groups (\(R\bar{3}\) and \(C2/m\)) of the trichlorides and tribromides of V and Cr using _ab-initio_ density functional theory.
## 2 Computational Details
The density functional theory (DFT) calculations of the vdW-layered MX\({}_{3}\) compounds were carried out with Quantum Espresso[30, 31]. The generalized gradient approximation (GGA) was used for the exchange-correlation functional together with projector augmented wave (PAW)[32] pseudopotentials. We used a plane-wave cut-off energy of 320 eV to treat the interactions between the valence electrons and ion cores. The calculations are performed at the \(\Gamma\)-point.
Since MX\({}_{3}\) is a layered system, vdW corrections were included based on Grimme's semiempirical DFT-D2[33] approach. To correctly describe the electronic and magnetic properties, the strong correlation effects of the localized \(3d\) electrons of the transition metal atoms need to be accounted for[34]. In this case, the simplified version of the DFT+U method suggested by Dudarev[35] was employed. We set the effective Hubbard interaction parameter U\({}_{eff}\) of the metal atoms to 2.7 eV[36, 37], a value obtained by direct comparison of physical properties.
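For illustration, a minimal ASE-based sketch of such a calculation setup is given below; the structure file, pseudopotential labels and the exact DFT+U keywords are placeholders that depend on the local Quantum Espresso installation and version.

```python
from ase.io import read
from ase.calculators.espresso import Espresso

# Placeholder structure file for one of the compounds studied here.
atoms = read("CrCl3_R-3.cif")

calc = Espresso(
    pseudopotentials={"Cr": "Cr.pbe.paw.upf", "Cl": "Cl.pbe.paw.upf"},
    input_data={
        "system": {
            "ecutwfc": 23.5,          # ~320 eV plane-wave cut-off, expressed in Ry
            "nspin": 2,               # collinear spin polarisation
            "vdw_corr": "grimme-d2",  # Grimme DFT-D2 dispersion correction
            # DFT+U (Dudarev, U_eff = 2.7 eV on the transition metal) and the
            # starting magnetisation are added analogously, with the exact
            # keywords depending on the Quantum Espresso version in use.
        },
    },
    kpts=None,  # Gamma-point-only calculation
)
atoms.calc = calc
print(atoms.get_potential_energy())
```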
## 3 Results and Discussions
The MX\({}_{3}\) system stabilizes in the \(R\bar{3}\) and \(C2/m\) structures. We have calculated the ground state of both structures with M=V, Cr and X=Cl, Br. The optimized lattice parameter (a), optimized M–X bond length (d\({}_{M-X}\)), interplanar separation (d\({}_{0}\)) and magnetic moment per formula unit (_f.u._) for VCl\({}_{3}\), VBr\({}_{3}\), CrCl\({}_{3}\) and CrBr\({}_{3}\) are shown in Table (1) for FM magnetic ordering. The minimal-energy Bravais lattice for each compound is shown in bold.
The magnetic moments match their experimental values closely (given in brackets, with the corresponding references). The discrepancies stem from the fact that we have performed \(\Gamma\)-point-only calculations.
In Figures 2–5, we discuss the electronic structure of the MX\({}_{3}\) systems.
The DOS of VCl\({}_{3}\) shows prominent states in the majority spin channel and a large gap in the minority spin channel for both the \(R\bar{3}\) (Fig. (2a)) and \(C2/m\) (Fig. (2b)) structures. This agrees with experimental and previous theoretical findings[10].
The VBr\({}_{3}\) DOS shows the same characteristics. This half-metallic behavior of VCl\({}_{3}\) and VBr\({}_{3}\) is well known and is sustained in bilayer systems[44].
\begin{table}
\begin{tabular}{l c c c c} \hline MX\({}_{3}\) & a(Å) & d\({}_{M-X}\)(Å) & d\({}_{0}\)(Å) & magnetic \\ & & & & moment(\(\mu_{B}\)/_f.u._) \\ \hline \(\mathbf{VCl_{3}(R\bar{3})}\) & 6.12 & 2.40 & 4.48 & 2.54(2.0 [38]) \\ VCl\({}_{3}(c2/m)\) & 6.26 & 2.40 & 5.08 & 2.32(2.96 [5]) \\ \(\mathbf{VBr_{3}(R\bar{3})}\) & 6.52 & 2.56 & 4.60 & 2.84(2.6 [39]) \\ VBr\({}_{3}(c2/m)\) & 6.62 & 2.57 & 5.15 & 2.46(2.0 [40]) \\ CrCl\({}_{3}\)(R\(\bar{3}\)) & 6.08 & 2.38 & 4.32 & 3.59(3.0 [41]) \\ \(\mathbf{CrCl_{3}(c2/m)}\) & 6.09 & 2.38 & 5.15 & 5.38 (6.0[42]) \\ CrBr\({}_{3}\)(R\(\bar{3}\)) & 6.46 & 2.55 & 4.75 & 3.91(3.10[43]) \\ \(\mathbf{CrBr_{3}(c2/m)}\) & 6.39 & 2.56 & 5.34 & 3.90 \\ \hline \end{tabular}
\end{table}
Table 1: Crystal and magnetic structure information of MX\({}_{3}\) compounds. The magnetic moments in the bracket show their experimental values.
Figure 4: DOS of CrCl\({}_{3}\) for (a) \(R\bar{3}\) and (b) \(c2/m\) structure
Figure 5: DOS of CrBr\({}_{3}\) for (a) \(R\bar{3}\) and (b) \(c2/m\) structure
With our rather small U\({}_{eff}\), the behaviour is completely different for the Cr-based trihalides. Both CrCl\({}_{3}\) and CrBr\({}_{3}\) show a bandgap of \(\approx\) 2 eV in all cases, which is also a well-known feature[45]. Although the positions of our DOS peaks do not exactly match the existing literature, the main characteristics are well reproduced with the small \(U_{eff}\). The band gap decreases significantly from CrCl\({}_{3}\) to CrBr\({}_{3}\).
## 4 Conclusion
Although there are several studies on MX\({}_{3}\) systems, there is still a gap in the understanding of their stacking, structure, and electronic and magnetic properties. Due to the recent interest in spintronic applications, it was necessary to study these materials systematically. In this study, we have explored the electronic and magnetic properties of CrCl\({}_{3}\), CrBr\({}_{3}\), VCl\({}_{3}\) and VBr\({}_{3}\) in the bulk phase using DFT with a plane-wave basis. We have shown that the stacking and the space group play an important role in their magnetic and electronic properties. We are confident that this will enable the design of better MX\({}_{3}\)-based spintronic devices with bilayer and multilayer structures.
|
2310.05324 | Increasing Entropy to Boost Policy Gradient Performance on
Personalization Tasks | In this effort, we consider the impact of regularization on the diversity of
actions taken by policies generated from reinforcement learning agents trained
using a policy gradient. Policy gradient agents are prone to entropy collapse,
which means certain actions are seldomly, if ever, selected. We augment the
optimization objective function for the policy with terms constructed from
various $\varphi$-divergences and Maximum Mean Discrepancy which encourages
current policies to follow different state visitation and/or action choice
distribution than previously computed policies. We provide numerical
experiments using MNIST, CIFAR10, and Spotify datasets. The results demonstrate
the advantage of diversity-promoting policy regularization and that its use on
gradient-based approaches have significantly improved performance on a variety
of personalization tasks. Furthermore, numerical evidence is given to show that
policy regularization increases performance without losing accuracy. | Andrew Starnes, Anton Dereventsov, Clayton Webster | 2023-10-09T01:03:05Z | http://arxiv.org/abs/2310.05324v1 | # Increasing Entropy to Boost Policy Gradient Performance on Personalization Tasks
###### Abstract
In this effort, we consider the impact of regularization on the diversity of actions taken by policies generated from reinforcement learning agents trained using a policy gradient. Policy gradient agents are prone to entropy collapse, which means certain actions are seldomly, if ever, selected. We augment the optimization objective function for the policy with terms constructed from various \(\varphi\)-divergences and Maximum Mean Discrepancy which encourages current policies to follow different state visitation and/or action choice distribution than previously computed policies. We provide numerical experiments using MNIST, CIFAR10, and Spotify datasets. The results demonstrate the advantage of diversity-promoting policy regularization and that its use on gradient-based approaches have significantly improved performance on a variety of personalization tasks. Furthermore, numerical evidence is given to show that policy regularization increases performance without losing accuracy.
personalization, entropy, regularization, reinforcement learning, discrepancy, divergence
## I Introduction
Recommendation system models (see, e.g., [1, 2]) have become critically important in retaining customers of industries such as retail, e-commerce, media apps, or even healthcare. Corporations like Netflix, Spotify, and Amazon, use sophisticated collaborative filtering and content-based recommendation systems for video, song, and/or product recommendations [3, 4, 5, 6, 7]. For a recent overview of recommender systems in the healthcare domain see, e.g., [8] and the references therein.
Conventional personalization focuses on personal, transactional, demographic, and possibly health-related information, such as an individual's age, residential location, employment, purchases, medical history, etc. Additional applications of personalization include: web content personalization and layout customization [9, 10]; customer-centric interaction with healthcare providers [11, 12, 13, 14, 15, 16]; personalized medical treatments [17, 18]. One of the major challenges associated with personalization techniques is the time required to adapt and update such approaches to changes in individual behaviors, reactions, and choices.
Recently, reinforcement learning (RL) has been increasingly exploited in personalized recommendation systems that continually interact with users (see, e.g., [19] and the references therein). As opposed to traditional recommendation techniques, RL is a more complex and transformative approach that considers behavioral and real-time data produced as the result of user action. Examples of this technique include online browsing behavior, communication history, in-app choices, and other engagement data. This allows for more individualized experiences like adding personalized engaging sections to the body of an email or sending push notifications at a time when the customer is typically active, which results in more customized communication and thus, ultimately, greater conversion.
One of the major challenges associated with personalized RL agents is that standard optimization techniques often stall or even fail to converge when applied to such complex problems. This results in highly localized policies with lower entropy, which directly translates into very few actions being taken by the agent throughout the training process. Improving the diversity of actions taken by the policy is critical to improving the performance of the RL agent on a variety of personalization tasks [20].
The traditional approach for combating low-entropy models is to regularize the standard objective with an entropy (penalty) term, such that the optimal policy additionally aims to maximize its entropy at each visited state, see, e.g., [21, 22, 23] and the references therein. This is achieved by subtracting a weighted term for the entropy of the model's prediction from the loss function, thereby encouraging a more entropic model. This is equivalent to adding the Kullback-Leibler (KL) divergence between the policy and the uniform distribution.
Comparing probability distributions is a fundamental component of many supervised, unsupervised, and RL problems. In the machine learning community, the first discrepancies introduced to compare two probability distributions were the \(\varphi\)-divergences [24], where \(\varphi\) is a convex, lower semi-continuous function such that \(\varphi(1)=0\). Such divergences can be viewed as a weighted average (by \(\varphi\)) of the odds-ratio between the two measures. In particular, we compute the following
\[D_{\varphi}(\alpha\|\beta)=\mathbb{E}_{\beta}\bigg{(}\varphi\Big{(}\frac{ \alpha}{\beta}\Big{)}\bigg{)}. \tag{1}\]
The computational simplicity of \(\varphi\)-divergences has made them very popular; with the most widely used being the KL divergence (see, e.g., Table I and the work [25, Section 2]).
However, such approaches suffer from the major drawback of not metrizing weak convergence, which is instrumental for discrepancies on measures, as it ensures that the losses
remain stable under small perturbations of the support of the measures. A class of discrepancies that satisfy this requirement are known as Maximum Mean Discrepancies (MMD) [26], which are a special case of integral probability metrics (IPM) [27]. Such approaches compare distributions without initially estimating their density functions. MMD is defined by the notion of representing distances between distributions as distances between _mean embeddings_ of features, where the feature map is a kernel from a reproducing kernel Hilbert space (RKHS). This family of discrepancies presents the advantage of being efficiently computed from samples -- both statistically since the estimates are robust with a small number of samples (reduced complexity) and also numerically as it can be computed in closed form.
In this work we augment the optimization objective function for the policy with various \(\varphi\)-divergence-based as well as MMD-based1 terms which encourage current policies to follow a different state visitation and/or action choice distribution than previously computed policies. As such, utilizing these more entropic variants of PG enables us to obtain a completely distinct set of policies.
Footnote 1: MMD is the more popular IPM in machine learning applications, including, e.g., generative models (see [28, 29] and the references therein), due to the fact that it is applicable to a wide range of data types and distributions, is computationally tractable even for high-dimensional data, and is relatively robust to the curse of dimensionality [25].
Our main contributions are:
* formalization of \(\varphi\)-divergence-based as well as MMD-based regularization for personalized tasks in contextual bandit problems; and
* empirical demonstration of impact such regularization approaches have on RL.
### _Related work_
The goal of this paper is to understand the impact that policy regularization has on an agent's learning. It is often observed that policy gradient algorithms suffer from premature convergence to semi-deterministic, suboptimal policies. Avoiding this lack of diversity in actions is the motivation for adding entropy regularization to the REINFORCE algorithm [30], which is aptly named REINFORCE/MENT, with MENT standing for Maximization of ENTropy. Using entropy regularization has also been found to improve agent performance (e.g., [31, 32]). While typical entropy regularization uses the KL divergence between the policy and a uniform distribution over the actions, [33] uses the KL divergence between the policy and the so-called default policy to improve performance. Bregman divergence is used in [34] to more safely train on-policy agents with off-policy data.
The work [35] presents a diversity-driven approach for exploration, which can be easily combined with both off- and on-policy reinforcement learning algorithms. The authors show that by simply adding a distance-measure regularization to the loss function, the proposed methodology significantly enhances an agent's exploratory behavior. Similarly, the effort [36] presents an MMD-based approach for identifying a collection of near-optimal policies with significantly different distributions of trajectories.
Soft policy optimization was introduced in [22, 23]. These works show that the impact of entropy regularization goes beyond providing the agent with extra exploration; it also yields a more stable training process by avoiding a collapse onto a select set of actions. Similar to our work, empirical results also show the robustness of these approaches when compared with standard optimization algorithms.
## II Background
We consider a contextual bandit environment [37], with a continuous state (context) space \(\mathcal{S}\subset\mathbb{R}^{m}\), a discrete action space \(\mathcal{A}=\{1,2,\ldots,n\}\) consisting of \(n\) available actions, and a reward function \(r:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\). Using this conventional setting for recommendation and personalization tasks [38, 39], we define the reward \(\mathcal{J}\) of the policy \(\pi\), which is given by the expected return under the policy, i.e.,
\[\mathcal{J}(\pi)=\mathbb{E}\Big{[}r(s,a)\,\,\big{|}\,\,s\sim\mathcal{S},a\sim \pi(s)\Big{]}, \tag{2}\]
where \(\pi(s)\) denotes the action probability distribution as state \(s\). In this setting, traditional approaches for reinforcement learning aim to find a policy \(\pi\) that maximizes the reward function \(\mathcal{J}\). However, to promote a more entropic model, we augment this optimization functional with various \(\varphi\)-divergence-based as well as an MMD-based regularization function \(\mathcal{R}\). In other words, without loss of generality, our goal is to find a policy \(\pi\) that solves the following regularized optimization problem, namely:
\[\max_{\theta\in\mathbb{R}^{d}}\,\,\,\mathcal{J}(\pi_{\theta})+\lambda\, \mathcal{R}(\pi_{\theta}), \tag{3}\]
where \(\theta=(\theta_{1},\ldots,\theta_{d})\in\mathbb{R}^{d}\) is a \(d\)-dimensional parameter that represents, e.g., the weights of a neural network, and \(\lambda\in\mathbb{R}\) is a regularization (penalty) parameter. In what follows, we detail the construction of the regularized problem (3), whose solutions will be sought via the policy gradient technique.
### _Relative entropy_
The distribution of the agent's policy \(\pi\) is often critical in practical applications as it directly translates to the actions the agent is taking throughout the training process. A conventional way to quantify the policy distribution is by computing its entropy \(H(\pi)\) given by
\[H(\pi)=\mathbb{E}\left[-\sum_{a\in\mathcal{A}}\pi(a|s)\log\pi(a|s)\ \bigm{|}\ s\sim\mathcal{S}\right]. \tag{4}\]
Entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. In RL, entropy indicates how distributed the policy is, with more localized policies having lower entropy, which is known to lead to undesirable results, discussed in, e.g., [40].
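A minimal sketch of the Monte Carlo estimate of Equation (4) over a batch of sampled states is given below; the array layout is an assumption.

```python
import numpy as np

def policy_entropy(action_probs, eps=1e-12):
    """Monte Carlo estimate of Equation (4): the mean entropy of the action
    distribution over a batch of sampled states.

    action_probs: array of shape (batch_size, n_actions) of policy probabilities.
    """
    p = np.asarray(action_probs, dtype=float)
    return float(np.mean(-np.sum(p * np.log(p + eps), axis=1)))
```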
### _Policy gradient methods_
Policy gradient (PG) makes use of gradients to iteratively optimize a policy \(\pi_{\theta}(s,a)\), parameterized by \(\theta\in\mathbb{R}^{d}\). In order to maximize \(\mathcal{J}\), we apply the Policy Gradient Theorem (see 13.2 of [41] for example), which shows
\[\begin{split}\nabla_{\theta}\mathcal{J}(\pi_{\theta})&=\sum_{a\in\mathcal{A}}r(s,a)\pi_{\theta}(a|s)\nabla\ln\pi_{\theta}(a|s)\\ &=\mathbb{E}_{a\sim\pi_{\theta}(s)}\big{(}r(s,a)\nabla\ln\pi_{\theta}(a|s)\big{)}.\end{split} \tag{5}\]
We use a Monte Carlo approximation of this expectation in order to estimate the gradient, denoted \(\nabla_{\theta}\mathcal{J}_{\text{PG}}\) and given by (10).
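As an illustration, a minimal TensorFlow sketch of such a surrogate loss is given below; the policy network, batch layout and the numerical stabilizer are assumptions for illustration.

```python
import tensorflow as tf

def pg_loss(policy_net, states, actions, rewards):
    """Surrogate loss whose gradient matches the Monte Carlo estimate of
    Equation (5): the negative of r(s, a) * log pi_theta(a | s), averaged
    over a sampled batch."""
    probs = policy_net(states)  # shape (batch, n_actions)
    actions = tf.cast(actions, tf.int32)
    idx = tf.stack([tf.range(tf.shape(actions)[0]), actions], axis=1)
    log_p = tf.math.log(tf.gather_nd(probs, idx) + 1e-12)
    return -tf.reduce_mean(rewards * log_p)
```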
## III Diversity promoting policy regularization
In this section we develop all the necessary machinery to improve the existing policy gradient method by including an additional diversity-promoting term, thus, resulting in more entropic approaches.
### \(\varphi\)_-divergence regularization_
The traditional approach for combating low-entropy models is to augment the standard objective with an entropy (penalty) term, such that the optimal policy additionally aims to maximize its entropy at each visited state [23]. This is achieved by adding to the loss function a weighted term that measures the diversity of the model's prediction, thereby encouraging a more entropic model. One way to accomplish this is to add any of the \(\varphi\)-divergences in Table I, calculated between the policy \(\pi_{\theta}\) and the uniform distribution. Table II provides simplified definitions of the \(\varphi\)-divergences in the case of a finite action space, where we use
\[u\sim\text{Unif}(1,2,\ldots,n) \tag{6}\]
as the uniform distribution here but can be replaced by any other distribution of the actions. Therefore, using (3) and given a regularization constant \(\lambda\in\mathbb{R}\), our goal is to find a policy \(\pi\) that maximizes a new objective function, namely:
\[\max_{\theta\in\mathbb{R}^{d}}\ \mathcal{J}(\pi_{\theta})+\lambda\,D_{\varphi}( \pi_{\theta}\,||\,u). \tag{7}\]
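Since Tables I and II are not reproduced here, the sketch below uses the KL divergence to the uniform distribution as one concrete instance of \(D_{\varphi}(\pi_{\theta}\,||\,u)\); the sign and magnitude of the regularization coefficient \(\lambda\) are left to the experiment.

```python
import tensorflow as tf

def kl_to_uniform(probs):
    """KL(pi_theta(s) || u) averaged over a batch of states, where u is the
    uniform distribution over the n available actions; for a uniform
    reference this equals log(n) minus the policy entropy."""
    n = tf.cast(tf.shape(probs)[-1], probs.dtype)
    kl = tf.reduce_sum(probs * tf.math.log(probs * n + 1e-12), axis=-1)
    return tf.reduce_mean(kl)

# The regularized loss of Eq. (7) can then be formed as, for example,
# pg_loss(policy_net, states, actions, rewards) + lam * kl_to_uniform(probs),
# with the coefficient chosen per run.
```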
### _MMD regularization_
In addition to \(\varphi\)-divergence regularization, defined in Table II, we also propose to exploit a family of discrepancies known as Maximum Mean Discrepancy (MMD). Given a RKHS \(\mathcal{H}\) with kernel \(k\), MMD between two probability measures \(\alpha\) and \(\beta\) is given by
\[\begin{split}\text{MMD}_{k}^{2}(\alpha,\beta)&:=\left(\sup_{\{f:\,\|f\|_{\mathcal{H}}\leqslant 1\}}\big{|}\mathbb{E}_{\alpha}f(x)-\mathbb{E}_{\beta}f(y)\big{|}\right)^{2}\\ &=\mathbb{E}_{\alpha\otimes\alpha}k(x,x^{\prime})+\mathbb{E}_{\beta\otimes\beta}k(y,y^{\prime})-2\,\mathbb{E}_{\alpha\otimes\beta}k(x,y).\end{split} \tag{8}\]
This family of discrepancies presents the advantage of being efficiently estimated from samples of the measures -- both statistically, since the estimates are robust with a small number of samples (reduced complexity), and also numerically, as (8) can be computed in closed form. Therefore, using (3) and given a regularization constant \(\lambda\in\mathbb{R}\), our goal is to find a policy \(\pi\) that maximizes a new objective function, namely:
\[\max_{\theta\in\mathbb{R}^{d}}\ \mathcal{J}(\pi_{\theta})+\lambda\,\text{MMD} _{k}^{2}(\pi_{\theta},u). \tag{9}\]
There are many choices for \(k\) (or equivalently \(\mathcal{H}\)). For our examples in this paper, we use the Gaussian kernel \(k(x,y)=\exp(-\|x-y\|^{2})\), where \(\|x-y\|=\mathbb{1}\,(x\neq y)\) for \(x,y\in\mathcal{A}\). We do this because each of the examples focuses on correct labeling, and the arithmetic difference between two labels is not meaningful.
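For a finite action space, (8) reduces to a quadratic form in the action probabilities, as in the following NumPy sketch; the four-action example at the end is purely illustrative.

```python
import numpy as np

def mmd2_discrete(pi, u, kernel):
    """Closed-form MMD^2 (Eq. (8)) between two distributions over a finite
    action set, given a kernel matrix kernel[a, a']."""
    pi, u = np.asarray(pi), np.asarray(u)
    return pi @ kernel @ pi + u @ kernel @ u - 2.0 * pi @ kernel @ u

n = 4
dist = 1.0 - np.eye(n)                # discrete metric: distance 1 between distinct actions
K = np.exp(-dist ** 2)                # Gaussian kernel on that metric
uniform = np.full(n, 1.0 / n)
peaked = np.array([0.85, 0.05, 0.05, 0.05])
print(mmd2_discrete(peaked, uniform, K))  # positive: the peaked policy is far from uniform
```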
### _Diversity-promoting policy gradients_
We will use \(\theta\in\mathbb{R}^{d}\) to denote the parameters of a neural network that takes as input \(s\in\mathcal{S}\) and outputs a probability distribution over \(\mathcal{A}\), with the policy output mapping \(\mathcal{Z}:\mathcal{S}\to\mathbb{R}^{n}\).
For model parameters \(\theta\), the action selection distribution at a particular state, \(s\in\mathcal{S}\), is denoted by \(\pi_{\theta}(s)\).
The standard gradient loss estimate for policy gradient is given by
\[\nabla\mathcal{J}_{\text{PG}}(\pi_{\theta}(s))=r(s,a)\nabla\pi_{\theta}(a|s). \tag{10}\]
The gradients of each of the \(\varphi\)-divergences can be found in Table II. Lastly, the gradient of MMD in the contextual bandit setting is
\[\begin{split}\nabla_{\theta}\mathsf{MMD}_{k}^{2}(\pi_{\theta}(s),u)&=2\,\mathbb{E}_{\pi_{\theta}\otimes\pi_{\theta}\otimes u}\Big{(}\big{(}k(a,a^{\prime})-k(a,a^{*})\big{)}\nabla_{\theta}\ln\pi_{\theta}(a|s)\Big{)}\\ &=\sum_{a\in\mathcal{A}}c_{\theta,s,a}(a^{\prime},a^{*})\nabla_{\theta}\pi_{\theta}(a|s),\end{split} \tag{11}\]
where
\[c_{\theta,s,a}(a^{\prime},a^{*})=\sum_{a^{\prime},a^{*}}\big{(}k(a,a^{\prime} )-k(a,a^{*})\big{)}u(a^{*})\pi_{\theta}(a^{\prime}|s).\]
The gradient update from \(\nabla_{\theta}\mathcal{J}_{\text{PG}}\) only depends on the gradient based on the action that was selected. Three of the \(\varphi\)-divergences, KL, Jensen-Shannon, and Hellinger, as well as MMD have gradients that are weighted sums of the gradients over all of the actions, not just the selected action. On the other hand, for Total-Variation the gradient only depends on the action whose likelihood is furthest away from the policy \(u\), given by (6).
When \(\pi_{\theta}\) is found using the softmax function, we can further expand all of the above gradients by
\[\nabla\pi_{\theta}(a|s)=\pi_{\theta}(a|s)\big{[}\mathbbm{1}\left(a=a^{\prime}\right)-\pi_{\theta}(a^{\prime}|s)\big{]}_{a^{\prime}=1}^{n}\times\nabla\mathcal{Z}(s).\]
Using the gradient information given by (10), (11), and Table II, optimal solutions to the \(\varphi\)-divergence-based diversity-promoting objective (7), as well as to the MMD-based diversity-promoting objective (9), can then be computed with standard gradient-based methods.
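Putting the pieces together, a minimal TensorFlow sketch of one regularized update step is given below. The network width, batch handling and Adam optimizer follow the experimental setup of Section IV, while the entropy regularizer stands in for any of the divergences above and the coefficient value is an assumption.

```python
import tensorflow as tf

n_actions = 10  # e.g. the ten MNIST or CIFAR10 labels

policy_net = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(n_actions, activation="softmax"),
])
optimizer = tf.keras.optimizers.Adam()

@tf.function
def train_step(states, actions, rewards, reg_coef=0.1):
    """One regularized policy-gradient update as in Eq. (3)."""
    with tf.GradientTape() as tape:
        probs = policy_net(states)
        idx = tf.stack(
            [tf.range(tf.shape(actions)[0]), tf.cast(actions, tf.int32)], axis=1)
        log_p = tf.math.log(tf.gather_nd(probs, idx) + 1e-12)
        pg = -tf.reduce_mean(rewards * log_p)
        entropy = -tf.reduce_mean(
            tf.reduce_sum(probs * tf.math.log(probs + 1e-12), axis=-1))
        # the entropy bonus can be swapped for any of the regularizers above
        loss = pg - reg_coef * entropy
    grads = tape.gradient(loss, policy_net.trainable_variables)
    optimizer.apply_gradients(zip(grads, policy_net.trainable_variables))
    return loss
```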
## IV Numerical examples
In this section we conduct numerical experiments comparing performance of the RL agents with policy regularization methods described in Section III. Specifically, we consider the following agents:
1. pg: the default policy gradient agent without any regularization to act as a baseline;
2. pg_ent: pg-agent with entropy regularization;
3. pg_mmd: pg-agent with MMD regularization;
4. pg_js: pg-agent with Jensen-Shannon regularization;
5. pg_hl: pg-agent with Hellinger regularization; and
6. pg_tv: pg-agent with total variation regularization.
We chose these algorithms so that we could easily identify the impact that the regularizers have in the absence of additional constraints imposed by other algorithms such as TRPO [42] or PPO [43].
The agents and regularizer losses are manually implemented in TensorFlow and the network training is performed with Adam optimizer with the default hyperparameters. For all of our algorithm configurations, we use a batch size of 100. An agent policy is parameterized by a 2-layer feed-forward neural network with 32 nodes on each layer. For each regularized agent we perform a hyperparameter search to determine the appropriate value of the regularization coefficient.
We deploy the agents on various personalization tasks that are given by contextual bandit environments. For each agent and environment we report the following metrics, computed over the test set: the agent reward, the policy entropy, and the action selection histogram. For the simplicity of presentation, the histograms are sorted to emphasize the agent's action distribution over the test set.
The presented examples are performed using Python3.8 with Tensorflow 2.12 on personal laptops. The source code reproducing the given experiments is available at [https://github.com/acstames/wain23-policy-regularization](https://github.com/acstames/wain23-policy-regularization).
### _MNIST Environment_
We use MNIST dataset2 to create a contextual bandit environment, as done in [44, 45, 46, 47]. In this formulation the images act as observations and the labels act as the actions that agents can take. The reward for selecting the correct label is 1 and \(-\nicefrac{{1}}{{9}}\) for an incorrect classification. Defining the reward function this way means that the expected return for the uniformly random policy is \(0\) and for the optimal policy is \(1\). The agent reward, policy entropy, and action selection histograms for the various regularizers on MNIST environment are shown in Figure 1.
Footnote 2: [http://yann.lecun.com/exdb/mnist/](http://yann.lecun.com/exdb/mnist/)
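A minimal sketch of this environment is given below; the class interface (sample/reward methods) is an assumption for illustration rather than the exact environment used in the experiments.

```python
import numpy as np
import tensorflow as tf

class MNISTBanditEnv:
    """Illustrative contextual bandit over MNIST: an image is the context,
    the predicted label is the action, and the reward is 1 for the correct
    label and -1/9 otherwise."""

    def __init__(self):
        (x, y), _ = tf.keras.datasets.mnist.load_data()
        self.x = x.reshape(len(x), -1).astype(np.float32) / 255.0
        self.y = y.astype(np.int64)

    def sample(self, batch_size=100):
        idx = np.random.randint(len(self.x), size=batch_size)
        return idx, self.x[idx]

    def reward(self, idx, actions):
        return np.where(self.y[idx] == np.asarray(actions), 1.0, -1.0 / 9.0)
```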
We note that all regularized agents solve the environment and demonstrate a comparable performance while outperforming the baseline agent. From the action selection histogram we observe that the unregularized policy gradient agent only selects \(7\) out of \(10\) actions, which results in the agent reward value plateauing at about \(0.6\). In contrast, all of the regularized agents maintain diverse action selection throughout the training process and achieve reward values close to \(1\), which indicates fully learning the environment.
### _CIFAR10 Environment_
As in the previous example, we use CIFAR10 dataset3 to create a contextual bandit environment. The agent reward, policy entropy, and action selection histograms for the various regularizers on CIFAR10 environment are shown in Figure 2.
Footnote 3: [https://www.cs.toronto.edu/~kriz/cifar.html](https://www.cs.toronto.edu/~kriz/cifar.html)
Unlike the previous example, the agents fail to fully solve the environment in this case due to the increased complexity of the CIFAR10 dataset. In fact, even the most successful agents achieve reward values of only about \(0.3\), which roughly equates to a \(35\%\) classification accuracy on the CIFAR10 dataset. Such poor performance is due to the constrained network architecture and the contextual bandit formulation of the problem. Nonetheless, the advantage of regularized agents is evident, both from the policy reward and entropy perspectives. In particular, we observe that most of the regularized agents are able to maintain diverse (albeit unbalanced) action selection, while the baseline agent only selects \(5\) out of \(10\) available actions.
### _Spotify Environment_
In this experiment we set up the synthetic music recommendation system proposed in [47]. We use the Spotify Web API4 to construct a contextual bandit environment that replicates the task of recommending tracks to a user. In this setting the observations (users) are synthetically generated and are represented by their preferences for various musical genres, and the actions are given by the set of tracks the agent can recommend. The reward for recommending a track to a user is either \(1,-1\), or \(0\), indicating that the user liked, disliked, or did not provide feedback, respectively. See [47] for a more detailed explanation of the environment. The agent reward, policy entropy, and action selection histograms for the various regularizers on the Spotify environment are shown in Figure 3.
Footnote 4: [https://developer.spotify.com/documentation/web-api/](https://developer.spotify.com/documentation/web-api/)
We note that while all agents achieve satisfactory performance in terms of reward, the action selection of the baseline agent is constrained to only \(3\) tracks (out of \(50\) possible), which is neither practical nor acceptable in real-world applications. All regularized agents provide a much more diverse action selection, while also achieving higher reward values.
A particular point of interest in this environment is that there are infinitely many policies that achieve near-optimal performance. As an example, even the unregularized policy gradient agent almost learned the environment while only ever taking about \(6\%\) of the available actions, with one action being taken about \(40\%\) of the time. In comparison, the action selection of the regularized agents is much more diverse, with fewer "favorite" actions. Most notably, the MMD-regularized agent actively takes about \(40\%\) of the actions, with the most frequent one being selected only about \(8\%\) of the time.
## V Concluding remarks
In this effort, we consider the impact of regularization on the diversity of actions taken by policies generated from policy gradient RL agents. In the context of personalized RL there are several additional advantages that extend from this work. First, the \(\varphi\)-divergence and MMD-based regularization encourages
Fig. 1: Results of image classification experiment on MNIST environment.
exploration and helps prevent early convergence to suboptimal policies. Second, the resulting policies can serve as a good (macro) initialization for more specific (micro) behavior. Finally, the resulting policies are more robust in the face of adversarial perturbations or noise, as evidenced by our various numerical examples.
However, there is much more extensive testing to be done, and a supporting theory needs to be developed, before any victories can be declared. As mentioned throughout, there has been an extensive amount of research by the RL community on using KL-type entropy regularization, but more advanced discrepancies, such as the MMD-based approach we presented here, are still in their infancy. In addition, there is a vast amount of research on optimal transport theory which, in connection with entropy-type penalization, we also plan to investigate. These methods pose some computational challenges but have the ability to lift a ground metric from the data space to the set of probability measures on this space and, therefore, take into account the underlying geometry of the data [48, 49, 26]. To our knowledge, this area of research has yet to be explored by the machine learning community.
|
2301.13207 | A quantum trajectory analysis of singular wave functions | The Schr\"{o}dinger equation admits smooth and finite solutions that
spontaneously evolve into a singularity, even for a free particle. This blowup
is generally ascribed to the intrinsic dispersive character of the associated
time evolution. We resort to the notion of quantum trajectories to reinterpret
this singular behavior. We show that the blowup can be directly related to
local phase variations, which generate an underlying velocity field responsible
for driving the quantum flux toward the singular region. | A. S. Sanz, L. L. Sanchez-Soto, A. Aiello | 2023-01-30T19:00:01Z | http://arxiv.org/abs/2301.13207v1 | # A quantum trajectory analysis of singular wave functions
###### Abstract
The Schrodinger equation admits smooth and finite solutions that spontaneously evolve into a singularity, even for a free particle. This blowup is generally ascribed to the intrinsic dispersive character of the associated time evolution. We resort to the notion of quantum trajectories to reinterpret this singular behavior. We show that the blowup can be directly related to local phase variations, which generate an underlying velocity field responsible for driving the quantum flux toward the singular region.
## I Introduction
The Schrodinger equation is, perhaps, the prototype of a dispersive equation; that is, if no boundary conditions are imposed, its wave solutions spread out in space as they evolve in time [1]. A frequent way to quantify this dispersion is by the so-called dispersive estimates, a topic with a long history [2; 3; 4] and whose main goal is to establish tight bounds on the decay of the solutions.
Recently, it has been pointed out that the Schrodinger equation, even for a free particle, presents dispersive singularities [5; 6]: an initial square-integrable profile \(\psi(x,0)\) could result in a solution \(\psi(x,t)\) that blows up in a finite time. In the remainder such profiles will be termed _singular wave packets_. While this singular behavior (sometimes denoted as self-focusing or wave collapse) is well understood in the presence of nonlinearities [7; 8; 9], it is, at first sight, surprising in a purely linear evolution.
From a mathematical viewpoint, this dispersive blowup can be related to the fact that the linear Schrodinger equation is ill-posed in the space \(L^{\infty}\): the free propagator is not a Fourier multiplier in \(L^{\infty}\)[10]. In physical terms, dispersive blowup is a focusing phenomenon due to both the unbounded domain of the problem and the propensity of the dispersion relation to propagate energy at different speeds. Interestingly, the same singular behavior has been described for paraxial beams [11; 12; 13], which is consistent with the complete equivalence between the time-dependent Schrodinger equation and the paraxial wave equation [14].
In this paper, we address the physical interpretation of these singularities from the perspective of quantum trajectories. In this picture, quantum formalism is reinterpreted as describing particles following definite trajectories, each with a precisely defined position at each instant in time. However, in this approach, called Bohmian mechanics [15; 16; 17], the trajectories of the particles are quite different from those of classical particles because they are guided by the wave function [18; 19; 20; 21]. Our analysis shows that the blowup can be directly related to local phase variations, which generate an underlying velocity field (the phase gradient) responsible for driving the quantum flux toward the singular region. To shed light on this point, we compare the blowup with the focusing of a Gaussian and a rectangular wave packet: this demonstrates that imploding solutions are distinguished by an initial phase factor.
Furthermore, for Gaussian wave packets, which can be nicely analyzed in closed form, it is also observed that there are two types of solutions with very different properties, despite their initial density distributions being identical. One such solution leads to a classical type of propagation because the phase factor plays a minor role (or even no role at all). In contradistinction, the other type of solution is characterized by wide initial wave functions with an intrinsic highly oscillatory behavior. This emphasizes the prominent role of the phase as an active agent in the subsequent dynamics.
This article is organized as follows. In Sec. II we briefly discuss the spontaneous generation of a singularity in the Schrodinger equation and introduce the basic elements needed to define a quantum trajectory. In terms of this notion, we analyze the singularity and put forward the fundamental role played by the quantum phase to understand that phenomenon. In Sec. III we examine the behavior of a Gaussian and a rectangular packet and compare with the previous singular wave. Finally, Sec. IV summarizes our conclusions.
## II Dispersive blowup in the Schrodinger equation
### Spontaneous generation of a singularity
We first set the stage for our discussion. We will be considering the simplest case of the Schrodinger equation for a free particle of mass \(m\) in one dimension
\[i\hbar\frac{\partial\psi(x,t)}{\partial t}=-\frac{\hbar^{2}}{2m}\frac{\partial ^{2}\psi(x,t)}{\partial x^{2}}\,, \tag{1}\]
with the initial Cauchy problem \(\psi(x,0)\in L^{2}(\mathbb{R})\). The unique solution of (1) can be written in terms of the free-space propagator as [22]
\[\psi(x,t)=\sqrt{\frac{m}{2\pi i\hbar t}}\int_{\mathbb{R}}\exp\left[\frac{im}{ 2\hbar t}(x-x^{\prime})^{2}\right]\ \psi(x^{\prime},0)\ dx^{\prime}\,, \tag{2}\]
where the integral has to be understood in the improper Riemann sense. In this way, the Schrodinger equation appears as an integral equation, rather than a differential one, with the advantage of being valid even if the wave function is not a differentiable function.
Slightly generalizing results from Peres [5], we choose the initial data to be
\[\psi\left(x,0\right)=\frac{1}{\sqrt{\mathcal{N}_{\nu}}}\frac{\exp\left(-\frac{im}{2\hbar\tau}x^{2}\right)}{\left(1+\frac{x^{2}}{\sigma^{2}}\right)^{\nu}}\,, \tag{3}\]
where \(\mathcal{N}_{\nu}\) is a normalization constant, and \(\tau\) and \(\sigma\) are real numbers fixing the time scale and the width of the distribution, respectively. One can check that for \(\nu>1/4\), this function is in the space \(L^{2}(\mathbb{R})\), and so it is a physically admissible solution. When this holds true, the normalization constant is finite and equal to \(\mathcal{N}_{\nu}=\sqrt{\pi}\sigma\Gamma(2\nu-1/2)/\Gamma(2\nu)\).
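As a quick numerical sanity check of this normalization constant (an illustrative script, assuming \(\nu=1/3\) and \(\sigma=1\)):

```python
# Verify N_nu = sqrt(pi) * sigma * Gamma(2*nu - 1/2) / Gamma(2*nu) by comparing
# it with the integral of |psi(x,0)|^2 * N_nu, i.e. of (1 + x^2/sigma^2)^(-2*nu).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

nu, sigma = 1/3, 1.0
closed_form = np.sqrt(np.pi) * sigma * gamma(2*nu - 0.5) / gamma(2*nu)
numeric, _ = quad(lambda x: (1 + x**2 / sigma**2) ** (-2*nu), -np.inf, np.inf)
print(closed_form, numeric)  # the two values agree to several digits
```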
For \(t\neq\tau\), we can apply the Riemann-Lebesgue lemma [23] to show that the resulting \(\psi\left(x,t\right)\) is continuous in \(x\) and \(t\) and tends to zero as \(|x|\to\infty\) (although not necessarily uniformly with respect to \(t\)). However, at \(t=\tau\) a discontinuity occurs: at this time the wave function reads
\[\psi\left(x,\tau\right)=\sqrt{\frac{m}{2\pi i\hbar\tau\mathcal{N}_{\nu}}}\,e^{\frac{im}{2\hbar\tau}x^{2}}\int_{\mathbb{R}}\frac{e^{-i\frac{m}{\hbar\tau}xx^{\prime}}}{\left(1+x^{\prime 2}/\sigma^{2}\right)^{\nu}}dx^{\prime}\,. \tag{4}\]
This integral is the Fourier transform of a Bessel potential [24] and can thus be expressed as
\[\psi\left(x,\tau\right)=\sqrt{\frac{m\sigma^{2}}{i\hbar\tau\mathcal{N}_{\nu}}}\frac{e^{\frac{im}{2\hbar\tau}x^{2}}}{2^{\nu-1}\Gamma(\nu)}\left(\frac{m\sigma}{\hbar\tau}|x|\right)^{\nu-\frac{1}{2}}K_{\nu-\frac{1}{2}}\left(\frac{m\sigma}{\hbar\tau}|x|\right)\,, \tag{5}\]
which is valid for \(\nu>0\). Here, \(K_{\nu}\) denotes the modified Bessel function of order \(\nu\)[25], which is infinite at the origin but is nevertheless square integrable. The function \(\psi\left(x,\tau\right)\) is thus continuous, except perhaps at \(x=0\). To check the behavior around that point, we use the approximation of \(K_{\nu}\) for small values of the argument. This leads to
\[|z|^{\nu-\frac{1}{2}}K_{\nu-\frac{1}{2}}\left(|z|\right)\approx\frac{\Gamma \left(\nu-\frac{1}{2}\right)}{2^{\frac{3}{2}-\nu}}+\frac{1}{|z|^{1-2\nu}} \frac{\Gamma\left(\frac{1}{2}-\nu\right)}{2^{\nu-\frac{1}{2}}}+O\left(|z|^{2 \nu+1}\right), \tag{6}\]
which shows that the singularity in \(\psi\left(x,\tau\right)\) thus arises for \(\nu<1/2\). In summary, when
\[\frac{1}{4}<\nu<\frac{1}{2} \tag{7}\]
we get the aforementioned singularity.
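The divergence at the origin can be checked directly from the small-argument behavior of the Bessel function; the short illustrative script below (dimensionless argument \(z=m\sigma|x|/\hbar\tau\)) evaluates the profile entering Eq. (5):

```python
# Illustrative check of Eq. (5): near x = 0 the profile
# f(z) = z^(nu - 1/2) * K_{nu - 1/2}(z) diverges when 1/4 < nu < 1/2
# and stays finite when nu > 1/2.
import numpy as np
from scipy.special import kv

z = np.logspace(-6, 0, 7)
for nu in (1/3, 3/4):
    f = z ** (nu - 0.5) * kv(nu - 0.5, z)
    print(f"nu = {nu}:", f[:3])  # grows without bound only for nu = 1/3
```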
A similar analysis can be performed with the moments of the associated probability density \(|\psi\left(x,t\right)|^{2}\)[11]. The first moment \(\langle x\rangle\) is finite and equal to zero when \(\nu>1/2\), whereas the second moment \(\langle x^{2}\rangle\) exists provided that \(\nu>3/4\). All this relevant information is concisely summarized in Fig. 1.
### Quantum trajectories at the singularity
To explore the physical meaning of the singularity and, more particularly, its dynamical emergence, we resort to the concept of quantum trajectory. Apart from providing us with information on the probability density distribution, the wave function \(\psi(x,t)\) also contains dynamical information relevant to understanding its time evolution. The Bohmian picture stresses this latter aspect, which manifests as quantum trajectories that comply with the evolution of the quantum flux [21]. To this end, one first decomposes \(\psi(x,t)\) as \(\psi(x,t)=\sqrt{\varrho(x,t)}\exp[iS(x,t)/\hbar]\), which allows us to split up the density information from the phase information encoded in the wave function. Quantum trajectories are directly related to the local variations undergone by the phase term, \(S(x,t)\), according to the so-called Bohmian guiding condition (or local velocity field) [26],
\[\dot{x}=\frac{J(x,t)}{\varrho(x,t)}=\frac{1}{m}\,\text{Re}\left(\frac{\hat{p}\psi}{\psi}\right)=\frac{1}{m}\,\frac{\partial S(x,t)}{\partial x}, \tag{8}\]
with \(\hat{p}=-i\hbar\partial/\partial x\) being the usual momentum operator in the position representation and \(J(x,t)\) the probability current density or quantum flux [27]. We stress that Eq. (8) constitutes a general result that goes beyond any particular interpretation, as it involves quantities that are well defined in any picture of quantum mechanics.
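As an illustration of how Eq. (8) can be evaluated in practice (not part of the original work; units with \(\hbar=m=1\) are assumed), the local velocity field can be computed from a sampled wave function as \(v=\mathrm{Im}(\psi'/\psi)\):

```python
# Illustrative sketch of Eq. (8): the Bohmian velocity field on a grid,
# v = (hbar/m) * Im(psi'/psi), i.e. (1/m) Re(p_hat psi / psi), with hbar = m = 1.
import numpy as np

def bohmian_velocity(psi, dx):
    dpsi = np.gradient(psi, dx)      # finite-difference derivative of psi
    return np.imag(dpsi / psi)       # hbar = m = 1

# Example: a plane wave exp(i k x) has uniform velocity k.
x = np.linspace(-10, 10, 2001)
psi = np.exp(1j * 2.0 * x)
print(bohmian_velocity(psi, x[1] - x[0])[1000])  # ~ 2.0
```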
More importantly, Eq. (8) explicitly shows the important role played by the phase, not as an indirect effect (e.g., in the appearance of interference features), but as a fundamental quantity that specifies the local dynamics exhibited by the quantum system at each point of configuration space at each time. This action emerges in the form of the local velocity field that governs the dynamical evolution of the probability density at any time, making it move from one region to another, spread out all over the place, or, as is the case here, coalesce in a highly localized region at a very precise time.
After all, note that the above local velocity field is what allows us to establish the connection between the probability density, \(\varrho(x,t)\), and the quantum flux, \(J(x,t)\), according to the well-known transport relation \(J(x,t)=v(x,t)\varrho(x,t)\). Quantum trajectories simply arise after assuming that \(v(x,t)\) defines an equation of motion that can be integrated in time, rendering such trajectories as a result. Physically, these trajectories describe the flow of probability at a more local level than the
Figure 1: For \(\nu>1/4\) the wave function (3) is square integrable. The red band indicates the range \(1/4<\nu<1/2\) where the corresponding \(\psi(x,t)\) exhibits a singularity at time \(t=\tau\). For \(\nu>1/2\), the corresponding \(\psi(x,t)\) is finite everywhere and the first moment \(\langle x\rangle\) of the associated probability density \(|\psi(x,t)|^{2}\) exists and it is equal to \(0\). Finally, the second moment \(\langle x^{2}\rangle\) is finite for \(\nu>3/4\). The case \(\nu=1\) corresponds to the Lorentzian function.
probability density itself does (to some extent, we can say that this latter quantity provides us with a global view of what is going on). A more detailed discussion on the issue can be found in Ref. [28].
For definiteness, we take the initial state (3), with \(\nu=1/3\), to ensure a singular wave packet. However, to produce a numerically reliable (and physically more realistic) wave function, instead of the initial ansatz (3), we consider the following modified one
\[\psi(x,0)=\frac{1}{\sqrt{\mathcal{N}_{r}}}\,\frac{\exp\left(-\frac{im}{2\hbar\tau}x^{2}\right)}{\left(1+\frac{x^{2}}{\sigma^{2}}\right)^{1/3}}\left[1+\tanh\left(\frac{x+x_{b}}{\sigma}\right)\right]\left[1-\tanh\left(\frac{x-x_{b}}{\sigma}\right)\right]\,, \tag{9}\]
where \(x_{b}>0\). The two smooth step functions represented by the hyperbolic tangents produce a relatively soft decay or cutoff at a distance \(x_{b}/\sigma\) from the origin, which mimics the effect of a limited aperture with soft boundaries, avoiding the appearance of spurious frequencies associated with a sudden cutoff or Gibbs phenomenon [29]. Because of the cutoff introduced, it is expected that there will not be time symmetry with respect to \(t=\tau\), although the time evolution of (9) will remain close to the exact solution.
We next perform a numerical integration of the evolution (2) using a standard pseudospectral method on a spatial mesh of size \(50\sigma\) with a total of 1,024 grid points, integrating in time from \(t=0\) to \(t=2\tau\) with a time step \(\delta t=10^{-3}\), which suffices for our purposes. The numerical solution \(\psi(x,t)\) is monitored through both density plots of the corresponding probability density and the associated quantum trajectories. A density plot of the probability density is shown in Fig. 2a), with a set of 51 trajectories (white solid lines) with equidistant initial conditions between \(x/\sigma=-15\) and \(x/\sigma=15\) to cover a wide region of the initial probability density. We have chosen \(x_{b}/\sigma=22.5\).
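The following minimal sketch reproduces the spirit of this computation; it is not the authors' code, and it assumes units with \(\hbar=m=\tau=\sigma=1\), an exact Fourier-space free propagator, and simple Euler steps for the trajectories.

```python
# Illustrative pseudospectral evolution of the truncated initial state (9)
# together with Euler-stepped Bohmian trajectories (hbar = m = tau = sigma = 1).
import numpy as np

L, n, dt, tmax = 50.0, 1024, 1e-3, 2.0
x = np.linspace(-L/2, L/2, n, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(n, d=dx)

xb = 22.5
psi = np.exp(-0.5j*x**2) / (1 + x**2)**(1/3) \
      * (1 + np.tanh(x + xb)) * (1 - np.tanh(x - xb))
psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx)          # normalize on the grid

traj = np.linspace(-15.0, 15.0, 51)                # 51 equidistant starting points
step = np.exp(-0.5j*k**2*dt)                       # exact free propagator over dt

for _ in range(int(tmax/dt)):
    v = np.imag(np.gradient(psi, dx)/(psi + 1e-30))   # Bohmian velocity field
    traj = traj + dt*np.interp(traj, x, v)            # Euler step for trajectories
    psi = np.fft.ifft(step*np.fft.fft(psi))           # advance the wave function
```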
As can be noticed, as time approaches the critical value \(\tau\), the swarm of trajectories quickly evolves towards the origin, which translates into a prominent increase of the density within a very narrow spatial region, thus giving rise to the singularity.
This behavior can be better appreciated in the zoomed version around the singular region displayed in Fig. 2a'). In the same manner, as time proceeds and becomes larger than \(\tau\), the swarm of trajectories disperses quickly again. It is worth noting that, while the quantum flux is quite laminar before and after the singularity, as indicated by the relative smoothness of the trajectories (they evolve with nearly uniform motion), in the region around the singularity there is a turbulent flow led by the appearance of transient nodes. In their attempt to avoid these nodes (nodal regions), the trajectories are forced to undergo a whirling motion.
## III Singular versus smooth wave packet evolution
To better understand the singularity, we will next examine a few characteristics of simpler but illustrative cases of smoothly focusing wave packets.
### Gaussian wave packet
As is well known, the evolution of a Gaussian wave packet undergoes an initial boost or acceleration, and then reaches a stationary linear expansion [30]. Consider the initial normalized Gaussian _ansatz_
\[\psi(x,0)=\frac{1}{\sqrt{\mathcal{N}_{G}}}\,\exp\left(-\frac{x^{2}}{4\sigma_{ 0}^{2}}\right)\,, \tag{10}\]
where \(\sigma_{0}>0\) is a real-valued parameter determining the width of the wave packet and the normalization constant is \(\mathcal{N}_{G}=\sqrt{2\pi\sigma_{0}^{2}}\). Substituting this into the free-space propagator leads to its time-evolved form,
\[\psi(x,t)=\frac{1}{\sqrt{\mathcal{N}_{G}}}\sqrt{\frac{\sigma_{0}}{\hat{\sigma }(t)}}\exp\left[-\frac{x^{2}}{4\sigma_{0}\hat{\sigma}(t)}\right]\,, \tag{11}\]
where the Gaussian complex-valued parameter
\[\hat{\sigma}(t)=\sigma_{0}\left(1+\frac{i\hbar t}{2m\sigma_{0}^{2}}\right) \tag{12}\]
accounts for both the spreading in time of the wave packet, given by
\[\sigma(t)=|\hat{\sigma}(t)|=\sigma_{0}\sqrt{1+\left(\frac{\hbar t}{2m \sigma_{0}^{2}}\right)^{2}}, \tag{13}\]
and the development of a space-dependent phase factor.
From the hydrodynamical point of view, the evolution of the above wave function maps onto the trajectories arising from the equation of motion
\[\dot{x}=\frac{\hbar^{2}t}{(2m\sigma_{0}^{2})^{2}}\,\frac{\sigma_{0}^{2}}{ \sigma(t)^{2}}\,x. \tag{14}\]
After integration, this equation of motion renders the hyperbolic trajectories
\[x(t)=\frac{\sigma(t)}{\sigma_{0}}\,x(0). \tag{15}\]
From Eq. (14), it is clear that, for \(t>0\), the trajectories are "repelled" from the region where they are initially confined, namely, the waist of the wave packet, since the sign of \(\dot{x}\) directly depends on the sign of \(x\) and hence on the corresponding initial conditions. Although the initial expansion is slow, later on, for \(t\gg t_{s}\), with \(t_{s}=2m\sigma_{0}^{2}/\hbar\) being a characteristic spreading time, it becomes essentially linear with time; for \(t\sim t_{s}\), the expansion is accelerated, although at different rates as time proceeds [19].
All this information is nicely conveyed by the trajectories (15), which separate at a rate proportional to their initial
distance, \(d(0)=|x_{2}(0)-x_{1}(0)|\), since \(d(t)/d(0)=\sigma(t)/\sigma_{0}\), where \(d(t)=|x_{2}(t)-x_{1}(t)|\). Taking into account (13), for the same \(d(0)\), the larger \(\sigma_{0}\), the slower the dispersion, and vice versa, in agreement with what is expected in this case.
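A short numerical illustration of this spreading law (assuming \(\hbar=m=1\); not from the original paper):

```python
# The width sigma(t) of Eq. (13) is nearly constant for t << t_s = 2*sigma0**2
# and grows essentially linearly for t >> t_s (units with hbar = m = 1).
import numpy as np

sigma0 = 0.5
t_s = 2 * sigma0**2
t = np.array([0.1, 1.0, 10.0]) * t_s
sigma_t = sigma0 * np.sqrt(1 + (t / t_s)**2)
print(t_s, sigma_t / sigma0)   # ~[1.005, 1.414, 10.05]
```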
So far there are no novelties. However, we stress that the above solution is reversible in time, which means that, in the same way that the wave packet undergoes an expansion, it can also be tracked backwards. If the wave packet is then propagated ahead again, it will first implode until reaching a minimum width (the waist width), and then expand again. Taking into account the translational time invariance of the solutions of the Schrodinger equation, if we call \(\tau\) the time when the waist occurs, we can define a generalized Gaussian coefficient as \(\tilde{\sigma}_{g}(t)=\hat{\sigma}(t-\tau)\). In this way the width and the phase of the wave packet at time \(t\) are given by
\[\begin{split}\sigma_{g}(t)&=\sigma_{0}\sqrt{1+ \left[\frac{\hbar(t-\tau)}{2m\sigma_{0}^{2}}\right]^{2}}\,,\\ \theta_{g}(t)&=\arctan\left[\frac{\hbar(t-\tau)}{ 2m\sigma_{0}^{2}}\right]\,.\end{split} \tag{16}\]
It is clear from these expressions that, at \(t=\tau\), we will observe a minimum waist, with \(\sigma_{g}(\tau)=\sigma_{0}\), and zero phase, \(\theta_{g}(\tau)=0\).
Now, contrary to the standard case, we note that there are two factors ruling the expansion dynamics: one associated with the initial width and another related to a phase, which play opposite roles. If \(\sigma_{0}\) is too large, the phase factor decreases very rapidly, while a small width leads to a prominent phase factor. This dependence is shown in Fig. 3, where the phase and modulus are represented separately for a better understanding. As can be seen, \(\sigma_{g}(0)\) has a minimum for \(\sigma_{0}=\sqrt{\tau/2}\), increasing linearly with \(\sigma_{0}\) for large widths and as \(1/\sigma_{0}\) when \(\sigma_{0}\) goes to zero. The associated phase approaches \(-\pi/2\) as \(\sigma_{0}\) decreases, while it tends to vanish rapidly as \(\sigma_{0}\) increases above the threshold for minimum \(\sigma_{g}(0)\).
From the above discussion, we may now consider the initial Gaussian ansatz as in (10), but replacing \(\sigma_{0}\) with \(\sigma_{g}(0)\). The associated time evolution can be directly obtained and leads to the trajectories
\[x(t)=\frac{\sigma_{g}(t)}{\sigma_{0}}\;x(0). \tag{17}\]
As before, these trajectories undergo an initial implosion, until \(t=\tau\), and then a subsequent expansion. The question is how important the effect is, particularly taking into account that two different values of \(\sigma_{0}\), as can readily be noticed from (16), can be associated with the same initial probability density. These two values will lead to very different dynamical behaviors. Thus, fixing the value of \(\sigma_{g}(0)\), from (16) we obtain the following two admissible values for the waist width
\[\sigma_{0,\pm}^{2}=\frac{1}{2}\left[\sigma_{g}^{2}(0)\pm\sqrt{\sigma_{g}^{4}(0)-\left(\frac{\hbar\tau}{m}\right)^{2}}\right]\,. \tag{18}\]
To quantify the above effect, we consider a Gaussian wave packet with the (initial) width of its probability density at a
Figure 2: (Top panels) Quantum trajectories (51) displayed on top of a density plot describing the time evolution of the probability associated with (a) the wave function (3) with \(\nu=1/3\), (b) the Gaussian (11) with waist width \(\sigma_{0,-}\), and (c) the rectangular wave packet (19) with width \(a\). For clarity in the density plot, due to the high values of the probability density around the singularity, it has been truncated to a tenth of its maximum value. (Bottom panels) Zoomed version of top panels around the focal region within the time interval where the maximum concentration of probability density is reached. The whirls in the trajectories denote the appearance and disappearance of nodes as the wave function approaches its maximum focusing.
tenth of the maximum value; i.e., \(\varrho_{G}(s_{\pm},0)/\varrho_{G}(0,0)=0.1\), equal to the corresponding value for the (modified) singular wave function (9). This yields an initial width for both wave packets given by \(\sigma_{g}^{2}(0)=(10\sqrt{10}-1)/(2\ln 10)\simeq 6.6512\), which gives the waist widths \(\sigma_{0,+}\simeq 2.571\) and \(\sigma_{0,-}\simeq 0.194\). When compared with the value of \(\sigma_{g}(0)\), we notice that \(\sigma_{0,+}\) is practically the same [\(\sim 99\%\) of \(\sigma_{g}(0)\)], which already indicates poor dynamics, whereas \(\sigma_{0,-}\) is significantly different [\(\sim 7.5\%\) of \(\sigma_{g}(0)\)] and hence a more relevant dynamical behavior is expected.
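These figures can be reproduced directly from (16) and (18); the short check below assumes \(\hbar=m=\tau=1\), which is how the quoted numbers appear to be normalized (an assumption on our part):

```python
# Numerical check of the quoted values: the common initial width sigma_g(0)
# and the two admissible waist widths sigma_{0,+-} (hbar = m = tau = 1 assumed).
import numpy as np

sigma_g2 = (10*np.sqrt(10) - 1) / (2*np.log(10))   # ~6.6512
disc = np.sqrt(sigma_g2**2 - 1.0)                  # (hbar*tau/m)^2 = 1
sigma_plus = np.sqrt((sigma_g2 + disc) / 2)        # ~2.571
sigma_minus = np.sqrt((sigma_g2 - disc) / 2)       # ~0.194
print(sigma_g2, sigma_plus, sigma_minus)
```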
The above expectations translate into the results displayed in Fig. 2b) for \(\sigma_{0,-}\). The characteristic time scale here is \(t_{s,-}\simeq 0.15\), about a tenth of \(\tau\) and hence with noticeable effects both in the implosion and, afterwards, in the subsequent dispersion. Note here that there is a more important phase contribution, since \(\theta_{g,-}(0)\simeq-0.38\pi\), a value closer to the maximum bound for the phase. Nonetheless, unlike the singular wave packet, here near the singular region the flux is not turbulent, which is consistent with the fact that the evolution of a Gaussian wave packet is characterized by the absence of nodes.
In Fig. 4 we plot the reverse case of a Gaussian wave packet for \(\sigma_{0,+}\). We can appreciate that the wave packet remains unaffected, with the flux described by the swarm of 51 Bohmian trajectories being nearly stationary. The characteristic spreading time scale is \(t_{s,+}\simeq 26.4\tau\), which implies that neither the evolution before \(\tau\) nor afterwards is significantly affected. Indeed, the initial phase is \(\theta_{g,+}(0)\simeq-0.061\pi\), which already indicates the rather small contribution of the phase factor to the dynamics.
In Fig. 5 we represent the probability densities associated with these initial Gaussian wave packets. Interestingly, these probability densities are indistinguishable in position space, but they are completely different in momentum space: the momentum distribution for \(\sigma_{0,-}\) is rather wide, while for \(\sigma_{0,+}\) it approaches a Dirac delta function. It is precisely this wider momentum distribution that allows the second wave packet to coalesce toward the origin as the time approaches \(\tau\), similarly to the singular wave function, while the first wave packet will remain essentially the same.
### Rectangular wave packet
As our last example, we consider a rectangular wave packet [31], with an initial profile
\[\psi(x,0)=\frac{1}{\sqrt{\mathcal{N}_{r}}}\exp\left(-\frac{im}{2\hbar\tau}x^{ 2}\right)\text{rect}_{a}(x)\;, \tag{19}\]
where the rectangle function \(\text{rect}_{a}(x)\) is defined as 1 for \(|x|\leq a/2\) and 0 for \(|x|>a/2\), and the normalization constant is \(\mathcal{N}_{r}=a\). The time evolution can again be found using (2), yielding [31]
\[\psi(x,t) =\frac{(-1)^{3/4}}{\sqrt{4i\mathcal{N}_{r}}}\exp\left(\frac{im}{ 2\hbar\tau}x^{2}\right)\left\{\text{erfi}\left[(-1)^{1/4}\sqrt{\frac{m}{2 \hbar t}}\left(x-\frac{a}{2}\right)\right]\right.\] \[- \left.\text{erfi}\left[(-1)^{1/4}\sqrt{\frac{m}{2\hbar t}}\left(x +\frac{a}{2}\right)\right]\right\}\;, \tag{20}\]
where \(\text{erfi}(x)\) is the imaginary error function and this is valid for \(t>0\).
The wave packet is composed of an infinite number of plane waves. At time \(t=0\) these plane waves interfere to give
Figure 3: Dependence of the phase (green line) and modulus (black line) of the initial complex-valued Gaussian parameter \(\bar{\sigma}_{g}\) on the waist width, \(\sigma_{0}\), for \(t/\tau=1\). The vertical blue dotted lines denote the values of the phase and modulus of \(\bar{\sigma}_{g}\) that correspond to Gaussians such that their width at 0.1 of their maximum value equals the same value of the probability density corresponding to the wave function. The horizontal red dashed line shows that there are always two Gaussian wave packets with the same initial width, but that lead to two different waist widths (in this case, \(\sigma_{g}\simeq 2.579\) is associated with \(\sigma_{0,+}\simeq 2.571\) and \(\sigma_{0,-}\simeq 0.194\)). Despite having the same value for \(\sigma_{g}\), each Gaussian wave packet has a very different initial phase, in particular, \(\theta_{g,+}\simeq-0.024\pi\) versus \(\theta_{g,-}\simeq-0.477\pi\).
Figure 4: Same trajectories as in Fig. 2b) for the probability associated with a Gaussian wave packet with waist width \(\sigma_{0,+}\). Note that, because the waist width is relatively large compared to the initial width \(\sigma_{g}(0)\), there is no apparent self-implosion (only a very slight narrowing at \(\tau\)), as it is evidenced by the nearly parallel flux trajectories.
a rectangular shape. As time elapses, the component plane waves travel, both to the right (\(k>0\)) and to the left (\(k<0\)), at different phase velocities \(\hbar k/2m\). Thus the pattern of the interference of these plane waves gradually changes, resulting in the dispersion of the wave packet.
In Fig. 2c), we plot the quantum trajectories associated with this evolution. Near the time \(\tau\), we appreciate the presence of wiggles for both the singular and the rectangular wave packets, which are reminiscent of a nonlinear flux. Conversely, the Gaussian profile looks perfectly laminar near the singular point. We recall that a flood in a river occurs because at some point water slows down and the quicker mass of water arriving from behind finds this "potential barrier" created by the slow water and tries to overcome it. In this case, the wiggles somehow mark a _slower light flow_, so that energy accumulates near the singularity and the density grows.
An alternative way to capture the degree of localization of a wave function is by studying the behavior exhibited by its full width at half maximum (FWHM) [32]. More specifically, this quantity is computed in all cases by determining the distance between the two positions, \(x_{+}\) and \(x_{-}\), at which the corresponding probability density reaches half its maximum value at any time; that is,
\[\frac{\varrho(x_{\pm},t)}{\varrho_{\max}(x,t)}=\frac{1}{2}. \tag{21}\]
Except for Gaussian wave packets, the above equation cannot be solved analytically, so \(x_{+}\) and \(x_{-}\) have been determined numerically on the fly, during the time evolution of the corresponding wave functions. From this, we obtain \(\mathrm{FWHM}(t)=x_{+}(t)-x_{-}(t)\), which is shown in Fig. 6 for the cases considered here. As can be noticed, while the FWHM is nearly constant for the Gaussian with waist width \(\sigma_{0,+}\), it shows a linear decrease and increase, before and after the waist, respectively, for the Gaussian with \(\sigma_{0,-}\). A similar trend is also observed for the square wave function, although the FWHM shows a tiny asymmetry before and after the singularity, which is related to the limitations of the numerical method (the spatial size of the grid sets a cutoff for the high spatial frequencies). Finally, for the singular wave function (3), the FWHM slowly decreases until \(t\) is close to \(\tau\), as can be appreciated in the inset of Fig. 6. Near this time, the FWHM undergoes a sudden decrease and then an increase afterwards; at any later time, the FWHM increases nearly linearly, in a similar fashion to the Gaussian with \(\sigma_{0,-}\). We notice again a different behavior between the FWHM dynamics before and after \(t=\tau\), which is related to the fact that the wave function considered is not exactly the ansatz (3), but
Figure 5: Probability density in the \(x\)-position space (upper panel) and in the \(k\)-momentum space (lower panel) for the singular wave function (black solid line), a Gaussian wave packet with waist width \(\sigma_{0,+}\) (red dashed line), a Gaussian wave packet with waist width \(\sigma_{0,-}\) (blue dotted line), and a rectangular wave packet (green dash-dotted line) at \(t=0\). The inset shows the same plots on a linear vertical scale. The waist widths for both Gaussians have been adjusted to the width of the probability density for the singular wave function at 0.1 of its maximum value.
Figure 6: Temporal evolution of the FWHM for the same wave packets as in Fig. 5, with the same symbols: the singular wave function (black solid line), a Gaussian wave packet with waist width \(\sigma_{0,+}\) (red dashed line), a Gaussian wave packet with waist width \(\sigma_{0,-}\) (blue dotted line), and a rectangular wave packet (green dash-dotted line).
the truncated version (9). All these characteristics concur with the corresponding probability density and quantum trajectories displayed in Fig. 2.
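For completeness, a simple illustrative routine for extracting the FWHM of Eq. (21) from a gridded density is sketched below (it assumes a single-lobed profile and is not the authors' code):

```python
# Locate x_- and x_+ where the density crosses half of its maximum and return
# the FWHM, assuming a single-lobed density sampled on a uniform grid.
import numpy as np

def fwhm(x, rho):
    half = 0.5 * rho.max()
    above = np.where(rho >= half)[0]     # indices of the central lobe
    return x[above[-1]] - x[above[0]]

# Example: a Gaussian density of standard deviation s has FWHM = 2*sqrt(2*ln 2)*s.
x = np.linspace(-10, 10, 100001)
s = 1.3
rho = np.exp(-x**2 / (2*s**2))
print(fwhm(x, rho), 2*np.sqrt(2*np.log(2))*s)   # the two values agree
```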
## IV Concluding remarks
To summarize, we have studied a family of solutions of the Schrodinger equation that spontaneously develop a singularity while propagating in free space. Due to the finiteness of these solutions, their singularities do not require a nonphysical infinite amount of energy to manifest. Nevertheless, the local amplitude of the field at a singular point may grow unboundedly. We have given a physical interpretation in terms of quantum trajectories.
While there is a widespread belief that extreme focusing requires strong nonlinear effects, we have demonstrated that this can be easily achieved with only linear propagation. This promising field enhancement mechanism may foster further interesting research in fields such as electron microscopy or optics.
###### Acknowledgements.
Financial support is acknowledged to the Spanish Research Agency (Grant No. PID2021-127781NB-I00). AA acknowledges support from Deutsche Forschungsgemeinschaft (Grant No. 429529648-TRR 306).
|
2304.09481 | A direct derivation of the Gent-McWilliams/Redi diffusion tensor from
quasi-geostrophic dynamics | The transport induced by ocean mesoscale eddies remains unresolved in most
state-of-the-art climate models and needs to be parameterized instead. The
natural scale separation between the forcing and the emergent turbulent flow
calls for a diffusive parameterization, where the eddy-induced fluxes are
related to the large-scale gradients by a diffusion tensor. The standard
parameterization scheme in climate modeling consists in adopting the
Gent-McWilliams/Redi (GM/R) form for the diffusion tensor, initially put
forward based on physical intuition and educated guesses before being put on
firm analytical footing using thickness-weighted average (TWA). In the present
contribution we provide a direct derivation of this diffusion tensor from the
quasi-geostrophic (QG) dynamics of a horizontally homogeneous three-dimensional
patch of ocean hosting a large-scale vertically-sheared zonal flow on the beta
plane. While less general than the TWA approach, the present QG framework leads
to rigorous constraints on the diffusion tensor. First, there is no diapycnal
diffusivity arising in the QG GM/R tensor for low viscosity and small-scale
diffusivities. The diffusion tensor then involves only two vertically dependent
coefficients, namely the GM transport coefficient $K_{GM}(z)$ and the Redi
diffusivity $K_R(z)$. Secondly, as already identified by previous authors the
vertical structures of the two coefficients are related by the so-called
Taylor-Bretherton relation. Finally, while the two coefficients generically
differ in the interior of the water column, we show that they are equal to one
another near the surface and near the bottom of the domain for low-enough
dissipative coefficients. We illustrate these findings by numerically
simulating the QG dynamics of a horizontally homogeneous patch of ocean hosting
a vertically sheared zonal current resembling the Antarctic Circumpolar
Current. | Julie Meunier, Benjamin Miquel, Basile Gallet | 2023-04-19T08:05:59Z | http://arxiv.org/abs/2304.09481v1 | # A direct derivation of the Gent-McWilliams/Redi diffusion tensor from quasi-geostrophic dynamics
###### Abstract
The transport induced by ocean mesoscale eddies remains unresolved in most state-of-the-art climate models and needs to be parameterized instead. The natural scale separation between the forcing and the emergent turbulent flow calls for a diffusive parameterization, where the eddy-induced fluxes are related to the large-scale gradients by a diffusion tensor. The standard parameterization scheme in climate modeling consists in adopting the Gent-McWilliams/Redi (GM/R) form for the diffusion tensor, initially put forward based on physical intuition and educated guesses before being put on firm analytical footing using thickness-weighted average (TWA). In the present contribution we provide a direct derivation of this diffusion tensor from the quasi-geostrophic (QG) dynamics of a horizontally homogeneous three-dimensional patch of ocean hosting a large-scale vertically-sheared zonal flow on the \(\beta\) plane. While less general than the TWA approach, the present QG framework leads to rigorous constraints on the diffusion tensor. First, there is no diapycnal diffusivity arising in the QG GM/R tensor for low viscosity and small-scale diffusivities. The diffusion tensor then involves only two vertically dependent coefficients, namely the GM transport coefficient \(K_{GM}\left(z\right)\) and the Redi diffusivity \(K_{R}\left(z\right)\). Secondly, as already identified by previous authors the vertical structures of the two coefficients are related by the so-called Taylor-Bretherton relation. Finally, while the two coefficients generically differ in the interior of the water column, we show that they are equal to one another near the surface and near the bottom of the domain for low-enough dissipative coefficients. We illustrate these findings by numerically simulating the QG dynamics of a horizontally homogeneous patch of ocean hosting a vertically sheared zonal current resembling the Antarctic Circumpolar Current.
Quasi-geostrophic flows, Geostrophic turbulence, Ocean processes
## 1 Introduction
Oceans and planetary atmospheres host currents or jets in thermal-wind balance with meridional buoyancy gradients. This situation is prone to baroclinic instability, however, and the resulting flows are strongly turbulent. In the ocean this turbulence takes the form of 'mesoscale' eddies of size comparable to the Rossby deformation radius, a length scale of the order of 15-20 km in the Southern Ocean. While these vortices are key contributors to heat, salt and carbon transport, they are not resolved in state-of-the-art global climate models, and modelers need to parameterize the turbulent transport instead. It was soon realized that this turbulent transport is ill-described by standard horizontal diffusion (Gent, 2011). Instead, rapid rotation and strong
* Additionally, the QG boundary conditions at top and bottom readily provide constraints on the top and bottom values of \(K_{GM}\) and \(K_{R}\). Namely, these two coefficients are equal at top and bottom for low bottom drag.
We introduce the theoretical setup in section 2. We highlight the main conservation relations in section 3 from which we derive the GM/R diffusion tensor in section 4. In section 5 we identify constraints relating the GM coefficient and the Redi diffusivity. We illustrate these constraints using direct numerical simulation in section 6, before concluding in section 7. In appendix A we show that the contributions from the small-scale diffusive terms are negligible in the QG regime. In appendix B we make the connection between the present QG approach and the more general TWA approach of McDougall & McIntosh (2001). Finally, the details of the numerical procedure are provided in appendix C.
## 2 Quasi-geostrophic dynamics of an idealized 3D patch of ocean
We consider the idealized patch of ocean represented in figure 1. Water occupies a volume \((x,y,z)\in[0,L]^{2}\times[-H,0]\) with a stress-free boundary at the surface \(z=0\) and a linear-friction boundary condition at \(z=-H\). The fluid layer is subject to global rotation around the vertical axis with a local Coriolis parameter \(f_{0}+\beta y\), where \(y\) denotes the meridional (North-South) coordinate. Additionally, the fluid layer is density-stratified with an arbitrary buoyancy frequency profile \(N(z)\), and we restrict attention to a single stratifying agent. We focus on the rapidly rotating strongly stratified regime for which the fluid motion is governed by quasi-geostrophy (Venaille _et al._, 2011; Salmon, 1998; Vallis, 2006). In that limit the velocity field \(\mathbf{u}=(u,v,w)\) consists of a leading-order horizontal geostrophic flow \((u,v)=(-P_{y},P_{x})\), where the generalized pressure field \(P\) is defined as the opposite of the streamfunction, together with subdominant vertical velocity \(w\). The buoyancy field is given by \(B=f_{0}P_{z}\) as a result of hydrostatic balance.
The base flow consists of an arbitrary zonal velocity profile \(U(z)=-P_{y}\). Differentiating with respect to \(z\) indicates that the zonal flow is in thermal wind balance with a \(z\)-dependent meridional buoyancy gradient, \(\partial_{y}B=-f_{0}U^{\prime}(z)\). We consider the evolution of arbitrary departures from this base state. We denote as \(p(x,y,z,t)\) the departure from the base pressure field, with \(u=-p_{y}\) the departure zonal velocity, \(v=p_{x}\) the departure meridional velocity and \(b=f_{0}\,p_{z}\) the departure buoyancy. In the following we adopt dimensionless variables, with time expressed in units of \(|f_{0}|^{-1}\) and lengths in units of \(H\). For brevity we use the same symbols for the dimensionless
Figure 1: **Left:** An idealized patch of ocean. A layer of fluid is subject to global rotation at a rate that varies linearly with the meridional coordinate \(y\). The fluid is density-stratified with a profile \(N(z)\) for the buoyancy frequency. The background zonal shear flow has a profile \(U(z)\). The flow coexists with a background meridional buoyancy gradient as a result of thermal-wind balance. Bottom friction damps the fluctuating kinetic energy. **Right:** Snapshot of the departure buoyancy field \(b\) in the equilibrated state of the numerical simulation.
variables. The QG limit is obtained for small isopycnal slope of the base state, or equivalently in the small Rossby number limit for \(\mathcal{O}(1)\) stratification. Denoting as \(\epsilon\) the typical magnitude of the isopycnal slope, the QG system can be derived through the following scalings:
\[N^{2}\sim 1\,,\qquad\beta\sim\epsilon\,,\qquad\partial_{x},\partial_{y},\partial_{z}\sim 1\,,\qquad\partial_{t}\sim\epsilon\,, \tag{2.1}\] \[U\sim\epsilon\,,\qquad(u,v)\sim\epsilon\,,\qquad w\sim\epsilon^{2}\,,\qquad b\sim\epsilon\,.\]
A standard asymptotic expansion of the equations of motion leads to the QG system (Pedlosky, 1979; Salmon, 1998; Vallis, 2006), as recalled in Gallet _et al._ (2022) for the specific notations and scalings considered here. The evolution then reduces to a conservation equation for the quasi-geostrophic potential vorticity (QGPV). The dimensionless QGPV departure \(q\) is related to the departure pressure \(p\) through:
\[q=\Delta_{\perp}p+\partial_{z}\left(\frac{p_{z}}{N^{2}(z)}\right)\,, \tag{2.2}\]
where \(\Delta_{\perp}=\partial_{xx}+\partial_{yy}\). The (dimensionless) QGPV conservation equation reads:
\[\partial_{t}q+U(z)\;q_{x}+J(p,q)=[-\beta+\mathcal{S}^{\prime}(z)]p_{x}+\mathcal{D}_{q}\,, \tag{2.3}\]
where the Jacobian is \(J(g,h)=g_{x}h_{y}-g_{y}h_{x}\) and we denote the isopycnal slope of the base state as \(\mathcal{S}(z)=U^{\prime}/N^{2}\). The first term on the right-hand side of (2.3) corresponds to the distortion of the background meridional PV gradient by the meridional flow. The second term, \(\mathcal{D}_{q}\), is the contribution from the viscosity and buoyancy diffusivity, which damp the small-scale vorticity and buoyancy fluctuations. The latter are written out explicitly in appendix A.
At the same level of approximation, the evolution equation for the (dimensionless) buoyancy departure \(b=p_{z}\) reads:
\[\partial_{t}b+U(z)\;b_{x}+J(p,b)=U^{\prime}(z)p_{x}-wN^{2}(z)+\mathcal{D}_{b}\,, \tag{2.4}\]
where the diffusive term \(\mathcal{D}_{b}\) is provided in appendix A. The first two terms on the right-hand side are the sources of buoyancy fluctuations in the system: they correspond to the distortion by the turbulent flow of the background meridional and vertical buoyancy gradients, respectively. At the surface, where \(w=0\), this equation reduces to:
\[\partial_{t}b|_{0}+U(0)\;b_{x}|_{0}+J(p|_{0},b|_{0})=U^{\prime}(0)p_{x}|_{0}+\mathcal{D}_{b}|_{0}\,, \tag{2.5}\]
where the subscript \(\cdot|_{0}\) refers to quantities evaluated at \(z=0\). Quantities evaluated just above the bottom Ekman boundary layer are denoted with the subscript \(-1^{+}\). At this depth, the pumping vertical velocity is given by \(w|_{-1^{+}}=\kappa\Delta_{\perp}p_{-1^{+}}\), where the friction coefficient \(\kappa\) can either be related to the vertical viscosity through standard Ekman layer theory over a flat bottom boundary, or specified at the outset as an independent coefficient parameterizing more realistic drag on the ocean floor (see appendix C and Gallet _et al._ (2022)). The evolution equation for the buoyancy at \(z=-1^{+}\) reads:
\[\partial_{t}b|_{-1^{+}}+U(-1)\;b_{x}|_{-1^{+}}+J(p|_{-1^{+}},b|_{-1^{+}}) \tag{2.6}\] \[=U^{\prime}(-1)p_{x}|_{-1^{+}}-N^{2}(-1)\kappa\Delta_{\perp}p|_{-1^{+}}+\mathcal{D}_{b}|_{-1^{+}}\,.\]
One way to integrate the QG system consists in marching in time the QGPV conservation equation (2.3) together with the top and bottom buoyancy equations (2.5) and (2.6). To infer the pressure field at each time step, one inverts the relation (2.2) using \(b|_{-1^{+}}\) and \(b|_{0}\) as boundary conditions. A desirable feature of this QG approach is that it is fully compatible with periodic boundary conditions in \(x\) and \(y\) for the departure fields. We adopt such periodic boundary conditions in the following.
In the bulk of the domain, the buoyancy evolution equation (2.4) provides a diagnostic relation
to infer the subdominant vertical velocity \(w\), the latter being crucial to parameterize eddy-induced vertical transport.
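The inversion step can be made concrete with a short numerical sketch. The following is a minimal illustration (not the Coral setup of appendix C) of how one could invert relation (2.2) for \(p\): a Fourier transform in the periodic horizontal directions, followed by a small boundary-value problem in \(z\) for each horizontal wavenumber, with \(p_{z}=b\) imposed at top and bottom. All grid sizes and field values below are placeholders.

```python
import numpy as np

def invert_qgpv(q, b_top, b_bot, N2, Lx, Ly, dz):
    """Solve  Delta_perp p + d/dz(p_z/N^2) = q  for p, with p_z = b at z = 0 and z = -1."""
    nx, ny, nz = q.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=Lx / nx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=Ly / ny)
    K2 = kx[:, None] ** 2 + ky[None, :] ** 2            # |k_perp|^2 on the 2D grid

    qh = np.fft.fft2(q, axes=(0, 1))                     # hat(q)(kx, ky, z)
    bth = np.fft.fft2(b_top, axes=(0, 1))
    bbh = np.fft.fft2(b_bot, axes=(0, 1))
    ph = np.zeros_like(qh)

    # N2 holds nz+1 interface values; p lives at the nz cell centres.
    for i in range(nx):
        for j in range(ny):
            A = np.zeros((nz, nz), dtype=complex)
            rhs = qh[i, j, :].copy()
            for k in range(nz):
                up = 1.0 / (N2[k + 1] * dz**2) if k < nz - 1 else 0.0
                dn = 1.0 / (N2[k] * dz**2) if k > 0 else 0.0
                A[k, k] = -K2[i, j] - up - dn
                if k < nz - 1:
                    A[k, k + 1] = up
                if k > 0:
                    A[k, k - 1] = dn
            # Neumann data p_z = b enters through the top and bottom fluxes.
            rhs[-1] -= bth[i, j] / (N2[-1] * dz)
            rhs[0] += bbh[i, j] / (N2[0] * dz)
            if K2[i, j] > 0.0:
                ph[i, j, :] = np.linalg.solve(A, rhs)    # horizontal mean of p left as a gauge choice
    return np.real(np.fft.ifft2(ph, axes=(0, 1)))

# Placeholder usage on a coarse grid.
nx = ny = 16
nz = 8
dz = 1.0 / nz
rng = np.random.default_rng(0)
q = rng.standard_normal((nx, ny, nz))
b_top = rng.standard_normal((nx, ny))
b_bot = rng.standard_normal((nx, ny))
N2 = 50.0 + 350.0 * np.linspace(0.0, 1.0, nz + 1)        # placeholder N^2 profile
p = invert_qgpv(q, b_top, b_bot, N2, Lx=500.0, Ly=500.0, dz=dz)
```

The dense per-wavenumber solve above is deliberately simple; a production code would use a tridiagonal or spectral solve instead.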
## 3 Material invariants: buoyancy, QGPV and the cross-invariant
We denote with an overbar \(\overline{\,\cdot\,}\) a time average together with a horizontal area average. Our goal is to characterize the transport properties of the flow, more specifically the diffusion tensor connecting the eddy-induced fluxes to the large-scale background gradients of some arbitrary tracer, be it active or passive. One can gain insight into the structure of this tensor by focusing on two specific tracers: buoyancy and QGPV. In this section we thus derive rigorous constraints between the meridional and vertical turbulent fluxes of buoyancy and QGPV: \(\overline{vb}(z),\overline{wb}(z),\overline{vq}(z)\) and \(\overline{wq}(z)\).
The first constraint stems from the conservation of buoyancy variance. Multiplying the buoyancy evolution equation (2.4) with \(b\) before averaging over time, \(x\) and \(y\) leads to:
\[N^{2}\overline{wb}=U^{\prime}\overline{vb}\,, \tag{3.1}\]
up to diffusive corrections that vanish in the regime of low viscosity and buoyancy diffusivity. A proof that the diffusive contributions indeed vanish is provided in appendix A based on the well-known absence of a forward energy cascade in QG turbulence. We recast the equality (3.1) in the form:
\[\overline{wb}=\mathcal{S}(z)\overline{vb}\,, \tag{3.2}\]
which shows that the mean buoyancy transport is directed along the mean isopycnals.
We derive a second constraint on the fluxes based on the conservation of the cross-invariant \(bq\). Multiplying the QGPV evolution equation (2.3) with \(b\) and the buoyancy conservation equation (2.4) with \(q\), then summing the resulting equations and averaging over time, \(x\) and \(y\), leads to:
\[\overline{wq}=\mathcal{S}(z)\overline{vq}-\frac{\beta-\mathcal{S}^{\prime}(z)}{N^{2}(z)}\overline{vb}\,, \tag{3.3}\]
where the equality holds in the low-diffusivity limit, in which, as shown in appendix A, the contributions from viscosity and buoyancy diffusivity vanish.
## 4 Arbitrary tracer: Gent-McWilliams/Redi diffusion tensor
Define the \(z\)-dependent eddy diffusivities \(K_{R}(z)\) and \(K_{GM}(z)\) as:
\[K_{R}(z)=\frac{\overline{vq}}{-\beta+\mathcal{S}^{\prime}(z)}\qquad\text{and}\qquad K_{GM}(z)=\frac{\overline{vb}}{U^{\prime}(z)}\,, \tag{4.1}\]
namely, \(K_{R}\) is the ratio of the meridional PV flux over minus the background meridional PV gradient, while \(K_{GM}\) is the ratio of the meridional buoyancy flux over minus the background meridional buoyancy gradient. The notations \(K_{GM}\) and \(K_{R}\) will become obvious at the end of the derivation to come.
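Both coefficients can be diagnosed directly from averaged fluxes. The snippet below is a minimal sketch of this bookkeeping; the flux profiles are placeholders standing in for simulation output, and only the base-state profiles echo the values used in section 6.

```python
import numpy as np

z = np.linspace(-1.0, 0.0, 129)
beta = 4.0e-5
N2 = 50.0 + 350.0 * (z + 1.0)              # linear N^2(z), as in section 6
U = 0.03 * np.exp(2.0 * z)                 # exponential U(z), as in section 6
Uprime = np.gradient(U, z)
S = Uprime / N2                            # isopycnal slope of the base state
Sprime = np.gradient(S, z)

vb = 1.0e-6 * np.exp(2.0 * z)              # placeholder diagnosed <vb>(z)
vq = -2.0e-6 * np.sin(np.pi * (z + 1.0))   # placeholder diagnosed <vq>(z)

K_GM = vb / Uprime                         # Gent-McWilliams coefficient, eq. (4.1)
K_R = vq / (-beta + Sprime)                # Redi diffusivity, eq. (4.1)
```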
Now consider a tracer \(\tau\) stirred by the 3D flow and subject to horizontally uniform gradients \(G_{y}(z)=\mathcal{O}(\epsilon)\) and \(G_{z}(z)=\mathcal{O}(1)\) in the meridional and vertical directions, respectively. That is, the total tracer field reads:
\[\int_{-1}^{z}G_{z}(\tilde{z})\,\mathrm{d}\tilde{z}+y\,G_{y}(z)+\tau(x,y,z,t)\,. \tag{4.2}\]
Under these conditions and with the scalings (2.1) the evolution equation for \(\tau\) reads:
\[\partial_{t}\tau+U(z)\,\tau_{x}+J(p,\tau)=-p_{x}G_{y}(z)-wG_{z}(z)+\mathcal{D}_{\tau}\,, \tag{4.3}\]
where \(\mathcal{D}_{\tau}\) denotes small-scale diffusion. A few remarks are in order regarding the background meridional and vertical gradients: \(G_{y}\) and \(G_{z}\) above should be understood as the lowest-order background gradients that enter QG dynamics. Naturally, a subdominant vertical gradient \(G_{z}^{(1)}=y\,\partial_{z}G_{y}(z)=\mathcal{O}(\epsilon)\) exists to ensure the equality of the cross-derivatives, \(\partial_{y}(G_{z}+G_{z}^{(1)})=\partial_{z}G_{y}\). However, one can easily check that \(G_{z}^{(1)}\) is subdominant and does not arise in the QG evolution equation (4.3). Similarly, keeping \(G_{z}^{(1)}\) on the right-hand side of equation (4.6) would lead to negligible corrections to the fluxes, of higher order in \(\epsilon\).
The meridional and vertical fluxes of \(\tau\) are related to the background meridional and vertical gradients \(G_{y}\) and \(G_{z}\) through a diffusion tensor:
\[\begin{pmatrix}\overline{v\tau}\\ \overline{w\tau}\end{pmatrix}=\begin{bmatrix}A_{1}(z)&A_{2}(z)\\ A_{3}(z)&A_{4}(z)\end{bmatrix}\begin{pmatrix}G_{y}\\ G_{z}\end{pmatrix}\,, \tag{4.4}\]
where the \(A_{i}(z)\) are unknown \(z\)-dependent coefficients at this stage. Apply relation (4.4) to the tracers \(b\) and \(q\), the associated background gradients being readily inferred from the right-hand-side terms of equations (2.3) and (2.4): \(G_{y}=-U^{\prime}(z)\) and \(G_{z}=N^{2}\) for \(\tau=b\), and \(G_{y}=\beta-\mathcal{S}^{\prime}(z)\), \(G_{z}=0\) for \(\tau=q\). We obtain the following fluxes:
\[\begin{pmatrix}\overline{vb}\\ \overline{wb}\end{pmatrix}=\begin{pmatrix}-A_{1}U^{\prime}(z)+A_{2}N^{2}\\ -A_{3}U^{\prime}(z)+A_{4}N^{2}\end{pmatrix};\qquad\begin{pmatrix}\overline{vq}\\ \overline{wq}\end{pmatrix}=\begin{pmatrix}A_{1}\left[\beta-\mathcal{S}^{\prime}(z)\right]\\ A_{3}\left[\beta-\mathcal{S}^{\prime}(z)\right]\end{pmatrix}. \tag{4.5}\]
There are four constraints on these four fluxes, which allow us to express the four coefficients \(A_{i}\) in terms of \(K_{GM}\left(z\right)\) and \(K_{R}(z)\): the first two constraints are simply the definitions (4.1) of \(K_{GM}\) and \(K_{R}\), the third constraint is (3.2), namely the fact that the mean transport of \(b\) is along the mean isopycnals, and the fourth constraint is the cross-invariant relation (3.3). After a straightforward calculation of the coefficients \(A_{i}\) the diffusion tensor connecting the fluxes to the background gradients becomes:
\[\begin{pmatrix}\overline{v\tau}\\ \overline{w\tau}\end{pmatrix}=\begin{bmatrix}-K_{R}&(K_{GM}-K_{R})\mathcal{S}\\ -(K_{GM}+K_{R})\mathcal{S}&-K_{R}\mathcal{S}^{2}\end{bmatrix}\begin{pmatrix}G_{y}\\ G_{z}\end{pmatrix}\,. \tag{4.6}\]
This form for the diffusion tensor corresponds to the Gent-McWilliams/Redi parameterization (GM/R in the following), where \(K_{GM}\left(z\right)\) denotes the \(z\)-dependent Gent-McWilliams coefficient and \(K_{R}(z)\) denotes the Redi diffusivity. The former represents the skew flux associated with adiabatic transport by the eddying flow (Griffies, 1998). Using the definition (4.1) of \(K_{GM}\), we check in appendix B that the GM part of the tensor (4.6) corresponds to the QG limit of the advective fluxes associated with the more general quasi-Stokes streamfunction introduced by McDougall & McIntosh (2001). The Redi part of the tensor represents mixing along the neutral direction in the limit of weak isopycnal slope \(\mathcal{S}(z)\). As can be inferred from the QGPV conservation equation, the Redi diffusivity \(K_{R}(z)\) also equals the Taylor-Kubo eddy diffusivity coefficient deduced at any height \(z\) from the Lagrangian correlation function of the horizontal QG velocity field. Several similar estimates for the PV diffusivity have been compared in channel geometry by Abernathey _et al._ (2013).
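For reference, applying the tensor (4.6) to an arbitrary tracer is a one-line operation once \(K_{GM}\), \(K_{R}\) and \(\mathcal{S}\) have been diagnosed. The sketch below is illustrative only; all array names are placeholders.

```python
import numpy as np

def gmr_fluxes(K_GM, K_R, S, Gy, Gz):
    """Meridional and vertical eddy fluxes of a tracer from the GM/R tensor, eq. (4.6)."""
    v_tau = -K_R * Gy + (K_GM - K_R) * S * Gz
    w_tau = -(K_GM + K_R) * S * Gy - K_R * S**2 * Gz
    return v_tau, w_tau

# Sanity check: with G_y = beta - S' and G_z = 0 (the QGPV case) the vertical flux
# reduces to -(K_GM + K_R) S (beta - S'), i.e. the right-hand side of eq. (3.3).
```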
## 5 Constraints on the GM and Redi coefficients
The derivation above allows us to obtain constraints on the GM and Redi coefficients as defined by (4.1). First of all, at the upper boundary the buoyancy equation (2.5) has exactly the same structure as the QGPV conservation equation (2.3). Both equations correspond to advection by the base zonal flow and the QG flow at the upper boundary, with a source term that corresponds to the distortion of a background meridional gradient. We conclude that the diffusivities relating the meridional flux to the background meridional gradient are equal for \(q\) and \(b\) at \(z=0\) (and
given by the Taylor-Kubo eddy diffusivity coefficient associated with the surface horizontal flow). Using the definitions (4.1) this leads to the constraint:
\[K_{GM}\left(0\right)=K_{R}(0)\;. \tag{5.1}\]
The same relation holds near the bottom boundary (just above the Ekman boundary layer) when the drag coefficient is low:
\[K_{GM}\left(-1^{+}\right)=K_{R}(-1^{+})\;. \tag{5.2}\]
The equality of \(K_{GM}\left(z\right)\) and \(K_{R}(z)\) is a common assumption when implementing the GM/R parameterization in global models (Griffies, 1998). We put this assumption on firmer analytical footing by showing that it holds near the upper and lower boundaries, although the two coefficients generically differ in the interior of the fluid column. In the general case, however, we can further relate the vertical dependence of \(K_{GM}\left(z\right)\) and \(K_{R}(z)\) through the Taylor-Bretherton relation. Multiplying equation (2.2) with \(v=p_{x}\) before averaging horizontally yields, after a few integrations by parts in the horizontal directions:
\[\overline{vq}=\frac{\mathrm{d}}{\mathrm{d}z}\left(\frac{\overline{vb}}{N^{2}}\right)\;. \tag{5.3}\]
This equation corresponds to the horizontal average of a more general relation initially derived by Bretherton (Bretherton, 1966) and often referred to as the Taylor-Bretherton relation (Taylor, 1915; Dritschel & McIntyre, 2008; Young, 2012). Using the definitions (4.1) we recast (5.3) as:
\[K_{R}\left(\mathcal{S}^{\prime}-\beta\right)=\frac{\mathrm{d}}{\mathrm{d}z}(K_{GM}\;\mathcal{S})\;. \tag{5.4}\]
This equality was previously obtained by several authors (e.g. Smith & Marshall (2009)) and is recalled here for the sake of completeness only.
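In practice, relation (5.3), or equivalently (5.4), doubles as a consistency check on diagnosed fluxes. A minimal sketch, with placeholder profiles in place of simulation output:

```python
import numpy as np

z = np.linspace(-1.0, 0.0, 129)
N2 = 50.0 + 350.0 * (z + 1.0)              # placeholder stratification profile
vb = 1.0e-6 * np.exp(2.0 * z)              # placeholder diagnosed <vb>(z)

vq_predicted = np.gradient(vb / N2, z)     # eq. (5.3): <vq> = d/dz(<vb>/N^2)
# With real output one would compare this against the independently diagnosed <vq>,
# e.g. np.allclose(vq_measured, vq_predicted, rtol=0.1) away from the boundaries.
```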
## 6 A numerical example
With the goal of illustrating the results above, we turn to the Direct Numerical Simulation (DNS) of an isolated horizontally homogeneous patch of ocean. The setup is chosen to reproduce conditions in the Antarctic Circumpolar Current (ACC), with surface-intensified stratification, shear and turbulence. The base state consists of a (dimensionless) stratification decreasing linearly with depth, from \(N^{2}(0)=400\) at the surface to \(N^{2}(-1)=50\) at the bottom, together with an exponential profile for the background shear flow, \(U(z)=Ro\,e^{2z}\) with \(Ro=0.03\), and a dimensionless planetary vorticity gradient \(\beta=4.0\times 10^{-5}\). These values for \(Ro\) and \(\beta\) are ten times smaller than typical values in the ACC: this choice leaves invariant the dissipation-free QG dynamics while ensuring that the numerical simulation is indeed performed in the fully QG regime. In other words, we simulate an idealized horizontally homogeneous and fully QG version of the ACC. We have also kept \(f_{0}>0\), which conveniently leads to \(\overline{vb}>0\) while being equivalent to the ACC situation up to an equatorial symmetry. We solve for the fully nonlinear evolution of the departures from the base state inside a domain of dimensionless size \((x,y,z)\in[0;500]^{2}\times[-1;0]\) using periodic boundary conditions in the horizontal directions and small values for the dissipative coefficients. The numerical procedure is detailed in appendix C.
After some transient the system reaches a statistically steady equilibrated state, illustrated in Figure 1 through a snapshot of the departure buoyancy field \(b\). We extract the time and horizontal area averages of the meridional and vertical fluxes of buoyancy and QGPV in this statistically steady state. The corresponding profiles are shown in Figure 2, the first panel of which provides the diagnosed Gent-McWilliams coefficient and Redi diffusivity. In agreement with results from re-entrant channel simulations (Abernathey _et al._, 2013), \(K_{GM}\left(z\right)\) is monotonic in the interior of
the domain, whereas \(K_{R}(z)\) exhibits a maximum at mid-depth (Treguier, 1999; Smith & Marshall, 2009). In line with the constraints (5.1) and (5.2), the interior profiles of \(K_{GM}(z)\) and \(K_{R}(z)\) tend to a common limiting value as we approach the top or bottom boundary. This tendency is disrupted by diffusive boundary-layer effects in the immediate vicinity of the boundaries (more strongly so near the surface). These boundary layers shrink as we lower the diffusivities employed in the numerical simulation.
Having diagnosed \(K_{GM}(z)\) and \(K_{R}(z)\) we turn to the vertical fluxes of buoyancy and QGPV, with the goal of comparing the numerical fluxes to the predictions of the GM/R diffusion tensor. Panel 2b shows that the vertical buoyancy flux \(\overline{wb}\) is accurately captured by the GM/R prediction \(\mathcal{S}K_{GM}U^{\prime}(z)\). Because \(\mathcal{S}K_{GM}U^{\prime}(z)=\mathcal{S}\overline{vb}\), this validates the fact that buoyancy is transported adiabatically in the interior, in line with equation (3.2). Panel 2c shows that the vertical QGPV flux \(\overline{wq}\) is accurately captured by the GM/R prediction \(-\mathcal{S}(K_{GM}+K_{R})(\beta-\mathcal{S}^{\prime})\), the latter expression being also equal to the right-hand side of the cross-invariant relation (3.3). The good agreement in panel 2c thus validates the simple and exact cross-invariant relation (3.3) in the interior of the domain.
## 7 Conclusion
We have studied the transport properties of the turbulent QG flow arising from the baroclinic instability of a horizontally homogeneous vertically sheared zonal current. While less general than the TWA formulation of the Boussinesq equations (McDougall & McIntosh, 2001; Young, 2012), the QG limit allows one to make progress on the structure of the diffusion tensor relating the eddy-induced fluxes to the background gradients. Based on the conservation of buoyancy variance and of a cross-invariant involving buoyancy and QGPV, we thus derived a particularly simple GM/R form for the diffusion tensor. First, in the interior of the domain there are no diapycnal fluxes provided the viscosity and small-scale diffusivities are small. The diffusion tensor then involves only two vertically dependent coefficients: the GM transport coefficient \(K_{GM}(z)\) and the Redi diffusivity \(K_{R}(z)\). Secondly, based on the definition of QGPV one can relate \(K_{GM}\) and \(K_{R}\) through the Taylor-Bretherton relation (5.4). Finally, based on the QGPV and buoyancy evolution equations one obtains that \(K_{GM}\) and \(K_{R}\) are equal to one another at top and bottom. These results provide some support for the common modeling assumption \(K_{GM}\simeq K_{R}\) near the
Figure 2: Time and horizontally averaged profiles from the numerical simulation. **a.** Transport coefficients as defined in (4.1) (gray region corresponding to the depth where the meridional QGPV gradient vanishes). **b.** Vertical buoyancy flux compared to the GM/R prediction, which validates adiabatic transport in the interior. **c.** Vertical QGPV flux compared to the GM/R prediction, which validates the cross-invariant relation (3.3).
boundaries (Griffies, 1998). However, the two coefficients are allowed to depart from one another in the interior of the fluid column and indeed they do in the present numerical simulation (in line with previous studies, see Abernathey _et al._ (2013)). It would be interesting to investigate whether some equivalent of the boundary relation \(K_{GM}\simeq K_{R}\) exists beyond the present idealized QG framework, for a primitive-equation or Boussinesq system. TWA would likely play a central role for such an extension.
**Acknowledgements**. We thank G. Hadjerci, R. Ferrari and W.R. Young for insightful discussions. This research is supported by the European Research Council under grant agreement FLAVE 757239. The numerical study was performed using HPC resources from GENCI-CINES and TGCC (grants 2021-A0102A12489, 2022-A0122A12489 and 2023-A0142A12489).
## Appendix A Diffusive contributions
We consider the impact of the standard diffusive terms (viscosity and buoyancy diffusivity) within the framework of QG. We use different coefficients for the diffusivities in the horizontal and vertical directions: \(E_{\perp}\) and \(E_{z}\), respectively, for the horizontal and vertical dimensionless viscosities (Ekman numbers), and \(E_{b,\perp}\) and \(E_{b,z}\), respectively, for the horizontal and vertical dimensionless buoyancy diffusivities. With these notations, the diffusive term in the buoyancy equation (2.4) reads:
\[\mathcal{D}_{b}=E_{b,\perp}\Delta_{\perp}b+E_{b,z}\partial_{zz}b\,, \tag{A1}\]
while the diffusive term in the QGPV conservation equation (2.3) reads:
\[\mathcal{D}_{q}=E_{\perp}\Delta_{\perp}^{2}p+E_{z}\Delta_{\perp}p_{zz}+\partial_{z}\left[\frac{1}{N^{2}}\left(E_{b,\perp}\Delta_{\perp}b+E_{b,z}\partial_{zz}b\right)\right]. \tag{A2}\]
In contrast to standard 3D turbulence, quasi-geostrophic dynamics are characterized by an inverse energy cascade (Charney, 1971), together with a forward cascade of buoyancy variance at the boundaries only: in the limit where the various diffusive coefficients \(E_{i}\) are sent to zero simultaneously, there is no 'anomalous' energy dissipation and no 'anomalous' dissipation of buoyancy variance in the interior (Lapeyre, 2017). That is, the limit \(E_{i}(\overline{\Delta_{\perp}p})^{2}\to 0\) holds for any \(z\), and the limits \(E_{i}|\overline{\mathbf{\nabla}_{\perp}b}|^{2}\to 0\) and \(E_{i}(\overline{b_{z}})^{2}\to 0\) hold pointwise for \(z\neq\{-1;0\}\). Additionally, any \(z\)-derivative of these profiles also vanishes in the vanishing-diffusivity limit for \(z\neq\{-1;0\}\). The diffusive contribution to the right-hand side of the adiabatic-transport relation (3.2) reads:
\[\overline{b\,\mathcal{D}_{b}}=-E_{b,\perp}\overline{|\mathbf{\nabla}_{\perp}b|^{2}}-E_{b,z}\overline{(b_{z})^{2}}+\frac{E_{b,z}}{2}\frac{\mathrm{d}}{\mathrm{d}z}\overline{b^{2}}\,. \tag{A3}\]
The three terms vanish in the vanishing-diffusivity limit for \(z\neq\{-1;0\}\), leading to \(\overline{b\,\mathcal{D}_{b}}\to 0\) and relation (3.2). The diffusive contribution to the right-hand side of the cross-invariant relation (3.3) reads \((\overline{b\,\mathcal{D}_{q}}+\overline{q\,\mathcal{D}_{b}})/N^{2}\), where:
\[\overline{q\,\mathcal{D}_{b}}=\frac{E_{b,\perp}}{2}\frac{\mathrm{ d}}{\mathrm{d}z}\overline{(\Delta_{\perp}p)^{2}}+\frac{E_{b,z}}{2}\left[-\frac{ \mathrm{d}^{3}}{\mathrm{d}z^{3}}\overline{(\mathbf{\nabla}_{\perp}p)^{2}}+3\frac{ \mathrm{d}}{\mathrm{d}z}\overline{(\mathbf{\nabla}_{\perp}b)^{2}}\right]\] \[+E_{b,\perp}\left\{-\frac{\mathrm{d}}{\mathrm{d}z}\left[\frac{( \mathbf{\nabla}_{\perp}b)^{2}}{N^{2}}\right]+\frac{1}{2N^{2}}\frac{\mathrm{d}}{ \mathrm{d}z}\overline{(\mathbf{\nabla}_{\perp}b)^{2}}\right\}+E_{b,z}\left\{ \left[\frac{\mathrm{d}^{2}}{\mathrm{d}z^{2}}\left(\frac{\overline{b^{2}}}{2} \right)-\overline{(b_{z})^{2}}\right]\frac{\mathrm{d}}{\mathrm{d}z}\left( \frac{1}{N^{2}}\right)+\frac{1}{2N^{2}}\frac{\mathrm{d}}{\mathrm{d}z}\overline {b_{z}^{2}}\right\}\,,\]
\[\overline{b\,\mathcal{D}_{q}}=\frac{E_{\perp}}{2}\frac{\mathrm{d} }{\mathrm{d}z}\overline{(\Delta_{\perp}p)^{2}}-\frac{E_{z}}{2}\frac{\mathrm{d} }{\mathrm{d}z}\overline{(\mathbf{\nabla}_{\perp}b)^{2}}\] \[+\frac{\mathrm{d}}{\mathrm{d}z}\left[-\frac{E_{b,\perp}}{N^{2}} \overline{(\mathbf{\nabla}_{\perp}b)^{2}}+\frac{E_{b,z}}{2N^{2}}\frac{\mathrm{d} ^{2}}{\mathrm{d}z^{2}}\overline{b^{2}}-\frac{E_{b,z}}{N^{2}}\overline{(b_{z}) ^{2}}\right]+\frac{E_{b,\perp}}{2N^{2}}\frac{\mathrm{d}}{\mathrm{d}z}\overline{ (\mathbf{\nabla}_{\perp}b)^{2}}-\frac{E_{b,z}}{2N^{2}}\frac{\mathrm{d}}{\mathrm{d} z}\overline{(b_{z})^{2}}\,.\]
The terms on the right-hand side of both expressions vanish in the vanishing-diffusivity limit for \(z\neq\{-1;0\}\), leading to \(\overline{q\,\mathcal{D}_{b}}+\overline{b\,\mathcal{D}_{q}}\to 0\). Hence the approximate relation (3.3) for low diffusivities.
## Appendix B Connection to TWA and the residual-mean approach
The evolution equations for the TWA and for the standard fixed-\(z\)-averaged tracer concentration are identical in the QG limit
For a given tracer \(\tau\), McDougall & McIntosh (2001) consider the evolution equations for the TWA \(\hat{\tau}\) and for the standard fixed-\(z\) average \(\overline{\tau}\), where the time average is to be understood as an average over a few eddy turnover times. They show that the two evolution equations differ by the divergence of some vector \(\mathbf{E}\), see their equations (53-55). With the QG scalings (2.1), however, this additional term is negligible. The horizontal components of \(\mathbf{E}\) are \(\mathcal{O}(\epsilon^{3})\), smaller than the meridional flux \(\overline{v\tau}=\mathcal{O}(\epsilon^{2})\) discussed in the present study. The vertical component of \(\mathbf{E}\) is the time-derivative of some \(\mathcal{O}(\epsilon^{2})\) material invariant, averaged over the slow QG eddy turnover timescale \(1/\epsilon\). The latter time derivative is thus at least of order \(\epsilon^{4}\), much smaller than the vertical flux \(\overline{w\tau}=\mathcal{O}(\epsilon^{3})\) discussed in the present study. The vector \(\mathbf{E}\) is thus a higher-order term that is negligible at the level of the QG approximation. In other words, the evolution equations for the TWA and the fixed-\(z\) average are identical at the level of the QG approximation.
The coefficient \(K_{GM}\) of the present study describes advection by the quasi-Stokes velocity in the QG limit
In the general TWA formulation of McDougall & McIntosh (2001), the fluxes arising from the antisymmetric part of the diffusion tensor are expressed in terms of a quasi-Stokes streamfunction \(\psi\) as:
\[\begin{bmatrix}0&-\psi\cdot\mathbf{e}_{y}\\ \psi\cdot\mathbf{e}_{y}&0\end{bmatrix}\begin{pmatrix}G_{y}\\ G_{z}\end{pmatrix}\,,\] (B1)
where we restrict attention to the case of zonally invariant statistics. With the QG scalings (2.1) the \(y\) component of the quasi-Stokes streamfunction provided in McDougall & McIntosh (2001) reduces to \(\psi\cdot\mathbf{e}_{y}=-\overline{vb}/N^{2}+\mathcal{O}(\epsilon^{3})\). Using our definition (4.1) for \(K_{GM}\), we can re-express the right-hand side as \(-K_{GM}\mathcal{S}+\mathcal{O}(\epsilon^{3})\). This shows that (B1) is indeed equivalent to the GM part of the tensor (4.6) of the present study at the QG level of approximation.
## Appendix C Direct numerical simulation
The numerical simulations are performed using an intermediate set of equations between the Boussinesq equations and the QG system. Indeed, on the one hand the QG system - equation (2.3) with the boundary conditions (2.5-2.6) - is rather impractical for implementation in standard pseudo-spectral solvers. On the other hand, going back to the full primitive equations is also impractical because the latter are incompatible with periodic boundary conditions in the horizontal directions: they involve terms that are proportional to \(y\) thus breaking the invariance to translations in \(y\). Fortunately these terms are subdominant in the QG range of parameters and a convenient way to simulate the QG dynamics of a patch of ocean consists in discarding them from the set of Boussinesq equations. There are two such terms: first, the base state has a \(z\)-dependent meridional buoyancy gradient, and the vertical advection of the associated buoyancy field leads to coefficients that are proportional to \(y\). We neglect this subdominant vertical advection of the background meridional buoyancy gradient. Secondly, the planetary vorticity gradient leads to a stretching term of the form \((f_{0}+\beta y)\partial_{z}\mathbf{u}\) in the vorticity equation. We neglect the subdominant contribution
\(\beta y\partial_{z}\mathbf{u}\) in the following. Specifically, we end up with the following set of dimensionless primitive-like equations for the departure fields:
\[\partial_{t}\mathbf{u}+U^{\prime}(z)w\,\mathbf{e}_{x}+U(z)\partial_ {x}\mathbf{u}+(\mathbf{u}\cdot\boldsymbol{\nabla})\mathbf{u}+\mathbf{e}_{z} \times\mathbf{u}-\beta\left[\psi\mathbf{e}_{y}+\Delta_{\perp}^{-1}\{\psi_{yz} \}\mathbf{e}_{z}\right] \tag{10}\] \[\qquad=-\boldsymbol{\nabla}p+b\,\mathbf{e}_{z}+E_{\perp}\Delta_{ \perp}\mathbf{u}+E_{z}\partial_{zz}\mathbf{u}\,,\] \[\partial_{t}b+U(z)b_{x}-U^{\prime}(z)v+N^{2}(z)w+\mathbf{u}\cdot \boldsymbol{\nabla}b=E_{b,\perp}\Delta_{\perp}b+E_{b,z}\partial_{zz}b\, \tag{11}\]
where \(\mathbf{u}=(u,v,w)\) now denotes the velocity departure from the base state. The toroidal streamfunction \(\psi(x,y,z,t)\) has a vanishing horizontal area average at every depth \(z\) and is defined as \(\psi=\Delta_{\perp}^{-1}\{\partial_{y}u-\partial_{x}v\}\). For rapid rotation and strong stratification the set of equations (10-11) reduces to the limiting QG system (equation (2.3) with the boundary conditions (2.5)-(2.6)). Additionally, the form of the \(\beta\) term ensures conservation of mechanical energy.
We solve equations (10-11) inside a horizontally periodic domain with the pseudo-spectral solver Coral (Miquel, 2021), previously used for the Lady model (Gallet _et al._, 2022) and for turbulent convective flows (Miquel _et al._, 2020; Bouillaut _et al._, 2021), and validated against both analytical results (Miquel _et al._, 2019) and solutions computed with the Dedalus software (Burns _et al._, 2020). The background stratification is strong and the global rotation is fast (low Rossby number) to ensure a strongly QG regime. We use insulated boundary conditions at top and bottom for \(b\), free-slip boundary conditions at the surface for the velocity departure, and a frictional boundary condition at the bottom, \(\partial_{z}u|_{-1}=-(\tilde{\kappa}/E_{z})u|_{-1}\), \(\partial_{z}v|_{-1}=-(\tilde{\kappa}/E_{z})v|_{-1}\) and \(w|_{-1}=0\). Such parameterized bottom drag is detailed in the study of the Lady model (Gallet _et al._, 2022) together with the connection between the coefficient \(\tilde{\kappa}\) and the QG friction coefficient \(\kappa\) arising in equation (2.6). The dissipative coefficients in the simulation have values \(\tilde{\kappa}=4.5\times 10^{-4}\), \(E_{z}=3\times 10^{-6}\), \(E_{\perp}=0.003\), \(E_{b,z}=3\times 10^{-7}\), \(E_{b,\perp}=0.001\).
|
2310.15137 | Factorial growth at low orders in perturbative QCD: Control over
truncation uncertainties | A method, known as ``minimal renormalon subtraction'' [Phys. Rev. D 97 (2018)
034503, JHEP 2017 (2017) 62], relates the factorial growth of a perturbative
series (in QCD) to the power~$p$ of a power correction $\Lambda^p/Q^p$.
($\Lambda$ is the QCD scale, $Q$ some hard scale.) Here, the derivation is
simplified and generalized to any~$p$, more than one such correction, and cases
with anomalous dimensions. Strikingly, the well-known factorial growth is seen
to emerge already at low or medium orders, as a consequence of constraints on
the $Q$ dependence from the renormalization group. The effectiveness of the
method is studied with the gluonic energy between a static quark and static
antiquark (the ``static energy''). Truncation uncertainties are found to be
under control after next-to-leading order, despite the small exponent of the
power correction ($p=1$) and associated rapid growth seen in the first four
coefficients of the perturbative series. | Andreas S. Kronfeld | 2023-10-23T17:44:14Z | http://arxiv.org/abs/2310.15137v2 | # Factorial growth at low orders in perturbative QCD: Control over truncation uncertainties
###### Abstract
A method, known as "minimal renormalon subtraction" [_Phys. Rev. D_**97** (2018) 034503, _JHEP_**2017** (2017) 62], relates the factorial growth of a perturbative series (in QCD) to the power \(p\) of a power correction \(\Lambda^{p}/Q^{p}\). (\(\Lambda\) is the QCD scale, \(Q\) some hard scale.) Here, the derivation is simplified and generalized to any \(p\), more than one such correction, and cases with anomalous dimensions. Strikingly, the well-known factorial growth is seen to emerge already at low or medium orders, as a consequence of constraints on the \(Q\) dependence from the renormalization group. The effectiveness of the method is studied with the gluonic energy between a static quark and static antiquark (the "static energy"). Truncation uncertainties are found to be under control after next-to-leading order, despite the small exponent of the power correction (\(p=1\)) and associated rapid growth seen in the first four coefficients of the perturbative series.
Keywords: Large-order behavior of perturbation theory, Renormalons. ArXiv ePrint: yymm.xxXxx
## 1 Introduction
In 2018, the Fermilab Lattice, MILC, and TUMQCD collaborations [1] used lattice-QCD calculations of heavy-light meson masses to obtain results for renormalized quark masses in the modified minimal subtraction (\(\overline{\text{MS}}\)) scheme. The total uncertainty ranges from below 1% (for bottom, charm, and strange) to 1-2% (for down and up). The \(\overline{\text{MS}}\) scheme inevitably entails perturbation theory. Usually a top source of uncertainty would come from truncating the perturbative series in the strong coupling \(\alpha_{\text{s}}\). In ref. [1], however, the error budgets exhibit negligible uncertainty from truncation (cf. figure 4 of ref. [1]). The associated uncertainty was estimated by omitting the highest-order coefficient (of \(\alpha_{\text{s}}^{4}\)) in the relation between the pole mass and the \(\overline{\text{MS}}\) mass. It was found to be comparable to the statistical uncertainty and much smaller than the parametric uncertainty in \(\alpha_{\text{s}}\).
Essential to ref. [1] is a reinterpretation of the perturbation series [2] that in turn relies crucially on a formula for the normalization of the leading renormalon ambiguity of the pole mass [3]. Readers who are not familiar with renormalons are encouraged to indulge the jargon for a moment: clearly it is worth pursuing how to generalize refs. [2; 3], in the hope of controlling the truncation uncertainty in further applications. This paper takes up that pursuit.
The coefficients of many perturbative series in quantum mechanics [4; 5] and quantum field theory [6; 7; 8] are known to grow factorially. In QCD and other asymptotically free theories, a class of leading and subleading growths arises from soft loop momenta in Feynman
diagrams. Details of the growth can be obtained from studying implications of the renormalization group. At the same time, the growth is related to power-law corrections to the perturbation series. For now, let us characterize the growth of the \(l^{\rm th}\) coefficient as \(Ka^{l}l^{b}l!\) for some \(K\), \(a\), and \(b\). A basic renormalization-group analysis (e.g., ref. [9]) determines \(a\) and \(b\) but not the normalization \(K\). There are, however, at least three expressions in the literature for \(K\)[3; 10; 11; 12; 13]. The expressions in refs. [3] and [13] bear some resemblance to each other, but the one in refs. [10; 11; 12] is different.
The generalizations initially sought in the present work started modest: I wanted to look at scale dependence of \(\alpha_{\rm s}\) to see (as a co-author of refs. [1; 2]) whether our quoted uncertainties held up, and I wanted to treat arbitrary power corrections. Dissatisfaction with my understanding of the normalization derived in ref. [3] led to a simple way of analyzing the problem with interesting findings:
* the normalization of ref. [3] is reproduced, at least in practical terms;
* the standard factorial growth starts at low orders, not just at asymptotically large \(l\);
* the second coefficient of the \(\beta\) function and the exponent of the power correction determine the order at which the factorial growth becomes a practical matter;
* the way to deal with a sequence of power corrections becomes clear.
The third item is well known, but, even so, many analyses of large-order effects use a one-term \(\beta\) function. The last item was mentioned in v1 and v2 on arXiv.org of ref. [3], but the discussion was removed from the final publication. The derivation of the factorial growth presented below is so straightforward, it is almost surprising that it has not been known for decades. If it has appeared in the literature before, it is obscure.
The rest of this paper is organized as follows. Section 3 recalls ref. [2] and generalizes its ideas to an arbitrary (single) power correction. Section 4 considers cases with more than one power-suppressed contribution. Sections 3 and 4 rely on a special renormalization scheme that simplifies the algebra; other schemes are discussed in section 5. Section 6 considers the complication of anomalous dimensions. Proposals to improve perturbation theory should study at least one example, so section 7 applies section 3 to the static energy between a heavy quark-antiquark pair, for which four terms in the perturbation series are known (like the pole-mass-\(\overline{\rm MS}\)-mass relation). A summary and some outlook is offered in section 8. A modification of the Borel summation used in sections 3 and 4 is given in appendix A.
## 2 Notation and setup
The problem at hand is to compute in QCD, or other asymptotically free quantum field theory, a physical quantity that depends on a high-energy scale \(Q\) (or, as in section 7, short distance \(r=1/Q\)). The hard scale \(Q\) can be used to obtain a dimensionless version of the physical quantity. The dimensionless quantity can be approximated order-by-order in
perturbation theory up to power corrections:
\[\mathscr{R}(Q)=r_{-1}+R(Q)+C_{p}\frac{\Lambda^{p}}{Q^{p}},\qquad R(Q)=\sum_{l=0}r _{l}(\mu/Q)\alpha_{\rm s}(\mu)^{l+1}, \tag{1}\]
where the term \(r_{-1}\) can be \(0\) or not, \(C_{p}\) is (for now) independent of \(Q\), \(\Lambda\sim\mu{\rm e}^{-1/2\beta_{0}\alpha_{\rm s}(\mu)}\) is the scale arising from dimensional transmutation, \(\alpha_{\rm s}\) is the gauge coupling in some scheme, and \(\mu\) is the renormalization scale. The power \(p\) can be deduced from the operator-product expansion, an effective field theory, or other considerations. For now, let us consider the case with only one power correction, postponing until section 4 the more general case. Laboratory measurements or the continuum limit of lattice gauge theory can be used to provide a nonperturbative determination of \(\mathscr{R}(Q)\). Fits of data for \(\mathscr{R}(Q)\) could, ideally, be used to determine \(\alpha_{\rm s}\) with nuisance parameter \(C_{p}\). As an asymptotic expansion, the sum representing \(R(Q)\) in eq. (1) diverges, however, so an upper summation limit does not make sense without further discussion. Indeed, the definition of the power correction rests on how the sum is treated.
\(\mathscr{R}\) and \(R\) do not depend of \(\mu\), so the \(\mu\) dependence of the coefficients is intertwined with the \(\mu\) dependence of \(\alpha_{\rm s}\) and, thus, dictated by
\[\dot{\alpha}_{\rm s}(\mu)\equiv 2\beta(\alpha_{\rm s})=-2\alpha_{\rm s}(\mu) \sum_{k=0}^{\infty}\beta_{k}\alpha_{\rm s}(\mu)^{k+1}, \tag{2}\]
where \(\dot{g}={\rm d}g/{\rm d}\ln\mu\). The derivatives of the coefficients must satisfy
\[\dot{r}_{l}(\mu/Q)=2\sum_{j=0}^{l-1}(j+1)\beta_{l-1-j}r_{j}(\mu/Q). \tag{3}\]
Integrating these equations (in a mass-independent renormalization scheme) one after the other leads to
\[r_{0}(\mu/Q) =r_{0}, \tag{4a}\] \[r_{1}(\mu/Q) =r_{1}+2\beta_{0}\ln(\mu/Q)r_{0},\] (4b) \[r_{2}(\mu/Q) =r_{2}+2\ln(\mu/Q)\left(2\beta_{0}r_{1}+\beta_{1}r_{0}\right)+[2 \beta_{0}\ln(\mu/Q)]^{2}r_{0},\] (4c) \[r_{3}(\mu/Q) =r_{3}+2\ln(\mu/Q)\left(3\beta_{0}r_{2}+2\beta_{1}r_{1}+\beta_{2 }r_{0}\right)+3[2\beta_{0}\ln(\mu/Q)]^{2}r_{1}\] \[\qquad+10\beta_{0}\beta_{1}\ln^{2}(\mu/Q)r_{0}+[2\beta_{0}\ln(\mu /Q)]^{3}r_{0}, \tag{4d}\]
and so on, with constants of integration \(r_{l}\equiv r_{l}(1)\). The dependence of \(R(Q)\) on \(Q\) is, thus, tied to the renormalization-dictated dependence on \(\mu\).
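This recursion is easy to mechanize. The following minimal sketch (all numerical values are placeholders) integrates eq. (3) order by order in \(\ln(\mu/Q)\) and checks the result against eqs. (4b) and (4c).

```python
import numpy as np
from numpy.polynomial import polynomial as P

def run_coefficients(r, beta):
    """r[l] = r_l(1), beta[k] = beta_k; returns coefficient arrays, in ln(mu/Q),
    of the polynomials r_l(mu/Q) obtained by integrating eq. (3)."""
    polys = []
    for l, rl in enumerate(r):
        rhs = np.zeros(1)
        for j in range(l):
            rhs = P.polyadd(rhs, 2.0 * (j + 1) * beta[l - 1 - j] * np.asarray(polys[j]))
        poly = P.polyadd(P.polyint(rhs), [rl])   # integrate and add the constant r_l(1)
        polys.append(poly)
    return polys

# Placeholder inputs; check r_1 and r_2 against eqs. (4b) and (4c) at ln(mu/Q) = 0.3.
r = [1.0, 0.5, 0.25]
beta = [0.7, 0.4, 0.2]
lnmuQ = 0.3
p1, p2 = run_coefficients(r, beta)[1:3]
assert np.isclose(P.polyval(lnmuQ, p1), r[1] + 2 * beta[0] * lnmuQ * r[0])
assert np.isclose(P.polyval(lnmuQ, p2),
                  r[2] + 2 * lnmuQ * (2 * beta[0] * r[1] + beta[1] * r[0])
                  + (2 * beta[0] * lnmuQ) ** 2 * r[0])
```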
Equation (3) is a matrix equation, \(\dot{\mathbf{r}}=2{\bf D}\cdot\mathbf{r}\), with \(D_{lj}=(j+1)\beta_{l-1-j}\) if \(l>j\) and \(D_{lj}=0\) otherwise. For sections 3 to 5, it is convenient to develop this matrix notation further, for instance writing
\[R=\mathfrak{A}_{\rm s}\cdot\mathbf{r}_{\rm s}=\begin{bmatrix}\alpha_{\rm s}&\alpha _{\rm s}^{2}&\alpha_{\rm s}^{3}&\alpha_{\rm s}^{4}&\cdots\end{bmatrix}\begin{bmatrix} r_{0}\\ r_{1}\\ r_{2}\\ r_{3}\\ \vdots\end{bmatrix}. \tag{5}\]
Floorless delimiters \(\lceil\ \rceil\) are used instead of brackets \([\ ]\) or parentheses as a reminder that the vectors are infinite sequences. Below it will be useful to think of the subscript "s" as standing for "starting scheme", in practice \(\overline{\text{MS}}\).
The matrix notation makes scheme and scale dependence manifest and eases derivations. For example, if
\[\alpha_{b}=\alpha_{\text{s}}+b_{1}\alpha_{\text{s}}^{2}+b_{2}\alpha_{\text{s} }^{3}+b_{3}\alpha_{\text{s}}^{4}+\cdots, \tag{6}\]
then \(\mathfrak{A}_{b}=\mathfrak{A}_{\text{s}}\cdot\mathbf{b}^{-1}\) with scheme-conversion matrix
\[\mathbf{b}^{-1}=\left[\begin{array}{ccccc}1&0&0&0&\cdots\\ b_{1}&1&0&0&\cdots\\ b_{2}&2b_{1}&1&0&\cdots\\ b_{3}&b_{1}^{2}+2b_{2}&3b_{1}&1&\cdots\\ \vdots&\vdots&\vdots&\ddots&\ddots\end{array}\right],\ \ \ \ \ \mathbf{b}=\left[ \begin{array}{ccccc}1&0&0&0&\cdots\\ -b_{1}&1&0&0&\cdots\\ 2b_{1}^{2}-b_{2}&-2b_{1}&1&0&\cdots\\ 5b_{1}b_{2}-5b_{1}^{3}-b_{3}&5b_{1}^{2}-2b_{2}&-3b_{1}&1&\cdots\\ \vdots&\vdots&\vdots&\ddots&\ddots\end{array}\right]. \tag{7}\]
The coefficients in the "\(b\)" scheme are \(\mathbf{r}_{b}=\mathbf{b}\cdot\mathbf{r}_{\text{s}}\). The lower-triangular structure of these and other matrices is the key to the forthcoming analysis.
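The entries of \(\mathbf{b}^{-1}\) can be generated to any truncation order by elementary series arithmetic. The sketch below (the numerical \(b_{i}\) are placeholders) builds column \(j\) as the expansion of \(\alpha_{b}^{j+1}\) from eq. (6) in powers of \(\alpha_{\rm s}\).

```python
import numpy as np
from numpy.polynomial import polynomial as P

def conversion_matrix(b_coeffs, nmax):
    """Return the nmax x nmax truncation of the matrix b^{-1} of eq. (7)."""
    alpha_b = np.concatenate(([0.0, 1.0], b_coeffs))        # alpha_b as a series in alpha_s
    M = np.zeros((nmax, nmax))
    power = np.array([1.0])
    for j in range(nmax):
        power = P.polymul(power, alpha_b)[: nmax + 1]        # (alpha_b)^{j+1}, truncated
        padded = np.pad(power, (0, max(0, nmax + 1 - len(power))))
        M[:, j] = padded[1 : nmax + 1]                       # coefficients of alpha_s^{l+1}
    return M

binv = conversion_matrix(np.array([0.3, 0.1, 0.05]), 4)      # placeholder b_1, b_2, b_3
assert np.isclose(binv[3, 1], 0.3 ** 2 + 2 * 0.1)            # entry b_1^2 + 2 b_2 of eq. (7)
```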
The \(\overline{\text{MS}}\) scheme can be thought of as the "laboratory frame", where \(\mathbf{r}_{\text{s}}\) is most easily obtained. The "center-of-mass frame", which reduces subsequent labor, is the "geometric scheme" defined by [14]
\[\beta(\alpha_{\text{g}})=-\frac{\beta_{0}\alpha_{\text{g}}^{2}}{1-(\beta_{1}/ \beta_{0})\alpha_{\text{g}}}. \tag{8}\]
Equivalently, \(\beta_{k}=\beta_{0}(\beta_{1}/\beta_{0})^{k}\), so the \(\beta\)-function series, eq. (2), is geometric. In eq. (6), \(b_{1}=2\beta_{0}\ln\Lambda_{\text{g}}/\Lambda_{\overline{\text{MS}}}\); taking \(b_{1}=0\) not only eliminates or simplifies many entries in the scheme-conversion matrix but also means \(\Lambda_{\text{g}}=\Lambda_{\overline{\text{MS}}}\) requires no conversion. Expressions for the \(b_{i}\) connecting the geometric and \(\overline{\text{MS}}\) schemes are less interesting than the entries of the conversion matrix:
\[\mathbf{b}_{\text{g}}=\left[\begin{array}{ccccc}1&0&0&0&0&\cdots\\ 0&1&0&0&0&\cdots\\ \delta_{2}&0&1&0&0&\cdots\\ \frac{1}{2}\delta_{3}&2\delta_{2}&0&1&0&\cdots\\ \frac{1}{3}\delta_{4}-\frac{1}{6}\delta_{3}\tilde{\beta}_{1}+\frac{5}{3} \delta_{2}^{2}+\frac{1}{3}\delta_{2}\tilde{\beta}_{1}^{2}&\delta_{3}&3\delta_{ 2}&0&1&\cdots\\ \vdots&\vdots&\vdots&\vdots&\ddots&\ddots\end{array}\right], \tag{9}\]
where \(\delta_{k}=\tilde{\beta}_{k}-\tilde{\beta}_{1}^{k}\), \(\tilde{\beta}_{k}=\beta_{k}/\beta_{0}\), with the nonuniversal \(\beta_{k}\) (\(k>1\)) of the original scheme. The geometric scheme can be reached from any starting point: first introduce a scale change to align, say, \(\Lambda_{\text{lat}}\) with \(\Lambda_{\overline{\text{MS}}}\); then the coefficient vector \(\mathbf{r}_{\text{g}}=\mathbf{b}_{\text{g}}\cdot\mathbf{r}\) is independent of the ultraviolet regulator and renormalization used to obtain \(\mathbf{r}\).
## 3 One power correction
Let us recall how refs. [2; 3] handle the pole mass. The heavy-quark effective theory provides an expression for a heavy-light hadron mass [15; 16; 17] along the lines of eq. (1):
\[\mathscr{M}=\bar{m}\left(1+\sum_{l=0}r_{l}\alpha_{\rm s}^{l+1}(\bar{m})\right)+\bar{\Lambda}+{\rm O}(1/\bar{m}), \tag{3.1}\]
where \(\bar{m}=m_{\overline{\rm MS}}(\mu)\) evaluated at \(\mu=\bar{m}\), and \(\bar{\Lambda}\), which is of order \(\Lambda\), is the energy of gluons and light quarks. The series times \(\bar{m}\) is known as the pole (or on-shell) mass. The coefficients \(r_{l}\) are obtained from the quark self-energy by putting the quark on shell iteratively at each order in perturbation theory. The coefficients are infrared finite and gauge independent at every order of the iteration [18], but they grow factorially with the order \(l\)[19; 20; 21; 22]. The series thus diverges, rendering its interpretation ambiguous. A hadron mass cannot be ambiguous, so the ambiguity in the series must be canceled by \(\bar{\Lambda}\) (and higher-power terms) [23].
Komijani [3] exploited the fact that the leading factorial growth in the series, being related to \(\bar{\Lambda}\), is independent of \(\bar{m}\). Therefore, taking a derivative with respect to \(\bar{m}\) generates a quantity without \(\bar{\Lambda}\). The derivative yields
\[1+\sum_{l=0}r_{l}\alpha_{\rm s}^{l+1}(\bar{m})+2\beta\left(\alpha_{\rm s}(\bar{m})\right)\sum_{l=0}(l+1)r_{l}\alpha_{\rm s}^{l}(\bar{m})\equiv 1+\sum_{k=0}f_{k}\alpha_{\rm s}^{k+1}(\bar{m}), \tag{3.2}\]
where the \(f_{k}\) are obtained by expanding out \(\beta(\alpha_{\rm s})\) on the left-hand side:
\[f_{k}=r_{k}-2\sum_{l=0}^{k-1}(l+1)\beta_{k-1-l}r_{l}. \tag{3.3}\]
Equation (3.3) is eq. (2.3) of ref. [3].
Komijani recast eqs. (3.2) and (3.3) as a differential equation (eq. (1.6) of ref. [3]),
\[r(\alpha)+2\beta(\alpha)r^{\prime}(\alpha)=f(\alpha), \tag{3.4}\]
where the prime denotes a derivative with respect to \(\alpha\). The appendix of ref. [3] derives an asymptotic solution to eq. (3.4) that pins down the normalization of the large-order coefficients \(r_{l}\), \(l\gg 1\), i.e., the quantity denoted \(K\) in section 1. Note that ref. [3] obtains a particular solution to eq. (3.4). A general solution consists of any particular solution plus a solution to the corresponding homogeneous equation with \(0\) instead of \(f(\alpha)\) on the right-hand side. The solution of the homogeneous equation is a constant of order \(\Lambda\). In this paper, eq. (3.3) is used instead of eq. (3.4) as the starting point in search of a particular solution.
Before presenting the solution, let us generalize Komijani's idea to eq. (1): multiply \(\mathscr{R}\) by \(Q^{p}\) so the \(\Lambda^{p}\) term no longer depends on \(Q\), differentiate once with respect to \(Q\), and then divide by \(pQ^{p-1}\):
\[\mathscr{F}^{(p)}(Q)\equiv\hat{Q}^{(p)}\mathscr{R}(Q)\equiv\frac{1}{pQ^{p-1}}\frac{{\rm d}\,Q^{p}\mathscr{R}}{{\rm d}Q}=r_{-1}+F^{(p)}(Q). \tag{3.5}\]
In this case \(F^{(p)}=\hat{Q}^{(p)}R\) also, and a nonzero \(r_{-1}\) cancels out just like the \(1\) in eq. (3.1). Introducing a series for \(F^{(p)}\) and collecting like powers of \(\alpha_{\mathrm{s}}\),
\[F^{(p)}(Q)=\sum_{k=0}f_{k}^{(p)}(\mu/Q)\alpha_{\mathrm{s}}(\mu)^{k+1},\qquad f_{k}^{(p)}=r_{k}-\frac{2}{p}\sum_{l=0}^{k-1}(l+1)\beta_{k-1-l}r_{l}. \tag{3.6a}\] In matrix notation, \[\mathbf{f}^{(p)}=\mathbf{Q}^{(p)}\cdot\mathbf{r},\qquad\mathbf{Q}^{(p)}=\mathbf{1}-\frac{2}{p}\mathbf{D}, \tag{3.6b}\] with \(\mathbf{D}\) defined above.
Equations (3.6) can be derived either by keeping \(\alpha_{\mathrm{s}}(\mu)\) independent of \(Q\) and taking the derivative of the coefficients or by setting \(\mu=Q\), as in eq. (3.1), so the coefficients are constant with \(\alpha_{\mathrm{s}}(Q)\) encoding the \(Q\) dependence. Equations (3.6) generalize eqs. (3.2) and (3.4) to arbitrary \(p\); the differential equation a la eq. (3.4) corresponding to eqs. (3.6) has \(2/p\) multiplying \(\beta(\alpha)\). The particular solution to the differential equation is simply obtained by solving eq. (3.6b): \(\mathbf{r}=\mathbf{Q}^{(p)}{}^{-1}\cdot\mathbf{f}^{(p)}\).
At this point, one might wonder what could be gained this way. For some \(L\), the \(r_{l}\), \(l<L\), are available in the literature. Via eq. (3.6a), just as many \(f_{k}^{(p)}\) are obtained from these \(L\) terms and the first \(L\) coefficients \(\beta_{j}\) (eq. (3.2)). Solving eq. (3.6b) should just return the original information. That is, of course, correct, but the solution, spelled out below, _also_ yields information about the \(r_{l}\) for \(l\geq L\). Exploiting this additional information is the gist of this analysis.
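To make the bookkeeping concrete, here is a minimal numerical sketch of eqs. (3.6) with placeholder values for \(\beta_{k}\) and \(r_{l}\) (they are not the coefficients of any particular observable).

```python
import numpy as np

def Q_matrix(beta, p, n):
    """Q^(p) = 1 - (2/p) D with D_{lj} = (j+1) beta_{l-1-j} for l > j, cf. eq. (3.6b)."""
    D = np.zeros((n, n))
    for l in range(n):
        for j in range(l):
            D[l, j] = (j + 1) * beta[l - 1 - j]
    return np.eye(n) - (2.0 / p) * D

beta = [0.7, 0.4, 0.3, 0.5]                  # placeholder beta_0 ... beta_3
r = np.array([0.4, 0.9, 2.5, 9.0])           # placeholder r_0 ... r_3
f = Q_matrix(beta, p=1, n=len(r)) @ r        # the f_k^(p) of eq. (3.6a)

# Solving eq. (3.6b) returns the input coefficients ...
assert np.allclose(np.linalg.solve(Q_matrix(beta, 1, len(r)), f), r)
# ... while the same f_k, inserted into eq. (3.9) below, also estimate r_l for l >= L.
```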
The solution of eq. (3.6b) is easiest in the geometric scheme. Let \(b\equiv\beta_{1}/2\beta_{0}^{2}\), so that \(2\beta_{k}=(2\beta_{0})^{k+1}b^{k}\) (in the geometric scheme), and let \(\tau\equiv 2\beta_{0}/p\). Then \(\mathbf{Q}_{\mathrm{g}}^{(p)}=\mathbf{b}_{\mathrm{g}}\cdot\mathbf{Q}^{(p)}\cdot\mathbf{b}_{\mathrm{g}}^{-1}\) has elements
\[\left[Q_{\mathrm{g}}^{(p)}\right]_{kl}=\begin{cases}0,&k<l,\\ 1,&k=l,\\ -(l+1)\tau^{k-l}(pb)^{k-l-1},&k>l,\end{cases} \tag{3.7a}\] which looks like \[\mathbf{Q}_{\mathrm{g}}^{(p)}=\begin{bmatrix}1&0&0&0&0&0&0&\cdots\\ -\tau&1&0&0&0&0&0&\cdots\\ -\tau^{2}pb&-2\tau&1&0&0&0&0&\cdots\\ -\tau(\tau pb)^{2}&-2\tau^{2}pb&-3\tau&1&0&0&0&\cdots\\ -\tau(\tau pb)^{3}&-2\tau(\tau pb)^{2}&-3\tau^{2}pb&-4\tau&1&0&0&\cdots\\ -\tau(\tau pb)^{4}&-2\tau(\tau pb)^{3}&-3\tau(\tau pb)^{2}&-4\tau^{2}pb&-5\tau&1&0&\cdots\\ -\tau(\tau pb)^{5}&-2\tau(\tau pb)^{4}&-3\tau(\tau pb)^{3}&-4\tau(\tau pb)^{2}&-5\tau^{2}pb&-6\tau&1&\cdots\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ddots&\ddots&\ddots\end{bmatrix}. \tag{3.7b}\]
\(\mathbf{Q}_{\mathrm{g}}^{(p)}\) exhibits geometric but not factorial growth. The inverse is easily obtained row-by-row:
\[\mathbf{Q}_{\mathrm{g}}^{(p)-1}=\begin{bmatrix}1&0&0&0&0&0&0&\cdots\\ \tau&1&0&0&0&0&0&\cdots\\ \tau^{2}\frac{\Gamma(3+pb)}{\Gamma(2+pb)}&2\tau&1&0&0&0&0&\cdots\\ \tau^{3}\frac{\Gamma(4+pb)}{\Gamma(2+pb)}&2\tau^{2}\frac{\Gamma(4+pb)}{\Gamma(3+pb)}&3\tau&1&0&0&0&\cdots\\ \tau^{4}\frac{\Gamma(5+pb)}{\Gamma(2+pb)}&2\tau^{3}\frac{\Gamma(5+pb)}{\Gamma(3+pb)}&3\tau^{2}\frac{\Gamma(5+pb)}{\Gamma(4+pb)}&4\tau&1&0&0&\cdots\\ \tau^{5}\frac{\Gamma(6+pb)}{\Gamma(2+pb)}&2\tau^{4}\frac{\Gamma(6+pb)}{\Gamma(3+pb)}&3\tau^{3}\frac{\Gamma(6+pb)}{\Gamma(4+pb)}&4\tau^{2}\frac{\Gamma(6+pb)}{\Gamma(5+pb)}&5\tau&1&0&\cdots\\ \tau^{6}\frac{\Gamma(7+pb)}{\Gamma(2+pb)}&2\tau^{5}\frac{\Gamma(7+pb)}{\Gamma(3+pb)}&3\tau^{4}\frac{\Gamma(7+pb)}{\Gamma(4+pb)}&4\tau^{3}\frac{\Gamma(7+pb)}{\Gamma(5+pb)}&5\tau^{2}\frac{\Gamma(7+pb)}{\Gamma(6+pb)}&6\tau&1&\cdots\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\ddots&\ddots\end{bmatrix} \tag{3.8a}\] or, expressed as in eq. (3.7a), \[\left\lceil Q_{\mathrm{g}}^{(p)-1}\right\rceil_{lk}=\begin{cases}0,&l<k,\\ 1,&l=k,\\ (k+1)\frac{\tau^{l}\Gamma(l+1+pb)}{\tau^{k}\Gamma(k+2+pb)},&l>k.\end{cases} \tag{3.8b}\]
From one row to the next, the entries increase both in a factorial way and by powers of \(\tau\). As stated in section 1, the growth starts at low orders. From one column to the next, the entries _decrease_ factorially (and by powers of \(\tau\)). Both factorials grow rapidly only once \(l\gg pb\), \(k\gg pb\), so -- again as stated in section 1 -- the higher the power \(p\), the longer the growth need not be apparent from explicit expressions for the coefficients. Growth is also postponed for large \(b\), which happens if \(\beta_{0}\) is small but \(\beta_{1}\) is not.
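The closed form can be checked directly by building a finite truncation of \(\mathbf{Q}_{\mathrm{g}}^{(p)}\) from eq. (3.7a) and comparing its numerical inverse with eq. (3.8b); the values of \(\beta_{0}\), \(b\) and \(p\) below are arbitrary placeholders.

```python
import numpy as np
from scipy.special import gammaln

beta0, b, p, n = 0.7, 0.4, 1.0, 8
tau = 2.0 * beta0 / p

Qg = np.eye(n)
for k in range(n):
    for l in range(k):
        Qg[k, l] = -(l + 1) * tau ** (k - l) * (p * b) ** (k - l - 1)    # eq. (3.7a)

Qg_inv = np.eye(n)
for l in range(n):
    for k in range(l):
        Qg_inv[l, k] = (k + 1) * tau ** (l - k) * np.exp(
            gammaln(l + 1 + p * b) - gammaln(k + 2 + p * b))             # eq. (3.8b)

assert np.allclose(Qg @ Qg_inv, np.eye(n))
```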
Reexpressing eqs. (3.8) as series coefficients,
\[r_{l}=f_{l}^{(p)}+\left(\frac{2\beta_{0}}{p}\right)^{l}\Gamma(l+1+pb)\sum_{k=0}^{l-1}\frac{k+1}{\Gamma(k+2+pb)}\left(\frac{p}{2\beta_{0}}\right)^{k}f_{k}^{(p)}, \tag{3.9}\]
which holds (in the geometric scheme) for all \(l\). Equation (3.9) is similar to eq. (22) of ref. [3], except for three details: eq. (22) of ref. [3] omits the first term \(f_{l}^{(p)}\), has \(\infty\) as the upper limit of the sum, and holds only asymptotically (i.e., the relation is \(\sim\) instead of \(=\)). \(\mathbf{f}^{(p)}\) grows more slowly than \(\Gamma(l+1+pb)\) or \(\Gamma(k+2+pb)\), so for \(l\gg 1\) it is accurate to neglect the first term and to extend the sum to \(\infty\). The crucial difference is that eq. (3.9) holds for all \(l\), starting with the next few orders beyond the known \(r_{l}\).
Recall that \(L\) terms are available. Nowadays, \(L=4\) for some problems (e.g., eq. (3.1) and section 7) and \(L=3\) for others. For \(l<L\), eq. (3.9) returns the \(r_{l}\) available at the outset. For \(l\geq L\), eq. (3.9) suggests estimating \(r_{l}\) (in the geometric scheme) by
\[r_{l}\approx R_{l}^{(p)}\equiv R_{0}^{(p)}\left(\frac{2\beta_{0}}{p}\right)^{l}\frac{\Gamma(l+1+pb)}{\Gamma(1+pb)},\quad l\geq L, \tag{3.10a}\] \[R_{0}^{(p)}\equiv\sum_{k=0}^{L-1}(k+1)\frac{\Gamma(1+pb)}{\Gamma(k+2+pb)}\left(\frac{p}{2\beta_{0}}\right)^{k}f_{k}^{(p)}. \tag{3.10b}\]
The expression for \(R_{0}^{(1)}\) is the same as that for \(N_{k_{\rm max}}\) (with \(k_{\rm max}=L-1\)) in eq. (2.23) of ref. [3]. It also resembles the formula (taken in the geometric scheme) for \(P_{1/2}\) in eqs. (17) of ref. [13]. Applying eqs. (3.10) to the series \(R(Q)\) yields
\[R(Q)\approx\sum_{l=0}^{L-1}r_{l}\alpha_{\rm g}^{l+1}(Q)+\sum_{l=L}^{\infty}R_{l}^{(p)}\alpha_{\rm g}^{l+1}(Q). \tag{3.11a}\]
The first \(L\) terms are as usual and the others are estimated via their fastest growing part. For subsequent analysis, it is better to start the second sum at \(l=0\),
\[R(Q)\approx\sum_{l=0}^{L-1}\left(r_{l}-R_{l}^{(p)}\right)\alpha_{\rm g}^{l+1}(Q)+\sum_{l=0}^{\infty}R_{l}^{(p)}\alpha_{\rm g}^{l+1}(Q), \tag{3.11b}\]
which follows from subtracting and adding \(\sum_{l=0}^{L-1}R_{l}^{(p)}\alpha_{\rm g}^{l+1}(Q)\). For convenience below, let
\[R_{\rm RS}^{(p)}(Q)\equiv\sum_{l=0}^{L-1}\left(r_{l}-R_{l}^{(p)}\right)\alpha_{\rm g}^{l+1}(Q),\qquad R_{\rm B}^{(p)}(Q)\equiv\sum_{l=0}^{\infty}R_{l}^{(p)}\alpha_{\rm g}^{l+1}(Q). \tag{3.12}\]
\(R_{\rm RS}^{(p)}\) is similar to the truncation to \(L\) terms of the "renormalon subtracted" (RS) scheme for \(R\)[12]. Here, \(R_{\rm RS}^{(p)}\) arises not by intentional subtraction but from rearranging terms. In the examples of the pole mass [2] and the static energy (section 7), \(r_{l}-R_{l}^{(p)}\) is smaller than \(r_{l}\), especially for \(l=3\), \(4\).
Because of the factorial growth of the \(R_{l}^{(p)}\), the series \(R_{\rm B}^{(p)}\) does not converge. It can be assigned meaning through Borel summation, however. Using the integral representation of \(\Gamma(l+1)\),
\[R_{\rm B}^{(p)}(Q) =R_{0}^{(p)}\sum_{l=0}^{\infty}\left[\frac{\Gamma(l+1+pb)}{\Gamma (1+pb)\Gamma(l+1)}\int_{0}^{\infty}\left(\frac{2\beta_{0}t}{p}\right)^{l}{\rm e }^{-t/\alpha_{\rm g}(Q)}{\rm d}t\right],\] \[\to R_{0}^{(p)}\int_{0}^{\infty}\frac{{\rm e}^{-t/\alpha_{\rm g}(Q )}}{(1-2\beta_{0}t/p)^{1+pb}}{\rm d}t, \tag{3.13}\]
where the second line is obtained by swapping the order of summation and integration. Strictly speaking, the swap is not allowed because the integrand has a branch point at \(t=p/2\beta_{0}\). This singularity is known as a renormalon [8]. It is customary to place the cut on the real axis from the branch point to \(+\infty\). In ref. [2], we split the integral into two parts, over the intervals \([0,p/2\beta_{0})\) before the cut and \([p/2\beta_{0},\infty)\) along the cut. The first integral is unambiguous and given below.
For the interval \([p/2\beta_{0},\infty)\), the contour must be specified. Taking it slightly above or below the cut, for example, yields
\[\delta R^{(p)}\equiv R_{0}^{(p)}\int_{p/2\beta_{0}\pm{\rm i}\varepsilon}^{\infty\pm{\rm i}\varepsilon}\frac{{\rm e}^{-t/\alpha_{\rm g}(Q)}}{(1-2\beta_{0}t/p)^{1+pb}}{\rm d}t=-R_{0}^{(p)}{\rm e}^{\pm{\rm i}pb\pi}\frac{p^{1+pb}}{2^{1+pb}\beta_{0}}\Gamma(-pb)\left[\frac{{\rm e}^{-1/[2\beta_{0}\alpha_{\rm g}(Q)]}}{[\beta_{0}\alpha_{\rm g}(Q)]^{b}}\right]^{p}, \tag{3.14}\]
and the factor \({\rm e}^{\pm{\rm i}bp\pi}\) illustrates the ambiguity. The quantity inside the bracket is identically \(\Lambda_{\rm g}/Q=\Lambda_{\overline{\rm MS}}/Q\), so without loss \(\delta R^{(p)}\propto(\Lambda/Q)^{p}\) can be lumped into the solution of the homogeneous differential equation a la eq. (10) or, equivalently, the power correction \(C_{p}\Lambda^{p}/Q^{p}\) in eq. (1) [2].
Because the interchange of summation and integration in eq. (3.13) is not allowed, \(R_{\rm B}^{(p)}\) can be _assigned_ to be (taking \(b<0\) at first and then applying analytic continuation)
\[R_{\rm B}^{(p)}(Q) =R_{0}^{(p)}\int_{0}^{p/2\beta_{0}}\frac{{\rm e}^{-t/\alpha_{\rm g}(Q)}}{(1-2\beta_{0}t/p)^{1+pb}}{\rm d}t=R_{0}^{(p)}\frac{p}{2\beta_{0}}\mathscr{J}(pb,p/2\beta_{0}\alpha_{\rm g}(Q)), \tag{3.15a}\] \[\mathscr{J}(c,y) ={\rm e}^{-y}\Gamma(-c)\gamma^{\star}(-c,-y), \tag{3.15b}\]
which is acceptable because the asymptotic (small \(\alpha_{\rm g}\)) expansion of \(\mathscr{J}\) returns the original series in eq. (3.12). Here \(\gamma^{\star}(a,x)\equiv[1/\Gamma(a)]\int_{0}^{1}{\rm d}t\,t^{a-1}{\rm e}^{-xt}\) is known as the limiting function of the incomplete gamma function [24]. It is analytic in \(a\) and \(x\) and has a convergent expansion
\[\gamma^{\star}(a,-y)=\frac{1}{\Gamma(a)}\sum_{n=0}^{\infty}\frac{y^{n}}{n!(n+a)},\ \ \forall y, \tag{3.16}\]
which saturates quickly, also when \(a=-pb<0\).
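For orientation, eqs. (3.15) and (3.16) can be checked numerically. The sketch below is ours (not from the accompanying notebook); the values standing in for \(c=pb\) and \(y=p/2\beta_{0}\alpha_{\rm g}\) are arbitrary test values, with \(c<0\) so that the integral in eq. (3.15a) converges without analytic continuation:

```
(* Sketch: check eqs. (3.15)-(3.16).  With u = 2 beta0 t/p, the t-integral in
   eq. (3.15a) equals (p/2 beta0) J(c,y), where
   J(c,y) = Integrate[Exp[-y u] (1-u)^(-1-c), {u,0,1}], c = p b, y = p/(2 beta0 alpha).
   By the series of eq. (3.16), Gamma[-c] gammaStar(-c,-y) = Sum[y^n/(n!(n-c))].
   c and y are arbitrary test values; c < 0 keeps the integral convergent. *)
c = -3/10; y = 3;
Jseries = N[Exp[-y] Sum[y^n/(n! (n - c)), {n, 0, 80}]];
Jintegral = NIntegrate[Exp[-y u] (1 - u)^(-1 - c), {u, 0, 1}];
Jseries - Jintegral  (* consistent with zero *)
```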
Combining the various ingredients leads to the prescription
\[\mathscr{R}(Q)\equiv r_{-1}+R_{\rm RS}^{(p)}(Q)+R_{\rm B}^{(p)}(Q)+C_{p}\frac{\Lambda^{p}}{Q^{p}} \tag{3.17}\]
for estimating \(\mathscr{R}(Q)\). Here, \(R_{\rm RS}^{(p)}(Q)\) is introduced in eq. (3.12) and \(R_{\rm B}^{(p)}(Q)\) is defined by the right-hand side of eq. (3.15a). Equation (3.17) is just eq. (25) of ref. [2], generalized to \(p\) different from \(1\).
For the relation between the pole mass and \(\overline{\rm MS}\) mass, ref. [2] referred to eq. (3.17) as "minimal renormalon subtraction" (MRS) in analogy with the RS mass of ref. [12]. The derivation given here arguably does not subtract anything but instead adds new information to the usual truncated perturbation series, rearranges a few terms, and then assigns meaning to an otherwise ill-defined series expression. Even so, this paper continues to refer to the procedure as MRS. For example, it is often convenient to consider \(R_{\rm RS}^{(p)}(Q)+R_{\rm B}^{(p)}(Q)\equiv R_{\rm MRS}(Q)\) as a single object. The asymptotic (small \(\alpha_{\rm g}\)) expansion of \(R_{\rm MRS}(Q)\) is identical to the original series \(R(Q)\).
Starting with eqs. (10), the renormalization scale has been chosen to be \(\mu=Q\). If \(\mu=sQ\) is chosen instead, the derivations do not change. The coupling \(\alpha_{\rm g}(Q)\) simply becomes \(\alpha_{\rm g}(sQ)\) and the coefficients \(r_{l}=r_{l}(1)\) and \(f_{k}^{(p)}=f_{k}^{(p)}(1)\) become \(r_{l}(s)\) and \(f_{k}^{(p)}(s)\). How these effects play out in practice is discussed in section 7. In \(\delta R^{(p)}\), the bracket in eq. (3.14) becomes \([\Lambda_{\rm g}/sQ]^{p}\), so the overall change is to replace \(R_{0}^{(p)}(1)\) with \(R_{0}^{(p)}(s)/s^{p}\).
## 4 Cascade of power corrections
In general, problems like eq. (1) have more than one power correction. If there are two, with \(p_{2}>p_{1}\), \(\mathscr{F}^{p_{1}}\) still contains \((p_{1}-p_{2})C_{p_{2}}\Lambda^{p_{2}}/p_{1}Q^{p_{2}}\), which can be removed with \(\hat{Q}^{(p_{2})}\):
\[\mathbf{f}^{\{p_{1},p_{2}\}}\equiv{\bf Q}^{(p_{2})}\cdot\mathbf{f}^{(p_{1})}\quad\Rightarrow\quad\mathbf{f}^{(p_{1})}={\bf Q}^{(p_{2})}{}^{-1}\cdot\mathbf{f}^{\{p_{1},p_{2}\}}. \tag{4.1}\]
These coefficients could then be used in eq. (3.9). A similar idea was mentioned in versions v1 and v2 of ref. [3] on arXiv.org. With the early onset of the "large-\(l\)" behavior not yet clear when ref. [3] was written, the utility of eq. (4.1) was also not clear. For whatever reason, the discussion was removed from the final publication.
More concretely and in general, if the set of powers is \(\{p_{1},p_{2},\ldots,p_{n}\}\), the operator (with \(\hat{Q}^{(p_{1})}\) rightmost)
\[\hat{Q}^{\{p_{i}\}}=\prod_{j=0}^{n-1}\hat{Q}^{(p_{n-j})} \tag{4.2}\]
fully removes the power corrections associated with these powers. In matrix notation, the \(F\)-series coefficients
\[\boldsymbol{f}^{\{p_{i}\}}=\mathbf{Q}^{\{p_{i}\}}\cdot\boldsymbol{r}=\prod_{j=0}^{n-1}\mathbf{Q}^{(p_{n-j})}\cdot\boldsymbol{r} \tag{4.3}\]
are obtained with \(\mathbf{Q}^{\{p_{i}\}}\), which is the obvious matrix representation of \(\hat{Q}^{\{p_{i}\}}\). This equation can be solved for
\[\boldsymbol{r}=\mathbf{Q}^{\{p_{i}\}}{}^{-1}\cdot\boldsymbol{f}^{\{p_{i}\}}=\prod_{j=1}^{n}\mathbf{Q}^{(p_{j})}{}^{-1}\cdot\boldsymbol{f}^{\{p_{i}\}}, \tag{4.4}\]
and, as above, the series \(R(Q)\) is approximated by using the \(L\) known terms of \(\boldsymbol{r}\) while using the rest of them from this solution.
Because the \(\mathbf{Q}^{(p_{i})}\) commute, their inverses do, so a partial-fraction decomposition turns the product into a sum,
\[\prod_{j=1}^{n}\mathbf{Q}^{(p_{j})}{}^{-1}=\sum_{j=1}^{n}h_{j}^{\{p_{i}\}}\mathbf{Q}^{(p_{j})}{}^{-1},\qquad h_{j}^{\{p_{i}\}}=\prod_{k=1,k\neq j}^{n}\frac{p_{k}}{p_{k}-p_{j}}. \tag{4.5}\]
Note that \(\sum_{j=1}^{n}h_{j}^{\{p_{i}\}}=1\), \(\sum_{j=1}^{n}p_{j}h_{j}^{\{p_{i}\}}=0\); table 1 shows the \(h_{j}^{\{p_{i}\}}\) for various sets \(\{p_{i}\}\). The solution is thus,
\begin{table}
\begin{tabular}{|c|c c c c c c|} \hline \(\{p_{i}\}\ \backslash\ \ j\) & \(1\) & \(2\) & \(3\) & \(4\) & \(6\) & \(8\) \\ \hline \{1,2\} & \(2\) & \(-1\) & – & – & – & – \\ \{1,3\} & \(3/2\) & – & \(-1/2\) & – & – & – \\ \{1,2,3\} & \(3\) & \(-3\) & \(1\) & – & – & – \\ \{1,2,4\} & \(8/3\) & \(-2\) & – & \(1/3\) & – & – \\ \{1,2,3,4\} & \(4\) & \(-6\) & \(4\) & \(-1\) & – & – \\ \{2,4\} & – & \(2\) & – & \(-1\) & – & – \\ \{2,4,6,8\} & – & \(4\) & – & \(-6\) & \(4\) & \(-1\) \\ \{4,6\} & – & – & – & \(3\) & \(-2\) & – \\ \{4,6,8\} & – & – & – & \(6\) & \(-8\) & \(3\) \\ \{1,2,4,6\} & \(16/5\) & \(-3\) & – & \(1\) & \(-1/5\) & – \\ \{1,3,4,6,8\} & \(96/35\) & – & \(-32/5\) & \(6\) & \(-8/5\) & \(9/35\) \\ \hline \end{tabular}
\end{table}
Table 1: Partition coefficients \(h_{j}^{\{p_{i}\}}\) for various sets of powers \(p_{i}\).
\[\mathbf{r}=\sum_{j=1}^{n}h_{j}^{\{p_{i}\}}\mathbf{Q}^{(p_{j})}{}^{-1}\cdot\mathbf{f}^{\{p_{i}\}}. \tag{4.6}\]
This generalizes eq. (3.9). As before, the prescription is to take the first \(L\) \(r_{l}\) as computed in the literature and to approximate the rest with the leading factorials in eq. (4.6). That means
\[\mathscr{R}(Q) \equiv r_{-1}+R_{\text{RS}}^{(p)}(Q)+R_{\text{B}}^{(p)}(Q)+\sum_{i= 1}^{n}C_{p_{i}}\frac{\Lambda^{p_{i}}}{Q^{p_{i}}}, \tag{4.7a}\] \[R_{\text{RS}}(Q) \equiv\sum_{l=0}^{L-1}\left(r_{l}-R_{l}^{\{p_{i}\}}\right)\alpha_ {\text{g}}^{l+1}(Q),\] (4.7b) \[R_{\text{B}}(Q) \equiv\sum_{j=1}^{n}h_{j}^{\{p_{i}\}}R_{0}^{\{p_{i}\}(p_{j})} \frac{p_{j}}{2\beta_{0}}\mathscr{J}\left(p_{j}b,p_{j}/2\beta_{0}\alpha_{\text {g}}(Q)\right), \tag{4.7c}\]
where
\[R_{l}^{\{p_{i}\}} =\sum_{j=1}^{n}h_{j}^{\{p_{i}\}}R_{0}^{\{p_{i}\}(p_{j})}\left( \frac{2\beta_{0}}{p_{j}}\right)^{l}\frac{\Gamma(l+1+p_{j}b)}{\Gamma(1+p_{j}b)}, \tag{4.7d}\] \[R_{0}^{\{p_{i}\}(p_{j})} =\sum_{k=0}^{L-1}(k+1)\frac{\Gamma(1+p_{j}b)}{\Gamma(k+2+p_{j}b) }\left(\frac{p_{j}}{2\beta_{0}}\right)^{k}f_{k}^{\{p_{i}\}}. \tag{4.7e}\]
The same \(f_{k}^{\{p_{i}\}}\) appear in all \(R_{0}^{\{p_{i}\}(p_{j})}\), hence the somewhat fussy notation. For lack of a better name, MRS now stands for "multiple renormalon subtraction" even though, again, the procedure as developed here adds information. A possible notation to distinguish how many power corrections have been removed from a given series is "MRS\(\{p_{1},p_{2},\ldots,p_{n}\}\)".
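A quick way to generate and check the entries of table 1 is sketched below (our own check, not part of the accompanying notebook); it implements \(h_{j}^{\{p_{i}\}}\) of eq. (4.5) directly:

```
(* Sketch: partition coefficients of eq. (4.5) and the two sum rules quoted above. *)
h[ps_List] := Table[
   Product[If[k == j, 1, ps[[k]]/(ps[[k]] - ps[[j]])], {k, Length[ps]}],
   {j, Length[ps]}];
h[{1, 2, 4, 6}]                                            (* {16/5, -3, 1, -1/5}, cf. table 1 *)
{Total[h[{1, 2, 4, 6}]], {1, 2, 4, 6} . h[{1, 2, 4, 6}]}   (* {1, 0} *)
```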
## 5 Other renormalization schemes
While the geometric scheme simplifies the solution of the matrix equations, it is useful to generalize MRS to arbitrary schemes. Given the algebra of section 3, the simplest way to solve eq. (3.6b) is to combine eqs. (2.9) and (3.8a), yielding
\[\mathbf{Q}^{(p)}{}^{-1}=\mathbf{b}_{\text{g}}^{-1}\cdot\mathbf{Q}_{\text{g}}^ {(p)}{}^{-1}\cdot\mathbf{b}_{\text{g}}=\mathbf{Q}_{\text{g}}^{(p)}{}^{-1}+ \mathbf{\Delta}^{(p)}. \tag{5.1}\]
The lower-triangular matrix \(\mathbf{\Delta}^{(p)}\) contains the \(\delta_{k}\) introduced immediately after eq. (2.9), which parametrize the deviation from the geometric scheme of the arbitrary-scheme \(\beta\)-function coefficients. The same result is obtained, of course, by solving eq. (3.6b) directly and eliminating the \(\beta_{k}\) in favor of the \(\delta_{k}\).
Another way to express eq. (5.1) is
\[\mathbf{Q}^{(p)}{}^{-1}=\mathbf{Q}_{\text{g}}^{(p)}{}^{-1}\left(\mathbf{1}+ \mathbf{K}^{(p)}\right), \tag{5.2}\]
where \(\mathbf{K}^{(p)}\) looks like
\[\mathbf{K}^{(p)}=\left[\begin{matrix}0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ \tau\delta_{2}&0&0&0&0&0\\ 2\tau^{2}\delta_{2}+\tau\delta_{3}&2\tau\delta_{2}&0&0&0&0\\ 3\tau^{3}(2+pb)\delta_{2}+2\tau^{2}\delta_{3}+\tau\delta_{4}&6\tau^{2}\delta_{2}+2\tau\delta_{3}&3\tau\delta_{2}&0&0&0\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ddots\end{matrix}\right]. \tag{5.3}\]
The matrix \(\mathbf{K}^{(p)}\) can be decomposed into matrix coefficients of \(\delta_{i}\), \(\delta_{i}\delta_{j}\), etc. The matrices multiplying single powers of \(\delta_{i}\) possess an easily seen pattern:
\[\left.\frac{\partial K_{lk}}{\partial\delta_{i}}\right|_{\forall j,\delta_{j}=0}=\left\{\begin{array}{ll}0,&l<k+i+1\\ (k+1)\tau,&l=k+i+1\\ (k+1)(l-i)\frac{\tau^{l-i}\Gamma(l-i+pb)}{\tau^{k}\Gamma(k+2+pb)},&l>k+i+1.\end{array}\right. \tag{5.4}\]
For example, the term \(3\tau^{3}(2+pb)\delta_{2}\) in \(K_{50}^{(p)}\) is the first nontrivial term. Starting on the \(l=6\) row (not shown in eq. (5.3)), \(\mathbf{K}^{(p)}\) contains pieces proportional to \(\delta_{i}^{2}\); similarly, starting on the \(l=7\) row, \(\mathbf{K}^{(p)}\) contains pieces proportional to \(\delta_{i}\delta_{j}\). In neither case is any pattern to the matrix coefficient apparent.
The original correction \(\mathbf{\Delta}^{(p)}\) looks similar to the right-hand side of eq. (5.3), but its structure, which is most easily constructed from \(\mathbf{Q}_{\text{g}}^{(p)}{}^{-1}\cdot\mathbf{K}^{(p)}\), is less illuminating than \(\mathbf{K}^{(p)}\)'s. The terms in \(r_{l}\), \(l\geq L\), stemming from \(\mathbf{\Delta}^{(p)}\) are smaller than those from \(\mathbf{Q}_{\text{g}}^{(p)}{}^{-1}\). In previous work on the large-\(l\) behavior of the \(r_{l}\)[3; 9; 22; 25], the \(\delta_{j}\) appear in a way that does not look like the medium-\(l\) pattern accessible by the matrix derivation.
In practice, however, the details of \(\mathbf{K}^{(p)}\) may not matter. Only the first few \(\delta_{j}\) are known. In the geometric scheme they enter the coefficients \(\mathbf{r}_{\text{g}}\) and \(\mathbf{f}_{\text{g}}\). Thus, they may as well be absorbed into \(\mathbf{r}_{\text{s}}\) and \(\mathbf{f}_{\text{s}}\) by introducing
\[\mathbf{f}_{\text{Ks}}^{(p)}\equiv\left(\mathbf{1}+\mathbf{K}^{(p)}\right)\cdot\mathbf{f}_{\text{s}}^{(p)},\qquad\mathfrak{A}_{\text{s}}\cdot\mathbf{r}_{\text{s}}=\mathfrak{A}_{\text{s}}\cdot\mathbf{Q}_{\text{g}}^{(p)}{}^{-1}\cdot\mathbf{f}_{\text{Ks}}^{(p)}. \tag{5.5}\]
Then Borel summation can be applied by combining the growing part of \(\mathbf{Q}_{\text{g}}^{(p)}{}^{-1}\) with \(\mathfrak{A}_{\text{s}}\) and combining the diminishing part of \(\mathbf{Q}_{\text{g}}^{(p)}{}^{-1}\) with \(\mathbf{f}_{\text{Ks}}^{(p)}\) to form the normalization factor. Indeed, if \(L\) orders are available, and the scheme is chosen so that \(\delta_{j}=0\) for all \(j\leq L-2\), then the upper-left \(L\times L\) block of \(\mathbf{K}^{(p)}\) vanishes, and the knowable part of \(\mathbf{f}_{\text{Ks}}^{(p)}\) coincides with \(\mathbf{f}_{\text{s}}^{(p)}\).
A reason to consider schemes other than the geometric coupling is that \(\alpha_{\text{g}}(\mu)\) runs into a branch point of the Lambert-\(W\) function [26] at \(\mu=(\text{e}/2b)^{b}\Lambda\). (For \(N_{c}=n_{f}=3\), \((\text{e}/2b)^{b}\approx 1.629\).) Figure 1 shows the running of \(\alpha_{\text{g}}\) and \(\alpha_{2}\) (\(\alpha_{n}\) for \(n=2\)), in SU(3) gauge theory with three massless flavors. The pole in the geometric \(\beta\) function, which is the source of the problem, can be removed while retaining a closed-form relation between \(\ln(\mu/\Lambda)\) and
a family of schemes \(\alpha_{n}\):
\[\beta(\alpha_{n}) =-\frac{\beta_{0}\alpha_{n}^{2}}{1-(\beta_{1}/\beta_{0})\alpha_{n}+ n(\beta_{1}\alpha_{n}/\beta_{0})^{n+1}}, \tag{5.6a}\] \[\ln(\mu/\Lambda) =\frac{1}{2\beta_{0}\alpha_{n}}+b\ln(\beta_{0}\alpha_{n})-b\left( \frac{\beta_{1}\alpha_{n}}{\beta_{0}}\right)^{n}. \tag{5.6b}\]
In section 7, \(\alpha_{2}\) is used to study how MRS works in practice. Like \(\alpha_{\rm g}\), \(\alpha_{2}\) has \(\delta_{2}=0\), so that \({\bf K}^{(p)}\) can be neglected (for \(L\leq 4\)). \(\alpha_{\overline{\rm MS}}\) can be formulated by integrating the \(\overline{\rm MS}\)\(\beta\) function with either \(1/\beta(\alpha_{\rm s})\) or \(\beta(\alpha_{\rm s})\) expanded to fixed order. Both have an undesirable fixed point a la \(\alpha_{\rm g}\). Truncating with \(\beta_{3}\), the former choice -- also used in section 7 -- is valid only for \(\mu\geq 2.1797\Lambda\), at which point \(\alpha_{\rm s}=0.97601\) (cf., figure 1). The latter (again truncating with \(\beta_{3}\)) is valid only for \(\mu\geq 0.87645\Lambda\), asymptotically as \(\alpha_{\rm s}\to\infty\) and in practice for \(\alpha_{\rm s}\gtrsim 50\).
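In practice, eq. (5.6b) is what one solves numerically for the coupling at a given scale. The sketch below is ours (not from the accompanying notebook); the \(\beta\)-function coefficients are the standard universal values for SU(3) with three massless flavors, and the root-search seed and the chosen value of \(\mu/\Lambda\) are arbitrary:

```
(* Sketch: solve eq. (5.6b) for alpha_n at a given mu/Lambda; SU(3), n_f = 3,
   T_F = 1/2.  beta0, beta1 are the universal coefficients, b = beta1/(2 beta0^2). *)
nc = 3; nf = 3;
beta0 = (11 nc/3 - 2 nf/3)/(4 Pi);
beta1 = (34 nc^2/3 - (13 nc/3 - 1/nc) nf)/(4 Pi)^2;
b = beta1/(2 beta0^2);
alphaN[n_, muOverLambda_?NumericQ] := a /. FindRoot[
    Log[muOverLambda] == 1/(2 beta0 a) + b Log[beta0 a] - b (beta1 a/beta0)^n,
    {a, 0.2}];
alphaN[2, 10.]   (* alpha_2 at mu = 10 Lambda, roughly 0.23 *)
```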
## 6 Anomalous dimensions
The \(Q\) dependence is not always as simple as the power law in eq. (2.1), because \(C_{p}\) can depend on \(Q\) via \(\alpha_{\rm s}(Q)\). In the operator-product expansion, for example, power corrections take the form
\[C_{p}(\mu/Q,\alpha_{\rm s}(\mu))\frac{\langle\mathscr{O}(\mu)\rangle}{Q^{p}}= \widehat{C}_{p}(\alpha_{\rm s}(Q))\frac{\langle\mathscr{O}_{\rm RGI}\rangle}{ Q^{p}}. \tag{6.1}\]
On the right-hand side, the renormalization group has been used to factor the \(\mu\) dependence, such that \(\langle\mathscr{O}_{\rm RGI}\rangle\propto\Lambda^{p}\). The renormalization-group-invariant (RGI) Wilson coefficient can
be written
\[\widehat{C}_{p}(\alpha_{\rm s})=(2\beta_{0}\alpha_{\rm s})^{\psi}\sum_{l=-1}c_{l}\alpha_{\rm s}^{l+1}, \tag{6.2}\]
where \(\psi=\gamma_{0}/2\beta_{0}\) and \(\gamma_{0}\) is the one-loop anomalous dimension of \(\mathscr{O}\). Some of the leading coefficients may (for some reason) vanish, and the series is known in practice only to some order. Strategies for truncating the series in eq. (6.2) lie beyond the scope of this paper.
Let us assume \(c_{-1}\neq 0\). It is convenient to extend the matrix notation to \(\mathfrak{A}_{\rm g}=\begin{bmatrix}1&\alpha_{\rm g}&\alpha_{\rm g}^{2}&\alpha_{\rm g}^{3}&\alpha_{\rm g}^{4}&\cdots\end{bmatrix}\), \(\mathbf{r}_{\rm g}=\begin{bmatrix}r_{-1}&r_{0}&r_{1}&r_{2}&r_{3}&\cdots\end{bmatrix}^{\rm T}\), and so on. The \(r_{-1}\) entry is useful for bookkeeping; it cannot influence the final result, so below it can be set to \(0\), which is equivalent to changing the physical quantity to \(\mathscr{R}(Q)-r_{-1}\).
To isolate \(\Lambda^{p}\) so that it can be differentiated away, it is necessary to multiply by \(Q^{p}/\widehat{C}_{p}\). Division by the Wilson coefficient changes the series \(\mathfrak{A}_{\rm g}\cdot\mathbf{r}_{\rm g}\) to \((2\beta_{0}\alpha_{\rm s})^{-\psi}\mathfrak{A}_{\rm g}\cdot\mathbf{C}^{-1} \cdot\mathbf{r}_{\rm g}\), where
\[\mathbf{C}=\left[\begin{array}{ccccc}c_{-1}&0&0&0&\cdots\\ c_{0}&c_{-1}&0&0&\cdots\\ c_{1}&c_{0}&c_{-1}&0&\cdots\\ c_{2}&c_{1}&c_{0}&c_{-1}&\cdots\\ \vdots&\vdots&\ddots&\ddots&\ddots\end{array}\right]. \tag{6.3}\]
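The matrix in eq. (6.3) is lower-triangular and Toeplitz, so its inverse is again lower-triangular Toeplitz, with entries given by the coefficients of the reciprocal of the series \(c_{-1}+c_{0}x+c_{1}x^{2}+\cdots\). A short sketch (ours; the numerical \(c_{l}\) are arbitrary illustrative values) makes this explicit:

```
(* Sketch: the Toeplitz matrix of eq. (6.3) and its inverse from the reciprocal series.
   The list below stands for arbitrary {c_{-1}, c_0, c_1, c_2}. *)
toeplitz[c_List] := Table[If[i >= j, c[[i - j + 1]], 0], {i, Length[c]}, {j, Length[c]}];
c = {2, 1, -1/3, 1/5};
cInv = PadRight[
   CoefficientList[Normal[Series[1/Sum[c[[n + 1]] x^n, {n, 0, 3}], {x, 0, 3}]], x],
   Length[c]];
Inverse[toeplitz[c]] - toeplitz[cInv]   (* zero matrix *)
```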
The operation \(\hat{Q}^{(p)}\) is applied to \((2\beta_{0}\alpha_{\rm s})^{-\psi}\mathfrak{A}_{\rm g}\cdot\mathbf{C}^{-1} \mathbf{r}_{\rm g}\), followed by multiplication by \(\widehat{C}_{p}\). These steps yield \(\mathfrak{A}_{\rm s}\cdot\mathbf{f}_{\rm g}^{(p,\psi,\widehat{C})}\) with
\[\mathbf{f}_{\rm g}^{(p,\psi,\widehat{C})}=\mathbf{C}\cdot\mathbf{Q}_{\rm g}^{(p,\psi)}\cdot\mathbf{C}^{-1}\cdot\mathbf{r}_{\rm g}, \tag{6.4}\]
and \(\mathbf{Q}_{\rm g}^{(p,\psi)}\) has the same entries as in eqs. (3.7) but with \(l\to l-\psi\) and \(k\to k-\psi\). The inverse \(\mathbf{Q}_{\rm g}^{(p,\psi)}{}^{-1}\) is given by eqs. (3.8) with the same substitutions.
Equation (6.4) can be solved for
\[\mathbf{r}_{\rm g}=\mathbf{C}\cdot\mathbf{Q}_{\rm g}^{(p,\psi)}{}^{-1}\cdot\mathbf{C}^{-1}\cdot\mathbf{f}_{\rm g}^{(p,\psi,\widehat{C})}, \tag{6.5}\]
which has the same structure as the scheme change eq. (5.1). Thus,
\[\mathbf{C}\cdot\mathbf{Q}_{\rm g}^{(p,\psi)}{}^{-1}\cdot\mathbf{C}^{-1}=\mathbf{Q}_{\rm g}^{(p,\psi)}{}^{-1}\cdot\left(\mathbf{1}+\mathbf{K}^{(p,\psi,\widehat{C})}\right), \tag{6.6}\]
and \(\mathbf{K}^{(p,\psi,\widehat{C})}\) can be absorbed into the coefficients \(\mathbf{f}_{\rm g}^{(p,\psi,\widehat{C})}\), as in eq. (5.5) when estimating \(r_{l}\), \(l\geq L\). In the basic formulas for the improved series, eqs. (3.10), it makes sense (along with \(l\to l-\psi\) and \(k\to k-\psi\)) to change the conventional factor \(\Gamma(1+pb)\) to \(\Gamma(1+pb-\psi)\) and to omit \(\psi\) in the powers of \(2\beta_{0}/p\). The change to the normalization factor, \(R_{0}^{(p,\psi)}\), is straightforward. In the Borel summation leading up to eqs. (3.15), \(\psi\) always appears as \(pb-\psi\); the sum over \(l\) and splitting of the integration follow exactly as in section 3.
If more than one power correction has an anomalous dimension, they still can be removed successively. Now every step affects all subsequent steps. The case of removing two
power corrections reveals how complications ensue. Let the two power terms be \(\widehat{C}_{i}\Lambda^{p_{i}}/Q^{p_{i}}\), \(i=1,2\). The first step converts the second Wilson coefficient
\[\widehat{C}_{2} =(2\beta_{0}\alpha_{\rm s})^{\psi_{2}}\mathfrak{A}_{\rm s}\cdot\boldsymbol{c}_{2}\mapsto\frac{p_{1}-p_{2}}{p_{1}}(2\beta_{0}\alpha_{\rm s})^{\psi_{2}}\mathfrak{A}_{\rm s}\cdot\boldsymbol{c}_{2/1}, \tag{6.7a}\] \[\boldsymbol{c}_{2/1} =\mathbf{C}_{1}\cdot\mathbf{Q}^{(p_{1}-p_{2},\psi_{1}-\psi_{2})}\cdot\mathbf{C}_{1}^{-1}\cdot\boldsymbol{c}_{2}. \tag{6.7b}\]
The second step then leads to
\[\boldsymbol{f}_{\rm g}^{\{(p_{1},\psi_{1},\widehat{C})_{1},(p_{2},\psi_{2},\widehat{C})_{2}\}}=\mathbf{C}_{2/1}\cdot\mathbf{Q}_{\rm g}^{(p_{2},\psi_{2})}\cdot\mathbf{C}_{2/1}^{-1}\cdot\mathbf{C}_{1}\cdot\mathbf{Q}_{\rm g}^{(p_{1},\psi_{1})}\cdot\mathbf{C}_{1}^{-1}\cdot\boldsymbol{r}_{\rm g}. \tag{6.8}\]
Note that the same outcome is obtained if the \(\Lambda^{p_{2}}\) term is removed first, i.e.,
\[\mathbf{C}_{2/1}\cdot\mathbf{Q}_{\rm g}^{(p_{2},\psi_{2})}\cdot\mathbf{C}_{2/1}^{-1}\cdot\mathbf{C}_{1}\cdot\mathbf{Q}_{\rm g}^{(p_{1},\psi_{1})}\cdot\mathbf{C}_{1}^{-1}=\mathbf{C}_{1/2}\cdot\mathbf{Q}_{\rm g}^{(p_{1},\psi_{1})}\cdot\mathbf{C}_{1/2}^{-1}\cdot\mathbf{C}_{2}\cdot\mathbf{Q}_{\rm g}^{(p_{2},\psi_{2})}\cdot\mathbf{C}_{2}^{-1}, \tag{6.9}\]
and similarly for their inverses. A decomposition of the right-hand side of eq. (6.8) along the lines of eq. (6.6) seems possible by isolating \(\mathbf{Q}_{\rm g}^{(p_{1},\psi_{1})}\) and \(\mathbf{Q}_{\rm g}^{(p_{2},\psi_{2})}\) and pragmatically absorbing the rest into the coefficients (as in eq. (5.5)), but an elegant arrangement has (so far) eluded me.
Suppose the \(c_{l}\) vanish for \(l<n\). The first nonzero term, \(c_{n}\), should not be connected to the \(r_{l}\), \(l\leq n\). A possible route forward is to subtract \(\sum_{l=-1}^{n}r_{l}\alpha_{\rm s}^{l+1}\) from \(\mathscr{R}\), and the difference is still a valid observable. The factorially growing contributions can then be treated as before. If \(p_{2}=p_{1}\) but \(\psi_{2}\neq\psi_{1}\), the vector \(\boldsymbol{c}_{2/1}\) in eqs. (6.7) must be redefined as \(-2\mathbf{C}_{1}\cdot\mathbf{D}^{(\psi_{1}-\psi_{2})}\cdot\mathbf{C}_{1}^{-1}\cdot\boldsymbol{c}_{2}\) with \(c_{2/1,-1}=0\), so the second step will have to be tweaked in a similar way.
## 7 The static energy
To see MRS in action, the procedure is applied in this section to the gluonic energy stored between a static quark and a static antiquark, \(E_{0}(r)\), called the "static energy" for short. It is computed in lattice gauge theory from the exponential fall-off at large \(t\) of a \(t\times r\) Wilson loop [27; 28]. The lattice quantity is the sum of a physical quantity plus twice the linearly divergent self-energy of a static quark. Dimensional regularization has no linear divergence, but on general grounds a constant of order \(\Lambda\) is possible. Setting \(\mathscr{R}(1/r)=-rE_{0}(r)/C_{F}\) yields a quantity of the form given in eq. (1) with \(r_{-1}=0\) and \(p=1\).
The static energy is a good candidate to test MRS because four orders in perturbation theory are known, thus enabling a thorough test. Beyond the tree-level result of order \(\alpha_{\rm s}\), \(\overline{\rm MS}\)-scheme results are available at order \(\alpha_{\rm s}^{2}\)[29; 30], \(\alpha_{\rm s}^{3}\)[31; 32; 33], and \(\alpha_{\rm s}^{4}\)[34; 35; 36; 37]. The one-loop [38; 39], two-loop [40; 41], three-loop [42; 43], and four-loop [44; 45; 46] coefficients of the \(\overline{\rm MS}\)\(\beta\) function are also needed. The five-loop coefficient \(\beta_{4}\)[47; 48; 49] is not needed here.
References [29; 30; 31; 32; 33; 34; 35; 36; 37] compute the static potential, \(V(q)\), in momentum space, finding it to be infrared divergent starting at order \(\alpha_{\rm s}^{4}\)[50]. This behavior reflects the emergence of an "ultrasoft" scale \(\alpha_{\rm s}r^{-1}\) in addition to the hard scale \(r^{-1}\). Ultrasoft contributions can be described in a multipole expansion and thereby demonstrated to render the static energy infrared finite [51; 52; 53]. If \(\alpha_{\rm s}r^{-1}\gg\Lambda\), the ultrasoft part can be calculated perturbatively [51;
53], and the total static energy is explicitly seen to be infrared finite [51; 53; 35]. A remnant of the cancellation remains in logarithms of the ratio of the two scales, \(\ln[(\alpha_{\rm s}r^{-1})/r^{-1}]=\ln\alpha_{\rm s}\).
Following the exposition of Garcia i Tormo [54], a momentum-space quantity, here denoted \(\tilde{\mathscr{R}}(q)\), poses a second problem a la eq. (1), again with \(r_{-1}=0\) but now with \(p>1\). To distinguish the series coefficients associated with \(\tilde{\mathscr{R}}(q)\) and \(\mathscr{R}(1/r)\) from each other and the distance \(r\), the notation used here is
\[\tilde{R}(q)=\sum_{l=0}a_{l}(\mu/q)\alpha_{\rm s}(\mu)^{l+1},\qquad R(1/r)=\sum_{l=0}v_{l}(\mu r)\alpha_{\rm s}(\mu)^{l+1}. \tag{7.1}\]
The coefficients \(a_{l}(1)\) are available in the literature [29; 30; 31; 32; 33; 34; 35; 36; 37] and can be found in a consistent notation in the accompanying Mathematica [55] notebook. Each \(v_{l}(\mu r)/r\) is \(4\pi\) times the Fourier transform of \(a_{l}(\mu/q)/q^{2}\). Indeed, the \(p=1\) factorial growth of the \(v_{l}\) arises from the Fourier transform of the logarithms (cf., eqs. (2.4)) in \(a_{l}(\mu/q)\). The series \(F^{(1)}(1/r)\), derived as in section 3 from \(R(1/r)\), is related to the "static force", \(\mathfrak{F}(r)=-{\rm d}E_{0}/{\rm d}r\), by \(\mathscr{F}(r)=F^{(1)}(1/r)=-r^{2}\mathfrak{F}(r)/C_{F}\). Note that \(\mathfrak{F}(r)\) -- and, hence \(\mathscr{F}(r)\) and \(F^{(1)}(1/r)\) -- is expected to be free of renormalon ambiguities [52; 56], because the change in static energy from one distance to another is physical. The series \(f_{l}\) should eventually exhibit factorial growth owing to instantons, i.e., with \(p\geq 4\pi\beta_{0}=\frac{11}{3}C_{A}-\frac{4}{3}\sum_{f}T_{f}\).
The remainder of this section gives numerical and graphical results for \(\mathrm{SU}(3)\) gauge theory with three massless flavors. For brevity, the superscript "\((1)\)" on \(F^{(1)}\), \(R_{0}^{(1)}\), etc., is omitted. To obtain numerical results and prepare plots, \(\alpha_{\rm s}\) in the ultrasoft logarithm, \(\ln\alpha_{\rm s}\), must be specified. This \(\alpha_{\rm s}\) can be taken to run, namely taken to be the same as the expansion parameter \(\alpha_{\rm s}(\mu)\). Alternatively, \(\alpha_{\rm s}\) can be held fixed. Below, \(\alpha_{\rm s}(s/r)\) (or \(\alpha_{\rm s}(sq)\)), for various fixed \(s\) is used as an expansion parameter, and the ultrasoft \(\alpha_{\rm s}\) is chosen either to be the same or, for comparison, a fixed value \(\alpha_{\rm s}=\frac{1}{3}\). This value arises at scales where perturbation theory starts to break down, making it a reasonable alternative. Resummation of logarithms \(\alpha_{\rm s}^{3+n}\ln^{n}\alpha_{\rm s}\)[57] and \(\alpha_{\rm s}^{4+n}\ln^{n}\alpha_{\rm s}\)[58; 59] is not considered here.
Table 2 shows the first four \(a_{l}\) and \(f_{l}\) in three different renormalization schemes, \(\overline{\rm MS}\), geometric, and eqs. (5.6) with \(n=2\). The scheme dependence in the two- and three-loop coefficients is about 10%. The (non)growth in \(l\) conforms with expectations: \(a_{l}\) is perhaps growing slowly and \(f_{l}\) is not growing yet. (Recall, \(p>1\) for \(a_{l}\) and \(p\geq 9\) for \(f_{l}\).) Table 3 shows the first four \(v_{l}\) in the same three schemes. The growth is obvious. Table 3 also shows the subtracted coefficients \(v_{l}(1)-V_{l}(1)\). The cancellation is striking.
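As a cross-check of these numbers, the \(v_{l}(1)\) of the geometric scheme can be rebuilt from the \(f_{l}(1)\) with eq. (3.9). The sketch below is ours (not from the accompanying notebook); the four \(f_{l}\) are copied from table 2, and \(\beta_{0}\), \(b\) are assumed to take the standard values for three massless flavors (with \(T_{F}=\tfrac{1}{2}\)):

```
(* Sketch: rebuild v_l(1) of table 3 (geometric scheme) from the f_l(1) of table 2
   via eq. (3.9) with p = 1. *)
nc = 3; nf = 3;
beta0 = (11 nc/3 - 2 nf/3)/(4 Pi);
b = (34 nc^2/3 - (13 nc/3 - 1/nc) nf)/(4 Pi)^2/(2 beta0^2);
f = {1, -0.048552, 0.820079, 0.558242};    (* f_0 ... f_3, geometric scheme, s = 1 *)
v[l_] := f[[l + 1]] + (2 beta0)^l Gamma[l + 1 + b]*
    Sum[(k + 1)/Gamma[k + 2 + b] (1/(2 beta0))^k f[[k + 1]], {k, 0, l - 1}];
Table[v[l], {l, 0, 3}]    (* ~ {1, 1.384, 5.595, 27.30}, cf. table 3 *)
```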
The cancellation at \(s=1\) is robust, as shown in figure 2 over an illustrative interval of \(\ln s\). The range of \(v_{3}(s)\) and even \(v_{2}(s)\) dwarfs that of all \(v_{l}(s)-V_{l}(s)\): \(v_{3}(s)-V_{3}(s)\) (\(v_{2}(s)-V_{2}(s)\)) is 50-100 (5-10) times smaller than \(v_{3}(s)\) (\(v_{2}(s)\)). Near \(\ln s=0\), these two subtracted coefficients are unusually small. Overall, the cancellation is best for \(\ln s\approx\frac{1}{4}\), where \(|v_{0}-V_{0}|\) is especially small, while the others are of typical size.
Interestingly, as \(\ln s\) is taken negative both factors in the first term \([v_{l}(s)-V_{l}(s)]\alpha_{\rm s}(s/r)\) increase. This behavior can be traced to the normalization factor \(R_{0}(s)\), which is plotted in figure 3 for the three schemes. There is not much scheme dependence. Curves for \(L=4\) and \(L=3\) in eq. (3.10b) are shown. They are close, or even very close, to each other
for \(|\ln s|\leq\ln 2\). Sample numerical values are given in table 4, again using both four and three terms in eq. (3.10b). The shape of \(R_{0}(s)\) follows from the positivity of the highest-power logarithmic term in eqs. (2.4) and the positivity of the coefficients in eq. (3.10b). Near \(\ln s=-2\), the four-term \(R_{0}(s)\) goes negative, which is a reflection of \(v_{3}(s)\) being run to an absurd extreme while omitting (unknown) higher orders. Indeed, the three-term approximation to \(R_{0}(s)\) turns up near \(\ln s=-2\), which is a reflection of \(v_{2}(s)\) being run to an absurd extreme. Figure 3 also shows \(R_{0}(s)/s\), which multiplies the term absorbed into the power correction (cf., last sentence in section 3). It is nearly constant over a wide range, especially once \(L=4\).
\begin{table}
\begin{tabular}{|c|c c|c c|c c|} \hline & \multicolumn{2}{c|}{\(\overline{\rm MS}\)} & \multicolumn{2}{c|}{geometric} & \multicolumn{2}{c|}{eqs. (5.6), \(n=2\)} \\ \(l\) & \(a_{l}(1)\) & \(f_{l}(1)\) & \(a_{l}(1)\) & \(f_{l}(1)\) & \(a_{l}(1)\) & \(f_{l}(1)\) \\ \hline
0 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & \(0.557\,042\) & \(-0.048\,552\) & \(0.557\,042\) & \(-0.048\,552\) & \(0.557\,042\) & \(-0.048\,552\) \\
2 & \(1.702\,18\) & \(0.687\,291\) & \(1.834\,97\) & \(0.820\,079\) & \(1.834\,97\) & \(0.820\,079\) \\
3 & \(2.436\,87\) & \(0.323\,257\) & \(2.832\,68\) & \(0.558\,242\) & \(3.013\,89\) & \(0.739\,452\) \\ \hline \end{tabular}
\end{table}
Table 2: Perturbation series coefficients with \(s=1\) for \(\tilde{R}(q)\) (\(a_{l}\)) and \(F(r)\) (\(f_{l}\)). Here \(\alpha_{\rm s}=\frac{1}{3}\) for \(a_{3}\) and \(f_{3}\).
\begin{table}
\begin{tabular}{|c|c c|c c|c c|} \hline & \multicolumn{2}{c|}{\(\overline{\rm MS}\)} & \multicolumn{2}{c|}{geometric} & \multicolumn{2}{c|}{eqs. (5.6), \(n=2\)} \\ \(l\) & \(v_{l}(1)\) & \(v_{l}(1)-V_{l}(1)\) & \(v_{l}(1)\) & \(v_{l}(1)-V_{l}(1)\) & \(v_{l}(1)\) & \(v_{l}(1)-V_{l}(1)\) \\ \hline
0 & 1 & \(0.206\,061\) & 1 & \(0.182\,531\) & 1 & \(0.177\,584\) \\
1 & \(1.383\,84\) & \(-0.202\,668\) & \(1.383\,84\) & \(-0.249\,689\) & \(1.383\,84\) & \(-0.259\,574\) \\
2 & \(5.462\,28\) & \(0.019\,479\) & \(5.595\,07\) & \(-0.009\,046\) & \(5.595\,07\) & \(-0.042\,959\) \\
3 & \(26.6880\) & \(0.219\,262\) & \(27.3034\) & \(0.050\,179\) & \(27.4846\) & \(0.066\,468\) \\ \hline \end{tabular}
\end{table}
Table 3: Perturbation series coefficients with \(s=1\) for \(R(r)\) and \(R_{\rm RS}\) (with \(V_{l}\) derived from \(v_{l}\) as \(R_{l}\) from \(r_{l}\) in section 3). Here \(\alpha_{\rm s}=\frac{1}{3}\) for \(v_{3}\) and \(v_{l}-V_{l}\).
Figure 2: Scale dependence of \(v_{l}(s)\) (left) and \(v_{l}(s)-V_{l}(s)\) (right) vs. \(\ln s\). Note the difference in vertical scale. Blue, gold, green, and red correspond to \(l=0\), \(1\), \(2\), and \(3\), respectively. Dotted, dashed, and solid curves correspond to the \(\overline{\rm MS}\), geometric, and \(\alpha_{2}\) schemes, respectively.
The coefficients' variation with \(s\) is set up to compensate that of \(\alpha_{\rm s}(sq)\) or \(\alpha_{\rm s}(s/r)\). Figure 4 shows how \(\tilde{R}(q)\), \(F(1/r)\), \(R(1/r)\), and \(R_{\rm MRS}(1/r)\) depend on \(\Lambda/q\) or \(r\Lambda\) for \(s\in\{\frac{1}{2},1,2\}\). (Plotted this way, the high-\(q\), short-\(r\) domain, where perturbation theory works best without any effort, is shrunk into a small region.) The variation with \(s\) is mild for \(\tilde{R}(q)\), even milder for \(F(1/r)\), and catastrophic for \(R(1/r)\). After MRS, however, the scale variation is as mild for \(R_{\rm MRS}(1/r)\) as for the renormalon-free \(F(1/r)\). As shown in figure 5, the fractional difference of both remains a few percent for \(r\Lambda\lesssim 0.1\) (with \(s=1\) and running ultrasoft \(\ln\alpha_{\rm s}\) as the baseline).
The mild variation with \(s\) is a pleasant outcome given the \(s\) dependence of the subtracted coefficients (cf., figure 2). Figure 6 shows the variation with \(s\) as a function of \(r\) of the Borel sum \(R_{\rm B}(1/r)\) (left, eq. (3.15)) and the subtracted series \(R_{\rm RS}(1/r)\) for \(L=4\) (right). Both are quite sensitive to \(s\), but their sum (bottom right of figure 4) is not.
The first two orders suffice to lift the \(s\) dependence, as shown in figure 7. Here, \(R_{\rm B}(1/r)\) is shown (dotted curve) and each term \((v_{l}-V_{l})\alpha_{\rm s}^{l+1}\), \(l=0,1,2,3\), in \(R_{\rm RS}(1/r)\) is accumulated (dashed curves with longer dashes as the order increases) until the total \(L=4\) (solid) result \(R_{\rm MRS}(1/r)\) is reached. Adding the tree-level term \((v_{0}-V_{0})\alpha_{\rm s}\) to the Borel sum overshoots the full (solid) result, but adding the one-loop term \((v_{1}-V_{1})\alpha_{\rm s}^{2}\) yields a curve almost indistinguishable from \(R_{\rm MRS}(1/r)\). Indeed, it is hard to distinguish the longer-dashed curves from the solid ones, underscoring that the two-loop term \((v_{2}-V_{2})\alpha_{\rm s}^{3}\) makes a small change
\begin{table}
\begin{tabular}{|c|c c|c c|c c|} \hline & \multicolumn{2}{c|}{\(\overline{\rm MS}\)} & \multicolumn{2}{c|}{geometric} & \multicolumn{2}{c|}{eqs. (5.6), \(n=2\)} \\ \(s\) & \(L=4\) & \(L=3\) & \(L=4\) & \(L=3\) & \(L=4\) & \(L=3\) \\ \hline \(\frac{1}{2}\) & 0.386 864 & 0.437 281 & 0.403 196 & 0.454 397 & 0.397 605 & 0.437 281 \\
1 & 0.793 939 & 0.785 114 & 0.817 469 & 0.802 230 & 0.801 081 & 0.785 114 \\
2 & 1.523 44 & 1.387 07 & 1.554 17 & 1.404 19 & 1.526 98 & 1.387 07 \\ \hline \end{tabular}
\end{table}
Table 4: Normalization factor \(R_{0}(s)\) of the \(p=1\) factorial growth in three schemes for \(s\in\{\frac{1}{2},1,2\}\) at three (\(L=4\)) and two (\(L=3\)) loops. Here \(\alpha_{\rm s}=\frac{1}{3}\) for \(L=4\).
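The normalization factor itself is equally easy to reproduce. The sketch below is ours (not from the accompanying notebook); it evaluates eq. (3.10b) for the geometric scheme at \(s=1\) with the four \(f_{l}\) of table 2 (standard \(\beta_{0}\), \(b\) for three massless flavors assumed) and then forms the subtracted coefficients of table 3:

```
(* Sketch: R_0 of eq. (3.10b) and the subtracted coefficients v_l - V_l,
   geometric scheme, s = 1, p = 1, from the inputs quoted in table 2. *)
nc = 3; nf = 3;
beta0 = (11 nc/3 - 2 nf/3)/(4 Pi);
b = (34 nc^2/3 - (13 nc/3 - 1/nc) nf)/(4 Pi)^2/(2 beta0^2);
f = {1, -0.048552, 0.820079, 0.558242};     (* f_0 ... f_3, geometric, s = 1 *)
v = {1, 1.38384, 5.59507, 27.3034};         (* v_0 ... v_3 from table 3 *)
R0 = Sum[(k + 1) Gamma[1 + b]/Gamma[k + 2 + b] (1/(2 beta0))^k f[[k + 1]], {k, 0, 3}]
(* ~ 0.8175, the geometric L = 4 entry of table 4 *)
V = Table[R0 (2 beta0)^l Gamma[l + 1 + b]/Gamma[1 + b], {l, 0, 3}];
v - V   (* ~ {0.183, -0.250, -0.009, 0.050}, cf. table 3 *)
```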
Figure 3: Scale dependence of \(R_{0}(s)\) and \(R_{0}(s)/s\) over a very wide range (left) and a relevant range (right). Blue and orange (green and yellow) curves correspond to \(L=4\) (\(L=3\)) in eq. (3.10b). Blue and green (orange and yellow) curves correspond to \(R_{0}(s)\) (\(R_{0}(s)/s\)). Dotted, dashed, and solid curves correspond to the \(\overline{\rm MS}\), geometric, and \(\alpha_{2}\) schemes, respectively.
Figure 4: Scale variation in the \(\alpha_{2}\) scheme up to and including the \(\alpha_{\rm s}^{4}\) term. Top: \(\widetilde{R}(q)\) and \(F(1/r)\); neither suffers the \(p=1\) renormalon. Bottom: \(R(1/r)\) (with \(p=1\) renormalon) and \(R_{\rm MRS}(1/r)\) (after MRS). Red, green, and blue curves correspond to \(s=\frac{1}{2}\), \(s=1\), and \(s=2\), respectively. Solid (dashed) curves correspond to a running (fixed) \(\alpha_{\rm s}\) in the ultrasoft \(\ln\alpha_{\rm s}\). Note that the vertical scale for \(R(1/r)\) is twice that of the other three plots.
Figure 5: Scale variation in the \(\alpha_{2}\) scheme of the fractional difference of \(F(1/r)\) (left) and \(R_{\rm MRS}(1/r)\) (right), with respect to \(s=1\) with running ultrasoft \(\alpha_{\rm s}\). Curve and color code as in figure 4.
while the three-loop term \((v_{3}-V_{3})\alpha_{\rm s}^{4}\) makes hardly any change. As with the pole mass [2], MRS perturbation theory converges (in the practical sense) quickly.
Let us return to the term-by-term change in \(R_{0}\) (cf., figure 3 and table 4). The highest-order term in eq. (3.10b) can be used to estimate the uncertainty in \(R_{0}\) from omitting even higher orders [2]. In the case at hand, the term with \(f_{3}\) yields the estimate, which is around \(10\%\) or less (cf., table 4). Figure 8 (right) overlays the resulting \(s=1\) uncertainty band for \(R_{\rm MRS}(1/r)\) on the curves (at various \(s\)) of figure 4 (bottom right). The uncertainty propagated to \(R_{\rm MRS}\) is smaller than \(10\%\) because changes in \(R_{0}\) push \(R_{\rm B}(1/r)\) and \(R_{\rm RS}(1/r)\) in opposite directions. Indeed, the uncertainty in \(R_{\rm MRS}\) stemming from \(R_{0}\) is smaller than the difference between the \(s=1\) and the \(s=\frac{1}{2}\) and \(2\) curves. Note, however, that the \(R_{0}\)-uncertainty, as defined here, is smaller at \(s=1\) than at \(s=\frac{1}{2}\) and \(2\) (cf., figure 3 and table 4). The uncertainty bands of these other choices (not shown) cover all three.
Figure 8: Same as figure 4 (bottom right) but with a band stemming from the uncertainty in \(R_{0}\) (taken equal to the last term in eq. (3.10b)) and an expanded vertical scale. Curve and color code as in figure 4.
## 8 Summary and outlook
The initial aim of this work was to study and extend the discussion of factorial growth and renormalons started in refs. [2; 3]. I found, however, that the perspective, derivation, and interpretation could be simplified: a straightforward analysis extracts information from the renormalization-group constraints on the series coefficients. The only other ingredient is the knowledge (or assumption) of the powers \(\{p_{i}\}\) of the power-suppressed corrections to the perturbative series of a physical observable. A by-product of adding this information to the series is to subtract the leading factorial growth (aka "renormalon effects") from the first few series coefficients. Remarkably, the factorial growth is not just a large-order phenomenon: it starts at low orders. How it comes to dominate the coefficients depends on the power of the power correction. (The lower the power, the more powerful the factorial!)
The worked example of the static energy (section 7) seems successful in removing a power correction of order \(r\Lambda\) from (a dimensionless version of the) static energy, \(-rE_{0}/C_{F}\). The conventional choice of \(\mu=1/r\) (in the \(\overline{\rm MS}\) scheme) seems near an optimum: perturbation theory converges with the MRS treatment as well as it does for the static force, which is thought to suffer corrections only of high power. Varying \(\mu\) by a factor of two rearranges contributions between the tree and one-loop fixed order contributions, on the one hand, and (a specific definition of) the Borel sum of the factorial growth, on the other. At _very_ short distances, \(r\Lambda\lesssim\frac{1}{16}\), the total result (using all information through order \(\alpha_{\rm s}^{4}\)) does not vary over a wide range of \(\mu\). It will therefore be interesting to fit to lattice-QCD data (e.g., that of ref. [28]) and compare with other approaches to taming the series. (Some other approaches are described in refs. [60; 61; 62; 63; 64] and earlier work cited there.)
Some practical issues remain before applying the MRS procedure to, say, an \(\alpha_{\rm s}\) determination. The MRS method, like standard perturbation theory, does not say what scale \(s\) to choose: starting in the \(\overline{\rm MS}\) scheme with \(s=1\) and varying by a factor of 2 is conventional. When the factorial growth of coefficients matters, i.e., when MRS has something to offer, the \(\ln^{n}s\) contributions associated with scale setting cannot tame the coefficients. Scale setting in light of MRS may warrant a closer look. Another issue is that in many applications, some of the quarks cannot be taken massless. A nonzero quark mass in a loop alters the loop's growth, removing the factorials from the infrared. It is probably best to add massive quark-loop effects at fixed order and not to use the massless result as a stand-in for the massive one [65]. Last, when anomalous dimensions are an important feature for more than one round of MRS, the method (as presented in section 6) remains too cumbersome to be appealing. It may suffice to neglect the anomalous dimensions, but only practical experience will tell.
## Appendix A Modified Borel summation
Alternatives to the standard Borel resummation are possible [14], and a natural variant is pursued here, leading to the same endpoint. Start with eq. (3.12):
\[R_{\rm B}^{(p)}\equiv\sum_{l=0}^{\infty}R_{l}^{(p)}\alpha^{l+1}=R_{0}^{(p)} \alpha\sum_{l=0}^{\infty}\left(\frac{2\beta_{0}\alpha}{p}\right)^{l}\frac{ \Gamma(l+1+pb)}{\Gamma(1+pb)}.\] (A.1)
The \(l\)-dependent \(\Gamma\) function can be expressed as \(\Gamma(l+1+pb)=\int_{0}^{\infty}t^{l+pb}\mathrm{e}^{-t}\mathrm{d}t\). Swapping the order of summation and integration
\[R_{\mathrm{B}}^{(p)}=\frac{R_{0}^{(p)}\alpha}{\Gamma(1+pb)}\int_{0}^{\infty}\sum_{l=0}^{\infty}\left(\frac{2\beta_{0}\alpha t}{p}\right)^{l}t^{pb}\mathrm{e}^{-t}\mathrm{d}t=\frac{R_{0}^{(p)}\alpha}{\Gamma(1+pb)}\int_{0}^{\infty}\frac{t^{pb}\mathrm{e}^{-t}}{1-2\beta_{0}\alpha t/p}\mathrm{d}t, \tag{A.2}\]
which only has a simple pole instead of a branch point. After integrating
\[R_{\mathrm{B}}^{(p)}=R_{0}^{(p)}\frac{p}{2\beta_{0}}\mathscr{J}(pb,p/2\beta_{0}\alpha)-R_{0}^{(p)}\mathrm{e}^{\pm\mathrm{i}pb\pi}\frac{p^{1+pb}}{2^{1+pb}\beta_{0}}\Gamma(-pb)\left[\frac{\mathrm{e}^{-1/2\beta_{0}\alpha}}{(\beta_{0}\alpha)^{b}}\right]^{p}, \tag{A.3}\]
where the factor \(\mathrm{e}^{\pm\mathrm{i}pb\pi}\) in the second term corresponds to passing the contour below or above the pole. As before, the second term can be absorbed into the power correction, and the first -- the principal part -- is taken to define \(R_{\mathrm{B}}^{(p)}\), the same as eqs. (3.15).
###### Acknowledgements.
This work is supported in part by the Technical University of Munich, Institute for Advanced Study, funded by the German Excellence Initiative. Fermilab is managed by Fermi Research Alliance, LLC, under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy.
|
2302.01353 | SmeftFR v3 -- Feynman rules generator for the Standard Model Effective
Field Theory | We present version 3 of SmeftFR, a Mathematica package designed to generate
the Feynman rules for the Standard Model Effective Field Theory (SMEFT)
including the complete set of gauge invariant operators up to dimension-6 and
the complete set of bosonic operators of dimension-8. Feynman rules are
generated with the use of FeynRules package, directly in the physical (mass
eigenstates) basis for all fields. The complete set of interaction vertices can
be derived, including all or any chosen subset of SMEFT operators. As an
option, the user can also choose preferred gauge fixing, generating Feynman
rules in unitary or $R_\xi$-gauges. The novel feature in version-3 of SmeftFR
is its ability to calculate SMEFT interactions consistently up to dimension-8
in EFT expansion (including quadratic dimension-6 terms) and express the
vertices directly in terms of user-defined set of input-parameters. The derived
Lagrangian in the mass basis can be exported in various formats supported by
FeynRules, such as UFO, FeynArts etc. Initialisation of numerical values of
Wilson coefficients of higher dimension operators is interfaced to WCxf format.
The package also includes a dedicated Latex generator allowing to print the
result in clear human-readable form. The SmeftFR v3 is publicly available at
www.fuw.edu.pl/smeft. | A. Dedes, J. Rosiek, M. Ryczkowski, K. Suxho, L. Trifyllis | 2023-02-02T19:00:02Z | http://arxiv.org/abs/2302.01353v2 | # SmeftFR v3 - Feynman rules generator for the Standard Model Effective Field Theory
###### Abstract
We present version 3 of SmeftFR, a Mathematica package designed to generate the Feynman rules for the Standard Model Effective Field Theory (SMEFT) including the complete set of gauge invariant operators up to dimension-6 and the complete set of bosonic operators of dimension-8. Feynman rules are generated with the use of FeynRules package, directly in the physical (mass eigenstates) basis for all fields. The complete set of interaction vertices can be derived, including all or any chosen subset of SMEFT operators. As an option, the user can also choose preferred gauge fixing, generating Feynman rules in unitary or \(R_{\xi}\)-gauges. The novel feature in version-3 of SmeftFR is its ability to calculate SMEFT interactions consistently up to dimension-8 in EFT expansion (including quadratic dimension-6 terms) and express the vertices directly in terms of user-defined set of input-parameters. The derived Lagrangian in the mass basis can be exported in various formats supported by FeynRules, such as UFO, FeynArts, _etc._ Initialisation of numerical values of Wilson coefficients of higher dimension operators is interfaced to WCxf format. The package also includes a dedicated Latex generator allowing to print the result in clear human-readable form. The SmeftFR v3 is publicly available at www.fuw.edu.pl/smeft.
keywords: Standard Model Effective Field Theory, Feynman rules, unitary and \(R_{\xi}\)-gauges
**PROGRAM SUMMARY**
_Manuscript Title:_
SmeftFR v3 - Feynman rules generator for the Standard Model Effective Field Theory
_Authors:_ A. Dedes, J. Rosiek, M. Ryczkowski, K. Suxho, L. Trifyllis
_Program Title:_
SmeftFR v3.0
_Journal Reference:_
_Catalogue identifier:_
_Licensing provisions:_
None
_Programming language:_
Mathematica 12.1 or later (earlier versions were reported to have problems running this code)
_Computer:_
any running Mathematica
_Operating system:_
any running Mathematica
_RAM:_
allocated dynamically by Mathematica, at least 4GB total RAM suggested
_Number of processors used:_
allocated dynamically by Mathematica
_Supplementary material:_
None
_Keywords:_
Standard Model Effective Field Theory, Feynman rules, unitary and \(R_{\xi}\) gauges
_Classification:_
\(\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;
## 1 Introduction
The Standard Model Effective Field Theory (SMEFT) [1; 2; 3] is a useful tool for parameterizing phenomena beyond the predictions of the so-far successful Standard Model (SM) [4; 5; 6] that may appear in current and/or future particle experiments. The SMEFT Lagrangian is given by
\[{\cal L}_{\rm SMEFT}\ =\ {\cal L}_{\rm SM}\ +\ \sum_{i}\frac{C_{i}\,{\cal O}_{i} }{\Lambda^{d_{i}-4}}\;, \tag{1.1}\]
where the scale \(\Lambda\) is the cut-off scale of the EFT (i.e., the mass of the lightest heavy particle decoupled from the underlying theory), \({\cal O}_{i}\) is a set of \(d_{i}\)-dimensional, SM gauge group invariant, operators, and \(C_{i}\) are the associated Wilson coefficients (WCs). For one fermion generation, including Hermitian conjugates, there are two independent operators for \(d=5\) (\(i=2\)), \(i=84\) for \(d=6\), \(i=30\) for \(d=7\), \(i=993\) for \(d=8\), and so on [7]. When expanded in flavour, the actual number of operators explodes to several thousand operators and interaction vertices. This proliferation of vertices must be included in matrix element calculators when mapping the WCs to experimental data. This is the scope of this article: to describe the code SmeftFR v3.0, which consistently provides the Feynman Rules for dimension-6 and the bosonic part of dimension-8 operators for further symbolic or numerical manipulations.
Admittedly, SMEFT is a hugely complicated model. Including all possible CP-, flavour-, baryon-, and lepton-number violating interactions at dimension-6 level, it already contains 2499 free parameters in a non-redundant basis, such as the Warsaw basis [8]. In addition, recent experimental and theoretical progress of high energy processes at LHC involving vector boson scattering requires subsets of dimension-8 operators, in particular the bosonic ones, making the structure of possible interactions even more involved. Due to large number and complicated structure of new terms in the Lagrangian, theoretical calculations of physical processes within the SMEFT can be very challenging -- it is enough to notice that the number of primary vertices when SMEFT is quantized in \(R_{\xi}\)-gauges and in "Warsaw mass" basis, printed for the first time in ref. [9], is almost 400 without counting the hermitian conjugates.
As a result, it is important to develop technical methods and tools facilitating such calculations, starting from developing the universal set of the Feynman rules for propagators and vertices for physical fields, after Spontaneous Symmetry Breaking (SSB) of the full effective theory in the most commonly studied, Warsaw basis of operators [8]. The initial version of the relevant package, SmeftFR v1.0, was announced and briefly described for the first time in Appendix B of ref. [9]. The SmeftFR code was further developed and supplied with many new options and capabilities and published as SmeftFR v2.0 in [10]. In this paper we present SmeftFR v3.0, a _Mathematica_ symbolic language package generating Feynman rules in several formats, based on the formulae developed in ref. [9]. The most important new capability implemented in the code, compared to version 2, allows for performing consistent calculations up to dimension-8 operators in EFT expansion, including also expressing the Feynman rules directly in terms of any user-defined set of input parameters. We summarise here the main features of SmeftFR code, noting in particular advances introduced in its 3rd version (v3):
* SmeftFR is written as an overlay to FeynRules package [11; 12], used as the engine to generate Feynman rules.
* SmeftFR v3 is able to generate interactions in the most general form of the SMEFT Lagrangian up to dimension-6 order in Warsaw basis [8], without any restrictions on the structure of flavour violating terms and on CP-, lepton- or baryon-number conservation. In addition, it also contains all _bosonic_ operators of dimension-8 order, in the basis defined in ref. [13].
* Feynman rules are expressed in terms of physical SM fields and canonically normalised Goldstone and ghost fields. Expressions for interaction vertices are analytically expanded in powers of inverse New Physics scale \(1/\Lambda\). The novel feature implemented in SmeftFR v3 is the consistent inclusion of all terms up to maximal dimension-8, including both terms quadratic in Wilson coefficients of dimension-6 and linear contributions from Wilson coefficients of dimension-8. Terms of order higher than \(d=8\) are consistently truncated.
* Another important novel feature of SmeftFR v3 is the possibility of expressing Feynman rules directly in terms of a predefined set of input parameters (usually chosen to be observables directly measurable in experiments). This allows for consistent calculation of processes in SMEFT without the complicated and error-prone procedure of using "intermediate" set of Lagrangian parameters and later re-expressing them in terms of preferred input quantities.
* SmeftFR v3 allows for choosing _any_ set of input parameters, assuming that the user provides appropriate routines relating them to "standard" SM Lagrangian parameters (defined later in Sec. 3) to a required (maximum 8th) order of SMEFT expansion. Two most frequently used input schemes in the electroweak sector, \((G_{F},M_{Z},M_{W},M_{H})\) and \((\alpha_{em},M_{Z},M_{W},M_{H})\) are predefined in current version, including all terms up to dimension-8. In both cases, the strong coupling constant and all quark and lepton masses are also inputs. In addition, SmeftFR v3 also includes a predefined input scheme for the CKM matrix adopted from ref. [14]. For the neutrino mixing matrix we use as input the standard PMNS matrix not (as yet) corrected by SMEFT.
* Including the full set of SMEFT parameters in model files for FeynRules may lead to very slow computations. SmeftFR can generate FeynRules model files dynamically, including only the user defined subset of higher dimension operators. It significantly speeds up the calculations and produces a simpler final result, containing only the Wilson coefficients relevant to the process that the user has chosen to analyse. It is worth noting that optimisations included in SmeftFR v3 sped it up compared to SmeftFR v2 by approximately an order of magnitude for a comparable subset of chosen operators of dimension-6 and calculations done up to \(1/\Lambda^{2}\) accuracy (maximally achievable in SmeftFR v2).
* Feynman rules can be generated in the unitary or in linear \(R_{\xi}\)-gauges by exploiting four different gauge-fixing parameters \(\xi_{\gamma},\xi_{Z},\xi_{W},\xi_{G}\) for thorough amplitude checks. In the latter case also all relevant ghost and Goldstone vertices are obtained. This procedure is described in detail in ref. [9] and implemented already in SmeftFR v2 [10].
* Feynman rules are calculated first in _Mathematica_/FeynRules format. They can be further exported in other formats: UFO[15] (importable to Monte-Carlo generators like
MadGraph5_aMC@NLO [16], Sherpa[17], CalcHEP[18], Whizard[19, 20]), FeynArts[21] which generates inputs for loop amplitude calculators like FeynCalc[22], or FormCalc[23], and other output types supported by FeynRules.
* SmeftFR provides a dedicated Latex generator, allowing to display vertices and analytical expressions for Feynman rules in clear human readable form, best suited for hand-made calculations.
* SmeftFR is interfaced to the WCxf format [24] of Wilson coefficients. Numerical values of SMEFT parameters in model files can be read from WCxf JSON-type input produced by other computer codes written for SMEFT. Alternatively, SmeftFR can translate FeynRules model files to the WCxf format (a minimal sketch of inspecting such a file is given after this list).
* Further package options allow to treat neutrino fields as massless Weyl or (in the case of non-vanishing dimension-5 operator) massive Majorana fermions, to correct signs in 4-fermion interactions not yet fully supported by FeynRules and to perform some additional operations as described later in this manual.
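To illustrate the WCxf interchange format itself (this sketch is ours and is not part of the SmeftFR package; the file name and the chosen coefficient key are placeholders), a WCxf JSON file can be inspected with generic _Mathematica_ tools before being passed to SmeftFR:

```
(* Sketch only: inspect a WCxf-formatted JSON file with built-in Mathematica functions.
   "smeft_wcs.json" and the coefficient key "lq1_1111" are placeholders. *)
wc = Import["smeft_wcs.json", "RawJSON"];
{wc["eft"], wc["basis"], wc["scale"]}   (* e.g. {"SMEFT", "Warsaw", 1000.} *)
Keys[wc["values"]]                      (* names of the nonzero Wilson coefficients *)
wc["values"]["lq1_1111"]                (* a single coefficient, if present *)
```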
It has also been made and tested to be compatible with many other publicly available high-energy physics related computer codes accepting standardised input and output data formats.
Feynman rules derived in ref. [9] using the SmeftFR package have been used successfully in many articles, including refs. [25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59], and have passed certain non-trivial tests, such as gauge-fixing parameter independence of the \(S\)-matrix elements, validity of Ward identities, cancellation of infinities in loop calculations, _etc_.
We note, here, that there is a growing number of publicly available codes performing computations related to SMEFT [60]. These include, Wilson[61], Flavio[62], DSixTools[63, 64], RGESolver[65], MatchingTools[66], CoDEx[67], HighPT[68], STream[69], SuperTracer[70], Matchmakereft[71], Matchete[72], which are codes for running and matching Wilson coefficients and FeynOnium[73] for automatic calculations in non-relativistic EFTs. Packages mostly relevant to the purposes of SmeftFR are SMEFTsim[74, 75], Dim6Top[76] and SMEFT@NLO[77] which are all codes for calculating physical observables in SMEFT. To a degree, these codes (especially the ones supporting WCxf format) can be used in conjunction with SmeftFR. For example, some of them can provide the numerical input for Wilson coefficients of higher dimensional operators at scale \(\Lambda\), while others, the running of these coefficients from that scale down to the EW one. Alternatively, Feynman rules evaluated by SmeftFR can be used with Monte-Carlo event generators to test the predictions of other codes.
The rest of the paper is organised as follows. In Sec. 2, we define the notation and conventions of the SMEFT Lagrangian and the field normalisations used in transition to mass basis. In Sec. 3 and Appendix A, we describe the input schemes, i.e. the user-defined choices of observables which can be used to parametrize SMEFT interactions and give examples of the corresponding output of the code. In Sec. 4, we present the code's algorithmic structure and installation procedure. Sec. 5 is the main part of the paper, illustrating in detail how to derive the set of SMEFT vertices in mass basis starting from \(d=6\) operators in Warsaw basis [8] and \(d=8\) bosonic operators in basis of ref. [13] (all operators used by SmeftFR v3 are collected
for completeness in Appendix B). A sample program with SmeftFR v3 commands, generating Feynman rules in various formats, is given in Sec. 6. We conclude in Sec. 7.
## 2 SMEFT Lagrangian in gauge and mass basis
The classification of higher order operators in SMEFT is done in terms of fields in the electroweak basis, before Spontaneous Symmetry Breaking (SSB). For the dimension-5 and -6 operators, SmeftFR uses the so-called "Warsaw basis" [8] as a starting point to calculate physical states in SMEFT and their interactions (for the specification of the Warsaw basis, see ref. [8], in particular eq. (3.1) defining the \(d=5\) Weinberg operator \(Q_{\nu\nu}^{(5)}\) and Tables 2 and 3 containing the full list of \(d=6\) operators). For the dimension-8 operators, we include all operators containing bosonic fields only, as listed in Tables 2 and 3 of ref. [13], with the exception of two operators. The definitions and the list of all operators used by SmeftFR v3 are given in Appendix B and Tables B.1, B.2, B.3, and B.4.
We decided to neglect \(d=7\) operators (which always contain fermionic fields) and fermionic \(d=8\) operators, for both theoretical and practical reasons. Dimension-7 operators are all lepton or baryon number violating and strongly constrained by many related experiments. In most BSM models, dimension-8 operators are also strongly suppressed and can lead to substantial measurable effects only when their contributions are enhanced, which typically (as can be justified on dimensional grounds) happens at high energies. Such effects could in particular be investigated in experimental searches involving Vector Boson Scattering at the LHC (see e.g. [78; 79; 80; 44]), and therefore including the bosonic operators is particularly important for such contemporary analyses. On the practical side, including fermionic \(d=8\) operators in full generality is simply not feasible today - there are over 40000 types of them (and this number does not yet include their flavour structure), which no current code can accommodate within reasonable CPU running time and computer memory limits.1
Footnote 1: Selected dimension-8 operators can be added to SmeftFR v3 code. Please contact the authors for further detailed instructions.
The SMEFT Lagrangian which we use is the sum of the dimension-4 terms and operators of order up to dimension-8 (the latter only in the bosonic sector):
\[{\cal L}={\cal L}_{{\rm SM}}^{(4)}+\frac{1}{\Lambda}C^{\nu\nu}Q_{\nu\nu}^{(5) }+\frac{1}{\Lambda^{2}}\sum_{boson,fermion}C_{(b,f)}^{(6)}Q_{(b,f)}^{(6)}+ \frac{1}{\Lambda^{4}}\sum_{boson}C_{b}^{(8)}Q_{b}^{(8)}\;. \tag{2.1}\]
Physical fields in SMEFT are obtained after SSB. In the gauge and Higgs sectors, physical and Goldstone fields (\(h,G^{0},G^{\pm},W_{\mu}^{\pm},Z_{\mu}^{0},A_{\mu}\)) are related to initial (Warsaw basis) fields (\(\varphi,W_{\mu}^{i},B_{\mu},G_{\mu}^{A}\)) by field normalisation constants:2
Footnote 2: Note the notation difference with ref. [9]: Quantities \(Z_{W}\) and \(Z_{G}\) defined in eq. 2 are denoted as their inverses, \(Z_{W}^{-1}\) and \(Z_{G}^{-1}\), in ref. [9].
\[\left(\begin{array}{c}\varphi^{+}\\ \varphi^{0}\end{array}\right) = \left(\begin{array}{c}Z_{G^{+}}^{-1}G^{+}\\ \frac{1}{\sqrt{2}}(v+Z_{h}^{-1}h+iZ_{G^{0}}^{-1}G^{0})\end{array}\right)\;,\]
\[\left(\begin{array}{c}W_{\mu}^{3}\\ B_{\mu}\end{array}\right) = Z_{\gamma Z}\left(\begin{array}{c}Z_{\mu}\\ A_{\mu}\end{array}\right)\;,\] \[W_{\mu}^{1} = \frac{Z_{W}}{\sqrt{2}}\left(W_{\mu}^{+}+W_{\mu}^{-}\right)\,,\] \[W_{\mu}^{2} = \frac{iZ_{W}}{\sqrt{2}}\left(W_{\mu}^{+}-W_{\mu}^{-}\right)\,,\] \[G_{\mu}^{A} = Z_{G}\,g_{\mu}^{A}\;. \tag{2.2}\]
In addition, we define the effective gauge couplings, chosen to preserve the natural form of covariant derivative:
\[g\,=\,Z_{g}\bar{g}\qquad g^{\prime}=Z_{g^{\prime}}\bar{g}^{\prime}\qquad g_{s} =Z_{g_{s}}\bar{g}_{s}\,. \tag{2.3}\]
Up to \(d=8\), the normalisation constants multiplying the gauge couplings read as:
\[Z_{g} = \left(1-\frac{2v^{2}}{\Lambda^{2}}C_{\varphi W}-\frac{v^{4}}{ \Lambda^{4}}C_{W2\varphi 4n1}\right)^{1/2}\;, \tag{2.4}\] \[Z_{g^{\prime}} = \left(1-\frac{2v^{2}}{\Lambda^{2}}C_{\varphi B}-\frac{v^{4}}{ \Lambda^{4}}C_{B2\varphi 4n1}\right)^{1/2}\;,\] (2.5) \[Z_{g_{s}} = \left(1-\frac{2v^{2}}{\Lambda^{2}}C_{\varphi G}-\frac{v^{4}}{ \Lambda^{4}}C_{G2\varphi 4n1}\right)^{1/2}\;, \tag{2.6}\]
where relevant operators are defined in [8, 13] and formally all expressions have to be expanded to the order \(\frac{v^{4}}{\Lambda^{4}}\).
The above field normalisation constants \(Z_{X}\), the corrected Higgs field vev, \(v\), and the gauge and Higgs boson masses, \(M_{Z}\), \(M_{W}\) and \(M_{h}\), are not encoded as fixed analytical expressions but are calculated by SmeftFR using the condition that the bilinear part of the Lagrangian must have canonical form in the mass eigenstate basis. In this way, all relations automatically contain only the subset of non-vanishing SMEFT Wilson coefficients chosen by the user, as described in Sec. 5. The analytical expressions for the normalisation constants for a chosen set of higher dimension operators, obtained after running the SmeftFR initialisation procedure, are stored in the variables listed in Table 1 (as discussed later in Sec. 3, expressions for the SM parameters in terms of user-defined input quantities are also available, see Table 2). One should note that _at any order_ in SMEFT, the \(SU(2)\) and \(SU(3)\) gauge field and gauge coupling normalisation constants are related, \(Z_{W}=Z_{g}^{-1}\), \(Z_{G}=Z_{g_{s}}^{-1}\).
It is also easy to further extend the program in the future by adding operators of dimension higher than 8, as the routine diagonalizing the field bilinears does not depend on their particular dependence on the Wilson coefficients of higher dimension operators until the very final stage, where such dependence is substituted and further expanded in powers of \(1/\Lambda\).
The basis in the fermion sector is not fixed by the structure of gauge interactions and allows for unitary rotation freedom in the flavour space:
\[\psi_{X}^{\prime}=U_{\psi_{X}}\,\psi_{X}\;, \tag{2.7}\]
with \(\psi=\nu,e,u,d\) and \(X=L,R\). We choose the rotations such that \(\psi_{X}\) eigenstates correspond to real and non-negative eigenvalues of \(3\times 3\) fermion mass matrices:
\[\begin{array}{cc}M^{\prime}_{\nu}=-v^{2}C^{\prime\nu\nu}\;,&M^{\prime}_{e}= \frac{v}{\sqrt{2}}\,\left(\Gamma_{e}-\frac{v^{2}}{2}C^{\prime e\varphi}\right), \\ M^{\prime}_{u}=\frac{v}{\sqrt{2}}\,\left(\Gamma_{u}-\frac{v^{2}}{2}C^{\prime u \varphi}\right),&M^{\prime}_{d}=\frac{v}{\sqrt{2}}\,\left(\Gamma_{d}-\frac{v^ {2}}{2}C^{\prime d\varphi}\right).\end{array} \tag{2.8}\]
The fermion flavour rotations can be absorbed into redefinitions of the Wilson coefficients, leaving as a result the CKM and PMNS matrices (denoted in \(\mathtt{SmeftFR}\) as \(K\) and \(U\), respectively) multiplying them. The complete list of redefinitions of flavour-dependent Wilson coefficients is given in Table 4 of ref. [9]. After the rotations, they are defined in the so-called "Warsaw mass" basis (as also described in the WCxf standard [24]). \(\mathtt{SmeftFR}\) assumes that the numerical values of Wilson coefficients of \(d=6\) fermionic operators (see Table B.1) are given in this particular basis.
In summary, Feynman rules generated by the \(\mathtt{SmeftFR}\) code describe interactions of SMEFT physical (mass eigenstate) fields, with numerical values of Wilson coefficients defined in the "Warsaw mass" basis of ref. [9], extended with the bosonic subset of dimension-8 operators in the basis defined in ref. [13].
It is also important to stress that in the general case of lepton number violation, with a non-vanishing dimension-5 Weinberg operator \(Q^{(5)}_{\nu\nu}\), neutrinos are massive Majorana spinors, whereas under the assumption of \(L\)-conservation they can be regarded as massless Weyl spinors. As described in Sec. 5.1, \(\mathtt{SmeftFR}\) is capable of generating Feynman rules for neutrino interactions in both cases, depending on the choice of initial options. One should remember that treating neutrinos as Majorana particles requires a special set of rules for propagators, vertices and diagram combinatorics. We follow here the treatment described in refs. [9; 81; 82; 83].
## 3 Parametrization of the SMEFT interactions
### SMEFT input parameter selection
The standard way of parameterizing the SMEFT Lagrangian is to use the natural set of couplings defining the dimension-4 renormalizable interactions (i.e. the SM Lagrangian), supplemented by the Wilson coefficients of the higher order operators. The commonly used set of quantities parameterizing the \(d=4\) part of the Lagrangian is:
\[\begin{array}{ll}
\bar{g},\bar{g}^{\prime},\bar{g}_{s} & SU(2),\ U(1),\ SU(3)\ \text{gauge couplings}\\
M_{h},\lambda & \text{Higgs boson mass and quartic coupling}\\
m_{q} & \text{quark masses},\ q=u,c,t,d,s,b\\
K & \text{CKM quark mixing matrix}\\
m_{\ell},m_{\nu_{\ell}} & \text{charged lepton and neutrino masses},\ \ell=e,\mu,\tau\\
U & \text{PMNS lepton mixing matrix}
\end{array}\tag{3.1}\]

\begin{table}
\begin{tabular}{c c c c} \hline Constant & Variable & Constant & Variable \\ \hline \(Z_{g_{s}}\) & \(\mathtt{gsnorm}\) & \(Z_{G}\) & \(\mathtt{Gnorm}\) \\ \(Z_{g}\) & \(\mathtt{gwnorm}\) & \(Z_{W}\) & \(\mathtt{Wnorm}\) \\ \(Z_{g^{\prime}}\) & \(\mathtt{g1norm}\) & \(Z^{ij}_{\gamma Z}\) & \(\mathtt{AZnorm[i,j]}\) \\ \(Z_{h}\) & \(\mathtt{Hnorm}\) & \(Z_{G^{0}}\) & \(\mathtt{G0norm}\) \\ \(Z_{G^{+}}\) & \(\mathtt{GPnorm}\) & & \\ \hline \end{tabular}
\end{table}
Table 1: Names of normalisation constants and corresponding internal \(\mathtt{SmeftFR}\) variables.
In the list above we assume that gauge couplings \(\bar{g},\bar{g}^{\prime},\bar{g}_{s}\) are already redefined as in eq. (2.3) and \(v\) is the minimum of the full Higgs boson potential, including the higher order operators.
SMEFT Feynman rules evaluated by SmeftFR v3 can be expressed in terms of such a set of parameters and the WCs of higher dimension operators. We further refer to it as the "default" parametrization set, selected using Option\(\rightarrow\) "smeft" in various routines of the code, as described in Sec. 4. Expressing observables calculated in SMEFT in terms of the "default" parameters gives a natural extension of the corresponding formulae in the SM, as the latter can be immediately obtained by setting all WCs to zero. However, some parameters in eq. (3.1), namely the gauge and Higgs couplings and the \(K\) and \(U\) mixing matrices (also particle masses if they are not chosen to be physical pole masses), are not directly measurable quantities. Their numerical values in SMEFT have to be derived by choosing an appropriate "input parameter scheme", i.e. a set of observables \(O_{1},\ldots,O_{n}\), and expressing them in terms of such input parameters and WCs:
\[\bar{g} = \bar{g}(O_{1},\ldots,O_{n},C_{i})\;,\] \[\bar{g}^{\prime} = \bar{g}^{\prime}(O_{1},\ldots,O_{n},C_{i})\;, \tag{3.2}\] \[\ldots.\]
Such a procedure leads to additional complications in calculating processes within SMEFT. All physical quantities have to be consistently calculated to a given order of the \(1/\Lambda\) expansion in order to keep the result gauge invariant. Therefore, any observable, \({\cal A}\), calculated in terms of the "default" parameters of eq. (3.1) has to be re-expanded to a given EFT order after being expressed in terms of the input parameters:
\[{\cal A} = {\cal A}_{4}(\bar{g},\bar{g}^{\prime},\ldots)+\frac{1}{\Lambda^{ 2}}{\cal A}_{6}^{i}(\bar{g},\bar{g}^{\prime},\ldots)C_{6}^{i} \tag{3.3}\] \[+ \frac{1}{\Lambda^{4}}\left({\cal A}_{8}^{1ij}(\bar{g},\bar{g}^{ \prime},\ldots)C_{6}^{i}C_{6}^{j}+{\cal A}_{8}^{2i}(\bar{g},\bar{g}^{\prime}, \ldots)C_{8}^{i}\right)+\ldots\] \[= {\cal A}^{\prime}{}_{4}(O_{1},O_{2},\ldots)+\frac{1}{\Lambda^{2}} {\cal A}^{\prime i}{}_{6}(O_{1},O_{2},\ldots)C_{6}^{i}\] \[+ \frac{1}{\Lambda^{4}}\left({\cal A}^{\prime}{}_{8}^{1ij}(O_{1},O_ {2},\ldots)C_{6}^{i}C_{6}^{j}+{\cal A}^{\prime 2i}{}_{8}(O_{1},O_{2},\ldots)C_{8}^{ i}\right)+\ldots\]
where for simplicity we neglected odd powers in \(1/\Lambda\) expansion as they are always lepton or baryon number violating and strongly suppressed.
Re-expressing SMEFT amplitudes and re-expanding them in \(1/\Lambda\) powers can be technically tedious and error-prone, especially at higher EFT orders. Therefore, it is useful to have the SMEFT interaction vertices expressed from the very beginning directly in terms of a set of measurable physical observables. Calculations done in terms of such Feynman rules can simply be truncated at the required EFT order, without the need for re-parametrization. SmeftFR v3 provides the capability of evaluating the SMEFT Lagrangian and interaction vertices directly in terms of _any_ user-defined set of input parameters.
### User-defined input parameters
SmeftFR v3 allows users to choose their own preferred set of input parameters, provided they are defined in the correct format and related to the "default" parameter set of eq. (3.1). The user-defined input parameters in SmeftFR should fulfil the following conditions:
* they are assumed to be measurable physical observables or other quantities which do not depend on the SMEFT parameters, in particular on WCs of higher dimension operators.
* they should be real scalar numbers, i.e. do not carry any flavor or gauge indices. If necessary, indexed arrays of flavor or gauge parameters should be represented by the relevant set of separate scalar entries.
* names of user-defined parameters should not overlap with the names of variables already used by the code. SmeftFR checks for overlapping variable names and displays relevant warnings if necessary.
* user-defined parameters and relations between them and "default" parameters should be defined in the file code/smeft_input_scheme.m.
* the format for defining user input parameters follows the standard format of FeynRules model definition files, as illustrated in the example below:

  SM$InputParameters = {
    (* observables used as input parameters in gauge and Higgs sector *)
    alphas == {
      ParameterType    -> External,
      Value            -> 0.1176,
      InteractionOrder -> {QCD, 2},
      TeX              -> Subscript[\[Alpha], s],
      Description      -> "average alpha_s at MZ scale"
    },
    ...
  }

  A more detailed example of user input parameter definition can be found in the header of the file code/smeft_input_scheme.m supplied with the SmeftFR v3 distribution.
* the chosen set of user input parameters must be sufficient to fully define the "default" SMEFT parameters in terms of them and the WCs of higher dimension operators. After choosing their own input parameters, further referred to as an "input scheme", users are expected to provide the corresponding routine with analytical expressions for _all_ variables listed in Table 2 (see also the sketch after this list). An example of such a routine, together with the predefined, most commonly used SMEFT input schemes, is provided in the file code/smeft_input_scheme.m (see the routine SMEFTInputScheme).
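As an illustration, a minimal sketch of such assignments (using only the tree-level SM relations and ignoring all corrections from higher dimension operators, which the actual predefined routines do include) could look as follows; here Gf, MZ, MW, MH and alphas are hypothetical user parameters declared in SM$InputParameters, and the convention \(M_{H}^{2}=2\lambda v^{2}\) is assumed:

  UserInput$vev     = 1/Sqrt[Sqrt[2] Gf];                 (* tree-level v from the Fermi constant *)
  UserInput$GW      = 2 MW/UserInput$vev;                 (* g-bar from MW = g-bar v/2 *)
  UserInput$G1      = 2 Sqrt[MZ^2 - MW^2]/UserInput$vev;  (* g'-bar from MZ and MW *)
  UserInput$GS      = Sqrt[4 Pi alphas];                  (* strong coupling from alpha_s(MZ) *)
  UserInput$hlambda = MH^2/(2 UserInput$vev^2);           (* Higgs quartic coupling *)
  UserInput$MZ = MZ;  UserInput$MW = MW;  UserInput$MH = MH;

The fermion masses and the CKM and PMNS matrices of Table 2 would be assigned analogously.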
### Predefined input schemes
Although SmeftFR v3 in principle allows defining any set of user-defined input parameters, some input schemes are more natural and technically easier to use than others. In particular, it is almost obligatory to use physical masses of SM particles as part of the input parameter set. Otherwise, if masses are calculated as combinations of other variables and WCs, the latter appear in the particle propagators, making all amplitude calculations and \(1/\Lambda\) expansions significantly more difficult. This leaves only \(\bar{g}\), \(\bar{g}^{\prime}\), the vev \(v\), and \(\lambda\) in the electroweak sector, \(\bar{g}_{s}\) in the strong sector, CKM matrix \(K\) in the quark sector and PMNS matrix \(U\) in the lepton sector to be defined in terms of input parameters.
SmeftFR v3 provides predefined routines realising the most commonly used SMEFT input schemes which can be selected by calling the SMEFTInputScheme routine with relevant options:
* Gauge sector:
* (\(G_{F},M_{Z},M_{W},M_{H}\)) input scheme or
* (\(\alpha_{em},M_{Z},M_{W},M_{H}\)) input scheme, where \(M_{Z},M_{W},M_{H}\) are the physical gauge and Higgs boson masses and \(G_{F}\) is the Fermi constant derived from the muon lifetime. In both cases the "default" electroweak sector parameters \(\bar{g},\bar{g}^{\prime},v\) and \(\lambda\) are expressed in terms of the input parameters listed above, including linear and quadratic corrections from all contributing \(d=6\) operators and linear corrections from the purely bosonic \(d=8\) operators. The strong coupling \(\bar{g}_{s}\) is defined as \(\sqrt{4\pi\alpha_{s}(M_{Z})}\), with some input value of \(\alpha_{s}(M_{Z})\) assumed. Currently, the SmeftFR v3 distribution does not include _any_ corrections from higher order operators in this relation, leaving eventual modifications to the user. This is not an easy task - the experimental value of \(\alpha_{s}(M_{Z})\) cited in the literature is an average over various types of measurements. The correct derivation of such an average in SMEFT should take
\begin{table}
\begin{tabular}{l c c c c} \hline Gauge and Higgs sector & \multicolumn{2}{c}{Quark sector} & \multicolumn{2}{c}{Lepton sector} \\ \hline UserInput$vev & \(v\) & UserInput$MQU & \(m_{u}\) & UserInput$MLE & \(m_{e}\) \\ UserInput$GW & \(\bar{g}\) & UserInput$MQC & \(m_{c}\) & UserInput$MLM & \(m_{\mu}\) \\ UserInput$G1 & \(\bar{g}^{\prime}\) & UserInput$MQT & \(m_{t}\) & UserInput$MLT & \(m_{\tau}\) \\ UserInput$GS & \(\bar{g}_{s}\) & UserInput$MQD & \(m_{d}\) & UserInput$MVE & \(m_{\nu_{e}}\) \\ UserInput$hlambda & \(\lambda\) & UserInput$MQS & \(m_{s}\) & UserInput$MVM & \(m_{\nu_{\mu}}\) \\ UserInput$MZ & \(M_{Z}\) & UserInput$MQB & \(m_{b}\) & UserInput$MVT & \(m_{\nu_{\tau}}\) \\ UserInput$MW & \(M_{W}\) & UserInput$CKM & \(K\) & UserInput$PMNS & \(U\) \\ UserInput$MH & \(M_{H}\) & & & \\ \hline \end{tabular}
\end{table}
Table 2: Names of the "default" SMEFT parameters, to be expressed in terms of the user-defined input parameters, and the corresponding internal SmeftFR variables.
into account the fact that different processes used to determine \(\alpha_{s}(M_{Z})\) are affected in different ways by the presence of the higher dimension operators; thus the relation of the "averaged" \(\alpha_{s}(M_{Z})\) to \(\bar{g}_{s}\) has a complicated dependence on the WCs of such operators. To our knowledge, no such analysis providing formulae which could be implemented in symbolic or numerical codes exists yet in the literature.
* Quark sector: Quark masses are assumed to be physical masses - even if such a notion is unclear in the case of light quarks, their values usually do not affect most practical calculations in a substantial way, so the exact definitions are not so important in this case. Corrections to the CKM matrix \(K\) are evaluated using the formulae derived in ref. [14]. They are accurate up to \(d=6\) linear terms. One should note that non-vanishing values of some flavour off-diagonal 4-quark operators can lead to numerically very large corrections to the CKM elements. If they are larger than 20%, \(\mathtt{SmeftFR}\) v3 displays a relevant warning and does not include corrections to the CKM matrix at all. They can be forced to be included, regardless of how large they are, using the option \(\mathtt{CKMInput}\,\rightarrow\,\)"force" in the \(\mathtt{SMEFTInitializeModel}\) routine.
* Lepton sector: Charged lepton masses are assumed to be physical masses. Neutrino masses are calculated as proportional to the WC of the \(d=5\) Weinberg operator, \(m_{\nu_{i}}=v^{2}|C^{i}_{\nu\nu}|\). The PMNS matrix is currently evaluated from the measured neutrino mixing angles without including corrections from higher order operators, again leaving eventual modifications to the user.
In the predefined input scheme routines, all re-parametrizations in the gauge sector are done analytically. Analytical formulae for the corrections to the \(K\) matrix elements are lengthy and complicated, leading to very long and hardly readable expressions for the interaction vertices and, as a result, also for the transition amplitudes. Therefore, corrections to the CKM matrix elements from the \(d=6\) operators are currently evaluated numerically in \(\mathtt{SmeftFR}\) v3 and added to the SM central values.
### Output parametrization
Following the options described above, \(\mathtt{SmeftFR}\) v3 can calculate the interaction vertices in mass basis parametrized in three (user-selectable) forms:
1. The "unexpanded" (selected as option \(\mathtt{Expansion}\rightarrow\,\)"none" in relevant routines as described in Sec. 5) parametrization. Interaction vertices are given in terms of "default" parameters, WCs and \(Z_{X}\) normalisation constants without expressing the latter explicitly in terms of "default" or "user-defined" parameters. Such output is compact and fast to produce. Also, it is the most universal one - adding additional higher order operators (like fermionic \(d=8\) operators or even higher EFT orders), apart from directly appearing new vertices, can be easily accommodated by adding new contributions to expressions for \(Z_{X}\). However, in such form, consistent expansion to a given EFT order is hidden and can be done only after substituting explicit expressions for \(Z_{X}\). Sample vertices in such parametrization are displayed in Fig. 1.
2. The "default" (chosen by the option Expansion\(\to\)"smeft") parametrization. Interaction vertices are given in terms of "default" parameters and WCs, with shifts of SM fields and couplings expanded accordingly. The result is truncated to user-selectable EFT order (\(d=4\), 6 or 8). Sample vertices in such parametrization are displayed in Fig. 2. \[- \frac{i}{2\sqrt{\bar{g}^{\prime 2}+\bar{g}^{2}}}\delta_{f_{1}f_{2}} \left(\left(\bar{g}^{\prime 2}-\bar{g}^{2}\right)\gamma^{\mu_{3}}P_{L}+2\bar{g}^{ \prime 2}\gamma^{\mu_{3}}P_{R}\right)\] \[+ \frac{i\bar{g}^{\prime}\bar{g}v^{2}}{2\left(\bar{g}^{\prime 2}+ \bar{g}^{2}\right)^{3/2}}\delta_{f_{1}f_{2}}\left(\left(\bar{g}^{\prime 2}-\bar{g}^{2} \right)\gamma^{\mu_{3}}P_{L}-2\bar{g}^{2}\gamma^{\mu_{3}}P_{R}\right)C^{\varphi WB}\] \[+ \frac{\sqrt{2}\bar{g}^{\prime}v}{\sqrt{\bar{g}^{\prime 2}+\bar{g}^{2} }}p_{3}^{\nu}\left(C_{f_{2}f_{1}}^{eB*}\sigma^{\mu_{3}\nu}P_{L}+C_{f_{1}f_{2}} ^{eB}\sigma^{\mu_{3}\nu}P_{R}\right)\] \[+ \frac{\sqrt{2}\bar{g}v}{\sqrt{\bar{g}^{\prime 2}+\bar{g}^{2} }}p_{3}^{\nu}\left(C_{f_{2}f_{1}}^{eW*}\sigma^{\mu_{3}\nu}P_{L}+C_{f_{1}f_{2}} ^{eW}\sigma^{\mu_{3}\nu}P_{R}\right)\] \[+ \frac{1}{2}iv^{2}\sqrt{\bar{g}^{\prime 2}+\bar{g}^{2}}C_{f_{1}f_{2}} ^{\varphi e}\gamma^{\mu_{3}}P_{R}+\frac{1}{2}iv^{2}\sqrt{\bar{g}^{\prime 2}+ \bar{g}^{2}}C_{f_{1}f_{2}}^{\varphi l1}\gamma^{\mu_{3}}P_{L}\] \[+ \frac{1}{2}iv^{2}\sqrt{\bar{g}^{\prime 2}+\bar{g}^{2}}C_{f_{1}f_{2}} ^{\varphi l3}\gamma^{\mu_{3}}P_{L}\] \[+ \frac{1}{2}i\bar{g}^{2}v\eta_{\mu_{2}\mu_{3}}+\frac{1}{2}i\bar{g }^{2}v^{3}\eta_{\mu_{2}\mu_{3}}C^{\varphi\Box}-\frac{1}{8}i\bar{g}^{2}v^{3} \eta_{\mu_{2}\mu_{3}}C^{\varphi D}\] \[+ 4ivC^{\varphi W}\left(p_{2}^{\mu_{3}}p_{3}^{\mu_{2}}-p_{2}\cdot p _{3}\eta_{\mu_{2}\mu_{3}}\right)\]
Figure 2: Same as in Fig. 1 but in default (\(\bar{g}^{\prime}\), \(\bar{g}\), \(v\)) parametrization scheme (the \(Z_{X}\) couplings are expanded up to maximal dimension-6 terms).
Figure 1: \(Z\ell^{+}\ell^{-}\) and \(hW^{+}W^{-}\) vertices before expansion of \(Z_{X}\) couplings (including a sample list of operators up to maximal dimension-6). For simplicity in displaying every Feynman rule, the \(1/\Lambda^{2}\)-factor accompanying every \(d=6\) Wilson Coefficient is omitted e.g. \(C^{\varphi W}\to C^{\varphi W}/\Lambda^{2}\).
3. The "user" (chosen by the option Expansion\(\,\rightarrow\,\)"user") parametrization. Interaction vertices are given directly in terms of user-defined input parameters and WCs, again with shifts of SM fields and couplings expanded accordingly. The result is truncated to user-selectable EFT order (\(d=4\), 6 or 8). Sample vertices for the \((G_{F},M_{Z},M_{W},M_{h})\) input scheme in the electroweak sector (see discussion in Sec. 3.3) are displayed in Fig. 3. \[- \frac{i2^{1/4}\sqrt{G_{F}}}{M_{Z}}\delta_{f_{1}f_{2}}\left(\left(M_ {Z}^{2}-2M_{W}^{2}\right)\gamma^{\mu_{3}}P_{L}+2\left(M_{Z}^{2}-M_{W}^{2} \right)\gamma^{\mu_{3}}P_{R}\right)\] \[+ \frac{i2^{3/4}M_{W}\sqrt{M_{Z}^{2}-M_{W}^{2}}}{M_{Z}\sqrt{G_{F}}} \delta_{f_{1}f_{2}}C^{\varphi WB}\gamma^{\mu_{3}}\] \[+ \frac{2^{1/4}\sqrt{M_{Z}^{2}-M_{W}^{2}}}{\sqrt{G_{F}}M_{Z}}p_{3} ^{\nu}\left(C_{f_{2}f_{1}}^{eB*}\sigma^{\mu_{3}\nu}P_{L}+C_{f_{1}f_{2}}^{eB} \sigma^{\mu_{3}\nu}P_{R}\right)\] \[+ \frac{2^{1/4}M_{W}}{\sqrt{G_{F}}M_{Z}}p_{3}^{\nu}\left(C_{f_{2}f _{1}}^{eW*}\sigma^{\mu_{3}\nu}P_{L}+C_{f_{1}f_{2}}^{eW}\sigma^{\mu_{3}\nu}P_{R }\right)\] \[+ \frac{i\,\delta_{f_{1}f_{2}}}{2^{9/4}\sqrt{G_{F}}M_{Z}}C^{\varphi D }\left(\left(2M_{W}^{2}+M_{Z}^{2}\right)\gamma^{\mu_{3}}P_{L}+2\left(M_{W}^{2} +M_{Z}^{2}\right)\gamma^{\mu_{3}}P_{R}\right)\] \[+ \frac{iM_{Z}}{2^{1/4}\sqrt{G_{F}}}C_{f_{1}f_{2}}^{\varphi e} \gamma^{\mu_{3}}P_{R}+\frac{iM_{Z}}{2^{1/4}\sqrt{G_{F}}}C_{f_{1}f_{2}}^{\varphi l 1}\gamma^{\mu_{3}}P_{L}+\frac{iM_{Z}}{2^{1/4}\sqrt{G_{F}}}C_{f_{1}f_{2}}^{\varphi l 3}\gamma^{\mu_{3}}P_{L}\] \[+ \frac{i\,\delta_{f_{1}f_{2}}}{2^{9/4}\sqrt{G_{F}}M_{Z}}C_{2112}^{ ll}\left(\left(M_{Z}^{2}-2M_{W}^{2}\right)\gamma^{\mu_{3}}P_{L}+2\left(M_{Z}^{2}-M_{W}^{ 2}\right)\gamma^{\mu_{3}}P_{R}\right)\] \[+ \frac{i\,\delta_{f_{1}f_{2}}}{2^{9/4}\sqrt{G_{F}}M_{Z}}\left(C_{1 1}^{e!3}+C_{22}^{e!3}\right)\left(\left(2M_{W}^{2}-M_{Z}^{2}\right)\gamma^{\mu _{3}}P_{L}+2\left(M_{W}^{2}-M_{Z}^{2}\right)\gamma^{\mu_{3}}P_{R}\right)\] \[+ i2^{3/4}\sqrt{G_{F}}M_{W}^{2}\eta_{\mu_{2}\mu_{3}}+\frac{i2^{3/4 }M_{W}^{2}}{\sqrt{G_{F}}}\eta_{\mu_{2}\mu_{3}}C^{\varphi\Box}-\frac{iM_{W}^{2} }{2^{3/4}\sqrt{G_{F}}}\eta_{\mu_{2}\mu_{3}}C^{\varphi D}\] \[- \frac{iM_{W}^{2}}{2^{3/4}\sqrt{G_{F}}}\eta_{\mu_{2}\mu_{3}}C_{2112 }^{ll}+\frac{iM_{W}^{2}}{2^{3/4}\sqrt{G_{F}}}\eta_{\mu_{2}\mu_{3}}\left(C_{1 1}^{e!3}+C_{22}^{e!3}\right)\] \[+ \frac{i2^{7/4}}{\sqrt{G_{F}}}C^{\varphi W}\left(p_{2}^{\mu_{3}} p_{3}^{\mu_{2}}-p_{2}\cdot p_{3}\eta_{\mu_{2}\mu_{3}}\right)\]
As described in more detail in the next Section, the form of the output can be selected by choosing various code options.
## 4 SmeftFR installation and code structure
### Installation
The SmeftFR package works within the FeynRules system, so both packages need to be properly installed first. A recent version and installation instructions for the FeynRules package can be downloaded from the address:
[https://feynrules.irmp.ucl.ac.be](https://feynrules.irmp.ucl.ac.be)
Figure 3: Same as in Fig. 1 but in the \((G_{F},M_{Z},M_{W},M_{h})\) input scheme (the \(Z_{X}\) couplings are expanded up to maximal dimension-6 terms).
SmeftFR v3 has been tested with FeynRules version 2.3.49. It should be used with _Mathematica_ version 12.1 or later, since the newest FeynRules version has also been modified to be compatible with recent _Mathematica_ upgrades.
The standard FeynRules installation assumes that new model descriptions are put into the Models sub-directory of its main tree. We follow this convention, so the SmeftFR file archive should be unpacked into the
Models/SMEFT_N_NN
directory, where N_NN denotes the package version (currently 3_00). After installation, Models/SMEFT_N_NN contains the files and sub-directories listed in Table 3.
Before running the package, one needs to set the main FeynRules installation directory properly, by defining the $FeynRulesPath variable at the beginning of the smeft_fr_init.m and smeft_fr_interfaces.m files. For non-standard installations (not advised!), the variable SMEFT$Path also has to be updated accordingly.
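For example, assuming FeynRules is installed in /home/user/FeynRules (a placeholder path), the corresponding lines would read:

  $FeynRulesPath = SetDirectory["/home/user/FeynRules"];       (* main FeynRules installation directory *)
  (* only for non-standard installations (not advised): *)
  (* SMEFT$Path = "/home/user/FeynRules/Models/SMEFT_3_00"; *)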
### Code structure
The most general version of SMEFT, including all possible flavour violating couplings, is very complicated. Symbolic operations on the full SMEFT Lagrangian, including the complete
\begin{table}
\begin{tabular}{l l} \hline SmeftFR-init.nb & Notebook and equivalent text script generating the SMEFT \\ smeft\_fr\_init.m & Lagrangian in mass basis and Feynman rules in _Mathematica_ format. \\ SmeftFR-interfaces.nb & Notebook and text script with routines for exporting \\ smeft\_fr\_interfaces.m & Feynman rules in various formats: WCxf, Latex, UFO and FeynArts. \\ SmeftFR\_v3.pdf & package manual in pdf format. \\ code & sub-directory with package code and utilities. \\ lagrangian & sub-directory with expressions for the SM Lagrangian and \\ & dimension-5, 6 and 8 operators coded in FeynRules format. \\ definitions & sub-directory with templates of SMEFT ``model files'' and an example \\ & of numerical input for Wilson coefficients in WCxf format. \\ output & sub-directory with dynamically generated model ``parameter files'' and \\ & output for Feynman rules in various formats; by default _Mathematica_, \\ & Latex, UFO and FeynArts outputs are generated. \\ \hline \end{tabular}
\end{table}
Table 3: Files and directories included in SmeftFR v3.00 package.
set of dimension-5 and -6 operators and bosonic dimension-8 operators, with numerical values of all Wilson coefficients assigned, are time-consuming and can take hours or even days on a standard personal computer. For most physical applications it is sufficient to derive interactions only for a subset of operators.3
Footnote 3: Note that operators must be selected with care, as in general they may mix under renormalisation [84; 85; 86].
To speed up the calculations, SmeftFR can evaluate Feynman rules for a chosen subset of operators only, generating the proper FeynRules "model files" dynamically. The calculations are divided into three stages, as illustrated in the flowchart of Fig. 4.
* First, before initialising the FeynRules engine, a routine relating the default and user-defined input parameters is executed. Numerical values of parameters depending on the WCs of higher order operators are calculated. Then, two FeynRules model files for SMEFT (for the gauge and the mass basis) are dynamically generated, containing all variables required to fully describe the interactions in the various parametrizations (see Sec. 3.4).
* Next, the SMEFT Lagrangian is initialised in the gauge basis and transformed to the mass eigenstate basis analytically. At this stage, the \(Z_{X}\) normalisation constants are evaluated in terms of both "default" and "user-defined" input parameters, but the explicit expressions are not substituted into the interaction vertices. This very significantly speeds up the calculations (by approximately an order of magnitude compared to SmeftFR v2) and produces expressions that are remarkably compact for such a complicated model. All terms which are explicitly of order in \(1/\Lambda\) higher than requested by the user (maximally \(1/\Lambda^{4}\)) are truncated, but for a consistent \(1/\Lambda\) expansion such terms must be neglected once more after inserting the explicit expressions for \(Z_{X}\). The resulting mass basis Lagrangian, the normalisation constants \(Z_{X}\) and the Feynman rules written in Mathematica format are stored on disk.
* Finally, the stored mass basis Lagrangian and Feynman rules can be exported in other supported formats - UFO, FeynArts and others. At this stage, users can choose the form of the output parametrization, with the \(Z_{X}\) normalisation constants also replaced by their corresponding explicit forms.
## 5 Deriving SMEFT Feynman rules with SmeftFR package
### Model initialisation
In the first step, the relevant FeynRules model files must be generated. This is done by calling the function:
SMEFTInitializeModel[Option1\(\rightarrow\) Value1, Option2\(\rightarrow\) Value2,...]
with the allowed options listed in Table 4.
The list and the naming of operators employed by SmeftFR v3 are arranged and explained in Appendix B. Names of operators used in SmeftFR are derived from the subscript indices of
Figure 4: Structure of the SmeftFR v3 code.
\begin{table}
\begin{tabular}{l l l} \hline \hline Option & Allowed values & Description \\ \hline Operators & list of operators & Subset of SMEFT operators included in calculations. \\ Gauge & **Unitary**, Rxi & Choice of gauge fixing conditions. \\ ExpansionOrder & 0, **1** or 2 & SMEFT interactions are expanded to \(1/\Lambda^{2\,\text{ExpansionOrder}}\) (default: \(1/\Lambda^{2}\)). \\ WCXFInitFile & **""** & Name of the file with numerical values of Wilson coefficients in the WCxf format. If this option is not set, all WCs are initialised to 0. \\ RealParameters & **False**, True & Some codes like MadGraph 5 accept only real values of parameters. If this option is set to True, imaginary parts of complex parameters are truncated in FeynRules model files. \\ InputScheme & **"GF"**, & Selection of the input parameter scheme, see discussion in Sec. 3. \\ CKMInput & "no", **"yes"**, "force" & Decides if corrections to the CKM matrix are included (use "force" to add them even if their relative size exceeds the threshold defined in the variable SMEFT$CKMTreshold, by default 0.2). \\ MaxParticles & **6** & Only Feynman rules with fewer than MaxParticles external legs are calculated. Does not affect UFO and FeynArts output. \\ MajoranaNeutrino & **False**, True & Neutrinos are treated as Majorana spinors if \(Q_{\nu\nu}\) is included in the operator list or this option is set to True, and as massless Weyl spinors otherwise. \\ Correct4Fermion & False, **True** & Corrects the relative sign of some 4-fermion interactions, fixing the results of FeynRules. \\ WBFirstLetter & **"c"** & Customisable first letter of Wilson coefficient names in Warsaw basis (default \(c_{G},\ldots\)). \\ MBFirstLetter & **"C"** & Customisable first letter of Wilson coefficient names in mass basis (default \(C_{G},\ldots\)). \\ \hline \hline \end{tabular}
\end{table}
Table 4: The allowed options of SMEFTInitializeModel routine. If an option is not specified, the default value (marked above in boldface) is assumed.
operators listed in Tables 2 and 3 of ref. [8] (reproduced here in Table B.1 for completeness), with obvious transcriptions of the "tilde" symbol and Greek letters to the Latin alphabet. By default, all possible 59+1+4 SMEFT (\(d=5\) and \(d=6\)) operator classes and no \(d=8\) operators are included in calculations, which is equivalent to setting the operator list to:
OpList = { "G", "Gtilde", "W", "Wtilde", "phi", "phiBox", "phiD", "phiW", "phiB", "phiWB", "phiWtilde", "phiBtilde", "phiWtilde", "phiWtilde", "phiWtilde", "phiWtilde", "phiGtilde", "phiG", "phiH", "quhi", "euW", "eB", "uG", "uW", "uB", "dG", "dW", "dB", "phil1", "phil3", "phie", "phiq1", "phiq3", "phiu", "phid", "phid", "phid", "ll", "qq1", "qq3", "lq1", "lq3", "ee", "uu", "dd", "eu", "ed", "udl", "ud8", "le", "lu", "ld", "qe", "qu1", "qu8", "qdl", "qd8", "ledq", "quqd1", "quqd8", "lequi", "lequ3", "vv", "duq", "qqu", "qqq", "duu" }
Moreover, any or all bosonic operators of dimension-8 defined in the basis of ref. [13] can be added to the list above. Following the notation of Tables B.2, B.3, and B.4, again with obvious transcriptions, their names are (see examples in Appendix B):
{ "phi8", "phi6Box", "phi6D2", "G2phi4n1", "G2phi4n2", "W2phi4n1", "W2phi4n2", "W2phi4n2", "W2phi4n3", "W2phi4n4n", "W2phi4n1", "W2phi4n2", "B2phi4n1", "B2phi4n2", "G4n1", "G4n2", "G4n3", "G4n4", "G4n5", "G4n6", "G4n7", "G4n8", "G4n9", "W4n1", "W4n2", "W4n3", "W4n4", "W4n5", "W4n6", "B4n1", "B4n2", "B4n3", "G3Bn1", "G3Bn2", "G3Bn3", "G3Bn4", "G2W2n1", "G2W2n2", "G2W2n3", "G2W2n4", "G2W2n6", "G2W2n7", "G2B2n7", "G2B2n1", "G2B2n2", "G2B2n2", "G2B2n3", "G2B2n4", "G2B2n5", "G2B2n6", "G2B2n7", "W2B2n1", "W2B2n2", "W2Bn3", "W2B2n4", "W2B2n6", "W2B2n7", "phi4n1", "phi4n2", "phi4n3", "G3phi2n1", "G3phi2n2", "W3phi2n1", "W3phi2n2", "W2Bphi2n1", "W2Bphi2n2", "W2Bphi2D2n1", "G2phi2D2n2", "G2phi2D2n3", "W2phi2D2n3", "W2phi2D2n1", "W2phi2D2n2", "W2phi2D2n3", "W2phi2D2n4", "W2phi2D2n5", "W2phi2D2n6", "W2Bphi2D2n1", "W2Bphi2D2n2", "W2phi2D2n3", "W2Bphi2D2n4", "W2Bphi2D2n5", "W2Bphi2D2n6", "B2phi2D2n1", "B2phi2D2n2", "B2phi2D2n3", "W2phi4D2n1", "W2phi4D2n2", "W2phi4D2n3", "W2phi4D2n4", "W2phi4D2n1", "Bphi4D2n2" }
To speed up the derivation of Feynman rules and to get more compact expressions, the user can restrict the list above to any preferred subset of operators.
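For instance, a minimal sketch of an initialisation restricted to a handful of bosonic and leptonic operators could read as follows (the WCxf file name is a placeholder for the user's own input):

  SMEFTInitializeModel[
      Operators      -> {"phiWB", "phiD", "phil1", "phil3", "phie", "eW", "eB"},
      Gauge          -> Rxi,
      ExpansionOrder -> 1,
      InputScheme    -> "GF",
      WCXFInitFile   -> "my_wcxf_input.json"  (* placeholder: user's input in WCxf format *)
  ];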
SmeftFR is fully integrated with the WCxf standard. Apart from editing the numerical values of Wilson coefficients in the FeynRules model files by hand, reading them from WCxf input is the only way of automatic initialisation of their numerical values. Such an input format is exchangeable between a larger set of SMEFT-related public packages [24] and helps in comparing their results.
An additional advantage of using the WCxf input format shows up in the flavour sector of the theory. Here, Wilson coefficients are in general tensors with flavour indices, in many cases symmetric under various permutations. WCxf input requires the initialisation of only the minimal set of flavour-dependent Wilson coefficients; those which can be derived by permutations are then also automatically properly set.4
Footnote 4: We would like to thank D. Straub for supplying us with a code for symmetrisation of flavour-dependent Wilson coefficients.
There is no commonly accepted standard for the initialisation of numerical values of the WCs of \(d=8\) operators, but as we include only scalar (no flavour indices) bosonic operators, adding them to WCxf-type input files is straightforward: we follow the convention for \(d=6\) bosonic operators, just using the new names for the \(d=8\) entries.
Further comments concern the MajoranaNeutrino and Correct4Fermion options. They are used to modify the analytical expressions only for the Feynman rules, not at the level of the mass basis Lagrangian from which the rules are derived. This is because some FeynRules interfaces, like UFO, intentionally leave the relative sign of 4-fermion interactions uncorrected5, as it is later changed by Monte-Carlo generators like MadGraph5. Correcting the sign before generating the UFO output would therefore lead to a wrong final result. Similarly, the treatment of neutrinos as Majorana fields might not be compatible with hard-coded quantum number definitions in various packages. On the other hand, in manual or symbolic computations it is convenient to have the correct form of the Feynman rules from the start, as done by SmeftFR when both options are set to their default values.
Footnote 5: B. Fuks, private communication.
Currently, the predefined input scheme for the initialisation of CKM matrix elements is based on the approach of ref. [14]. It can lead to numerically very large corrections to the CKM matrix from some of the flavour off-diagonal 4-quark dimension-6 operators. Such large corrections usually mean that the assumed values of the 4-quark WCs violate experimental bounds on flavour transitions and should be modified. In such a case, by default SmeftFR v3 displays a relevant warning and does not include corrections to the CKM matrix at all, expecting the WC values to be modified. Such behaviour can be overridden (so that even huge corrections are included, but the warning is still displayed) by setting the option CKMInput\(\,\rightarrow\,\)"force". The maximum allowed relative size of corrections to any of the CKM elements is defined by the variable SMEFT$CKMTreshold in the file code/smeft_variables.m and is by default set to SMEFT$CKMTreshold=0.2. Users can modify this number to their preferred sensitivity level.
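For example, to relax the threshold one could edit the corresponding line in code/smeft_variables.m (the value 0.5 below is arbitrary):

  SMEFT$CKMTreshold = 0.5;   (* accept corrections to CKM elements of up to 50% before they are dropped *)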
After execution, SMEFTInitializeModel creates in the output subdirectory two model files:
* smeft_par_WB.fr: SMEFT parameter file with Wilson coefficients in gauge basis (defined as "Internal", with no numerical values assigned).
* smeft_par_MB.fr: SMEFT parameter file with Wilson coefficients in mass basis (defined as "External", numerical values of WCs imported from the input file in WCxf format).
Note that the field definitions are not generated dynamically and are stored as fixed files, named smeft_fields_WB.fr and smeft_fields_MB.fr, in the definitions subdirectory.
The parameter files generated by SMEFTInitializeModel also contain definitions of SM parameters, copied from several template files in the definitions subdirectory and, most importantly, from the header of the code/smeft_input_scheme.m file, where the user-defined input parameters should be listed. Only the latter, the values of user-defined parameters, are copied unchanged to the model files; the numerical values of other parameters can be updated to include corrections from higher order operators (thus hand-made modifications of the files in the definitions subdirectory are not advised and will be overwritten by the code).
As mentioned above, in all analytical calculations performed by SmeftFR, terms suppressed by powers of \(1/\Lambda\) higher than \(\mathcal{O}(1/\Lambda^{2\,\mathrm{ExpansionOrder}})\) are always neglected. Therefore, the resulting Feynman rules can be consistently used to calculate physical observables, symbolically or numerically by Monte-Carlo generators, up to the maximum quadratic order in dimension-6 operators and linear order in dimension-8 operators. This information is encoded in the FeynRules SMEFT model files by assigning the "interaction order" parameter to the Wilson coefficients: NP=1 for \(d=6\) WCs and NP=2 for \(d=8\) operators. The ExpansionOrder parameter is also passed to the model files smeft_par_WB.fr and smeft_par_MB.fr as:
M$InteractionOrderLimit = {
  {QCD, 99},
  {NP,  ExpansionOrder},
  {QED, 99}
}
### Calculation of mass basis Lagrangian and Feynman rules
After loading the FeynRules model files, the derivation of the SMEFT Lagrangian in mass basis is performed by calling the following sequence of routines:

SMEFTLoadModel[ ] - Loads the output/smeft_par_WB.fr model file and imports the SMEFT Lagrangian in gauge basis for the chosen subset of operators.

SMEFTFindMassBasis[ ] - Finds field bilinears and the analytical transformations diagonalizing the mass matrices. Calculates the expressions for the \(Z_{X}\) normalisation constants.

SMEFTFeynmanRules[ ] - Evaluates analytically the SMEFT Lagrangian and Feynman rules in the mass basis to the required order in \(\mathcal{O}(1/\Lambda)\), _without substituting explicit expressions_ for the \(Z_{X}\) constants (see example in Fig. 1).

SMEFTOutput[ _Options_ ] - By default stores the SMEFT model file with parameters in mass basis as output/smeft_par_MB.fr and the mass basis Lagrangian and vertices in output/smeft_feynman_rules.m. To generate output in different locations, use the options _ModelFile_\(\rightarrow\)_filename1_ and _TargetFile_\(\rightarrow\)_filename2_.
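In a typical session these four routines are simply called in order:

  SMEFTLoadModel[];       (* gauge basis Lagrangian for the chosen operator subset *)
  SMEFTFindMassBasis[];   (* field redefinitions and Z_X normalisation constants *)
  SMEFTFeynmanRules[];    (* mass basis Lagrangian and Feynman rules, Z_X unexpanded *)
  SMEFTOutput[];          (* store the results in the output subdirectory *)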
The calculation time may vary considerably depending on the chosen operator (sub-)set and gauge fixing conditions. For the full list of SMEFT \(d=5\) and \(d=6\) operators and in \(R_{\xi}\)-gauges, one can expect a CPU time of up to about an hour to evaluate all Feynman rules on a typical personal computer, depending on its speed. Adding \(d=8\) operators can obviously increase the CPU time, and it is therefore advisable to choose only the operators relevant to a given analysis.
One should note that when neutrinos are treated as Majorana particles (as necessary in the case of a non-vanishing Wilson coefficient of the \(d=5\) Weinberg operator), their interactions involve lepton number non-conservation. Baryon and lepton (BL) number is also not conserved when explicitly BL-violating 4-fermion operators are included in the Lagrangian. When FeynRules deals with such cases, it produces warnings of the form:
_QN::NonConserv: Warning: non quantum number conserving vertex encountered! Quantum number LeptonNumber not conserved in vertex..._
Obviously, such warnings should be ignored.
The evaluation of Feynman rules for vertices involving more than two fermions is not yet fully implemented in FeynRules, and some warnings are displayed. In our experience, in most cases the 4-fermion vertices are calculated correctly in spite of such warnings, apart from the issue of the relative sign of four fermion diagrams mentioned earlier. Some cases are still problematic, e.g. the correct automatic derivation of quartic interactions with four Majorana neutrinos. For these special cases, SmeftFR overwrites the FeynRules result with manually calculated formulae encoded in Mathematica format.
Another remark concerns the hermiticity of the SMEFT Lagrangian. For some types of interactions, e.g. four-fermion vertices involving two quarks and two leptons, the function CheckHermiticity provided by FeynRules reports non-Hermitian terms in the Lagrangian. However, such terms are actually Hermitian if the permutation symmetries of the indices of the relevant Wilson coefficients are taken into account. Such symmetries are automatically imposed if the numerical values of the Wilson coefficients are initialised with the use of the SMEFTInitializeMB or SMEFTToWCXF routines (see Sec. 5.3 and 5.3.1).
Results of the calculations are by default collected in the file output/smeft_feynman_rules.m. The Feynman rules and the parts of the mass basis Lagrangian for the various classes of interactions are stored in the variables with self-explanatory names listed in Table 5.
The file output/smeft_feynman_rules.m also contains expressions for the normalisation factors \(Z_{X}\) relating the Higgs and gauge fields and couplings in the Warsaw and mass basis, in the "default" and "user" parametrizations (see Table 1 for the corresponding names of code variables). In addition, formulae for the tree level corrections to the SM mass parameters and Yukawa couplings are stored in the variables SMEFT$vev, SMEFT$MH, SMEFT$MW, SMEFT$MZ, SMEFT$YL[i,j], SMEFT$YD[i,j] and SMEFT$YU[i,j], as well as the selected user-defined program options.
\begin{table}
\begin{tabular}{l l} LeptonGaugeVertices & QuarkGaugeVertices \\ LeptonHiggsGaugeVertices & QuarkHiggsGaugeVertices \\ QuarkGluonVertices & GaugeSelfVertices \\ GluonSelfVertices & GluonHiggsVertices \\ GhostVertices & FourLeptonVertices \\ FourQuarkVertices & TwoQuarkTwoLeptonVertices \\ DeltaLTwoVertices & BLViolatingVertices \\ \hline \end{tabular}
\end{table}
Table 5: Names of variables defined in the file output/smeft_feynman_rules.m containing expressions for Feynman rules. Parts of mass basis Lagrangian are stored in equivalent set of variables, with “Vertices” replaced by “Lagrangian” in part of their names (i.e. LeptonGaugeVertices\(\rightarrow\) LeptonGaugeLagrangian, _etc._).
As mentioned before, in the expressions for the Lagrangian parts and vertices stored in the variables of Table 5, the \(Z_{X}\) constants are left in an unexpanded form, as in Fig. 1. To produce formulae fully expanded in powers of \(1/\Lambda\) to the required order, one must call the routine SMEFTExpandVertices, e.g. for vertices in the "default" parametrization up to \(1/\Lambda^{4}\) terms one should use

SMEFTExpandVertices[Input -> "smeft", ExpOrder -> 2]

(another possible choice is Input \(\rightarrow\) "user"). The expanded version of the vertices is then copied to variables ending with "Exp" (LeptonGaugeVerticesExp, QuarkGaugeVerticesExp etc.) and can be displayed or used in further calculations using the standard FeynRules format.
At this point the Feynman rules for the mass basis Lagrangian are already calculated, but the definitions of the fields and parameters used to initialise the SMEFT model in FeynRules are still given in the gauge basis. To avoid inconsistencies, before exporting the calculated expressions to other formats supported by FeynRules and SmeftFR, one should quit the current Mathematica kernel and start a new one, reloading the mass basis Lagrangian together with the compatible model files with fields also defined in the mass basis, as described next in Sec. 5.3. All further calculations should be performed within this new kernel (the routine SMEFTExpandVertices can also be used within this new kernel in the same way as described above).
### Output formats and interfaces
SmeftFR output in some of the portable formats must be generated from the SMEFT Lagrangian transformed to the mass basis, with all numerical values of parameters initialised. As FeynRules does not allow two different model files to be loaded within a single _Mathematica_ session, one needs to quit the kernel used to run the routines necessary to obtain the Feynman rules and, as described in the previous Section, start a new _Mathematica_ kernel. Within it, the user must reload the FeynRules and SmeftFR packages and call the following routine:
SMEFTInitializeMB[ _Options_ ]
The allowed options are given in Table 6. After a call to SMEFTInitializeMB, the mass basis model files are read and the mass basis Lagrangian is stored in a global variable named SMEFT$MBLagrangian for further use by the interface routines.
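For example, to reload the model with vertices expressed in terms of the user-defined input parameters and the default file locations, one could call:

  SMEFTInitializeMB[
      Expansion       -> "user",
      InteractionFile -> "output/smeft_feynman_rules.m",
      ModelFile       -> "output/smeft_par_MB.fr"
  ];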
#### 5.3.1 WCxf input and output
Translation between FeynRules model files and the WCxf format is done by the functions SMEFTToWCXF and WCXFToSMEFT. They can be used standalone and do not require loading FeynRules or first calling the SMEFTInitializeMB routine to work properly.
Exporting numerical values of Wilson coefficients of operators in the WCxf format is done by the function:
SMEFTToWCXF[ SMEFT_Parameter_File, WCXF_File, _FirstLetter \(\rightarrow\) SMEFT$MB ]_
where the arguments SMEFT_Parameter_File and WCXF_File define the input model parameter file in the FeynRules format and the output file in the WCxf JSON format, respectively. The option FirstLetter denotes the first letter of the names of the WCs in a parameter file and needs to be initialised only if it differs from the variable MBFirstLetter in Table 4. The created JSON file can be used to transfer numerical values of Wilson coefficients to other codes supporting the WCxf format. Note that in general the FeynRules model files may contain different classes of parameters, with the Value property defined to be a number (real or complex), a formula, or even not defined at all. Only the Wilson coefficients with Value defined to be a number are transferred to the output file in WCxf format.
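A typical call, exporting the mass basis parameter file to a JSON file (the output file name is arbitrary), is:

  SMEFTToWCXF["output/smeft_par_MB.fr", "output/smeft_wc.json"];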
\begin{table}
\begin{tabular}{p{71.1pt} p{142.3pt} p{142.3pt}} \hline Option & Allowed values & Description \\ \hline Expansion & “none”,**“smeft”**, “user” & Decides which parametrization is used to describe interaction vertices - with \(Z_{X}\) normalisation constants in an unexpanded form (“none”), using “default” SMEFT parameters (“smeft”) or user-defined set of parameters (“user”) (see Sec. 3.4 and examples in Figs. 1, 2, 3). \\ InteractionFile & _filename_ & Name of the file with mass basis Lagrangian and vertices generated by SMEFTOutput routine. Default: output/smeft\_feynman\_rules.m \\ ModelFile & _filename_ & Name of the model file containing SMEFT parameters in mass basis generated by SMEFTOutput routine. Default: output/smeft\_par\_MB.fr \\ Include4Fermion & False, **True** & 4-fermion vertices are not fully supported by FeynRules - for extra safety calculations of them can be switched off by setting this option to False. \\ IncludeBL4Fermion & **False**, True & Baryon and lepton number violating 4-fermion vertices can be in principle evaluated by FeynRules, but including them may lead to compatibility problems with other codes - e.g. MadGraph 5 reports errors if such vertices are present in UFO file. Thus in SmeftFR evaluation of such vertices is by default switched off. Set this option to True to include them. \\ \hline \end{tabular}
\end{table}
Table 6: Options of SMEFTInitializeMB routine, with default values marked in boldface.
Conversely, files in WCxf format can be translated to FeynRules parameter files using two routines:
ReadWCXFInput[ WCXF_File, _Options_ ]
WCXFToSMEFT[ SMEFT_Parameter_File, _Options_]
where ReadWCXFInput reads the values of the WCs from an input file in the WCxf format and WCXFToSMEFT creates a parameter model file for FeynRules which contains all necessary entries including, apart from the WCs, also the definitions and numerical values of the "default" and "user-defined" SMEFT input parameters. The allowed options for both routines are defined in Table 7.
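For example (the file names below are placeholders):

  ReadWCXFInput["my_wcxf_input.json"];
  WCXFToSMEFT["output/my_smeft_par.fr",
      Operators       -> {"phiWB", "phiD", "phil1", "phil3"},
      OverwriteTarget -> True];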
#### 5.3.2 Latex output
SmeftFR provides a dedicated Latex generator (not using the generic FeynRules Latex export routine). Its output has the following structure:
* For each interaction vertex, the diagram is drawn, using the axodraw style [87]. Expressions for Feynman rules are displayed next to corresponding diagrams.
* In analytical expressions, all terms multiplying a given Wilson coefficient are collected together and simplified.
* Long analytical expressions are automatically broken into many lines using breakn style (this does not always work perfectly but the printout is sufficiently readable).
* Latex output can be generated only for vertices expressed in terms of "default" SMEFT parameters, with \(Z_{X}\) constants expanded in terms of WCs or kept as symbols (corresponding to options "smeft" or "none" in Tables 6 and 8). This is because the simplification
\begin{table}
\begin{tabular}{l l l} \hline Option & Allowed values & Description \\ \hline Operators & default: all operators & List with subset of Wilson coefficients to be included in the SMEFT parameter file \\ RealParameters & False, True & Decides if only real values of Wilson coefficients given in WCxf file are included in SMEFT parameter file. The default value of this option is the same as set in the routine SMEFTInitializeModel, see Table 4. \\ OverwriteTarget & **False**, True & If set to True, target file is overwritten without warning \\ \hline \end{tabular}
\end{table}
Table 7: Options of the ReadWCXFInput and WCXFToSMEFT routines. Default values are marked in boldface. The options RealParameters and OverwriteTarget affect only WCXFToSMEFT.
of the Latex formulae is optimised for these particular parametrizations; vertices calculated in terms of a completely general "user-defined" parameter set may not be well readable.
* Only terms up to maximal dimension 6 are included in Latex output. Again, as above, this is because including higher order terms leads in most cases to lengthy and not very readable expressions.
Latex output is generated by the function:
SMEFTToLatex[ _Options_ ]
with the allowed options listed in Table 8. The function SMEFTToLatex assumes that the variables listed in Table 5 are initialised, thus it should be called after reloading the mass basis Lagrangian with the SMEFTInitializeMB routine, see Sec. 5.3.
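A typical call, producing a stripped-down file ready for inclusion in another Latex document, is:

  SMEFTToLatex[FullDocument -> False];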
Latex output is stored in output/latex subdirectory, split into smaller files, each containing one primary vertex. The main file is named smeft_feynman_rules.tex. The style files necessary to compile Latex output are supplied with the SmeftFR distribution.
Note that the correct compilation of documents using the "axodraw.sty" style requires creating an intermediate Postscript file. Programs like _pdflatex_ that produce PDF output directly will not work properly. One should instead run in a terminal, in the correct directory, e.g.:
latex smeft_feynman_rules.tex
dvips smeft_feynman_rules.dvi
ps2pdf smeft_feynman_rules.ps
\begin{table}
\begin{tabular}{p{56.9pt} p{113.8pt} p{113.8pt}} \hline Option name & Allowed values & Description \\ \hline Expansion & **“none”**, “smeft” & Decides which parametrization is used to describe interaction vertices - with \(Z_{X}\) normalisation constants in an unexpanded form (“none”) or using default SMEFT parameters (“smeft”) (see discussion in Sec. 3.4 and examples in Figs. 1, 2,3). \\ FullDocument & False, **True** & By default a complete document is generated, with all headers necessary for compilation. If set to False, headers are stripped off and the output file can be, without modifications, included into other Latex documents. \\ ScreenOutput & **False**, True & For debugging purposes, if set to True the Latex output is printed also to the screen. \\ \hline \end{tabular}
\end{table}
Table 8: Options of SMEFTToLatex routine, with default values marked in boldface.
or equivalent set of commands, depending on the Latex package used.
The file smeft_feynman_rules.tex does not contain analytical expressions for the five and six gluon vertices. Such formulae are very long (multiple pages, hard to even compile properly) and not useful for hand calculations. If such vertices are needed, they should rather be exported directly in some other format, as described in the next subsection.
Other details not printed in the Latex output, such as the form of the field propagators, conventions for parameters and momentum flow in vertices (always incoming), the manipulation of four-fermion vertices with Majorana fermions, _etc._, are explained thoroughly in Appendices A1-A3 of ref. [9].
#### 5.3.3 FeynArts and other standard FeynRules interfaces
After calling the initialization routine SMEFTInitializeMB, one can generate output formats supported by native FeynRules interfaces; in particular, one can export SMEFT interactions and parameters to files that can be imported by FeynArts (another especially important format, UFO, is discussed separately in the next section). For descriptions of the available output formats and the commands used to produce them, users should consult the FeynRules manual. For instance, to generate FeynArts output for the full mass basis Lagrangian, one could call:
WriteFeynArtsOutput[ SMEFT$MBLagrangian, _Output \(\rightarrow\) "output/FeynArts",..._]
It is important to note that FeynRules interfaces like FeynArts (or UFO, described in Sec. 5.3.4) generate their output starting from the level of the SMEFT mass basis Lagrangian. Thus, options of the SMEFTInitializeModel function like MajoranaNeutrino and Correct4Fermion (see Table 4) have no effect on the output generated by the interface routines. As explained in Sec. 5.1, they affect only the expressions for the Feynman rules in FeynRules/Mathematica format (which are also used to generate the Latex output file).
One should also note that FeynRules interfaces sometimes seem to be "non-commuting". For example, calling the FeynArts export routine first may lead to errors in the subsequent execution of the UFO interface (like signalling problems with incorrect handling of vertices containing explicit \(\sigma^{\mu\nu}\) Dirac matrices or issues with colour indices of SU(3) group structure constants), while the routines called in the opposite order both work properly. Therefore, it is safer to generate one type of FeynRules-supported output format at a time and to reinitialise the model in the mass basis if more output types should be produced (the WCxf and Latex generators do not suffer from such issues and can be safely used together with others).
Finally, we have tested that our Feynman rules communicate properly with FeynArts. An example of a non-trivial physics test we performed is the following: we used the program chain SmeftFR \(\rightarrow\)FeynArts \(\rightarrow\)FormCalc and calculated matrix elements for longitudinal vector boson scattering processes, \(V_{L}V_{L}\to V_{L}V_{L}\) with \(V=W^{\pm},Z\), at tree level with the full set of \(d=6\) operators. According to the Goldstone-Boson-Equivalence Theorem (GBET) [88; 89; 90; 91], at high energy this should be equal to the matrix elements for the Goldstone boson scattering processes \(GG\to GG\), where \(G=G^{\pm},G^{0}\), which should only contain WCs associated with operators with powers of the pure Higgs field \(\varphi\) and its derivatives. All other, and there are many, WCs cancel out non-trivially in all input "user" schemes employed by SmeftFR v3.
Similarly, we have also checked the validity of the GBET (at tree level) for \(V_{L}V_{L}\to V_{L}V_{L}\) by including \(d=6\) and \(d=8\) operators involving the full set of pure Higgs boson operators and their derivatives. Finally, several checks of various Ward identities, using Feynman rules from SmeftFR with FormCalc or FeynCalc or by hand, have been performed, and we always found agreement.
#### 5.3.4 UFO format and MadGraph 5 issues
Correct generation of the UFO format requires more care. The UFO format requires an extra parameter, "interaction order" (IO), to be assigned to all couplings, to help Monte-Carlo generators like MadGraph 5 decide the maximal order of diagrams included in amplitude calculations. It is customary to assign QED IO\(=-1\) to the Higgs boson VEV, \(v\), as it is numerically a large number and multiplying by \(v\) can effectively cancel the suppression from smaller Yukawa or gauge couplings. In the SM, where all interaction terms are at most of dimension 4, such a procedure never leads to a negative total IO for any vertex. Unfortunately, in SMEFT some vertices are proportional to higher \(v\) powers and can technically have a negative total "QED" interaction order, generating warnings when the UFO file is imported into MadGraph 5. However, all such vertices simultaneously have another type of IO assigned, "NP=0,1,2", defining their EFT order (which is \(1/\Lambda^{2\,\mathrm{NP}}\)). The "NP" order is sufficient for MC generators to truncate the amplitude in a correct way, thus negative "QED" IO warnings can be ignored for such vertices. To avoid complications, SmeftFR v3 by default performs post-processing on the UFO output files, removing "QED" IOs from all vertices proportional to WCs of higher-dimension operators. Such post-processing can be switched off by setting the relevant option as described below.
Instead of FeynRules's WriteUFO command, in SmeftFR v3 the UFO output format can be generated by calling the routine:
SMEFTToUFO[ Lagrangian, _Options_ ]
with options defined in Table 9. By default, the argument Lagrangian should be set to the variable named SMEFT$MBLagrangian; if the user prefers to generate only the interactions for some subsector of the theory, it can instead be one of the variables defined in Table 5, with obvious name replacements like LeptonGaugeVertices\(\,\rightarrow\,\)LeptonGaugeLagrangian, etc.
One should note that some Monte-Carlo generators like MadGraph 5 support only real parameters; thus, to generate UFO output that works properly, one should use the option RealParameters\(\,\rightarrow\,\)True when calling the SMEFTInitializeMB routine. Also, MadGraph 5 has some hard-coded names for the QED and QCD coupling constants (ee, aEWM1, aS). For compatibility, SmeftFR v3 preserves those names, independently of how the "user-defined" input parameters are named (e.g. whatever the name of the variable defining the strong coupling constant, its value is always copied to aS used by MadGraph 5, and similarly for other "special" variables). If necessary for compatibility with other codes, more such "special" variable names could be added to SmeftFR by editing the routine UpdateSpecialParameters in the file smeft_parameters.m.
If four-fermion vertices are included in the SMEFT Lagrangian, the UFO generator produces warning messages of the form (similar warnings may also appear when using other FeynRules output routines):
_Warning: Multi-Fermion operators are not yet fully supported!_
Therefore, although in our experience it seems to work properly, the output for four-fermion interactions in UFO or other formats must be treated with care and limited trust; performing appropriate checks is left to the users' responsibility.
Implementation in FeynRules of baryon and lepton number (BL) violating four-fermion interactions, with the charge conjugation matrix appearing explicitly in vertices, is even more problematic. Thus, for safety, in the current SmeftFR v3 such terms are by default not included in the SMEFT$MBLagrangian variable, unless the option IncludeBL4Fermion in the SMEFTInitializeMB routine is explicitly set to True. In such a case, FeynArts output seems to work for such BL-violating vertices, but MadGraph 5 displays warnings that they are not yet supported and aborts process generation.
Exporting to UFO or other formats can take a long time, hours or more for \(R_{\xi}\)-gauges and the complete dimension-6 SMEFT Lagrangian with a fully general flavour structure and all numerical values of parameters initialised. Also including all dimension-6 squared terms and the full set of bosonic dimension-8 operators at once may not be feasible at all, as the computations can exhaust even large computer memory and/or human patience. Therefore, again, we advise users to generate the necessary interactions only for the subset of operators relevant to a given analysis.
We have tested that SmeftFR works properly with MadGraph5. In particular, we ran test simulations in MadGraph5 v3.4.1 without errors using UFO model files produced by SmeftFR v3. Furthermore, we compared cross-sections for various processes obtained with it against the results obtained with the SMEFT@NLO package up to terms of \(\mathcal{O}(\Lambda^{-2})\). Note that SMEFT@NLO, Dim6Top and SMEFTsim have been formally validated up to this order [92], so it is sufficient to compare with only one of these codes.
For this comparison, the CKM and PMNS matrices are approximated by unit matrices and all particle widths are set to zero. Furthermore, all fermion masses and Yukawa couplings, except for the top quark, are assumed to be zero. Each cross section is calculated assuming that all but
\begin{table}
\begin{tabular}{c c c} \hline Option name & Allowed values & Description \\ \hline Output & **“output/UFO”** & Default UFO output subdirectory; can be modified to another user-defined location. \\ CorrectIO & False, **True** & By default, only the “NP” interaction order parameter is left in vertices containing WCs of higher order operators. Setting this option to “False” preserves all IOs generated by the native FeynRules UFO interface. \\ AddDecays & **False**, True & The UFO format can contain expressions for 2-body decays; switched off by default. \\ \hline \end{tabular}
\end{table}
Table 9: Options of SMEFTtoUFO routine, with default values marked in boldface.
one Wilson coefficient are set to zero and the non-vanishing one (displayed in the left column of Table 10) has the value \(\left|\frac{C_{i}}{\Lambda^{2}}\right|=10^{-6}\) GeV\({}^{-2}\) (the sign of the WC is always chosen to increase the \(\mathcal{O}(\Lambda^{-2})\) cross section w.r.t. the SM). For the comparison we used the \((G_{F},M_{W},M_{Z},M_{H})\) input parameter scheme (option InputScheme \(\rightarrow\) "GF" in the SMEFTInitializeModel routine) with the values of the input parameters set to the central values given in ref. [93]. The results are summarised in the 2nd and 3rd columns of Table 10. As one can see, any differences between the considered models are smaller than 1%.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & SMEFT@NLO \(\mathcal{O}(\Lambda^{-2})\) & SmeftFR \(\mathcal{O}(\Lambda^{-2})\) & SmeftFR \(\mathcal{O}(\Lambda^{-4})\) \\ \hline \multicolumn{4}{|c|}{\(\mu^{+}\mu^{-}\to t\bar{t}\)} \\ \hline SM & \(0.16606\pm 0.00026\) & \(0.16608\pm 0.00024\) & - \\ \hline \(C_{uW}^{33}\) & \(0.41862\pm 0.00048\) & \(0.41816\pm 0.00047\) & - \\ \hline \(C_{\phi u}^{33}\) & \(0.16725\pm 0.00027\) & \(0.16730\pm 0.00025\) & - \\ \hline \(C_{ll}^{2233}\) & \(6.488\pm 0.016\) & \(6.491\pm 0.014\) & - \\ \hline \(C_{\varphi WB}\) & \(0.21923\pm 0.00032\) & \(0.21940\pm 0.00030\) & \(0.22419\pm 0.00030\) \\ \hline \(C_{\varphi D}\) & \(0.18759\pm 0.00030\) & \(0.18759\pm 0.00027\) & \(0.18829\pm 0.00027\) \\ \hline \multicolumn{4}{|c|}{\(\gamma\gamma\to t\bar{t}\)} \\ \hline SM & \(0.0037498\pm 0.0000050\) & \(0.0037498\pm 0.0000050\) & - \\ \hline \(C_{uW}^{33}\) & \(0.008229\pm 0.000012\) & \(0.008235\pm 0.000012\) & - \\ \hline \(C_{\varphi WB}\) & \(0.0053056\pm 0.0000086\) & \(0.0053056\pm 0.0000086\) & \(0.0055809\pm 0.0000090\) \\ \hline \(C_{\varphi D}\) & \(0.0045856\pm 0.0000061\) & \(0.0045895\pm 0.0000064\) & \(0.0045882\pm 0.0000069\) \\ \hline \multicolumn{4}{|c|}{\(c\bar{c}\to t\bar{t}\)} \\ \hline SM & \(0.9553\pm 0.0017\) & \(0.9511\pm 0.0023\) & - \\ \hline \(C_{uG}^{33}\) & \(1.1867\pm 0.0023\) & \(1.1854\pm 0.0021\) & - \\ \hline \(C_{uW}^{33}\) & \(0.9641\pm 0.0018\) & \(0.9599\pm 0.0024\) & - \\ \hline \(C_{\varphi u}^{33}\) & \(0.9555\pm 0.0017\) & \(0.9513\pm 0.0023\) & - \\ \hline \(C_{\varphi q3}^{33}\) & \(0.9558\pm 0.0017\) & \(0.9515\pm 0.0023\) & - \\ \hline \(C_{q\mu 1}^{2233}\) & \(1.0111\pm 0.0018\) & \(1.0059\pm 0.0015\) & - \\ \hline \(C_{\varphi WB}\) & \(0.9568\pm 0.0018\) & \(0.9520\pm 0.0018\) & \(0.9522\pm 0.0018\) \\ \hline \(C_{\varphi D}\) & \(0.9558\pm 0.0017\) & \(0.9511\pm 0.0018\) & \(0.9511\pm 0.0018\) \\ \hline \multicolumn{4}{|c|}{\(pp\to t\bar{t}\)} \\ \hline SM & \(510.35\pm 0.72\) & \(510.46\pm 0.68\) & - \\ \hline \(C_{uG}^{33}\) & \(664.33\pm 1.16\) & \(666.34\pm 0.90\) & \(671.08\pm 0.97\) \\ \hline \(C_{uW}^{33}\) & \(510.63\pm 0.70\) & \(510.70\pm 0.80\) & - \\ \hline \(C_{\varphi q3}^{33}\) & \(510.37\pm 0.72\) & \(510.47\pm 0.68\) & - \\ \hline \(C_{\varphi q3}^{33}\) & \(510.39\pm 0.72\) & \(510.65\pm 0.80\) & - \\ \hline \(\sum_{i=1,2}C_{q\mu 1}^{ii33}\) & \(516.31\pm 0.58\) & \(516.14\pm 0.64\) & - \\ \hline \(C_{\varphi WB}\) & \(510.49\pm 0.68\) & \(510.52\pm 0.71\) & \(508.94\pm 0.79\) \\ \hline \(C_{\varphi D}\) & \(510.38\pm 0.72\) & \(510.47\pm 0.68\) & \(508.89\pm 0.79\) \\ \hline \end{tabular}
\end{table}
Table 10: Cross-sections (in pb) obtained using MadGraph5 with UFO models provided by SMEFT@NLO at the \(\mathcal{O}(\Lambda^{-2})\) order of the EFT expansion and SmeftFR at the \(\mathcal{O}(\Lambda^{-2})\) and \(\mathcal{O}(\Lambda^{-4})\) orders of the EFT expansion, for a chosen set of processes and SMEFT operators. An empty cell indicates that no \(\mathcal{O}(\Lambda^{-4})\) terms appear in the amplitude.
A novel capability of SmeftFR v3 is the consistent inclusion of \(O(1/\Lambda^{4})\) terms in the interaction vertices. Therefore, SmeftFR v3 is able to calculate dimension-6 squared terms in the amplitude _exactly_. For completeness, we have checked the impact of such \(\mathcal{O}(\Lambda^{-4})\) terms for the same processes. The corresponding cross sections can be found in the 4th column of Table 10. The effect of the higher order contributions is visible, albeit small, for the chosen small input values of the WCs.
## 6 Sample programs
After setting the variable $FeynRulesPath to the correct value, in order to evaluate the mass basis SMEFT Lagrangian and the analytical form of the Feynman rules for a sample set of dimension-6 and dimension-8 operators, one can use the following sequence of commands:
SMEFT$MajorVersion = "3";
SMEFT$MinorVersion = "00";
SMEFT$Path = FileNameJoin[{$FeynRulesPath, "Models", "SMEFT_" <> SMEFT$MajorVersion <> "_" <> SMEFT$MinorVersion}];
Get[ FileNameJoin[{$FeynRulesPath, "FeynRules.m"}] ];
Get[ FileNameJoin[{SMEFT$Path, "code", "smeft_package.m"}] ];
OpList6 = {"phi", "phiBox", "phiD", "phiW", "phiWB", "eB", "uW", "dphi", "ll"};
OpList8 = {"phi8", "phi4n1", "phi4n3"};
OpList = Join[OpList6, OpList8];
SMEFTInitializeModel[ Operators -> OpList, Gauge -> Rxi,
  WCXFInitFile -> "wcxf_input_file_with_path.json",
  ExpansionOrder -> 1, InputScheme -> "GF", CKMInput -> "yes",
  RealParameters -> True, MaxParticles -> 4,
  MajoranaNeutrino -> True, Correct4Fermion -> False ];
SMEFTLoadModel[ ];
SMEFTFindMassBasis[ ];
SMEFTFeynmanRules[ ];
SMEFTOutput[ ];
or alternatively rerun the supplied programs: the notebook SmeftFR-init.nb or the text script smeft_fr_init.m.
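The JSON file supplied via the WCXFInitFile option above follows the WCxf exchange format mentioned earlier. As a minimal sketch (not taken from the SmeftFR distribution), and assuming the standard WCxf field names "eft", "basis", "scale" and "values", with dimension-6 coefficients given in GeV\({}^{-2}\) and coefficient keys following the Warsaw-basis naming also used by SmeftFR (e.g. "phiD"), such an input file could be prepared with a few lines of Python:

```python
import json

# Hypothetical example: a single non-zero Wilson coefficient C_phiD/Lambda^2 = 1e-6 GeV^-2,
# defined at a scale of 1 TeV, laid out in the standard WCxf structure (eft/basis/scale/values).
wcxf_input = {
    "eft": "SMEFT",
    "basis": "Warsaw",
    "scale": 1000.0,                 # renormalisation scale in GeV
    "values": {"phiD": 1.0e-6},      # dimension-6 WCs in GeV^-2; keys follow Warsaw-basis naming
}

with open("wcxf_input_file_with_path.json", "w") as f:
    json.dump(wcxf_input, f, indent=2)
```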
After running the sequence of commands listed above, interaction vertices in different parametrizations become available and can be displayed on screen or used in further calculations. For example, the Higgs-photon-photon vertex for the fields in mass basis can be extracted in different schemes by using the commands:
Print["Higgs-photon-photon vertex in \"none\" scheme: ",
  SelectVertices[GaugeHiggsVertices, SelectParticles -> {H, A, A}]];
SMEFTExpandVertices[Input -> "smeft", ExpOrder -> 2];
Print["Higgs-photon-photon vertex in \"smeft\" scheme: ",
  SelectVertices[GaugeHiggsVerticesExp, SelectParticles -> {H, A, A}]];
SMEFTExpandVertices[Input -> "user", ExpOrder -> 2];
Print["Higgs-photon-photon vertex in \"user\" scheme: ",
  SelectVertices[GaugeHiggsVerticesExp, SelectParticles -> {H, A, A}]];
As described before, Latex, WCxf, UFO and FeynArts formats can be exported after first rerunning SmeftFR-init.nb or an equivalent set of commands generating the file smeft_feynman_rules.m, which contains the expressions for the mass basis Lagrangian. Then, the user needs to start a new _Mathematica_ kernel and rerun the notebook file SmeftFR-interfaces.nb or the script smeft_fr_interfaces.m. Alternatively, one can type the commands manually, if necessary changing some of their options as described in the previous sections:
Get[ FileNameJoin[{$FeynRulesPath, "FeynRules.m"}] ];
Get[ FileNameJoin[{SMEFT$Path, "code", "smeft_package.m"}] ];
SMEFTInitializeMB[ Expansion -> "user", Include4Fermion -> True ];
SMEFTToWCXF[ SMEFT$Path <> "output/smeft_par_MB.fr", SMEFT$Path <> "output/smeft_wcxf_MB.json" ];
SMEFTToLatex[ Expansion -> "smeft" ];
SMEFTToUFO[ SMEFT$MBLagrangian, CorrectIO -> True, Output -> ... ];
WriteFeynArtsOutput[ SMEFT$MBLagrangian, Output -> ... ];
## 7 Summary
In recent years, SMEFT has become the standard framework for a concrete, robust, organized, and fairly model independent way of capturing physics beyond the SM. Huge efforts by the high energy physics community, both theoretical and experimental, have been devoted to understanding how to precisely map experimental observables and fit them onto the Wilson coefficients of the SMEFT Lagrangian in eq. 2.1. Even deriving the Feynman rules - a straightforward and most of the time effortless procedure in renormalizable theories - is not trivial in SMEFT: the abundance of operators and associated parameters, especially when climbing up in EFT dimensionality, makes computer aid necessary, if not indispensable.
In this paper, we present a new version of the code, SmeftFR v3, previous versions of which have been tested in many studies. SmeftFR v3 is able to express the SMEFT interaction vertices in terms of a chosen, predefined or user-defined, set of observable input parameters,
avoiding the need for the reparametrizations required in calculations when expressed in terms of the SM gauge, Yukawa and Higgs coupling constants. One of SmeftFR v3's main advantages is that it can calculate SMEFT interactions _a la carte_ for a user-defined subset of dimension-5, 6 and 8 operators, selected to be relevant to the scattering matrix elements for the observable (or observables) under scrutiny. It dynamically generates the corresponding FeynRules model files with the minimal required content, in effect producing more compact analytical formulae and significantly speeding up the numerical computations. The SMEFT Feynman rules can be calculated by SmeftFR v3 in the unitary and \(R_{\xi}\)-gauges, following the procedure described in ref. [9]. A number of additional SmeftFR v3 options are described in detail in this paper.
The output of the package can be printed in Latex or exported in various formats supported by FeynRules, such as UFO, FeynArts, _etc_. Input values of the Wilson coefficients used in SmeftFR v3 can be exchanged with the WCxf format for further numerical handling.
We have also performed a number of analytical and numerical consistency checks of the SmeftFR v3 calculations. Analytically, for example, we checked that the produced Feynman rules lead to the correct non-trivial cancellations in Vector Boson Scattering helicity amplitudes in our predefined input-parameter schemes, and we verified certain Ward identities and the positivity of combinations of dimension-8 Wilson coefficients. Numerically, we found very good agreement with other codes, such as SMEFTsim and SMEFT@NLO, commonly used for Monte-Carlo simulations in SMEFT. Compared to those codes, SmeftFR v3 offers several additional important improvements: the consistent inclusion of terms up to \(O(1/\Lambda^{4})\) (that is, all (dimension-6)\({}^{2}\) terms and the full set of terms linear in WCs of bosonic dimension-8 operators), physical input-parameter schemes not only for the gauge and Higgs sectors but also for the flavour sector through the inclusion of SMEFT corrections to the CKM matrix, the inclusion of the SMEFT neutrino sector, and the inclusion of the baryon and lepton number violating \(d=6\) interaction vertices.
The current version of SmeftFR v3 code and its manual can be downloaded from
www.fuw.edu.pl/smeft
We believe that SmeftFR v3 is an important tool, facilitating computations within SMEFT from the theoretical Lagrangian level all the way down to the amplitude calculations required by experimental analyses of physics beyond the SM.
## Acknowledgements
The work of MR was supported in part by the Polish National Science Centre under research grant DEC-2019/35/B/ST2/0200817. The research of JR has received funding from the Norwegian Financial Mechanism for years 2014-2021, under the grant no 2019/34/H/ST2/00707. The research work of LT was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (fellowship number: 1588). LT would like to thank the University of Warsaw for hospitality and financial support during his stay there. JR would like to thank the University of Ioannina and CERN for hospitality during his stays there. AD would like to thank CERN for hospitality. We would like to thank Dimitrios Beis for checking the SMEFT contributions to CKM and the numerical output of SmeftFR with an independent code.
## Appendix A Input schemes for the electroweak sector
The electroweak sector parameters, \(\bar{g}\), \(\bar{g}^{\prime}\), \(v\) and \(\lambda\), after expansion in \(1/\Lambda\)-powers can be written in the following form:
\[\bar{g} = \bar{g}_{SM}+\frac{1}{\Lambda^{2}}\bar{g}_{D6}+\frac{1}{\Lambda^{ 4}}\bar{g}_{D8}\;,\] \[\bar{g}^{\prime} = \bar{g}^{\prime}_{SM}+\frac{1}{\Lambda^{2}}\bar{g}^{\prime}_{D6} +\frac{1}{\Lambda^{4}}\bar{g}^{\prime}_{D8}\;,\] \[v = v_{SM}+\frac{1}{\Lambda^{2}}v_{D6}+\frac{1}{\Lambda^{4}}v_{D8}\;,\] \[\lambda = \lambda_{SM}+\frac{1}{\Lambda^{2}}\lambda_{D6}+\frac{1}{\Lambda^{ 4}}\lambda_{D8}\;. \tag{100}\]
where the exact form of "SM", "D6" and "D8" terms depends on the chosen input scheme. Below, we present relevant expressions for the two most commonly used SMEFT input schemes, both included as predefined routines in the SmeftFR v3 distribution.
### "GF" input scheme
In this scheme the Fermi constant \(G_{F}\) (evaluated from the muon lifetime measurement) and the gauge and Higgs boson masses \(M_{Z},M_{W},M_{H}\) are used as the input parameters. To relate them to the quantities defined in eq. (A.1), let us first define the following abbreviations
\[\Delta M = \sqrt{M_{Z}^{2}-M_{W}^{2}}\;,\] \[{\cal B}_{6}(C_{ll},C_{\varphi l3}) = -2(C_{ll}^{2112}-C_{\varphi l3}^{11}-C_{\varphi l3}^{22})\;,\] \[{\cal B}_{8}(C_{ll},C_{\varphi l3},C_{\varphi l1}) = (C_{ll}^{2112})^{2}+\frac{1}{4}(C_{le}^{2112})^{2}-2C_{ll}^{2112} C_{\varphi l3}^{11}-2C_{ll}^{2112}C_{\varphi l3}^{22} \tag{101}\] \[+ (C_{\varphi l3}^{11})^{2}+(C_{\varphi l3}^{22})^{2}+4C_{\varphi l 3}^{11}C_{\varphi l3}^{22}\] \[+ C_{\varphi l1}^{21}C_{\varphi l3}^{12}-C_{\varphi l1}^{12}C_{ \varphi l3}^{21}+C_{\varphi l1}^{12}C_{\varphi l1}^{21}-C_{\varphi l3}^{12}C_ {\varphi l3}^{21}\;.\]
Then one can express the quantities in eq. (A.1) as
\[v_{SM} = \frac{1}{2^{1/4}\sqrt{G_{F}}}\;,\] \[v_{D6} = \frac{v_{SM}}{4\sqrt{2}G_{F}}{\cal B}_{6}\;,\] \[v_{D8} = \frac{v_{SM}}{64G_{F}^{2}}({\cal B}_{6}^{2}+8{\cal B}_{8})\;, \tag{102}\] \[\bar{g}_{SM} = 2^{5/4}\sqrt{G_{F}}M_{W}\;,\] \[\bar{g}_{D6} = -\frac{\bar{g}_{SM}}{4\sqrt{2}G_{F}}{\cal B}_{6}\;,\] \[\bar{g}_{D8} = \frac{\bar{g}_{SM}}{64G_{F}^{2}}({\cal B}_{6}^{2}-8{\cal B}_{8})\;,\] (103) \[\bar{g}^{\prime}_{SM} = 2^{5/4}\sqrt{G_{F}}\Delta M\;,\]
\[\bar{g}^{\prime}_{D6} = \frac{\bar{g}^{\prime}_{SM}}{4\sqrt{2}G_{F}\Delta M^{2}}\left(-M_{Z}^{2}C_{\varphi D}-4M_{W}\Delta M\,C_{\varphi WB}-\Delta M^{2}{\cal B}_{6}\right)\;,\] \[\bar{g}^{\prime}_{D8} = \frac{\bar{g}^{\prime}_{SM}}{16G_{F}^{2}\Delta M^{2}}\biggl{[}-2M_{Z}^{2}(2C_{\varphi^{6}D^{2}}+{\cal B}_{6}C_{\varphi D})+\Delta M^{2}({\cal B}_{6}^{2}-8{\cal B}_{8}-16C_{\varphi WB}^{2})\] (A.5) \[- 8M_{W}\Bigl{(}2M_{W}C_{W^{2}\varphi^{4}}^{(3)}+2\Delta M\,C_{WB\varphi^{4}}^{(1)}+\Delta M({\cal B}_{6}+4C_{\varphi B}+4C_{\varphi W})C_{\varphi WB}\Bigr{)}\biggr{]}\;,\] \[\lambda_{SM} = \sqrt{2}G_{F}M_{H}^{2}\;,\] \[\lambda_{D6} = \frac{\lambda_{SM}}{4G_{F}}\biggl{[}\frac{6}{G_{F}M_{H}^{2}}C_{\varphi}-\sqrt{2}\Bigl{(}{\cal B}_{6}+4C_{\varphi\Box}-C_{\varphi D}\Bigr{)}\biggr{]}\;,\] \[\lambda_{D8} = \frac{\lambda_{SM}}{16G_{F}^{2}}\biggl{[}\Bigl{(}{\cal B}_{6}^{2}-4{\cal B}_{8}-8C_{\varphi^{6}\Box}+2C_{\varphi^{2}D^{2}}\Bigr{)}+\frac{6\sqrt{2}}{G_{F}M_{H}^{2}}\Bigl{(}{\cal B}_{6}C_{\varphi}+2C_{\varphi 8}\Bigr{)}\biggr{]}\;.\] (A.6)
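As a quick numerical orientation (not part of the scheme definition itself), inserting the representative values \(G_{F}\simeq 1.166\times 10^{-5}\,\mathrm{GeV}^{-2}\) and \(M_{W}\simeq 80.4\) GeV into the SM-level relations above gives
\[v_{SM}=\frac{1}{2^{1/4}\sqrt{G_{F}}}\simeq 246\ \mathrm{GeV}\;,\qquad\bar{g}_{SM}=2^{5/4}\sqrt{G_{F}}\,M_{W}\simeq 0.65\;,\]
i.e. the familiar electroweak vacuum expectation value and \(SU(2)\) gauge coupling, which provides a simple cross-check of the normalisation conventions used here.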
### "AEM" input scheme
In this scheme input parameters for the electroweak sector are chosen to be the electromagnetic coupling \(\alpha_{em}\), and the gauge and Higgs boson masses \(M_{Z},M_{W},M_{H}\). Using again the abbreviation \(\Delta M=\sqrt{M_{Z}^{2}-M_{W}^{2}}\), for the quantities defined in eq. (A.1), one has:
\[v_{SM} = \frac{M_{W}\Delta M}{M_{Z}\sqrt{\pi\alpha_{em}}}\;,\] \[v_{D6} = -\frac{\bar{g}_{SM}M_{W}^{3}}{4\pi\alpha_{em}M_{Z}^{2}}\left(M_{W }C_{\varphi D}+4\Delta MC_{\varphi WB}\right)\;,\] \[v_{D8} = \frac{v_{SM}M_{W}^{5}}{32\pi^{2}\alpha_{em}^{2}M_{Z}^{4}}\biggl{[} 3M_{W}^{3}C_{\varphi D}^{2}-4M_{W}\Delta M^{2}C_{\varphi^{6}D^{2}}-8(M_{Z}^{2} -5M_{W}^{2})\Delta MC_{\varphi D}C_{\varphi WB}\] (A.7) \[+ 16\Delta M^{2}\Bigl{(}4M_{W}C_{\varphi WB}^{2}-\Delta MC_{WB \varphi^{4}}^{(1)}+\frac{M_{Z}^{2}-2M_{W}^{2}}{M_{W}}C_{WB\varphi^{4}}^{(3)} \Bigr{)}\] \[- 32\Delta M^{3}(C_{\varphi B}+C_{\varphi W})C_{\varphi WB}\biggr{]}\;,\] \[\bar{g}_{SM} = \frac{2M_{Z}\sqrt{\pi\alpha_{em}}}{\Delta M}\;,\] \[\bar{g}_{D6} = -v_{D6}\;,\] \[\bar{g}_{D8} = \frac{\bar{g}_{SM}M_{W}^{5}}{32\pi^{2}\alpha_{em}^{2}M_{Z}^{4}} \biggl{[}-M_{W}^{3}C_{\varphi D}^{2}+4M_{W}\Delta M^{2}C_{\varphi^{6}D^{2}}+8 (M_{Z}^{2}-3M_{W}^{2})\Delta MC_{\varphi D}C_{\varphi WB}\] (A.8) \[- 16\Delta M^{2}\Bigl{(}2M_{W}C_{\varphi WB}^{2}-\Delta MC_{WB \varphi^{4}}^{(1)}+\frac{M_{Z}^{2}-2M_{W}^{2}}{M_{W}}C_{WB\varphi^{4}}^{(3)} \Bigr{)}\] \[+ 32\Delta M^{3}(C_{\varphi B}+C_{\varphi W})C_{\varphi WB}\biggr{]}\;,\] \[\bar{g}^{\prime}_{SM} = \frac{2M_{Z}\sqrt{\pi\alpha_{em}}}{M_{W}}\;,\] \[\bar{g}^{\prime}_{D6} = -\frac{\bar{g}^{\prime}_{SM}\Delta M^{2}M_{W}^{2}}{4\pi\alpha_{em }M_{Z}^{2}}C_{\varphi D}\;,\]
\[\bar{g}^{\prime}_{D8} = \frac{\bar{g}^{\prime}_{SM}M_{W}^{4}\Delta M^{2}}{32\pi^{2}\alpha_{ em}^{2}M_{Z}^{4}}\bigg{[}(M_{W}^{2}+3M_{Z}^{2})C_{\varphi D}^{2}-16\Delta M^{2}C_{ \varphi WB}^{2}+16M_{W}\Delta MC_{\varphi D}C_{\varphi WB}\] (A.9) \[- 4\Delta M^{2}\Big{(}C_{\varphi^{6}D^{2}}+4C_{W^{2}\varphi^{4}}^{(3 )}\Big{)}\bigg{]}\;,\] \[\lambda_{SM} = \frac{\pi\alpha_{em}M_{H}^{2}M_{Z}^{2}}{\Delta M^{2}}\;,\] \[\lambda_{D6} = \frac{3\Delta M^{2}M_{W}^{2}}{\pi\alpha_{em}M_{Z}^{2}}C_{\varphi }-2M_{H}^{2}C_{\varphi\Box}+\frac{M_{H}^{2}M_{Z}^{2}}{2\Delta M^{2}}C_{\varphi D }+2M_{W}\Delta MC_{\varphi WB}\;,\] \[\lambda_{D8} = \frac{M_{W}^{2}}{4\pi^{2}\alpha_{em}^{2}M_{Z}^{4}}\bigg{[}12M_{W }^{2}\Delta M^{4}2C_{\varphi 8}-6M_{W}^{3}\Delta M^{2}C_{\varphi}(M_{W}C_{ \varphi D}+4\Delta MC_{\varphi WB})\] (A.10) \[+ \pi\alpha_{em}M_{Z}^{2}M_{H}^{2}\Big{(}-4\Delta M^{2}C_{\varphi^{ 6}\Box}+M_{Z}^{2}C_{\varphi^{2}D^{2}}+8M_{W}\Delta M(C_{\varphi B}+C_{\varphi W })C_{\varphi WB}\] \[+ \frac{2M_{W}(M_{Z}^{2}-2M_{W}^{2})}{\Delta M}C_{\varphi D}C_{ \varphi WB}-4M_{W}^{2}C_{\varphi WB}^{2}\] \[+ 4M_{W}\Delta MC_{WB\varphi^{4}}^{(1)}-4(M_{Z}^{2}-2M_{W}^{2})C_{ WB\varphi^{4}}^{(3)}\Big{)}\bigg{]}\;.\]
## Appendix B Operators and their naming used in SmeftFR
All dimension-6 operators in the Warsaw basis are given in Table B.1. The naming of SmeftFR variables corresponding to the WCs of these operators is straightforward: each variable name consists of the subscripts identifying a given operator (operator names are represented by strings, to avoid accidental use of similarly named variables for other purposes). For example, one may include in OpList6 (the list of dimension-6 operators, see examples in Sec. 6):
\[Q_{\varphi} \rightarrow\text{``phi"}\] \[Q_{\varphi D} \rightarrow\text{``phiD"}\] \[Q_{\varphi\Box} \rightarrow\text{``phiBox"}\] \[Q_{\varphi\widetilde{W}} \rightarrow\text{``phiWtilde"}\] \[Q_{lq}^{(3)} \rightarrow\text{``lq3"}\] \[Q_{quqd}^{(8)} \rightarrow\text{``quqd8"}\] \[....\]
Similarly, SmeftFR takes as input the bosonic dimension-8 operators from Tables B.2, B.3 and B.4. For example, one can use the following names in the list of dimension-8 operators:
\[Q_{\varphi^{4}D^{4}}^{(1)} \rightarrow\text{``phi4D4n1"}\] \[Q_{\varphi^{6}\Box} \rightarrow\text{``phi6Box"}\] \[Q_{G^{2}B^{2}}^{(4)} \rightarrow\text{``G2B2n4"}\] \[Q_{W^{2}B\varphi^{2}}^{(2)} \rightarrow\text{``W2Bphi2n2"}\] \[Q_{W^{2}\varphi^{2}D^{2}}^{(1)} \rightarrow\text{``W2phi2D2n1"}\]
Table B.2 collects the pure Higgs operators, i.e. operators constructed only out of the Higgs doublet, \(\varphi\), and covariant derivatives. There, we performed a change of basis in the operators of the \(\varphi^{6}D^{2}\) class so that they have an immediate connection with the Warsaw basis. The original operators were defined in [13] as
\[Q^{(1)}_{\varphi^{6}} =(\varphi^{\dagger}\varphi)^{2}(D_{\mu}\varphi^{\dagger}D^{\mu} \varphi)\,,\] \[Q^{(2)}_{\varphi^{6}} =(\varphi^{\dagger}\varphi)(\varphi^{\dagger}\tau^{I}\varphi)(D_ {\mu}\varphi^{\dagger}\tau^{I}D^{\mu}\varphi)\,,\]
and here we use instead the set
\[Q_{\varphi^{6}\Box} =(\varphi^{\dagger}\varphi)^{2}\Box(\varphi^{\dagger}\varphi)\,,\] \[Q_{\varphi^{6}D^{2}} =(\varphi^{\dagger}\varphi)(\varphi^{\dagger}D_{\mu}\varphi)^{*} (\varphi^{\dagger}D^{\mu}\varphi)\,,\]
which naturally extends the definition of the dimension-6 operators \(Q_{\varphi\Box}\) and \(Q_{\varphi D}\) from Table B.1. This change of basis is consistent with the rest of the basis from ref. [13]. A proof of this result can be found in appendix F of ref. [94] for any order in the EFT expansion. Additionally, we added the number of covariant derivatives to the naming of the operators that belong to the third class, \(\varphi^{4}D^{4}\), to avoid confusion with the SM quartic Higgs operator, \(\varphi^{4}\).
Table B.3 collects the operators that are constructed purely from gauge field strengths. Therefore, each operator there contains exactly four field strengths, and the operator classes are further divided into \(X^{4}\), where only one type of field strength (of the \(B\), \(W\) or \(G\) gauge fields) appears in the operator, \(X^{3}X^{\prime}\), where the \(G\) field strength appears thrice together with a \(B\) field strength in the operator, and finally \(X^{2}X^{\prime 2}\), where the operators consist of two pairs of different field strengths. The notation in this table follows exactly ref. [13]. Finally, Table B.4 collects the operators that are constructed from a combination of Higgs doublets, \(\varphi\), and gauge field strengths.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{\(X^{3}\)} & \multicolumn{2}{|c|}{\(\varphi^{6}\) and \(\varphi^{4}D^{2}\)} & \multicolumn{2}{|c|}{\(\psi^{2}\varphi^{3}\)} \\ \hline \(Q_{G}\) & \(f^{ABC}G_{u}^{A\nu}G_{\nu}^{B\rho}G_{\rho}^{C\mu}\) & \(Q_{\varphi}\) & \((\varphi^{\dagger}\varphi)^{3}\) & \(Q_{e\varphi}\) & \((\varphi^{\dagger}\varphi)(\bar{l}_{p}e_{r}\varphi)\) \\ \(Q_{\widetilde{G}}\) & \(f^{ABC}\widetilde{G}_{u}^{A\nu}G_{\nu}^{B\rho}G_{\rho}^{C\mu}\) & \(Q_{\varphi\Box}\) & \((\varphi^{\dagger}\varphi)\Box(\varphi^{\dagger}\varphi)\) & \(Q_{u\varphi}\) & \((\varphi^{\dagger}\varphi)(\bar{q}_{p}u_{r}\widetilde{\varphi})\) \\ \(Q_{W}\) & \(\epsilon^{IJK}W_{\mu}^{I\nu}W_{\nu}^{J\rho}W_{\rho}^{K\mu}\) & \(Q_{\varphi D}\) & \(\left(\varphi^{\dagger}D^{\mu}\varphi\right)^{*}\left(\varphi^{\dagger}D_{\mu }\varphi\right)\) & \(Q_{d\varphi}\) & \((\varphi^{\dagger}\varphi)(\bar{q}_{p}d_{r}\varphi)\) \\ \(Q_{\widetilde{W}}\) & \(\epsilon^{IJK}\widetilde{W}_{\mu}^{I\nu}W_{\nu}^{J\rho}W_{\rho}^{K\mu}\) & & & & \\ \hline \hline \multicolumn{2}{|c|}{\(X^{2}\varphi^{2}\)} & \multicolumn{2}{|c|}{\(\psi^{2}X\varphi\)} & \multicolumn{2}{|c|}{\(\psi^{2}\varphi^{2}D\)} \\ \hline \(Q_{\varphi G}\) & \(\varphi^{\dagger}\varphi\,G_{\mu\nu}^{A}G^{A\mu\nu}\) & \(Q_{eW}\) & \((\bar{l}_{p}\sigma^{\mu\nu}e_{r})\tau^{I}\varphi W_{\mu\nu}^{I}\) & \(Q_{\varphi l}^{(1)}\) & \(i(\varphi^{\dagger}\overleftrightarrow{D}_{\mu}\varphi)(\bar{l}_{p}\gamma^{ \mu}l_{r})\) \\ \(Q_{\varphi\widetilde{G}}\) & \(\varphi^{\dagger}\varphi\,\widetilde{G}_{\mu\nu}^{A}G^{A\mu\nu}\) & \(Q_{eB}\) & \((\bar{l}_{p}\sigma^{\mu\nu}e_{r})\varphi B_{\mu\nu}\) & \(Q_{\varphi l}^{(3)}\) & \(i(\varphi^{\dagger}\overleftrightarrow{D}_{\mu}^{I}\varphi)(\bar{l}_{p}\tau^ {I}\gamma^{\mu}l_{r})\) \\ \(Q_{\varphi W}\) & \(\varphi^{\dagger}\varphi\,W_{\mu\nu}^{I}W^{I\mu\nu}\) & \(Q_{uG}\) & \((\bar{q}_{p}\sigma^{\mu\nu}T^{A}u_{r})\widetilde{\varphi}\,G_{\mu\nu}^{A}\) & \(Q_{\varphi e}\) & \(i(\varphi^{\dagger}\overleftrightarrow{D}_{\mu}\varphi)(\bar{e}_{p}\gamma^{ \mu}e_{r})\) \\ \(Q_{\varphi\widetilde{W}}\) & \(\varphi^{\dagger}\varphi\,\widetilde{W}_{\mu\nu}^{I}W^{I\mu\nu}\) & \(Q_{uW}\) & \((\bar{q}_{p}\sigma^{\mu\nu}u_{r})\tau^{I}\widetilde{\varphi}\,W_{\mu\nu}^{I}\) & \(Q_{\varphi q}^{(1)}\) & \(i(\varphi^{\dagger}\overleftrightarrow{D}_{\mu}\varphi)(\bar{q}_{p}\gamma^{ \mu}q_{r})\) \\ \(Q_{\varphi B}\) & \(\varphi^{\dagger}\varphi\,B_{\mu\nu}B^{\mu\nu}\) & \(Q_{uB}\) & \((\bar{q}_{p}\sigma^{\mu\nu}u_{r})\widetilde{\varphi}\,B_{\mu\nu}\) & \(Q_{\varphi q}^{(3)}\) & \(i(\varphi^{\dagger}\overleftrightarrow{D}_{\mu}^{I}\varphi)(\bar{q}_{p}\tau^ {I}\gamma^{\mu}q_{r})\) \\ \(Q_{\varphi\widetilde{B}}\) & \(\varphi^{\dagger}\varphi\,\widetilde{B}_{\mu\nu}B^{\mu\nu}\) & \(Q_{dG}\) & \((\bar{q}_{p}\sigma^{\mu\nu}T^{A}d_{r})\varphi\,G_{\mu\nu}^{A}\) & \(Q_{\varphi u}\) & \(i(\varphi^{\dagger}\overleftrightarrow{D}_{\mu}\varphi)(\bar{u}_{p}\gamma^{ \mu}u_{r})\) \\ \(Q_{\varphi WB}\) & \(\varphi^{\dagger}\tau^{I}\varphi\,W_{\mu\nu}^{I}B^{\mu\nu}\) & \(Q_{dW}\) & \((\bar{q}_{p}\sigma^{\mu\nu}d_{r})\tau^{I}\varphi\,W_{\mu\nu}^{I}\) & \(Q_{\varphi d}\) & \(i(\varphi^{\dagger}\overleftrightarrow{D}_{\mu}\varphi)(\bar{d}_{p}\gamma^{ \mu}d_{r})\) \\ \(Q_{\varphi\widetilde{W}B}\) & \(\varphi^{\dagger}\tau^{I}\varphi\,\widetilde{W}_{\mu\nu}^{I}B^{\mu\nu}\) & \(Q_{dB}\) & \((\bar{q}_{p}\sigma^{\mu\nu}d_{r})\varphi\,B_{\mu\nu}\) & \(Q_{\varphi ud}\) & \(i(\widetilde{\varphi}^{\dagger}D_{\mu}\varphi)(\bar{u}_{p}\gamma^{\mu}d_{r})\) \\ \hline 
\hline \multicolumn{2}{|c|}{\((LL)(LL)\)} & \multicolumn{2}{|c|}{\((RR)(RR)\)} & \multicolumn{2}{|c|}{\((LL)(RR)\)} \\ \hline \(Q_{ll}\) & \((\bar{l}_{p}\gamma_{\mu}l_{r})(\bar{l}_{s}\gamma^{\mu}l_{t})\) & \(Q_{ee}\) & \((\bar{e}_{p}\gamma_{\mu}e_{r})(\bar{e}_{s}\gamma^{\mu}e_{t})\) & \(Q_{le}\) & \((\bar{l}_{p}\gamma_{\mu}l_{r})(\bar{e}_{s}\gamma^{\mu}e_{t})\) \\ \(Q_{qq}^{(1)}\) & \((\bar{q}_{p}\gamma_{\mu}q_{r})(\bar{q}_{s}\gamma^{\mu}q_{t})\) & \(Q_{uu}\) & \((\bar{u}_{p}\gamma_{\mu}u_{r})(\bar{u}_{s}\gamma^{\mu}u_{t})\) & \(Q_{lu}\) & \((\bar{l}_{p}\gamma_{\mu}l_{r})(\bar{u}_{s}\gamma^{\mu}u_{t})\) \\ \(Q_{qq}^{(3)}\) & \((\bar{q}_{p}\gamma_{\mu}\tau^{I}q_{r})(\bar{q}_{s}\gamma^{\mu}\tau^{I}q_{t})\) & \(Q_{dd}\) & \((\bar{d}_{p}\gamma_{\mu}d_{r})(\bar{d}_{s}\gamma^{\mu}d_{t})\) & \(Q_{ld}\) & \((\bar{l}_{p}\gamma_{\mu}l_{r})(\bar{d}_{s}\gamma^{\mu}d_{t})\) \\ \(Q_{lq}^{(1)}\) & \((\bar{l}_{p}\gamma_{\mu}l_{r})(\bar{q}_{s}\gamma^{\mu}q_{t})\) & \(Q_{eu}\) & \((\bar{e}_{p}\gamma_{\mu}e_{r})(\bar{u}_{s}\gamma^{\mu}u_{t})\) & \(Q_{qe}\) & \((\bar{q}_{p}\gamma_{\mu}q_{r})(\bar{e}_{s}\gamma^{\mu}e_{t})\) \\ \(Q_{lq}^{(3)}\) & \((\bar{l}_{p}\gamma_{\mu}\tau^{I}l_{r})(\bar{q}_{s}\gamma^{\mu}\tau^{I}q_{t})\) & \(Q_{ed}\) & \((\bar{e}_{p}\gamma_{\mu}e_{r})(\bar{d}_{s}\gamma^{\mu}d_{t})\) & \(Q_{qu}^{(1)}\) & \((\bar{q}_{p}\gamma_{\mu}q_{r})(\bar{u}_{s}\gamma^{\mu}u_{t})\) \\ & & \(Q_{ud}^{(1)}\) & \((\bar{u}_{p}\gamma_{\mu}u_{r})(\bar{d}_{s}\gamma^{\mu}d_{t})\) & \(Q_{qu}^{(8)}\
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \multicolumn{2}{|c|}{\(\varphi^{8}\)} & \multicolumn{2}{|c|}{\(\varphi^{6}D^{2}\)} & \multicolumn{2}{|c|}{\(\varphi^{4}D^{4}\)} \\ \hline \(Q_{\varphi^{8}}\) & \((\varphi^{\dagger}\varphi)^{4}\) & \(Q_{\varphi^{6}\Box}\) & \((\varphi^{\dagger}\varphi)^{2}\Box(\varphi^{\dagger}\varphi)\) & \(Q^{(1)}_{\varphi^{4}D^{4}}\) & \((D_{\mu}\varphi^{\dagger}D_{\nu}\varphi)(D^{\nu}\varphi^{\dagger}D^{\mu}\varphi)\) \\ & & \(Q_{\varphi^{6}D^{2}}\) & \((\varphi^{\dagger}\varphi)(\varphi^{\dagger}D_{\mu}\varphi)^{*}(\varphi^{ \dagger}D^{\mu}\varphi)\) & \(Q^{(2)}_{\varphi^{4}D^{4}}\) & \((D_{\mu}\varphi^{\dagger}D_{\nu}\varphi)(D^{\mu}\varphi^{\dagger}D^{\nu}\varphi)\) \\ & & & \(Q^{(3)}_{\varphi^{4}D^{4}}\) & \((D_{\mu}\varphi^{\dagger}D^{\mu}\varphi)(D_{\nu}\varphi^{\dagger}D^{\nu}\varphi)\) \\ \hline \end{tabular}
\end{table}
Table B.2: Dimension 8 operators containing only the Higgs field. Table taken from ref. [13] except for the two operators in \(\varphi^{6}D^{2}\) class that have been modified as discussed in this Appendix.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \multicolumn{2}{|c|}{\(X^{4}\), \(X^{3}X^{\prime}\)} & \multicolumn{2}{|c|}{\(X^{2}X^{\prime 2}\)} \\ \hline \(Q^{(1)}_{G^{4}}\) & \((G^{A}_{\mu\nu}G^{A\mu\nu})(G^{B}_{\rho\sigma}G^{B\rho\sigma})\) & \(Q^{(1)}_{G^{2}W^{2}}\) & \((W^{I}_{\mu\nu}W^{I\mu\nu})(G^{A}_{\rho\sigma}G^{A\rho\sigma})\) \\ \(Q^{(2)}_{G^{2}}\) & \((G^{A}_{\mu\nu}\widetilde{G}^{A\mu\nu})(G^{B}_{\rho\sigma}\widetilde{G}^{B\rho \sigma})\) & \(Q^{(2)}_{G^{2}W^{2}}\) & \((W^{I}_{\mu\nu}\widetilde{W}^{I\mu\nu})(G^{A}_{\rho\sigma}\widetilde{G}^{A\rho \sigma})\) \\ \(Q^{(3)}_{G^{4}}\) & \((G^{A}_{\mu\nu}G^{B\mu\nu})(G^{A}_{\rho\sigma}G^{B\rho\sigma})\) & \(Q^{(3)}_{G^{2}W^{2}}\) & \((W^{I}_{\mu\nu}G^{A\mu\nu})(W^{I}_{\rho\sigma}G^{A\rho\sigma})\) \\ \(Q^{(4)}_{G^{4}}\) & \((G^{A}_{\mu\nu}\widetilde{G}^{B\mu\nu})(G^{A}_{\rho\sigma}\widetilde{G}^{B\rho \sigma})\) & \(Q^{(4)}_{G^{2}W^{2}}\) & \((W^{I}_{\mu\nu}\widetilde{G}^{A\mu\nu})(W^{I}_{\rho\sigma}\widetilde{G}^{A\rho \sigma})\) \\ \(Q^{(5)}_{G^{4}}\) & \((G^{A}_{\mu\nu}G^{A\mu\nu})(G^{B}_{\rho\sigma}\widetilde{G}^{B\rho\sigma})\) & \(Q^{(5)}_{G^{2}W^{2}}\) & \((W^{I}_{\mu\nu}\widetilde{W}^{I\mu\nu})(G^{A}_{\rho\sigma}G^{A\rho\sigma})\) \\ \(Q^{(6)}_{G^{4}}\) & \((G^{A}_{\mu\nu}G^{B\mu\nu})(G^{A}_{\rho\sigma}\widetilde{G}^{B\rho\sigma})\) & \(Q^{(6)}_{G^{2}W^{2}}\) & \((W^{I}_{\mu\nu}W^{I\mu\nu})(G^{A}_{\rho\sigma}\widetilde{G}^{A\rho\sigma})\) \\ \(Q^{(7)}_{G^{4}}\) & \(d^{ABE}d^{CDE}(G^{A}_{\mu\nu}G^{B\mu\nu})(G^{C}_{\rho\sigma}G^{D\rho\sigma})\) & \(Q^{(7)}_{G^{2}W^{2}}\) & \((W^{I}_{\mu\nu}G^{A\mu\nu})(W^{I}_{\rho\sigma}\widetilde{G}^{A\rho\sigma})\) \\ \(Q^{(8)}_{G^{4}}\) & \(d^{ABE}d^{CDE}(G^{A}_{\mu\nu}\widetilde{G}^{B\mu\nu})(G^{C}_{\rho\sigma} \widetilde{G}^{D\rho\sigma})\) & \(Q^{(1)}_{G^{2}B^{2}}\) & \((B_{\mu\nu}B^{\mu\nu})(G^{A}_{\rho\sigma}G^{A\rho\sigma})\) \\ \(Q^{(9)}_{G^{4}}\) & \(d^{ABE}d^{CDE}(G^{A}_{\mu\nu}G^{B\mu\nu})(G^{C}_{\rho\sigma}\widetilde{G}^{D \rho\sigma})\) & \(Q^{(1)}_{G^{2}B^{2}}\) & \((B_{\mu\nu}\widetilde{B}^{\mu\nu})(G^{A}_{\rho\sigma}G^{A\rho\sigma})\) \\ \(Q^{(1)}_{W^{4}}\) & \((W^{I}_{\mu\nu}W^{I\mu\nu})(W^{J}_{\rho\sigma}W^{J\rho\sigma})\) & \(Q^{(3)}_{G^{2}B^{2}}\) & \((B_{\mu\nu}G^{A\mu\nu})(B_{\rho\sigma}G^{A\rho\sigma})\) \\ \(Q^{(2)}_{W^{4}}\) & \((W^{I}_{\mu\nu}\widetilde{W}^{I\mu\nu})(W^{J}_{\rho\sigma}\widetilde{W}^{J\rho \sigma})\) & \(Q^{(4)}_{G^{2}B^{2}}\) & \((B_{\mu\nu}\widetilde{G}^{A\mu\nu})(B_{\rho\sigma}\widetilde{G}^{A\rho\sigma})\) \\ \(Q^{(3)}_{W^{4}}\) & \((W^{I}_{\mu\nu}W^{J\mu\nu})(W^{J}_{\rho\sigma}\widetilde{W}^{J\rho\sigma})\) & \(Q^{(5)}_{G^{2}B^{2}}\) & \((B_{\mu\nu}\widetilde{B}^{\mu\nu})(G^{A}_{\rho\sigma}G^{A\rho\sigma})\) \\ \(Q^{(4)}_{W^{4}}\) & \((W^{I}_{\mu\nu}\widetilde{W}^{J\mu\nu})(W^{I}_{\rho\sigma}\widetilde{W}^{J\rho \sigma})\) & \(Q^{(6)}_{G^{2}B^{2}}\) & \((B_{\mu\nu}B^{\mu\nu})(G^{A}_{\rho\sigma}\widetilde{G}^{A\rho\sigma})\) \\ \(Q^{(5)}_{W^{4}}\) & \((W^{I}_{\mu\nu}W^{I\mu\nu})(W^{J}_{\rho\sigma}\widetilde{W}^{J\rho\sigma})\) & \(Q^{(7)}_{G^{2}B^{2}}\) & \((B_{\mu\nu}G^{A\mu\nu})(B_{\rho\sigma}\widetilde{G}^{A\rho\sigma})\) \\ \(Q^{(6)}_{W^{4}}\) & \((W^{I}_{\mu\nu}W^{J\mu\nu})(W^{J}_{\rho\sigma}\widetilde{W}^{J\rho\sigma})\) & \(Q^{(7)}_{G^{2}B^{2}}\) & \((B_{\mu\nu}G^{A\mu\nu})(B_{\rho\sigma}\widetilde{G}^{A\rho\sigma})\) \\ \(Q^{(1)}_{W^{4}}\) & \((W^{I}_{\mu\nu}W^{J\mu\nu})(W^{J}_{\rho\sigma}\widetilde{W}^{J\rho\sigma})\) & \(Q^{(3)}_{W^{2}B^{2}}\) & \((B_{\mu\nu}B^{\mu\nu})(W^{I}_{\rho\sigma}W^{I\rho\sigma})\) \\ \(Q^{(1)}_{B^{4}}\) & 
\((B_{\mu\nu}B^{\mu\nu})(B_{\rho\sigma}\widetilde{B}^{\rho\sigma})\) & \(Q^{(3)}_{W^{2}B^{2}}\) & \((B_{\mu\nu}W^{I\mu\nu})(B_{\rho\sigma}W^{I\rho\sigma})\) \\ \(Q^{(3)}_{B^{4}}\) & \((B_{\mu\nu}B^{\mu\nu})(B_{\rho\sigma}\widetilde{B}^{\rho\sigma})\) & \(Q^{(4)}_{W^{2}B^{2}}\) & \((B_{\mu\nu}\widetilde{W}^{I\mu
\begin{table}
\begin{tabular}{|c|c|c|} \hline \multicolumn{2}{|c|}{\(X^{3}\varphi^{2}\)} & \multicolumn{2}{|c|}{\(X^{2}\varphi^{4}\)} \\ \hline \(Q^{(1)}_{G^{3}\varphi^{2}}\) & \(f^{ABC}(\varphi^{\dagger}\varphi)G^{A\nu}_{\mu}G^{B\rho}_{\nu}G^{C\mu}_{\rho}\) & \(Q^{(1)}_{G^{2}\varphi^{4}}\) & \((\varphi^{\dagger}\varphi)^{2}G^{A}_{\mu\nu}G^{A\mu\nu}\) \\ \(Q^{(2)}_{G^{3}\varphi^{2}}\) & \(f^{ABC}(\varphi^{\dagger}\varphi)G^{A\mu}_{\nu}G^{B\rho}_{\nu}G^{C\mu}_{\rho}\) & \(Q^{(2)}_{G^{2}\varphi^{4}}\) & \((\varphi^{\dagger}\varphi)^{2}\widetilde{G}^{A}_{\mu\nu}G^{A\mu\nu}\) \\ \(Q^{(1)}_{W^{3}\varphi^{2}}\) & \(\epsilon^{IJK}(\varphi^{\dagger}\varphi)W^{I\nu}_{\mu}W^{J\rho}_{\nu}W^{K\mu}_ {\rho}\) & \(Q^{(1)}_{W^{3}\varphi^{4}}\) & \((\varphi^{\dagger}\varphi)^{2}W^{I}_{\mu\nu}W^{I\mu\nu}\) \\ \(Q^{(2)}_{W^{3}\varphi^{2}}\) & \(\epsilon^{IJK}(\varphi^{\dagger}\varphi)W^{I\nu}_{\mu}W^{J\rho}_{\nu}\widetilde {W}^{K\mu}_{\rho}\) & \(Q^{(2)}_{W^{2}\varphi^{4}}\) & \((\varphi^{\dagger}\varphi)^{2}\widetilde{W}^{I}_{\mu\nu}W^{I\mu\nu}\) \\ \(Q^{(1)}_{W^{3}B\varphi^{2}}\) & \(\epsilon^{IJK}(\varphi^{\dagger}\tau^{I}\varphi)B^{\nu}_{\mu}W^{J\rho}_{\nu}W^{ K\mu}_{\rho}\) & \(Q^{(3)}_{W^{2}\varphi^{4}}\) & \((\varphi^{\dagger}\tau^{I}\varphi)(\varphi^{\dagger}\tau^{J}\varphi)W^{I}_{\mu \nu}W^{J\mu\nu}\) \\ \(Q^{(2)}_{W^{2}B\varphi^{2}}\) & \(\epsilon^{IJK}(\varphi^{\dagger}\tau^{I}\varphi)(\widetilde{B}^{\mu\nu}W^{J}_ {\nu\rho}W^{K\rho}_{\mu}+B^{\mu\nu}W^{J}_{\nu\rho}\widetilde{W}^{K\rho}_{\mu})\) & \(Q^{(4)}_{W^{2}\varphi^{4}}\) & \((\varphi^{\dagger}\tau^{I}\varphi)(\varphi^{\dagger}\tau^{J}\varphi)\widetilde {W}^{I}_{\mu\nu}W^{J\mu\nu}\) \\ & & & \(Q^{(1)}_{W^{3}B\varphi^{4}}\) & \((\varphi^{\dagger}\varphi)(\varphi^{\dagger}\tau^{I}\varphi)W^{I}_{\mu\nu}B^{ \mu\nu}\) \\ & & & \(Q^{(2)}_{W^{3}B\varphi^{4}}\) & \((\varphi^{\dagger}\varphi)(\varphi^{\dagger}\tau^{I}\varphi)\widetilde{W}^{I}_{ \mu\nu}B^{\mu\nu}\) \\ & & & \(Q^{(1)}_{B^{2}\varphi^{4}}\) & \((\varphi^{\dagger}\varphi)^{2}B_{\mu\nu}B^{\mu\nu}\) \\ & & & \(Q^{(2)}_{B^{2}\varphi^{4}}\) & \((\varphi^{\dagger}\varphi)^{2}\widetilde{B}_{\mu\nu}B^{\mu\nu}\) \\ \hline \hline \multicolumn{2}{|c|}{\(X^{2}\varphi^{2}D^{2}\)} & \multicolumn{2}{|c|}{\(X\varphi^{4}D^{2}\)} \\ \hline \(Q^{(1)}_{G^{2}\varphi^{2}D^{2}\)} & \((D^{\mu}\varphi^{\dagger}D^{\nu}\varphi)G^{A}_{\mu\rho}G^{A\rho}\) & \(Q^{(1)}_{W\varphi^{4}D^{2}}\) & \((\varphi^{\dagger}\varphi)(D^{\mu}\varphi^{\dagger}\tau^{I}D^{\nu}\varphi)W^{I} _{\mu\nu}\) \\ \(Q^{(2)}_{G^{2}\varphi^{2}D^{2}\)} & \((D^{\mu}\varphi^{\dagger}D_{\mu}\varphi)G^{A}_{\nu\rho}G^{A\nu\rho}\) & \(Q^{(2)}_{W\varphi^{4}D^{2}}\) & \((\varphi^{\dagger}\varphi)(D^{\mu}\varphi^{\dagger}\tau^{I}D^{\nu}\varphi) \widetilde{W}^{I}_{\mu\nu}\) \\ \(Q^{(3)}_{G^{2}\varphi^{2}D^{2}\)} & \((D^{\mu}\varphi^{\dagger}D_{\mu}\varphi)G^{A}_{\nu\rho}\widetilde{G}^{A\nu\rho}\) & \(Q^{(3)}_{W\varphi^{4}D^{2}}\) & \(\epsilon^{IJK}(\varphi^{\dagger}\tau^{I}\varphi)(D^{\mu}\varphi^{\dagger}\tau ^{J}D^{\nu}\varphi)W^{K}_{\mu\nu}\) \\ \(Q^{(1)}_{W^{2}\varphi^{2}D^{2}\)} & \((D^{\mu}\varphi^{\dagger}D^{\nu}\varphi)W^{I}_{\mu\rho}W^{I\rho}_{\nu}\) & \(Q^{(4)}_{W\varphi^{4}D^{2}}\) & \(\epsilon^{IJK}(\varphi^{\dagger}\tau^{I}\varphi)(D^{\mu}\varphi^{\dagger}\tau ^{J}D^{\nu}\varphi)\widetilde{W}^{K}_{\mu\nu}\) \\ \(Q^{(2)}_{W^{2}\varphi^{2}D^{2}\)} & \((D^{\mu}\varphi^{\dagger}D_{\mu}\varphi)W^{I}_{\nu\rho}W^{I\nu\rho}\) & \(Q^{(1)}_{B\varphi^{4}D^{2}}\) & \((\varphi^{\dagger}\varphi)(D^{\mu}\varphi^{\dagger}D^{\nu}\varphi)B_{\mu\nu}\) \\ 
\(Q^{(3)}_{W^{2}\varphi^{2}D^{2}\)} & \((D^{\mu}\varphi^{\dagger}D_{\mu}\varphi)W^{I}_{\nu\rho}\widetilde{W}^{I\nu\rho}\) & \(Q^{(2)}_{B\varphi^{4}D^{2}}\) & \((\varphi^{\dagger}\varphi)(D^{\mu}\varphi^{\dagger}D^{\nu}\varphi)\widetilde{B}_{\mu\nu}\) \\ \(Q^{(4)}_{W^{2}\varphi^{2}D^{2}\)} & \(i\epsilon^{IJK}(D^{\mu}\varphi^{\dagger}\tau^{I}D^{\nu}\varphi)W^{J}_{\mu\rho}W^{ K\rho}_{\nu}\) & & \\ \(Q^{(5)}_{W^{2}\varphi^{2}D^{2}\)} & \(\epsilon^{IJK}(D^{\mu}\varphi^{\dagger}\tau^{I}D^{\nu}\varphi)(W^{J}_{\mu\rho} \widetilde{W}^{K\rho}_{\nu}-\widetilde{W}^{J}_{\mu\rho}W^{K\rho}_{\nu})\) & & \\ \(Q^{(6)}_{W^{2}\varphi^{2}D^{2}\)} & \(i\epsilon^{IJK}(D^{\mu}\varphi^{\dagger}\tau^{I}D^{\nu}\varphi)(W^{J}_{\mu\rho} \widetilde{W}^{K\rho}_{\nu}+\widetilde{W}^{J}_{\mu\rho}W^{K\rho}_{\nu})\) & & \\ \(Q^{(1)}_{WB}\varphi^{2}D^{2}\) & \((D^{\mu}\varphi^{\dagger}\tau^{I}D_{\mu}\varphi)B_{\nu\rho}W^{I\nu\rho}\) & & \\ \(Q^{(2)}_{W^{3}B\varphi^{2}D^{2}\)} & \((D^{\mu}\varphi^{\dagger}\tau^{I}D^{\nu}\varphi)B_{\nu\rho}\widetilde{W}^{I\nu\rho}\) & & \\ \(Q^{(3)}_{WB}\varphi^{2}D^{2}\) & \(i(D^{\mu}\varphi^{\dagger}\tau^{I}D^{\nu}\varphi)(B_{\mu\rho}W^{I\rho}_{\nu}-B_{ \nu\rho}W^{I\rho}_{\mu})\) & & \\ \(Q^{(4)}_{WB}\varphi^{2}D^{2}\) & \((D^{\mu}\varphi^{\dagger}\tau^{I}D^{\nu}\varphi)(B_{\mu\rho}W^{I\rho}_{\nu}+B_{ \nu\rho}W^{I\rho}_{\mu})\) & & \\ \(Q^{(5)}_{WB}\varphi^{2}D^{2}\) & \((D^{\mu}\varphi^{\dagger}\tau^{I}D^{\nu}\varphi)(B_{\mu\rho}\widetilde{W}^{I\rho} _{\nu}-B_{\nu\rho}\widetilde{W}^{I\rho}_{\mu})\) & & \\ \(Q^{(6)}_{WB}\varphi^{2}D^{2}\) & |
2304.03812 | High-order Spatial Interactions Enhanced Lightweight Model for Optical
Remote Sensing Image-based Small Ship Detection | Accurate and reliable optical remote sensing image-based small-ship detection
is crucial for maritime surveillance systems, but existing methods often
struggle with balancing detection performance and computational complexity. In
this paper, we propose a novel lightweight framework called
\textit{HSI-ShipDetectionNet} that is based on high-order spatial interactions
and is suitable for deployment on resource-limited platforms, such as
satellites and unmanned aerial vehicles. HSI-ShipDetectionNet includes a
prediction branch specifically for tiny ships and a lightweight hybrid
attention block for reduced complexity. Additionally, the use of a high-order
spatial interactions module improves advanced feature understanding and
modeling ability. Our model is evaluated using the public Kaggle marine ship
detection dataset and compared with multiple state-of-the-art models including
small object detection models, lightweight detection models, and ship detection
models. The results show that HSI-ShipDetectionNet outperforms the other models
in terms of recall, and mean average precision (mAP) while being lightweight
and suitable for deployment on resource-limited platforms. | Yifan Yin, Xu Cheng, Fan Shi, Xiufeng Liu, Huan Huo, Shengyong Chen | 2023-04-07T18:40:49Z | http://arxiv.org/abs/2304.03812v1 | High-order Spatial Interactions Enhanced Lightweight Model for Optical Remote Sensing Image-based Small Ship Detection
###### Abstract
Accurate and reliable optical remote sensing image-based small-ship detection is crucial for maritime surveillance systems, but existing methods often struggle with balancing detection performance and computational complexity. In this paper, we propose a novel lightweight framework called _HSI-ShipDetectionNet_ that is based on high-order spatial interactions and is suitable for deployment on resource-limited platforms, such as satellites and unmanned aerial vehicles. HSI-ShipDetectionNet includes a prediction branch specifically for tiny ships and a lightweight hybrid attention block for reduced complexity. Additionally, the use of a high-order spatial interactions module improves advanced feature understanding and modeling ability. Our model is evaluated using the public Kaggle marine ship detection dataset and compared with multiple state-of-the-art models including small object detection models, lightweight detection models, and ship detection models. The results show that HSI-ShipDetectionNet outperforms the other models in terms of recall, and mean average precision (mAP) while being lightweight and suitable for deployment on resource-limited platforms.
Small ship detection, Optical remote sensing images, Convolutional neural networks, Spatial interaction, Lightweight model.
## I Introduction
Monitoring the position and behavior of ships plays a critical role in maintaining marine traffic safety and supporting social and economic development. The use of optical remote sensing images provides valuable information for various applications such as fishery management, marine spatial planning, marine casualty investigation, and pollution treatment [1, 2]. However, as the altitude and angle of satellite photography vary, ship targets exhibit large variations in scale, so the images contain a large number of small target ships. The complex sea state can significantly impact the detection performance of small ships. Waves can cause variations in pixel values in the optical image due to the reflection of the sun and skylight off their slopes [3]. Additionally, satellites may encounter clouds or sunlight when observing the Earth, which can make it difficult to distinguish ships from the background, even for the naked eye [4]. Therefore, it is still difficult to accurately locate and recognize small ships in optical remote sensing images.
Over the past few decades, there has been a significant amount of research on small ship detection in optical remote sensing images. Traditional methods have mainly focused on feature design, including ship candidate extraction and ship identification [5]. Ship candidate extraction techniques such as statistical threshold segmentation [6, 7], visual saliency [8], and local feature descriptors [9] have been commonly used. In the identification stage, the support vector machine (SVM) [10] has been a frequently adopted method for ship classification. However, traditional methods may not be effective in complex conditions, as the impact of variable weather factors on optical imaging is uncontrollable. Additionally, these algorithms rely heavily on manual design and expert experience for feature generation, resulting in poor generalization ability.
Recently, the use of convolutional neural networks (CNNs) has greatly improved the accuracy and efficiency of ship detection. However, the continuous downsampling characteristic of CNNs can still present challenges for detecting small ships in optical remote sensing images. One important way to improve the detection accuracy of small objects is to address the issue of multi-scale feature learning. Shallow layers of convolutional neural networks (CNNs) typically have higher resolutions and smaller receptive fields, which are more suitable for detecting small objects [11]. Several methods have been developed to make use of these shallow layers for small object detection, including the Single Shot MultiBox Detector (SSD) [12] and the top-down feature pyramid network (FPN) with lateral connections [13]. In addition to multi-scale feature learning, the use of contextual information can also be beneficial for improving object detection performance, particularly for small objects with insufficient pixels [11]. This is because specific objects often appear in specific environments, such as ships sailing in the sea. Context-based small object detection methods can be divided into two categories: local context modeling [14, 15] and global context modeling [16, 17, 18].
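To make the top-down multi-scale fusion mentioned above concrete, a single FPN merge step combines an upsampled deeper map with a 1×1 "lateral" projection of a shallower, higher-resolution map. The following PyTorch sketch is purely illustrative (channel sizes and the smoothing layer are assumptions, not the configuration of any specific detector cited here):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FPNMerge(nn.Module):
    """One top-down FPN step: upsample the deeper feature map and add a 1x1 lateral projection."""
    def __init__(self, c_shallow, c_deep, c_out=256):
        super().__init__()
        self.lateral = nn.Conv2d(c_shallow, c_out, kernel_size=1)        # align channels of the shallow map
        self.reduce = nn.Conv2d(c_deep, c_out, kernel_size=1)            # align channels of the deep map
        self.smooth = nn.Conv2d(c_out, c_out, kernel_size=3, padding=1)  # reduce aliasing after the addition

    def forward(self, shallow, deep):
        top_down = F.interpolate(self.reduce(deep), size=shallow.shape[-2:], mode="nearest")
        return self.smooth(self.lateral(shallow) + top_down)

# Example: a 64-channel shallow map at 80x80 fused with a 256-channel deep map at 40x40.
fused = FPNMerge(64, 256)(torch.randn(1, 64, 80, 80), torch.randn(1, 256, 40, 40))
print(fused.shape)  # torch.Size([1, 256, 80, 80])
```

The high-resolution output retains the fine spatial detail of the shallow layer while inheriting the semantics of the deep layer, which is why such merged levels are better suited to small objects.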
Despite the advancements made by CNN-based detection
networks in improving the detection performance of small objects, several limitations persist. These limitations include:
* The utilization of multi-scale feature learning has had a positive impact on the detection accuracy of small ships. However, it has been observed that most existing networks are limited to three scales [19, 20]. This is insufficient, as the shallow features, which are crucial for detecting tiny objects, are not fully utilized. How to make full use of shallow features remains challenging.
* The use of CNN-based models for small object detection has been shown to be effective, but these models often have a high number of parameters and are complex. For example, the TPH-YOLOv5 detector [18] is well-known for its proficiency in small object detection, but it requires 60 million parameters. This complexity can lead to time delays when transmitting data from the platform to ground stations for processing [21]. To address this, it is necessary to migrate ship detection models from ground to space-borne platforms. However, hardware resources on such platforms are often limited, such as the NVIDIA Jetson TX2 which only has 8 GB of memory [22]. This makes it difficult to reduce model complexity while still maintaining accuracy in ship detection. Therefore, finding an optimal balance between model accuracy and complexity is an ongoing research challenge.
* As the depth of the network layers increases, the high-level features at the end of the backbone exhibit an abundance of combinatorial information. While these higher-level features carry richer semantic information, the location information they convey is ambiguous. This ambiguity can negatively impact the accuracy of small object detection, particularly for objects with insufficient pixels [23], making it challenging to accurately localize and regress small target ships. Additionally, the complexity of the background texture and harsh environmental conditions can weaken the ability of CNNs to extract features of ships, making it difficult to distinguish small ships from their background.
Given the limitations of existing methods in small ship detection and the need to balance detection performance with the limited storage space available on satellites, this paper proposes a novel lightweight ship detection framework based on high-order spatial interactions (HSI). The contributions of this study can be summarized as follows:
* This study proposes an enhanced ship detection network, HSI-ShipDetectionNet, which is designed to be more lightweight and effective for ship detection in optical remote sensing images. Furthermore, the proposed network demonstrates improved accuracy in the localization and identification of small ships.
* To make the detection model more accurate in detecting tiny ships, we add a prediction branch for tiny ships (\(P_{tiny}\)). To support this branch, we increase the number of layers in the neck of the detection framework, making the model more sensitive to tiny ships. Then, we design a lightweight hybrid attention block (LHAB) to replace the SE block in GhostNet, which is the backbone of HSI-ShipDetectionNet, reducing the number of parameters, computations, and storage space required by the model. Finally, a high-order spatial interactions (HSI-Former) module is introduced at the tail of the backbone, extending the interaction between spatial elements to arbitrary order and strengthening the model's ability to understand and process high-level features in deep layers. Within it, we use large convolutional kernels for context modeling to improve the accuracy of ship position regression.
* We comprehensively evaluate the proposed ship detection framework using optical satellite remote sensing images. The performance of the proposed model is compared with that of state-of-the-art small object detection models, lightweight detection models, and ship detection models. The experimental results indicate that the proposed HSI-ShipDetectionNet demonstrates remarkable performance in detecting small ships under diverse sea conditions, as well as under a wide range of altitude and angle variations. Furthermore, the lightweight nature of the proposed model makes it highly suitable for deployment on resource-constrained satellite platforms.
The remainder of this article is organized as follows: Section II reviews related work on this topic. Section III outlines the framework of our discussed methodology. Section IV describes the experimental results and analysis, and Section V concludes the whole study.
## II Related Work
### _Methods for Small Ship Detection_
Accurate and dependable detection of small ships is crucial for maritime surveillance systems. In recent years, there have been numerous efforts to improve the performance of small ship detection.
With the development of deep learning, the use of convolutional neural networks (CNNs) for ship detection has become mainstream. For example, Wu et al. [24] proposed a multi-scale detection strategy that uses a coarse-to-fine ship detection network (CF-SDN) with a feature pyramid network (FPN) to improve the resolution and semantic information of shallow and deep feature maps, respectively. Xie et al. [20] introduced an adaptive feature enhancement (AFE) module into FPN to adaptively reinforce the locations of deep ship features based on shallow features with rich spatial information. Wang et al. [25] developed a ship detection model based on YOLOX that incorporates a multi-scale convolution (MSC) for feature fusion and a feature transformer module (FTM) for context modeling. Jin et al. [26] input patches containing targets and their surroundings into a CNN to improve small ship detection results. Tian et al. [4] proposed an image enhancement module based on a generative adversarial network (GAN), and introduced a receptive field expansion module to improve the capability to extract features from target ships of different sizes.
Despite the remarkable detection performance demonstrated by existing ship detection models, these models are often characterized by large and complex network architectures, as evidenced by the substantial number of parameters and computational demands. This presents a significant challenge for resource-constrained applications, where the available hardware resources are limited. To overcome this limitation, we design a lightweight attention block and construct a lightweight ship detection framework, which reduces the number of parameters, computations, and storage space required by the model.
### _Methods for Lightweight CNNs_
Lightweight design of CNNs is crucial for deploying models to resource-limited devices such as satellites, as it helps to reduce the number of parameters and computational requirements. A number of approaches have been proposed in the literature to achieve this goal, including SqueezeNet [27], which reduces the number of parameters by using \(1\times 1\) convolution kernels to decrease the size of the feature maps; the MobileNet series [28, 29, 30], which uses depthwise separable convolution to factorize standard convolution into a depthwise convolution and a pointwise convolution, reducing the number of parameters and computational requirements; ShuffleNet [31, 32], which replaces pointwise convolution with pointwise group convolution and performs channel shuffle to further reduce the number of parameters and address the disadvantages of group convolution; and GhostNet [33], which embraces abundant and redundant information through cheap operations as a cost-efficient way to improve network performance.
In the field of ship detection, it can be challenging to balance the performance and computational complexity of the model. To address this issue, Li et al. [34] optimized the backbone of YOLOv3 using dense connections and introduced spatial separation convolution to replace standard convolution in FPN, resulting in a significant reduction in parameters. Jiang et al. [35] developed YOLO-V4-light by reducing the number of convolutional layers in CSP-DarkNet53. Liu et al. [36] also improved upon YOLOv4 by substituting the original backbone with MobileNetv2, significantly reducing the complexity of the ship detection model. Zheng et al. [37] used BN scaling factor \(\gamma\) to compress the YOLOv5 network, achieving higher detection accuracy and shorter computational time compared to other object detection models.
### _Methods for Attention Mechanism_
Attention mechanisms have become a key concept in the field of computer vision, with the ability to significantly improve the performance of networks [38]. Channel attention allows networks to model dependencies between the channels of their convolutional features, such as in the Squeeze-and-excitation (SE) network [39], which adaptively recalibrates channel-wise features using global information to selectively highlight important features. Wang et al. [40] further developed this concept with the efficient channel attention (ECA) module, which can be implemented using 1D convolution and has been shown to be more efficient and effective. Spatial attention, on the other hand, focuses on identifying specific positions in the image that should be emphasized, such as in CCNet [41], which captures full-image contextual information using criss-cross attention. The Convolutional Block Attention Module (CBAM) [42] combines channel and spatial attention, emphasizing important features in both dimensions.
The Transformer model, proposed by Vaswani et al. [43], has been a major milestone in the development of attention mechanisms, and its application to the field of computer vision is known as the Vision Transformer (ViT) [44]. The core idea of the Transformer is to use self-attention to dynamically generate weights that establish long-range dependencies. Self-attention achieves this through matrix multiplication between queries, keys, and values, allowing for the interaction of two spatial elements. However, it has been noted that the Transformer architecture is limited in its capability to model higher-order spatial interactions, which can potentially enhance the overall visual modeling performance [45]. In this work, we propose a novel lightweight ship detection framework for small ships that includes the following elements: an extension of FPN through the addition of a predictive branch for tiny ships, the use of the lightweight hybrid attention block (LHAB), and the introduction of the high-order spatial interactions (HSI-Former) module, resulting in more accurate and reliable ship detection in surveillance systems. Ablation and comparison experiments will be conducted to demonstrate the superior performance of our model.
## III Methodology
### _Overview_
The proposed lightweight HSI-ShipDetectionNet for small ship detection, as depicted in Fig. 1, consists of three key components: the Backbone, the HSI-Former module, and the Neck. The input optical remote sensing images undergo processing in the backbone, which extracts the detailed features of the ship. To address the challenge of small ship detection, a predictive branch specifically designed for tiny ships is added to the shallow layer of the backbone, as discussed in detail in Section III-B. To further reduce the complexity of the model, the Ghost bottleneck in GhostNet has been improved with the implementation of a new Lightweight Hybrid Attention Block (LHAB), which replaces the SE block [39]. This results in an LHAB-Gbneck with a reduced number of parameters, computational effort, and occupied storage space, as explained in Section III-C. In addition, the HSI-Former module, which is designed to reinforce the contextual learning and modeling capability of advanced features in deep layers, is introduced at the tail of the backbone. The function and implementation of the HSI-Former module are detailed in Section III-D. Finally, the neck layer fuses the features, and four separate output heads are employed to predict tiny, small, medium, and large ship targets, respectively.
### _The Predictive Branch of Tiny Ships_
The problem of low detection accuracy for small ships in satellite imagery is a well-known issue. This is due to the continuous down-sampling of features by the convolutional layers in the backbone network, which results in the loss of resolution and information for small ships. Small ships are often present in satellite images, making it crucial to address this problem to improve overall detection accuracy. To address this issue, we propose adding a branch that predicts tiny ships in stage 1 of the backbone, as shown in Fig. 1. This branch, named \(P_{tiny}\), is specifically designed to be more sensitive to tiny ships. Additionally, the number of layers in the PANet in the neck of the detection frame is increased to enhance the feature fusion effect for tiny ships. This structure gradually fuses shallow features with deep layers, ensuring that the feature maps of different sizes contain both semantic information and feature information of ships. This ultimately ensures the detection accuracy of ships with different scales, particularly for tiny ships. By extracting features before the continuous downsampling process, the detection accuracy of small ships is expected to be improved.
Along with this new branch, we also add an additional set of anchors specifically tailored for tiny ships based on the original three groups of anchors of YOLOv5, resulting in a total of four groups of anchors. Instead of using the anchors generated by COCO dataset as in the original YOLOv5, we employ clustering to generate new anchors specifically for ship sizes in our dataset. This makes the regression of the anchors more accurate. As per the research in [46], we have chosen 1-IOU as the distance for the clustering instead of Euclidean distance for better results. The sizes of the four groups of anchors are as follows: (7,16, 10,9, 18,7), (16,15, 20,27, 34,16), (37,30, 60,21, 26,58) and (63,34, 45,54, 66,57), each of which has three different sizes of anchors, resulting in a total of twelve anchors.
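As a concrete illustration of this clustering step, the following is a minimal sketch of k-means over box shapes using 1-IoU as the distance; the function names, the median cluster update, and the `k=12` setting are our own illustrative choices and are not taken from the paper's implementation.

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between (w, h) pairs, assuming boxes and anchors share the same top-left corner."""
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = boxes[:, 0:1] * boxes[:, 1:2] + (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=12, iters=300, seed=0):
    """k-means over ground-truth box shapes with 1 - IoU as the distance."""
    boxes = np.asarray(boxes, dtype=float)          # (N, 2) array of (w, h)
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        dist = 1.0 - iou_wh(boxes, anchors)         # (N, k) distances
        assign = dist.argmin(axis=1)
        new = np.array([np.median(boxes[assign == c], axis=0) if np.any(assign == c)
                        else anchors[c] for c in range(k)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors[np.argsort(anchors.prod(axis=1))]  # sort by area: tiny -> large
```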
### _LHAB-GhostCNN_
We propose using GhostNet as the backbone of the detection network for small ships. The core idea behind GhostNet is "cheap operation" which is well-suited for small ship detection. The authors of GhostNet found that some of the feature maps generated by the first residual group in ResNet-50 were very similar, indicating that there was abundant and redundant information in the feature maps. Rather than discarding these redundant feature maps, they chose to accept them in a cost-efficient way.
Small ships occupy fewer pixel units, making the information about them extremely valuable. Removing redundant information to reduce the complexity of the network is therefore not a good approach for small ship detection. However, GhostNet's approach of embracing redundant information in a cost-effective way is beneficial for small target detection. Therefore, we select GhostNet as the backbone of our lightweight ship detector and further simplify it. We name this architecture _LHAB-GhostCNN_.

Fig. 1: Overview of the proposed HSI-ShipDetectionNet for small ship detection in optical remote sensing images. In the **Backbone**, a predictive branch is added to the shallow layer specifically for detecting tiny ships. The **Lightweight Hybrid Attention Block (LHAB)** in **LHAB-Gbneck** is designed, resulting in a significant reduction in the number of parameters, computational effort, and storage space required by the network. The **HSI-Former module** is added to the end of the Backbone to enhance the contextual learning and modeling of advanced features in the deep layers. The **Neck** layer then performs feature fusion, and four output heads are used to predict tiny, small, medium, and large ships respectively.
#### III-C1 Ghost Module
The Ghost module is a crucial element of the proposed LHAB-GhostCNN architecture for small ship detection. Its purpose is to produce the same number of feature maps as a standard convolution while reducing the number of parameters and computational effort. Specifically, when the number of input feature maps is \(C\) and the number of output feature maps after a standard convolution is \(D\), the Ghost module can also produce \(D\) feature maps while minimizing the number of parameters and computations, without discarding the redundant information. The process can be defined as follows.
For the input feature \(X\in\mathbb{R}^{H\times W\times C}\), the \(m\) intrinsic feature maps are first generated by a standard convolution, represented by the set \(Y_{1}\):
\[Y_{1}=Conv\left(X\right),\quad Y_{1}\in\mathbb{R}^{H^{\prime}\times W^{\prime} \times m} \tag{1}\]
where \(m\leq D\). To obtain the desired \(D\) feature maps, each of the \(m\) intrinsic feature maps in \(Y_{1}\) undergoes \(s\) cheap operations, implemented through depthwise separable convolution (DW-Conv), resulting in \(m\times s\) ghost feature maps \(Y_{2}\):
\[\begin{split} Y_{2}&=\Phi\left(Y_{1}\right):y_{ij} =DW\_Conv_{ij}\left(y_{i}\right),\\ \forall i&=1,\cdots,m,\quad j=1,\cdots,s\end{split} \tag{2}\]
where \(y_{i}\) represents the \(i\)-th intrinsic feature map in \(Y_{1}\), and the \(j\)-th feature map \(y_{ij}\) is generated by the \(j\)-th linear operation \(DW\_Conv_{ij}\). As a result, these \(m\) intrinsic feature maps can eventually generate \(ms\) feature maps, that is, \(Y_{2}\in\mathbb{R}^{H^{\prime}\times W^{\prime}\times ms}\). The final output of the Ghost module is the concatenation of \(Y_{1}\) and \(Y_{2}\):
\[Y_{out}=Y_{1}\oplus Y_{2} \tag{3}\]
By employing the Ghost module, the same \(D\) feature maps as a standard convolution can be obtained at a lower cost. Consequently, the output feature maps \(Y_{out}\) have \(m+ms=D\) channels.
**Analysis of complexities.** We define \(r_{F}\) as the speed-up ratio of FLOPs of the Ghost module to FLOPs of the standard convolution:
\[\begin{split} r_{F}&=\frac{k\cdot k\cdot C\cdot m \cdot H^{\prime}\cdot W^{\prime}+d\cdot d\cdot m\cdot s\cdot H^{\prime}\cdot W ^{\prime}}{k\cdot k\cdot C\cdot D\cdot H^{\prime}\cdot W^{\prime}}\\ &=\frac{C\cdot m+m\cdot s\cdot 9}{C\cdot m\cdot(1+s)}=\frac{C+s \cdot 9}{C\cdot(1+s)}\approx\frac{1}{1+s}\end{split} \tag{4}\]
where \(k=1\) is the standard convolution kernel size, while \(d=3\) is the kernel size of each linear operation, and \(C\gg s\). Similarly, the compression ratio \(r_{P}\) of the parameters of the Ghost module to the parameters of the standard convolution is:
\[\begin{split} r_{P}&=\frac{k\cdot k\cdot C\cdot m +d\cdot d\cdot m\cdot s}{k\cdot k\cdot C\cdot D}\\ &=\frac{C\cdot m+m\cdot s\cdot 9}{C\cdot m\cdot(1+s)}=\frac{C+s \cdot 9}{C\cdot(1+s)}\approx\frac{1}{1+s}\end{split} \tag{5}\]
In this paper, we set the value of \(s\) to 1. As a result, the Ghost module can effectively reduce the number of parameters and the computational effort of the network by half.
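To make the above concrete, here is a minimal PyTorch sketch of the Ghost module (a standard convolution producing the \(m\) intrinsic maps, a depthwise 3×3 "cheap operation" producing the ghost maps, and a concatenation); the BatchNorm/ReLU placement and the layer names are our assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Ghost module with s = 1: half of the D output maps come from a standard
    convolution, the other half from a cheap depthwise operation (Eqs. 1-3)."""
    def __init__(self, in_ch, out_ch, kernel_size=1, dw_size=3, s=1, stride=1):
        super().__init__()
        m = out_ch // (1 + s)                      # number of intrinsic maps
        self.primary = nn.Sequential(              # Y1 = Conv(X)
            nn.Conv2d(in_ch, m, kernel_size, stride, kernel_size // 2, bias=False),
            nn.BatchNorm2d(m), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(                # Y2 = DW_Conv(Y1), the cheap operation
            nn.Conv2d(m, m * s, dw_size, 1, dw_size // 2, groups=m, bias=False),
            nn.BatchNorm2d(m * s), nn.ReLU(inplace=True))
        self.out_ch = out_ch

    def forward(self, x):
        y1 = self.primary(x)
        y2 = self.cheap(y1)
        return torch.cat([y1, y2], dim=1)[:, :self.out_ch]   # Y_out = Y1 concat Y2
```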
#### III-C2 LHAB-Gbneck
Similar to the basic residual block in ResNet [47], the Ghost bottleneck with LHAB (LHAB-Gbneck) integrates two Ghost modules and a shortcut, as shown in Fig. 2. The first Ghost module serves as an expansion layer to increase the number of channels, while the second Ghost module reduces the number of channels to match the shortcut connection. The shortcut is connected between the inputs and outputs of these two Ghost modules. When Stride=2, a depthwise separable convolution (DW-Conv) is added after the first Ghost module to reduce the size of the feature maps by half; in this case, the shortcut path goes through a downsampling layer to match the size of the feature maps. If LHAB=1, the Lightweight Hybrid Attention Block (LHAB) is selected. Compared with the SE attention [39] used in the original Ghost bottleneck, LHAB further reduces the complexity of the network while enhancing the response of key features.
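The following sketch shows one way the pieces described above could be wired together in code. It reuses the GhostModule sketch from the previous subsection, takes the attention block as a constructor argument (the LHAB itself is sketched later in this section), and the exact normalisation layout of the shortcut path is an assumption.

```python
import torch.nn as nn

class LHABGbneck(nn.Module):
    """Ghost bottleneck: expansion Ghost module -> optional stride-2 DW-Conv
    -> optional attention -> projection Ghost module, plus a shortcut."""
    def __init__(self, in_ch, mid_ch, out_ch, stride=1, attention=None):
        super().__init__()
        self.expand = GhostModule(in_ch, mid_ch)          # expansion layer
        self.dw = (nn.Sequential(
            nn.Conv2d(mid_ch, mid_ch, 3, stride, 1, groups=mid_ch, bias=False),
            nn.BatchNorm2d(mid_ch)) if stride == 2 else nn.Identity())
        self.attn = attention if attention is not None else nn.Identity()
        self.project = GhostModule(mid_ch, out_ch)        # projection layer
        # shortcut must match the spatial size and channel count of the main path
        self.shortcut = (nn.Identity() if stride == 1 and in_ch == out_ch else
                         nn.Sequential(
                             nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False),
                             nn.BatchNorm2d(in_ch),
                             nn.Conv2d(in_ch, out_ch, 1, bias=False),
                             nn.BatchNorm2d(out_ch)))

    def forward(self, x):
        y = self.project(self.attn(self.dw(self.expand(x))))
        return y + self.shortcut(x)
```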
The SE block, a widely used channel attention mechanism, has two limitations: it ignores spatial attention and it adds complexity to the model. To balance the trade-off between model performance and complexity, we propose the Lightweight Hybrid Attention Block (LHAB), a lightweight and efficient attention block. LHAB consists of a channel attention block and a spatial attention block, enabling it to highlight significant information in both dimensions simultaneously.
Fig. 2: LHAB-Gbneck. Stride=1 and Stride=2 go through different branches.

**Channel attention block.** The channel attention block is a key component of the LHAB, which aims to capture interdependencies between channels. SENet [39] employed global average pooling to aggregate channel-wise statistics, but it overlooks the potential of max-pooling in inferring fine channel attention, as pointed out by Woo et al. [42]. Therefore, they proposed to use both average-pooling and max-pooling operations in tandem and generated the channel attention map using a shared network. In contrast, we believe that max-pooled features and average-pooled features each play distinct roles and therefore require dedicated parameters to store unique feature information. Therefore, we do not use shared parameters and instead employ two different one-dimensional convolutions for the max-pooled features and average-pooled features, respectively. This approach allows us to store different information and acquire cross-channel interactions without reducing the channel dimensionality. Furthermore, since we use one-dimensional convolution, the increase in the number of parameters is negligible even if no parameters are shared. The specific operation details are outlined below.
As shown in Fig. 3, we simultaneously apply max-pooling and average-pooling operations to the input feature map \(\mathbf{U}\in\mathbb{R}^{\mathbf{H}\times\mathbf{W}\times\mathbf{C}}\), generating max-pooled features \(\mathbf{U}_{\mathbf{C}}^{\mathbf{max}}\) and average-pooled features \(\mathbf{U}_{\mathbf{C}}^{\mathbf{avg}}\), respectively. In contrast to SE [39], which used fully connected layers to achieve cross-channel interactions, we use two different one-dimensional convolutions (\(\mathbf{C}\mathbf{1}\mathbf{D}_{\mathbf{k}}\)) of size \(k\) for \(\mathbf{U}_{\mathbf{C}}^{\mathbf{max}}\) and \(\mathbf{U}_{\mathbf{C}}^{\mathbf{avg}}\), respectively, to avoid the negative effects of channel dimensionality reduction and reduce model complexity. The kernel size \(k\) is defined as the coverage of \(k\) neighbors to participate in the interaction between channels, which is calculated using the equation from ECA-Net [40]:
\[k=\psi\left(C\right)=\left|\frac{log_{2}\left(C\right)}{\gamma}+\frac{b}{ \gamma}\right|_{odd} \tag{6}\]
where \(C\) is the number of channels and \(\left|t\right|_{odd}\) represents the nearest odd number to \(t\). \(\gamma\) and \(b\) are set to 2 and 1, respectively, in this paper. Through the mapping \(\psi\), the kernel size \(k\) can be adaptively determined from the number of channels \(C\).
Then we merge these two feature vectors \(\mathbf{C}\mathbf{1}\mathbf{D}_{\mathbf{k_{1}}}\left(\mathbf{U}_{\mathbf{C}}^ {\mathbf{max}}\right)\) and \(\mathbf{C}\mathbf{1}\mathbf{D}_{\mathbf{k_{2}}}\left(\mathbf{U}_{\mathbf{C}}^ {\mathbf{avg}}\right)\) using element-wise summation and pass the result through the sigmoid function. The final outcome is obtained by multiplying the original feature map \(\mathbf{U}\) with the result of the sigmoid function to obtain \(\mathbf{U}^{\prime}\) for adaptive feature refinement. In a word, the channel attention block is summarized as:
\[\mathbf{U}^{\prime} =\sigma\left(\mathbf{C}\mathbf{1}\mathbf{D}_{\mathbf{k_{1}}} \left(MP\left(\mathbf{U}\right)\right)\oplus\mathbf{C}\mathbf{1}\mathbf{D}_{ \mathbf{k_{2}}}\left(AP\left(\mathbf{U}\right)\right)\right)\otimes\mathbf{U} \tag{7}\] \[=\sigma\left(\mathbf{C}\mathbf{1}\mathbf{D}_{\mathbf{k_{1}}} \left(\mathbf{U}_{\mathbf{C}}^{\mathbf{max}}\right)\oplus\mathbf{C}\mathbf{1} \mathbf{D}_{\mathbf{k_{2}}}\left(\mathbf{U}_{\mathbf{C}}^{\mathbf{avg}} \right)\right)\otimes\mathbf{U}\]
where \(\sigma\) refers to the sigmoid function, and \(MP\) and \(AP\) refer to the max-pooling and average-pooling operations, respectively.
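A minimal PyTorch sketch of this channel attention block is given below, including the adaptive kernel size of Eq. (6). Keeping two separate 1D convolutions for the max-pooled and average-pooled statistics mirrors the no-parameter-sharing choice described above; the class and function names are illustrative.

```python
import math
import torch
import torch.nn as nn

def adaptive_kernel(channels, gamma=2, b=1):
    """Eq. (6): odd kernel size derived from the channel count (as in ECA-Net)."""
    k = int(abs(math.log2(channels) / gamma + b / gamma))
    return k if k % 2 == 1 else k + 1

class ChannelAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        k = adaptive_kernel(channels)
        # two *separate* 1D convolutions for max-pooled and avg-pooled features
        self.conv_max = nn.Conv1d(1, 1, k, padding=k // 2, bias=False)
        self.conv_avg = nn.Conv1d(1, 1, k, padding=k // 2, bias=False)

    def forward(self, u):                                  # u: (B, C, H, W)
        mx = torch.amax(u, dim=(2, 3))                     # (B, C) max-pooled features
        av = torch.mean(u, dim=(2, 3))                     # (B, C) avg-pooled features
        mx = self.conv_max(mx.unsqueeze(1)).squeeze(1)     # 1D conv over the channel axis
        av = self.conv_avg(av.unsqueeze(1)).squeeze(1)
        w = torch.sigmoid(mx + av).unsqueeze(-1).unsqueeze(-1)
        return u * w                                       # Eq. (7)
```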
**Spatial attention block.** To strengthen the inter-spatial relationship of features, we design a spatial attention block. As a complement to channel attention, which focuses on "what" is essential, spatial attention concentrates on "where" the important and informative area is. Similar to the channel attention block, we first apply max-pooling and average-pooling operations along the channel axis to generate two 2D feature maps and then send them to two different two-dimensional convolution layers, which do not share parameters. We describe the detailed operation below.
As shown in Fig. 4, for the intermediate feature map \(\mathbf{U}^{\prime}\in\mathbb{R}^{\mathbf{H}\times\mathbf{W}\times\mathbf{C}}\) from the channel attention block, we aggregate channel information by max-pooling and average-pooling operations to obtain two new maps: \(\mathbf{U}_{\mathbf{S}}^{\mathbf{max}}\in\mathbb{R}^{\mathbf{H}\times\mathbf{W}\times\mathbf{1}}\) and \(\mathbf{U}_{\mathbf{S}}^{\mathbf{avg}}\in\mathbb{R}^{\mathbf{H}\times\mathbf{W}\times\mathbf{1}}\). These are then convolved by two different two-dimensional convolution layers (\(\mathbf{C}\mathbf{2}\mathbf{D}_{\mathbf{7}\times\mathbf{7}}\)), respectively. The kernel size of these two-dimensional convolutions is \(7\times 7\), which helps to generate larger receptive fields. Then, we merge these two feature maps \(\mathbf{C}\mathbf{2}\mathbf{D}_{\mathbf{7}\times\mathbf{7}}\left(\mathbf{U}_{\mathbf{S}}^{\mathbf{max}}\right)\) and \(\mathbf{C}\mathbf{2}\mathbf{D}_{\mathbf{7}\times\mathbf{7}}\left(\mathbf{U}_{\mathbf{S}}^{\mathbf{avg}}\right)\) using element-wise summation. The result is activated by the sigmoid function and finally multiplied by \(\mathbf{U}^{\prime}\) to obtain the final map \(\mathbf{U}^{\prime\prime}\). In summary, the spatial attention block is:
\[\mathbf{U}^{\prime\prime} =\sigma\left(\mathbf{C}\mathbf{2}\mathbf{D}_{\mathbf{7}\times \mathbf{7}}\left(MP\left(\mathbf{U}^{\prime}\right)\right)\oplus\mathbf{C} \mathbf{2}\mathbf{D}_{\mathbf{7}\times\mathbf{7}}\left(AP\left(\mathbf{U}^{ \prime}\right)\right)\right)\otimes\mathbf{U}^{\prime} \tag{8}\] \[=\sigma\left(\mathbf{C}\mathbf{2}\mathbf{D}_{\mathbf{7}\times \mathbf{7}}\left(\mathbf{U}_{\mathbf{S}}^{\mathbf{max}}\right)\oplus\mathbf{C} \mathbf{2}\mathbf{D}_{\mathbf{7}\times\mathbf{7}}\left(\mathbf{U}_{\mathbf{S}}^ {\mathbf{avg}}\right)\right)\otimes\mathbf{U}^{\prime}\]
where \(\sigma\) refers to the sigmoid function, and \(MP\) and \(AP\) refer to the max-pooling and average-pooling operations, respectively.
In conclusion, the LHAB module is composed of a channel attention block and a spatial attention block, arranged sequentially with the channel attention block placed before the spatial attention block. The LHAB significantly reduces the number of parameters, the computational effort, and the occupied storage space of the network, while still effectively capturing important information from the feature maps.
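A matching sketch of the spatial attention block (Eq. (8)) and the sequential composition of the two blocks into LHAB is shown below. It reuses the ChannelAttention sketch above; again the two 7×7 convolutions deliberately do not share parameters, and the class names and layer details are assumptions.

```python
class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        p = kernel_size // 2
        # two *separate* 2D convolutions for max-pooled and avg-pooled maps
        self.conv_max = nn.Conv2d(1, 1, kernel_size, padding=p, bias=False)
        self.conv_avg = nn.Conv2d(1, 1, kernel_size, padding=p, bias=False)

    def forward(self, u):                                   # u: (B, C, H, W)
        mx = torch.amax(u, dim=1, keepdim=True)             # (B, 1, H, W)
        av = torch.mean(u, dim=1, keepdim=True)             # (B, 1, H, W)
        w = torch.sigmoid(self.conv_max(mx) + self.conv_avg(av))
        return u * w                                        # Eq. (8)

class LHAB(nn.Module):
    """Channel attention followed by spatial attention."""
    def __init__(self, channels):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()

    def forward(self, u):
        return self.spatial(self.channel(u))
```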
#### III-C3 LHAB-GhostNet
The architecture of the proposed LHAB-GhostNet, which serves as the backbone for the HSI-ShipDetectionNet, is summarized in Table I. In this table, the parameters \(Exp\) and \(Out\) indicate the number of intermediate and output channels, respectively, and \(s\) represents the stride. The architecture of LHAB-GhostNet is based on MobileNetV3 [30], with the bottleneck block replaced by LHAB-Gbneck. The first layer of LHAB-GhostNet is a standard convolution operation, and the network is divided into 5 stages based on the input feature map sizes. The stride of the last LHAB-Gbneck in each stage (except for stage 5) is set to 2. Furthermore, LHAB is integrated into some of the LHAB-Gbnecks, as illustrated in Table I, to further simplify the backbone.

Fig. 3: Diagram of channel attention block of LHAB. Because the max-pooling and average-pooling operations play different roles in aggregating spatial-dimension information, we design an adaptive channel attention block containing these two operations. The max-pooled and average-pooled features are passed through two separate one-dimensional convolutions, and then activated by the sigmoid function. The resulting vectors are then multiplied by the input feature map for adaptive feature refinement.

Fig. 4: Diagram of spatial attention block of LHAB. Because the max-pooling and average-pooling operations play different roles in aggregating channel-dimension information, we design an adaptive spatial attention block containing these two operations. The two 2D maps are passed through two different two-dimensional convolutions and further activated by the sigmoid function. Finally, the resulting vectors are multiplied by the input feature map for adaptive feature refinement.
### _High-Order Spatial Interaction Mechanism_
In recent years, the Transformer has gained popularity in vision applications and has challenged the dominance of CNNs by achieving excellent results in classification, detection, and segmentation tasks. Scholars have started exploring the use of Transformer in the field of small object detection, as seen in recent studies such as [18] and [48]. The success of Transformer in vision tasks can be attributed to its core architecture, which is self-attention. Self-attention's ability to capture long-range dependencies allows the model to learn contextual information more effectively, which in turn facilitates the detection of small objects. Moreover, self-attention can perform second-order spatial interactions by performing matrix multiplication between queries, keys, and values, which enhances the model's ability to identify spatial relationships.
Despite its effectiveness, self-attention has some limitations that need to be addressed. For instance, its spatial interaction ability is limited to two orders, while research by Rao et al. [45] has shown that higher-order spatial interactions can improve visual models' modeling ability. Moreover, self-attention introduces a quadratic complexity as it requires each token to attend to every other token. Lastly, self-attention lacks some of the inductive biases present in CNNs, which can make it difficult to generalize well with limited data. To overcome these limitations, we introduce the Iterative Gated Convolution (g\({}^{n}\)Conv), a convolution-based architecture that replaces self-attention in our method. Specifically, we take g\({}^{3}\)Conv as an example to illustrate its principle, as shown in Fig. 5.
To process the input feature \(x\in\mathbb{R}^{H\times W\times C}\), we first use a linear projection layer implemented as a convolution operation to mix the channels. After this operation, the number of channels is doubled to obtain the intermediate feature \(x^{\prime}\in\mathbb{R}^{H\times W\times 2C}\). The formula for this process can be expressed as follows:
\[x^{\prime}=Conv_{in}\left(x\right) \tag{9}\]
Then, the feature map \(x^{\prime}\) is split along the channel dimension, which is expressed as follows:
\[\left[a_{0},b_{0},b_{1},b_{2}\right]=Split\left(x^{\prime}\right) \tag{10}\]
where the number of channels for \(a_{0}\) is \(\frac{C}{4}\), and the number of channels for \(b_{0}\), \(b_{1}\), and \(b_{2}\) is \(\frac{C}{4}\), \(\frac{C}{2}\), and \(C\), respectively. Then, depthwise separable convolutions (DW-Conv) are performed on \(b_{0}\), \(b_{1}\), and \(b_{2}\), and the results are iteratively subjected to gated convolution operations with \(a_{0}\), \(a_{1}\), and \(a_{2}\), respectively, by:
\[\begin{split} a_{1}&=h_{0}\left(a_{0}\right)\otimes DW \_Conv_{0}\left(b_{0}\right)\\ a_{2}&=h_{1}\left(a_{1}\right)\otimes DW\_Conv_{1} \left(b_{1}\right)\\ a_{3}&=h_{2}\left(a_{2}\right)\otimes DW\_Conv_{2} \left(b_{2}\right)\end{split} \tag{11}\]
where \(\otimes\) denotes element-wise multiplication. The role of \(\{h_{i}\}\) is to change the number of channels of \(a_{i}\) to match the number of channels of \(b_{i}\). When \(i=0\), \(h_{0}\) is an identity mapping; when \(i\) is 1 or 2, \(h_{i}\) doubles the channels of \(a_{i}\). Finally, the \(a_{3}\) obtained from the above steps is passed through a linear projection to obtain the final result of g\({}^{3}\)Conv:

\[y=Conv_{out}\left(a_{3}\right) \tag{12}\]

Fig. 5: **g\({}^{3}\)Conv.** We take g\({}^{3}\)Conv as an example to illustrate the principle of g\({}^{n}\)Conv. This module extends the spatial interactions to three orders, so that the correlation between features is gradually enhanced through the multiplications.
Based on the above analysis, g\({}^{3}\)Conv can be generalized to the n-order spatial interaction, i.e. g\({}^{n}\)Conv. For the input feature map \(x\in\mathbb{R}^{H\times W\times C}\), the process is similar to g\({}^{3}\)Conv, as follows:
\[x^{\prime}=Conv_{in}\left(x\right)\in\mathbb{R}^{H\times W\times 2C} \tag{13}\]
\[\left[a_{0}^{H\times W\times C_{0}},b_{0}^{H\times W\times C_{0}},\cdots,b_{n -1}^{H\times W\times C_{n-1}}\right]=Split\left(x^{\prime}\right) \tag{14}\]
where
\[C_{0}+\sum_{0\leq i\leq n-1}C_{i}=2C \tag{15}\]
\[C_{i}=\frac{C}{2^{n-i-1}},0\leq i\leq n-1 \tag{16}\]
Equation (16) specifies how the channel dimensions are allocated in each order of the g\({}^{n}\)Conv operation. This allocation is designed to reduce the number of channels used to compute lower orders, thereby avoiding a large computational overhead. After splitting the intermediate feature map \(x^{\prime}\), the gated convolution continues iteratively:
\[a_{i+1}=h_{i}\left(a_{i}\right)\otimes DW\_Conv_{i}\left(b_{i}\right),i=0,1, \cdots,n-1 \tag{17}\]
where,
\[h_{i}\left(x\right)=\left\{\begin{array}{ll}x,&i=0\\ Conv\left(C_{i-1},C_{i}\right),&1\leq i\leq n-1\end{array}\right. \tag{18}\]
The final result for g\({}^{n}\)Conv is acquired by equation (19), as follows:
\[y=Conv_{out}\left(a_{n}\right) \tag{19}\]
The proposed HSI-ShipDetectionNet model uses g\({}^{n}\)Conv instead of the self-attention mechanism found in the Transformer encoder to create the High-Order Spatial Interaction (HSI-Former) module. This is illustrated in Figure 1. The g\({}^{n}\)Conv offers several advantages over self-attention, including its ability to extend spatial interactions to higher orders, resulting in improved feature correlation. Moreover, using a convolution-based architecture avoids the quadratic complexity of self-attention, while channel division reduces computational cost. In addition, convolutional operations introduce inductive biases that are helpful for ship detection tasks, such as translation equivariance and locality [44]. This prior knowledge can be beneficial for network learning. In the g\({}^{n}\)Conv, the depthwise separable convolution utilizes large \(7\times 7\) convolution kernels to increase the receptive field. This improves context modeling and enhances the detection performance of small ships.
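A minimal PyTorch sketch of g\({}^{n}\)Conv following Eqs. (13)-(19) is given below, together with one possible HSI-Former block layout (normalisation, g\({}^{n}\)Conv, and an MLP in a Transformer-encoder-like arrangement). The channel split \(C_{i}=C/2^{n-i-1}\), the 7×7 depthwise kernels, and the iterative gating follow the text, while the block layout and layer choices (BatchNorm, GELU) are our reading of Fig. 1 and may differ from the authors' implementation.

```python
import torch
import torch.nn as nn

class GnConv(nn.Module):
    """g^nConv: n-order gated spatial interactions (Eqs. 13-19).
    `dim` must be divisible by 2**(order - 1)."""
    def __init__(self, dim, order=3, dw_kernel=7):
        super().__init__()
        self.order = order
        self.dims = [dim // 2 ** (order - i - 1) for i in range(order)]   # C_i
        self.conv_in = nn.Conv2d(dim, 2 * dim, 1)                          # Eq. (13)
        self.dw = nn.ModuleList(
            nn.Conv2d(c, c, dw_kernel, padding=dw_kernel // 2, groups=c)
            for c in self.dims)                                             # DW_Conv_i
        self.h = nn.ModuleList(
            nn.Conv2d(self.dims[i - 1], self.dims[i], 1)
            for i in range(1, order))                                       # h_i, Eq. (18)
        self.conv_out = nn.Conv2d(dim, dim, 1)                              # Eq. (19)

    def forward(self, x):
        x = self.conv_in(x)
        a, *b = torch.split(x, [self.dims[0]] + self.dims, dim=1)           # Eq. (14)
        for i in range(self.order):
            g = self.dw[i](b[i])                                            # context from DW-Conv
            a = g * (a if i == 0 else self.h[i - 1](a))                     # Eq. (17)
        return self.conv_out(a)

class HSIFormerBlock(nn.Module):
    """Transformer-encoder-style block with g^nConv replacing self-attention."""
    def __init__(self, dim, order=3, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.BatchNorm2d(dim)
        self.gnconv = GnConv(dim, order)
        self.norm2 = nn.BatchNorm2d(dim)
        self.mlp = nn.Sequential(nn.Conv2d(dim, dim * mlp_ratio, 1), nn.GELU(),
                                 nn.Conv2d(dim * mlp_ratio, dim, 1))

    def forward(self, x):
        x = x + self.gnconv(self.norm1(x))
        return x + self.mlp(self.norm2(x))
```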
## IV Experiments
### _Experimental Setup_
#### IV-A1 Settings
All experiments in this paper are conducted on a server equipped with NVIDIA Titan V100 GPUs, and the deep learning algorithms are implemented using PyTorch v1.9.0 and Python v3.8.0. During the training process, we set the batch size to 4 and use the SGD optimizer with momentum and weight decay of 0.937 and 5e-4, respectively, and an initial learning rate of 0.01. We stop training after 500 epochs. Rather than searching for the best hyperparameters in the hyperparameter space, we use the same training parameters as those in the corresponding models.
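For reference, the stated optimisation settings can be expressed as the snippet below; the model is only a placeholder, and everything other than the explicitly listed hyper-parameters (batch size 4, SGD with momentum 0.937 and weight decay 5e-4, initial learning rate 0.01, 500 epochs) is an assumption.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 16, 3)           # placeholder for HSI-ShipDetectionNet
batch_size, num_epochs = 4, 500       # values stated in the text
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.937, weight_decay=5e-4)
```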
#### IV-A2 Dataset
The dataset used in our experiments is sourced from the Kaggle competition for marine ship detection1. The dataset comprises 29GB of high-resolution optical remote sensing images, consisting of a total of 192,556 images in the training set and 15,606 images in the test set. Each image has a resolution of \(768\times 768\) pixels. To evaluate the effectiveness of our model in detecting small ships, we randomly select 1000 images from the dataset that contain small target ships and divide them into three subsets: a training set, a validation set, and a test set, with a ratio of 7:2:1.
Footnote 1: [https://www.kaggle.com/c/airbus-ship-detection](https://www.kaggle.com/c/airbus-ship-detection)
#### IV-A3 Evaluation Metrics
In order to provide a comprehensive evaluation of our proposed method, we consider not only the standard metrics of **Precision**, **Recall**, and the **mean Average Precision (mAP)**, but also the model size, the number of parameters, and the computational cost. These metrics are commonly used in the field of object detection and can provide a clear understanding of the performance of our model in comparison to other state-of-the-art methods.
These metrics are defined as follows:
\[Recall{=}\frac{TP}{TP+FN} \tag{20}\]
\[Precision{=}\frac{TP}{TP+FP} \tag{21}\]
\[mAP{=}\int_{0}^{1}Precision\left(Recall\right)d(Recall) \tag{22}\]
where TP, FP, and FN represent true positives, false positives, and false negatives, respectively, and Precision(Recall) denotes precision as a function of recall, i.e., the Precision-Recall curve.
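The sketch below shows one common way to compute Eqs. (20)-(22) from a ranked list of detections; the all-point interpolation of the precision-recall curve is an assumption, since conventions differ between benchmarks.

```python
import numpy as np

def average_precision(scores, is_tp, num_gt):
    """AP as the area under the precision-recall curve (Eq. 22).
    scores: confidence of each detection; is_tp: 1 for TP, 0 for FP; num_gt: TP + FN."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp = np.cumsum(np.asarray(is_tp, dtype=float)[order])
    fp = np.cumsum(1.0 - np.asarray(is_tp, dtype=float)[order])
    recall = tp / num_gt                              # Eq. (20)
    precision = tp / (tp + fp)                        # Eq. (21)
    # make the precision envelope monotonically decreasing, then integrate over recall
    precision = np.maximum.accumulate(precision[::-1])[::-1]
    return np.trapz(precision, recall)
```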
### _Comparison with State-of-the-art Methods_
To evaluate the performance of our proposed method, we compare it with a total of three types of models: small object detection models, lightweight detection models, and ship detection models.
#### IV-B1 Comparison with Small Object Detection Models
To verify the superior performance of our proposed approach on small object detection, we compare HSI-ShipDetectionNet with two state-of-the-art small object detection models, as described below.
* **TPH-YOLOv5**[18]: This is a YOLOv5-based detector aimed at densely packed small objects. It incorporates advanced techniques such as Transformer blocks, CBAM, and other experienced tricks to improve performance.
* **SPH-YOLOv5**[48]: The original prediction heads of this detector are replaced with Swin Transformer Prediction Heads (SPHs), which can reduce the computational complexity considerably. In addition, Normalization-based Attention Modules (NAMs) are introduced to improve network detection performance.
As can be seen in Table II, our proposed HSI-ShipDetectionNet has the smallest number of parameters and the lowest computational complexity, requiring only 9.2 MB of storage space. Although TPH-YOLOv5 achieves an mAP value 0.49 higher than ours, it has 14.5 times more parameters and GFLOPs than our model. Similarly, the detection accuracy of SPH-YOLOv5 is comparable to that of HSI-ShipDetectionNet, but our model requires 85.1% fewer parameters and 96.3% less computational effort. While these two small object detectors have superior detection performance, they are built on deep and dense convolutional layers. In contrast, our proposed model is much lighter and achieves comparable detection accuracy. Therefore, our method is better suited for scenarios with limited resources.
#### IV-B2 Comparison with Lightweight Detection Models
To evaluate the performance of our model, we also compare HSI-ShipDetectionNet with the following eight lightweight detection models, described as follows.
* **MobileNetV3-Small**[30]: Based on MobileNetV2, MobileNetV3 added the SE block and improved the activation function using h-swish. The small version is targeted at low-resource use cases and therefore contains fewer bottleneck blocks.
* **PP-LCNet**[49]: This is a lightweight CPU network that utilizes the MKLDNN acceleration strategy. While the techniques used in the network are not novel and have been introduced in previous works, this model achieves a better balance between accuracy and speed through extensive experimentation.
* **ShuffleNetV2**[32]: Four policies were presented by the authors to reduce memory access costs (MAC), avoid network fragmentation, and reduce element-wise operations.
* **MobileNetV3-Large**[30]: Unlike MobileNetV3-Small, the large version is targeted at resource-intensive use cases and therefore contains more bottleneck blocks.
* **GhostNet**[33]: It has developed the Ghost module, which tends to accept abundant and redundant information in the feature maps through a cheap operation instead of discarding it.
* **Efficient-Lite0**[50]: The Efficient-Lite series is the on-device version of EfficientNet and consists of five versions, of which Efficient-Lite0 is the smallest.
* **YOLOv5s**: YOLOv5s is the smallest network in the YOLOv5 series in terms of depth and width.
* **YOLOv3-tiny**[51]: Compared to YOLOv3, YOLOv3-tiny has fewer feature layers and only two prediction branches, making it more suitable for high-speed detection tasks.
In order to ensure consistency in experimental conditions, we incorporated the aforementioned lightweight models (excluding YOLOv5s and YOLOv3-tiny) into the framework of YOLOv5 for the purpose of conducting target detection tasks.
As shown in Table III, which displays the number of parameters (Para) and recall (R), our proposed HSI-ShipDetectionNet achieves the highest recall and mAP. Compared to the second-best performing model, YOLOv5s, our model not only outperforms in terms of mAP but also has a significantly lower number of parameters, GFLOPs, and model size, at 41.1%, 38.7%, and 32.8% less, respectively. Similarly, YOLOv3-tiny has a lower detection accuracy and a more complex network compared to our model. Specifically, our model has a 2.2% and 1.5% higher recall rate and mAP, respectively, while also having half the number of parameters. This is attributed to the fact that YOLOv3-tiny's two prediction branches result in fewer bounding boxes, thus weakening its detection performance. As for GhostNet, it has 5.20M parameters and a 10.4MB model size, 1.05 and 1.2 times higher than our model, but with a 2.41 lower mAP. This demonstrates the superior overall performance of HSI-ShipDetectionNet compared to GhostNet. Additionally, MobileNetV3-Large has similar GFLOPs to our model, but a 3.83 and 4.26 lower mAP and recall rate, respectively. On the other hand, ShuffleNetV2, PP-LCNet and MobileNetV3-Small are indeed lighter than our model, but their detection accuracy (mAP) is around 4 to 6 percentage points lower than that of HSI-ShipDetectionNet. These models prioritize lower model complexity over detection accuracy, whereas our HSI-ShipDetectionNet effectively balances both. Overall, HSI-ShipDetectionNet is more sensitive to the detection of small ships while maintaining a suitable level of model complexity.
#### IV-B3 Comparison with Ship Detection Models
To further evaluate the performance of the proposed HSI-ShipDetectionNet in the field of ship detection, we compare it with two state-of-the-art ship detection models. These models are described as follows:
* **ShipDetectionNet**[2]: This is a lightweight ship detection network that utilizes an improved convolution unit to replace the standard convolution, resulting in a significant reduction in the number of parameters in the network.
* **Literature**[52]: This network proposes a new loss function, IEIOU_LOSS, and introduces the coordinate attention (CA) mechanism to achieve robust detection results for docked and dense ship targets.
In the experiments illustrated in Table IV, it can be seen that our proposed model outperforms all the other models in terms of all the evaluation metrics. HSI-ShipDetectionNet has almost 3.6% higher mAP than the network proposed in the literature [52]. Moreover, the number of parameters and GFLOPs of our model is 41.8% and 39.0% lower than that of the network in [52], respectively, indicating that our model consumes less storage space. Compared with ShipDetectionNet, our model has a reduction of 31.4% and 36.3% regarding parameters and GFLOPs, respectively, while achieving comparable detection accuracy. This is due to the new Lightweight Hybrid Attention Block (LHAB) proposed in our model, which replaces the SE attention mechanism used in ShipDetectionNet. Our analysis shows that the LHAB can reduce the computational effort and the number of parameters while maintaining the detection accuracy of the network. In summary, HSI-ShipDetectionNet is more lightweight and has better detection accuracy, making it more suitable for ship detection tasks on resource-limited space-borne platforms.
#### IV-B4 The Visual Comparisons of Different Methods
To demonstrate the superior performance of our proposed method for detecting small targets, we present some inference results on the test set in Figure 6. It is evident from the results that HSI-ShipDetectionNet successfully locates and recognizes all small target ships that are missed by GhostNet and YOLOv5s. Although ShipDetectionNet also detects all small ships successfully, the confidence of its prediction box is not as high as that of HSI-ShipDetectionNet. In particular, for some images where it is challenging to distinguish the ship from the background, as shown in row (a), HSI-ShipDetectionNet more accurately wraps the target ships. This is due to the fact that HSI-Former can better understand and model advanced features in deep layers, which improves the accuracy of location and regression for prediction boxes. Furthermore, our proposed model can detect small target ships at the edge of the images with relatively high confidence, as shown in rows (c) and (d). Additionally, our model exhibits excellent detection performance in the presence of bad weather conditions, such as cloud barriers shown in row (e). When multiple ships are present in an image, our network can identify all ships more accurately, as shown in row (f).
To summarize, our proposed HSI-ShipDetectionNet enhances the detection performance of small-sized ships and demonstrates competency in detecting ships in challenging sea conditions. This results in more precise and dependable prediction boxes on optical remote sensing images.
### _Ablation Experiments and Sensitivity Analysis_
We evaluate the effectiveness of the proposed modules through ablation and sensitivity analyses.
#### IV-C1 Ablation of the Predictive Branch of Tiny Ships
To study the influence of the predictive branch of tiny ships (\(P_{tiny}\)) on detection performance, we first conduct experiments on the detection framework with GhostNet as the backbone. We obtain results for GhostNet on the original detection framework (with only three predictive branches), and then add \(P_{tiny}\) on top of it. The results in Table V show that the introduction of \(P_{tiny}\) significantly improves mAP and recall by 1.07 and 2.22, respectively. This indicates that adding the \(P_{tiny}\) branch can improve the network's recall of small ship targets, thus improving detection accuracy.
#### IV-C2 Ablation and Sensitivity Analysis of the High-Order Spatial Interaction Mechanism
Expanding on the detection framework described in the previous part, which already includes the \(P_{tiny}\) branch, we now examine the effects of integrating the High-Order Spatial Interaction (HSI-Former) module on detection performance. Table VI presents the results of this analysis, where \(L\) denotes the number of HSI-Former layers and \(n\) refers to the order of g\({}^{n}\)Conv.
To investigate the impact of the order on model performance, we conduct experiments varying \(n\) from 1 to 4, with the number of HSI-Former layers fixed at 1. Our findings indicate that the model performs best when the order is 3, with an mAP value 1.66 higher than that without the HSI-Former module. Conversely, the worst performance is observed when the order is 1, as 1-order spatial interactions are equivalent to plain convolution, which fails to explicitly consider spatial interactions between spatial locations and their neighboring regions [45], and thus contributes little to model performance. Furthermore, 2-order spatial interactions improve the modeling ability only slightly, by 0.26, while 4-order spatial interactions yield an improvement of only 0.37 compared to the model without the HSI-Former module. This result suggests that a higher order of spatial interaction does not necessarily bring a greater positive impact on the network. We also test the 3-order setting with 2 HSI-Former layers. Interestingly, with 2 layers and a spatial interaction order of 3, the model performance is slightly lower than with a single HSI-Former layer. This indicates that too many HSI-Former modules may burden the network.

Fig. 6: To visualize the inference results from different detection methods on the test set, we display the outputs of the best-performing model for each method. The methods compared are GhostNet, YOLOv5s, ShipDetectionNet, and HSI-ShipDetectionNet.
On the other hand, as the HSI-Former is specifically designed based on the analysis of the Transformer encoder, we conduct a test to evaluate the impact of the Transformer block on the overall network performance. As shown in Table VI, we observe that the size of the model with the Transformer module is comparable to that of the model with HSI-Former(L=1, n=3). However, the mAP value decreases by 1.04, indicating that 3-order spatial interactions have more potential for learning and modeling context when compared to 2-order spatial interactions. This finding strongly suggests that the HSI-Former architecture with higher order spatial interactions has superior performance in capturing and modeling context for the given task.
#### IV-C3 Ablation of the Lightweight Hybrid Attention Block
To simplify the network further, we design the Lightweight Hybrid Attention Block (LHAB). Our design rationale for LHAB is demonstrated through ablation experiments, and the results are presented in Table VII. Here, ECA(AP) denotes the original ECA module, where only an average-pooling operation (AP) is employed. ECA(MP+AP)\({}_{share}\) implies that both the max-pooling operation (MP) and the average-pooling operation (AP) are utilized in the ECA module, and the parameters of both operations are shared. Conversely, the subscript \({}_{no\_share}\) indicates that the parameters of these two operations are not shared. LHAB includes a spatial attention mechanism that does not share parameters (Spatial Attention Block) in addition to ECA(MP+AP)\({}_{no\_share}\) (Channel Attention Block).
Referring to Table VII, the inclusion of both max-pooling (MP) and average-pooling (AP) operations in a network enhances its feature extraction ability, resulting in a 0.13 increase in mAP compared to the average-pooling operation (AP) alone. Additionally, since max-pooled and average-pooled features have distinct functions, the mAP value is increased by another 0.51 when the parameters of these two operations are not shared. Although this increases the network's parameter count, the use of one-dimensional convolutions for feature extraction means that only about 30 additional parameters (4,149,741 vs. 4,149,711) are added, which is insignificant. ECA(MP+AP)\({}_{no\_share}\) refers to the Channel Attention Block described in Section III. Building upon this, we introduce an independent Spatial Attention Block, which does not share parameters, to create LHAB. In Table VII, LHAB achieves the highest mAP (74.35%), while reducing the parameter count by 1.50 (5.65-4.15 = 1.50) million compared to the values presented in Table VI.
## V Conclusion
This paper proposes a novel lightweight ship detection framework that is designed specifically for small targets. One of the main challenges in detecting small ships is achieving high detection accuracy given the scarcity of pixel information. To address this challenge, the proposed framework introduces a predictive branch for tiny ships, which effectively utilizes the scarce pixel information. In addition, we present a lightweight hybrid attention block (LHAB) to balance detection performance with model complexity by reducing the number of parameters and computational effort. To enhance the network's ability to understand high-level features, we also incorporate the high-order spatial interaction (HSI-Former) module, which improves the accuracy of ship position regression.
The proposed HSI-ShipDetectionNet is evaluated through comprehensive comparison experiments and ablation studies. The results demonstrate the effectiveness and superiority of the proposed framework in ship detection tasks.
|
2301.08141 | Self-supervised Learning for Segmentation and Quantification of Dopamine
Neurons in Parkinson's Disease | Parkinson's Disease (PD) is the second most common neurodegenerative disease
in humans. PD is characterized by the gradual loss of dopaminergic neurons in
the Substantia Nigra (SN). Counting the number of dopaminergic neurons in the
SN is one of the most important indexes in evaluating drug efficacy in PD
animal models. Currently, analyzing and quantifying dopaminergic neurons is
conducted manually by experts through analysis of digital pathology images
which is laborious, time-consuming, and highly subjective. As such, a reliable
and unbiased automated system is demanded for the quantification of
dopaminergic neurons in digital pathology images. Recent years have seen a
surge in adopting deep learning solutions in medical image processing. However,
developing high-performing deep learning models hinges on the availability of
large-scale, high-quality annotated data, which can be expensive to acquire,
especially in applications like digital pathology image analysis. To this end,
we propose an end-to-end deep learning framework based on self-supervised
learning for the segmentation and quantification of dopaminergic neurons in PD
animal models. To the best of our knowledge, this is the first deep learning
model that detects the cell body of dopaminergic neurons, counts the number of
dopaminergic neurons, and provides characteristics of individual dopaminergic
neurons as a numerical output. Extensive experiments demonstrate the
effectiveness of our model in quantifying neurons with high precision, which
can provide a faster turnaround for drug efficacy studies, better understanding
of dopaminergic neuronal health status, and unbiased results in PD pre-clinical
research. As part of our contributions, we also provide the first publicly
available dataset of histology digital images along with expert annotations for
the segmentation of TH-positive DA neuronal soma. | Fatemeh Haghighi, Soumitra Ghosh, Hai Ngu, Sarah Chu, Han Lin, Mohsen Hejrati, Baris Bingol, Somaye Hashemifar | 2023-01-11T22:47:12Z | http://arxiv.org/abs/2301.08141v2 | Self-supervised Learning for Segmentation and Quantification of Dopamine Neurons in Parkinson's Disease
###### Abstract
Parkinson's Disease (PD) is the second most common neurodegenerative disease in humans. PD is characterized by the gradual loss of dopaminergic neurons in the Substantia Nigra (a part of the mid-brain). Counting the number of dopaminergic neurons in the Substantia Nigra is one of the most important indexes in evaluating drug efficacy in PD animal models. Currently, analyzing and quantifying dopaminergic neurons is conducted manually by experts through analysis of digital pathology images which is laborious, time-consuming, and highly subjective. As such, a reliable and unbiased automated system is demanded for the quantification of dopaminergic neurons in digital pathology images. We propose an end-to-end deep learning framework for the segmentation and quantification of dopaminergic neurons in PD animal models. To the best of knowledge, this is the first machine learning model that detects the cell body of dopaminergic neurons, counts the number of dopaminergic neurons and provides the phenotypic characteristics of individual dopaminergic neurons as a numerical output. Extensive experiments demonstrate the effectiveness of our model in quantifying neurons with a high precision, which can provide quicker turnaround for drug efficacy studies, better understanding of dopaminergic neuronal health status and unbiased results in PD pre-clinical research.
## 1 Introduction
Image segmentation is a fundamental tool for developing artificial intelligence applications in medical imaging [1], such as radiology and digital pathology. For instance, deep learning cell segmentation models can enable robust and fast approaches to quantify cells in histopathology images, enabling more sensitive analysis of biological experiments in animals and humans [2, 3]. However, deep learning models rely on large-scale, high-quality data, limiting their applications in biological use cases. In this paper, we study the benefits of self-supervised learning techniques to develop robust neuronal cell segmentation and quantification models, which are crucial for experimental disease models and gene-function studies. The developed model can be further optimized to separate adjacent neuronal cells for automatic quantification of neuronal cells.
In this study, we establish a deep learning-based framework for segmentation and quantification of Tyrosine Hydroxylase (TH) positive dopaminergic (DA) neurons in the Substantia Nigra (SN) of mouse brain tissues. The SN is the area of the mid-brain that contains the DA neurons most susceptible to the genetic and sporadic factors that cause their loss, as observed in Parkinson's disease (PD) pathogenesis. TH is an enzyme that is specifically expressed in DA neurons. TH staining is the most reliable method used for detecting DA neurons; it stains the soma (cell body), nucleus, and axons of DA neurons. Loss of dopaminergic neurons leads to motor-neuron-associated dysfunctions, as observed in PD patients and animal models [4]. Preventing the loss of DA neurons is the most important goal of PD-targeting therapies. The TH staining intensity is also an indicator of the health status of the DA neurons and is considered the most reliable method to identify loss of DA neurons [5]. Pre-clinical research on PD is highly dependent on segmentation and quantification of DA neurons in the SN [6, 7]. The unique morphology of DA neurons also makes it difficult to use generalized cell segmentation models to identify them and to delve deeper into understanding the biology of DA neurons. Generalist cell segmentation models such as Cellpose have been developed to address this problem, but their efficiency in detecting specific types of neurons such as DA neurons is still limited [8]. A generalist model also does not provide additional information that is specific to DA neurons, which holds high value in PD research. Hence, it has become crucial to develop a machine learning model that can analyze and quantify DA neurons in the SN precisely, with a quick turnaround time, and while remaining immune to user-associated bias. This will in turn make a huge impact in the field of PD pre-clinical research by identifying the efficacy of potent drugs in a shorter time-frame and accelerating the possibility of taking a potential drug into the clinic.
Our model leverages a combination of data sampling techniques and cross-domain self-supervised learning [9, 10] on both unlabeled natural images and domain-specific pathology images to learn transferable and generalizable representations for pathology images. Such representations can be further fine-tuned and deployed for neuronal cell segmentation using limited labeled data from the biological experiments. We compare the performance of the fine-tuned model when it is initially pre-trained on different data: (1) natural images, (2) pathology images, or (3) natural images followed by digital pathology images. We next compare the number of TH cells predicted by our model to manual counts performed by histopathology experts to investigate the accuracy of automated quantification. Furthermore, we analyze the effects of combinations of various augmentation methods on the segmentation performance of the model. Experimental results and extensive analysis indicate that our model can outperform existing models, especially in low-data scenarios.
In summary, we make the following contributions:
* The first end-to-end framework for automatic segmentation and quantification of DA neurons in whole-slide digital pathology images of PD models.
* A cross-domain self-supervised pre-training approach that exploits the power of unlabeled natural and medical images for representation learning.
* A comprehensive set of experiments that demonstrate the effectiveness and efficiency of our model in detecting and quantifying DA neurons using a limited amount of annotated data.
* A numerical and visual data output to indicate the phenotypic characteristics of DA neurons segmented by the model
## 2 Related Works
**CNN-based quantification of dopaminergic neurons.** Deep learning methods have been successfully utilized in analyzing human digital pathology images for different tasks, including cell segmentation and cell counting [11, 3, 2, 12]. However, the number of studies that employ deep learning for the quantification of DA neurons in animal models of PD is relatively limited. [13] implemented a deep learning-based method for processing whole-slide digital images to count DA neurons in the SN of rat and mouse models. That study leverages the TH-positive nucleus to detect TH cells, which is susceptible to error because other brain cells also have a nucleus and overlap in the same area. Additionally, the architecture of DA neurons in the SN makes it difficult to distinguish between overlapping cells when detection relies only on nucleus annotations. [14] developed a framework for automatic localization of the SN region and detection of neurons within this region. The SN localization is achieved using a Faster-RCNN network, whereas neuron detection is done using an LSTM network. However, these studies are limited to counting neurons and/or detecting neuron locations and do not provide additional information about individual cells, such as cell attributes and morphology, which is essential for understanding the biology behind DA neuronal loss and its association with PD pathogenesis.
**Self-supervised Learning.** Self-supervised learning methods aim to learn generalizable representations from unlabeled data. This paradigm involves training a neural network on a manually created (pretext) task for which ground truth is obtained from the data. The learned representations can be transferred and fine-tuned for various target tasks with limited labeled data. Instance discrimination methods [15, 16, 17, 18, 19, 20] have recently sparked a renaissance in the SSL paradigm. These methods consider each image as a separate class and seek to learn representations that are invariant to image distortions. Motivated by the success in computer vision, instance discrimination SSL methods have been adopted in medical applications. A recent transfer learning study for medical imaging [21] demonstrated the efficacy of existing instance discrimination methods pre-trained on ImageNet for various medical tasks. A group of works has focused on designing SSL frameworks by exploiting consistent anatomical structure within radiology scans [22, 23]. Another line of studies designed contrastive-based SSL for medical tasks [24, 10, 25, 26, 9], including whole slide image classification [27]. In contrast to the previous works, our work is the first study that investigates the efficacy of SSL for digital pathology images of PD animal models to compensate for the lack of large-scale annotated datasets for training deep learning models.
## 3 Method
### Animal studies, annotations, and dataset
The dataset used in this study was obtained by manually labeling 30,000 TH-positive DA neurons in 2D histology digital images. This is an internal dataset. The digital images were obtained from multiple animal studies in which mouse brains were sectioned at 35 micron thickness and stained with TH and either Haematoxylin or Nissl as a background tissue stain. The sections were then imaged using a whole-slide scanner microscope, the Nanozoomer system (Hamamatsu Corp, San Jose, CA), at 20x resolution (0.46 microns/pixel). Whole coronal brain section images containing the SN were exported from the digital scans at 20x resolution and were used to annotate the TH-positive DA neurons and train the model. This procedure helped us obtain a large dataset that consists of multiple internal datasets and takes into account the variability that arises from different staining conditions. The ground truth (GT) for this study was labelled and quality controlled by biologists who specialize in mouse brain anatomy and PD research. The blind test dataset used for analyzing the model's efficiency came from a separate animal study on whose study group the model had not been directly trained. The DA neurons detected by the model (red) were visually represented for comparison with the neurons manually counted by the biologist (blue).
### Self-supervised Pre-training
Our approach builds on continual self-supervised pre-training, in which a model is first pre-trained on a massive general dataset, such as ImageNet, and then pre-trained on domain-specific datasets. For the first step (see Figure 1.a), we train the self-supervised model on the ImageNet dataset using state-of-the-art instance discrimination approaches, such as Barlow Twins [15]. For the second step (see Figure 1.b),
Figure 1: An overview of our approach. To address the annotated data scarcity challenge for training deep models, we perform (a) self-supervised pre-training on natural images, and then (b) self-supervised pre-training on digital pathology images. We finally (c) fine-tune the self-supervised pre-trained model with limited annotated data for target neuron segmentation task.
we continue the self-supervised pre-training on the in-domain medical dataset. Finally, we fine-tune the pre-trained models for the neuron segmentation (target) task using labeled images (see Figure 1.c).
**Barlow Twins [15].** This SSL approach aims to reduce the amount of redundant information about each sample in the learned representations while simultaneously making the representations invariant to image distortions. To do so, given an image sample \(X\), two distorted views \(X_{1}\) and \(X_{2}\) are generated by applying a stochastic data augmentation function \(\mathcal{T}(\cdot)\) to \(X\). The two distorted views are then processed by the backbone network \(f_{\theta}\) to produce latent representations \(Z_{1}=f_{\theta}(X_{1})\) and \(Z_{2}=f_{\theta}(X_{2})\). The backbone network \(f_{\theta}\) includes a standard ResNet-50 encoder and a three-layer MLP projection head. The model is trained by minimizing the following loss function:
\[\mathcal{L}_{SSL}=\sum_{i}(1-\mathcal{C}_{ii})^{2}+\lambda\sum_{i}\sum_{j\neq i}\mathcal{C}_{ij}^{2} \tag{1}\]
where \(\mathcal{C}\) is the cross-correlation matrix computed between \(Z_{1}\) and \(Z_{2}\) along the batch dimension, and \(\lambda\) is a coefficient that weights the two loss terms. The model is trained by driving the cross-correlation matrix \(\mathcal{C}\) towards the identity matrix. In particular, by pushing the diagonal elements of \(\mathcal{C}\) to 1, the learned representation becomes invariant to the image distortions; by pushing the off-diagonal elements of \(\mathcal{C}\) to 0, the different elements of the representation are decorrelated, so that the output units contain non-redundant information about the images.
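To make the objective concrete, the following is a minimal PyTorch sketch of the redundancy-reduction loss in Eq. (1); the per-dimension standardization and the value of \(\lambda\) follow common practice and are illustrative rather than the exact settings used in our experiments.

```python
import torch

def barlow_twins_loss(z1: torch.Tensor, z2: torch.Tensor, lambd: float = 5e-3) -> torch.Tensor:
    """Redundancy-reduction loss of Eq. (1) for two batches of projections of shape (batch, dim)."""
    # standardize each embedding dimension over the batch
    z1 = (z1 - z1.mean(dim=0)) / (z1.std(dim=0) + 1e-6)
    z2 = (z2 - z2.mean(dim=0)) / (z2.std(dim=0) + 1e-6)

    batch_size = z1.shape[0]
    c = (z1.T @ z2) / batch_size                                   # cross-correlation matrix C (dim x dim)

    on_diag = (torch.diagonal(c) - 1.0).pow(2).sum()               # push C_ii -> 1 (invariance term)
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()    # push C_ij -> 0 (redundancy-reduction term)
    return on_diag + lambd * off_diag
```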
### Data Preparation
We use an in-house dataset of digital microscopy images obtained from the PD mouse models. This dataset consists of 1500 images, among which a small fraction of 108 images have been annotated with segmentation masks for dopamine neurons. The image resolutions are in the range of [3000, 6000] pixels. We use all of the images for self-supervised learning, and then fine-tune the self-supervised pre-trained models with the labeled images (supervised learning). For supervised learning, we randomly divide the labeled images into training (70%), validation (10%), and testing (20%) sets.
### Network Architecture.
For the target segmentation task, we use a U-Net network which consists of an encoder (\(f_{\theta}\)) and a decoder (\(g_{\theta}\)). The encoder is a standard ResNet-50, which is initialized with the self-supervised pre-trained encoder.
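As an illustration, the snippet below shows one way to initialize a ResNet-50 encoder from a self-supervised checkpoint before attaching the U-Net decoder; the checkpoint path and the key-prefix handling are hypothetical, since released SSL checkpoints store their backbones under different names, and the `weights=None` argument assumes a recent torchvision version.

```python
import torch
import torchvision

# Hypothetical checkpoint path; released SSL checkpoints (SwAV, Barlow Twins, DeepCluster-v2)
# store the backbone under different keys, so the prefix handling below is indicative only.
ckpt_path = "pretrained/ssl_resnet50.pth"

encoder = torchvision.models.resnet50(weights=None)   # randomly initialized ResNet-50
state = torch.load(ckpt_path, map_location="cpu")
state = state.get("state_dict", state)                # some checkpoints nest the weights

backbone_state = {
    k.replace("module.", "").replace("backbone.", ""): v
    for k, v in state.items()
    if "fc." not in k and "projector" not in k        # drop classification / projection heads
}
missing, unexpected = encoder.load_state_dict(backbone_state, strict=False)
print(f"loaded encoder weights: {len(missing)} missing, {len(unexpected)} unexpected keys")
```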
### Tile sampling and augmentation.
For the target segmentation task, we divide the images into non-overlapping patches of size 512\(\times\)512 to ensure that we sample from every part of the image. In all experiments, the raw image intensities per channel are normalized to the range [0, 1]. Data augmentation is essential for biological and medical image analysis due to the typically limited amount of available annotated data. We use different data augmentation techniques to encourage the model to capture more robust and generalizable representations. In particular, we use Flip, Rotation, RGBShift, Blur, GaussianNoise, and RandomResizedCrop to teach the expected appearance and color variation to the deep model.
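A sketch of such an augmentation pipeline (corresponding to mode 4 of the ablation study in Section 4) is given below using the albumentations library; the probabilities and argument names are indicative only and may differ across library versions.

```python
import albumentations as A

# Mode-4 style pipeline: Flip, Rotation, RGBShift, Blur, GaussNoise, RandomResizedCrop.
train_transform = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.Rotate(limit=90, p=0.5),
    A.RGBShift(p=0.3),
    A.Blur(blur_limit=3, p=0.3),
    A.GaussNoise(p=0.3),
    A.RandomResizedCrop(height=512, width=512, scale=(0.8, 1.0), p=0.5),
])

# Applied jointly to a 512x512 tile and its segmentation mask:
# out = train_transform(image=tile, mask=mask)
# tile_aug, mask_aug = out["image"], out["mask"]
```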
### Fine-tuning protocol
We initialize the encoder of the target model (i.e. U-Net) with the pre-trained models and fine-tune all target model parameters. We train the target models using the Adam optimizer with a learning rate of 1e-3 and (\(\beta_{1}\), \(\beta_{2}\)) = (0.9, 0.999). We use the ReduceLROnPlateau learning rate decay scheduler, a batch size of 32, and train all models for 200 epochs. We employ an early-stopping mechanism based on the validation data to avoid over-fitting. We use the Dice coefficient loss function for training the target task, and the Dice coefficient is used for evaluating the accuracy of the target segmentation task. We run each method ten times on the downstream task and report the average and standard deviation of the performance over all runs.
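For concreteness, a minimal sketch of the Dice loss and of the optimizer/scheduler configuration described above is shown below; the scheduler patience and the early-stopping criterion are illustrative choices not specified in the text.

```python
import torch
import torch.nn as nn

class DiceLoss(nn.Module):
    """Soft Dice loss on binary neuron masks (logits in, probabilities via sigmoid)."""
    def __init__(self, eps: float = 1e-6):
        super().__init__()
        self.eps = eps

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        probs = torch.sigmoid(logits)
        dims = (1, 2, 3)                                    # sum over channel and spatial dims
        intersection = (probs * targets).sum(dim=dims)
        union = probs.sum(dim=dims) + targets.sum(dim=dims)
        dice = (2.0 * intersection + self.eps) / (union + self.eps)
        return 1.0 - dice.mean()

# Optimizer and scheduler as described above (model is the SSL-initialized U-Net):
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
# scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min", patience=5)
# criterion = DiceLoss()
# After each epoch: scheduler.step(val_loss); stop early if val_loss stops improving.
```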
### Cell counting.
Automatic cell counting is a challenging task due to overlapping cells that share boundaries; distinguishing overlapping cells requires post-processing to enhance the counting accuracy. In particular, we first calculate the minimum and average cell size using the ground truth for the training data. Then, we take the model's predictions (segmentation masks) and extract the connected components within the prediction masks; each connected component represents one or more cells (in the case of overlapping cells). We then filter out components that are smaller than the minimum cell size. For the remaining components, we count cells by dividing the component size by the average cell size.
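A simple implementation of this counting rule is sketched below using SciPy connected-component labeling; whether the size ratio is rounded or floored is not specified above, so the rounding used here is an assumption.

```python
import numpy as np
from scipy import ndimage

def count_cells(pred_mask: np.ndarray, min_cell_size: float, avg_cell_size: float) -> int:
    """Count neurons in a binary prediction mask, splitting merged (overlapping) cells."""
    labels, _ = ndimage.label(pred_mask > 0)
    sizes = np.bincount(labels.ravel())[1:]          # component sizes in pixels (skip background)
    total = 0
    for size in sizes:
        if size < min_cell_size:                     # drop spurious components below the minimum cell size
            continue
        # a large connected component may contain several touching cells
        total += max(1, int(round(size / avg_cell_size)))
    return total
```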
## 4 Experiments and Results
**Self-supervised models provide more generalizable representations**
**Experimental setup.** In this experiment, we evaluate the transferability of three popular SSL methods using officially released models, including DeepCluster-v2 [17], Barlow Twins [15], and SwAV [17]. All SSL models are pre-trained on the ImageNet dataset and employ a ResNet-50 backbone. As the baseline, we consider (1) training the target model from random initialization (without pre-training) and (2) transfer learning from the standard supervised pre-trained model on ImageNet, which is the _de facto_ transfer learning pipeline in medical imaging [10]. Both baselines benefit from the same ResNet-50 backbone as the SSL models.
**Results.** Table 1(a) displays the results, from which we draw the following conclusions. (1) Transfer learning from the supervised ImageNet model lags behind training from random initialization. We attribute this inferior performance to the remarkable domain shift between the pre-training and target tasks. In particular, supervised ImageNet models are encouraged to capture domain-specific semantic features, which may be inefficient when the pre-training and target data distributions are far apart. Our observation is in line with recent studies [28] on different medical tasks, which suggest that transfer learning from supervised ImageNet models may offer limited performance gains when the target dataset is large enough to compensate for the lack of pre-training. (2) Transfer learning from self-supervised models provides superior performance compared with both training from random initialization and transfer learning from the supervised ImageNet model. In particular, the best self-supervised model (i.e. SwAV) yields 1.3% and 2.27% performance boosts compared with training from random initialization and the supervised ImageNet model, respectively. Intuitively, self-supervised pre-trained models, in contrast to supervised pre-trained models, encode features that are not biased toward task-relevant semantics, providing improvement across domains. Our observation, in accordance with previous studies [21], demonstrates the effectiveness of self-supervised ImageNet models for medical applications.
**Self-supervised models provide superior performance in semi-supervised learning**
**Experimental setup.** We conduct further experiments to evaluate the advantage that self-supervised pre-trained models can provide for small data regimes. To do so, we randomly select 25% of the training data and fine-tune the self-supervised pre-trained models on this subset of data. We then compare the performance of self-supervised models with training the target model from random initialization and fine-tuning the supervised ImageNet model.
Table 1: Comparison of different initialization methods on target segmentation task.
**Results.** The results are shown in Table 1(b). First, we observe that transfer learning from either supervised or self-supervised pre-trained models can offer significant performance improvements compared with training from random initialization. In particular, the supervised ImageNet model provides a 9.5% performance improvement compared to random initialization of the target model. Moreover, the self-supervised models (DeepCluster-v2, Barlow Twins, and SwAV) offer 11.5%, 12.3%, and 13.6% performance boosts, respectively, in comparison with random initialization. These observations imply the effectiveness of pre-training in providing more robust target models in low-data regimes. Second, we observe that self-supervised models provide significantly better performance than the supervised ImageNet model. Specifically, DeepCluster-v2, Barlow Twins, and SwAV achieve 1.96%, 2.74%, and 4% performance boosts, respectively, compared to the supervised ImageNet baseline. These observations reiterate the efficacy of self-supervised models in delivering more generic representations that can be used for target tasks with limited data, resulting in reduced annotation costs.
**Impact of pre-training data on self-supervised learning**
**Experimental setup.** We investigate the impact of pre-training datasets on self-supervised learning. To do so, we train Barlow Twins on three data schemes: (1) SSL on the ImageNet dataset, (2) SSL on the medical dataset (referred to as in-domain), and (3) SSL on both ImageNet and in-domain datasets (referred to as ImageNet\(\rightarrow\)In-domain). For ImageNet\(\rightarrow\)In-domain pre-training, we initialize the model with SwAV pre-trained on ImageNet, followed by SSL on our in-domain dataset. We fine-tune all pre-trained models for the neuron segmentation task using 25% of the training data.
**Results.** Table 2 shows the segmentation accuracy measured by the Dice score (%) for different pre-training scenarios. First, we observe that pre-training on only the in-domain dataset yields lower performance than pre-training on only the ImageNet dataset. We attribute this inferior performance to the limited number of in-domain pre-training images compared with the ImageNet dataset (1500 vs. 1.3M). Moreover, we observe that the best performance is achieved when both the ImageNet and in-domain datasets are utilized for pre-training. In particular, ImageNet\(\rightarrow\)In-domain pre-training surpasses both the in-domain and ImageNet pre-trained models. These results imply that pre-training on ImageNet is complementary to pre-training on in-domain medical datasets, resulting in more powerful representations for medical applications.
**Dopaminergic Neuron Detection and counting**
**Experimental setup.** The DA neurons segmented by the model were compared to the DA neurons detected by a biologist in the same tissue section from the blind data-set. The biologist detected the DA neurons and counted them manually on an image analysis platform ImageJ. The output from the model was overlaid with the manually detected cells and based on the color coding of the DA neurons by the model, the true
\begin{table}
\begin{tabular}{c|c|c} \hline Pre-training Method & Pre-training Dataset & Dice(\%) \\ \hline Random & - & 67.22\(\pm\)8.24 \\ \hline Barlow Twins & ImageNet & 79.50\(\pm\)2.02 \\ SwAV & ImageNet & 80.83\(\pm\)1.17 \\ \hline Barlow Twins & In-domain & 70.92\(\pm\)5.41 \\ \hline Barlow Twins & ImageNet\(\rightarrow\)In-domain & **81.73\(\pm\)1.03** \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of pre-training dataset for self-supervised learning.
\begin{table}
\begin{tabular}{c|c} \hline Metric & Score (\%) \\ \hline Precision & 95.25 \\ Recall & 95.49 \\ F1-score & 95.31 \\ \hline \end{tabular} \begin{tabular}{c|c} \hline Method & Counting Error (\%) \\ \hline Connected components & 21.66 \\ Our approach & **9.08** \\ \hline \end{tabular}
\end{table}
Table 3: Neuron detection and counting results
positive (TP), false positive (FP) and false negative (FN) were calculated by the biologist. We calculated precision, recall and F1-score metrics for the detected neurons in the test images. In these measures, TP is the number of neurons successfully detected by the model; FP is the number of neurons detected by the model but are not actually neurons; and FN is the number of neurons not detected by the model. We further compare the performance of our method in neuron counting to human counting. To do so, we calculate the percentage error between the total number of neurons counted by our method and human counting. We also conduct an ablation study to illustrate the superiority of our cell counting method over the naive approach of counting cells by the number of connected components in the images.
**Results.** The performance metrics for neuron detection are shown in Table 3(a). As seen, our method can effectively detect dopaminergic neurons in whole-slide digital pathology images; in particular, our approach achieves a precision, recall, and F1-score of 95.25%, 95.49%, and 95.31%, respectively. Moreover, Table 3(b) presents the neuron counting results against human counting. As seen, automatic counting of the cells through computing the connected components within the segmentation masks yields an error rate of 21.66%, while incorporating the connected components' sizes in counting significantly decreases the error rate to 9.08%. These results demonstrate the effectiveness of our approach in handling overlapping neurons and providing a reliable automatic system for neuron counting.
**Ablation Experiments**
**Experimental setup.** We conduct extensive ablation experiments on different data augmentation techniques and network architectures. We examine seven combinations of transformations that are commonly used in the literature: (1) no augmentation (mode 1), (2) Flip, Rotation, RandomBrightnessContrast, and RandomGamma (mode 2), (3) Flip, Rotation, RGBShift, Blur, GaussianNoise (mode 3), (4) Flip, Rotation, RGBShift, Blur, GaussianNoise, RandomResizedCrop (mode 4), (5) Flip, Rotation, RGBShift, Blur, GaussianNoise, RandomResizedCrop, Elastic Transformation (mode 5), (6) Flip, Rotation, RandomBrightnessContrast, RandomGamma, RGBShift, Blur, GaussianNoise, RandomResizedCrop (mode 6), and (7) Flip, Rotation, RandomBrightnessContrast, RandomGamma, RGBShift, Blur, GaussianNoise, RandomResizedCrop, Elastic Transformation (mode 7). For network architectures, we examine U-Net and DeepLabV3+. In the ablation experiments, all models are initialized with the SwAV pre-trained model and fine-tuned with 25% of the data.
Table 4: Ablation Experiments.
Figure 2: Visualization of a mouse brain 2D image depicting DA neurons in the SN and segmentation results produced by our method.
**Results.** Table 4(a) shows the results of different data augmentation techniques. According to these results, the lowest performance comes from mode 1 (no augmentation), highlighting that combining pre-training with data augmentation techniques yields more accurate segmentation results for downstream tasks with limited amounts of data. Additionally, the combination of Flip, Rotation, RGBShift, Blur, GaussianNoise, and RandomResizedCrop (mode 4) provides the best performance among all data augmentation approaches. This implies that photometric transformations such as RGBShift, Blur, and GaussianNoise can help the deep model glean more generalizable representations. Furthermore, a comparison of the results obtained by modes 3 and 4, the latter of which includes an additional RandomResizedCrop, reveals that random cropping significantly contributes to performance improvements. Moreover, a comparison of the results obtained by modes 4 and 5, the latter of which includes an additional elastic transformation, demonstrates that elastic transformation has a negative impact on performance; the same observation can be drawn from the comparison of modes 6 and 7.
Table 4(b) presents the results of different network architectures for the downstream neuron segmentation task. As seen, U-Net, which was originally designed for medical segmentation tasks, provides superior performance over DeepLabV3+.
**Qualitative results**
**Experimental setup.** We visualize the segmentation results of our best model from Table 1 on the test data. To do so, we first employ zero padding to make the size of the test images a multiple of 512. Then, we divide the test images into non-overlapping 512\(\times\)512 patches and feed the patches to the network. We then assemble the model's predictions for the image patches to generate the prediction for the whole image (a code sketch of this procedure is given at the end of this setup). To examine the model's efficiency in counting DA neurons, a biologist counted the cells manually (Ground Truth) in the same section (blind dataset). We then ran correlation statistics to measure the \(R^{2}\) between
Figure 4: Correlation plot depicting the number of DA neurons counted by a biologist vs the number of DA neurons counted by the model. Blind data set was used to count the neurons from 18 brain sections stained with TH staining to identify the DA neurons and Nissl stain to stain the brain tissue. The sections for this study were chosen from multiple animal studies.
Figure 3: Visualization of cell segmentation results.
Figure 5: Correlation plot depicting the number of DA neurons counted by a biologist vs the number of DA neurons counted by Cellpose model [8]. Blind data set was used to count the neurons that was previously used to analyze model efficiency in Figure 4.
Figure 6: Data showing the comparison between Cellpose model and the model developed in this study to count DA neurons in individual sections. The green, black and brown dots depict the cells counted by the model, ground truth (GT) and Cellpose respectively. The red lines indicate the comparison between GT and the Model. The blue lines indicate the comparison between GT and Cellpose. Sections were selected from the blind dataset.
the model and the GT. We additionally compared the GT to the latest generalist cell segmentation model, Cellpose, and ran correlation statistics for comparison. Finally, the counts of DA neurons from our model, the GT, and Cellpose were plotted head to head to examine the efficiency of our model. We measured the TH intensity after converting the image into grayscale (8-bit, 0-255 range). The lower the number (closer to 0), the darker the stain; the higher the number (closer to 255), the lighter the stain. The TH intensity was measured in ImageJ, a platform used to analyze digital data.
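For reference, the padding/tiling inference procedure described in this setup can be sketched as follows; the snippet assumes an already-normalized H x W x 3 input, a single-channel logit output, and omits GPU handling for brevity.

```python
import numpy as np
import torch

def predict_whole_image(model: torch.nn.Module, image: np.ndarray, tile: int = 512) -> np.ndarray:
    """Zero-pad an H x W x 3 image to a multiple of `tile`, run the network on
    non-overlapping tiles, and stitch the per-tile probabilities back together."""
    h, w = image.shape[:2]
    pad_h, pad_w = (-h) % tile, (-w) % tile            # padding needed on each axis
    padded = np.pad(image, ((0, pad_h), (0, pad_w), (0, 0)))
    probs = np.zeros(padded.shape[:2], dtype=np.float32)

    model.eval()
    with torch.no_grad():
        for y in range(0, padded.shape[0], tile):
            for x in range(0, padded.shape[1], tile):
                patch = padded[y:y + tile, x:x + tile]
                inp = torch.from_numpy(patch).permute(2, 0, 1).float().unsqueeze(0)
                out = torch.sigmoid(model(inp))[0, 0]
                probs[y:y + tile, x:x + tile] = out.numpy()
    return probs[:h, :w]                               # crop away the padding
```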
**Results.** Figures 2 and 3 present the visualization of the segmentation results from our best model. As seen, our method can effectively detect and segment dopaminergic neurons of varying size and shape. Our quantitative results in Table 1, together with the qualitative results in Figures 2 and 3, demonstrate the capability of our framework in providing an effective solution for the segmentation of dopaminergic neurons. Figure 4 shows the correlation plot between the GT and the model-counted DA neurons. An \(R^{2}\) of 0.95 with a \(p\)-value \(<0.0001\) was achieved by our model in the correlation statistical analysis. Under the same parameters and dataset, Cellpose achieved an \(R^{2}\) of 0.89 with a \(p\)-value \(<0.0001\) (see Figure 5). In Figure 6, the statistics show no significant differences between the DA neurons counted by the model or Cellpose when compared to the GT (one-way ANOVA followed by post-hoc analysis). A deeper analysis of the data shows that Cellpose had a significant difference from the GT in three sections, whereas our model was able to detect the DA neurons with higher accuracy. Figure 7 shows the TH intensity (brown color) of individual DA neuronal cell bodies in 5 different gradients. The gradient was obtained by measuring the TH intensity for an entire dataset and splitting it into 5 different groups, and visual and numerical data were obtained for each neuron.
valuable to the scientific community. For such a task, there are always challenges to consider, such as limited datasets, tissue staining profiles, and overlapping cells, but our model has demonstrated very high efficiency while taking all of these factors into consideration. With advancements in machine learning and biology, these models will improve and provide solutions to the ever-increasing demand for data analysis in research biology. Our data suggest that we could extrapolate this method to other species that are used as animal models of PD. With the addition of more datasets, we could go deeper into understanding the biology of DA neuronal loss by capturing changes that are visible, or sometimes not visible, to the human eye. To summarize, this method will be very useful for shortening the time needed to analyze the loss of DA neurons in animal studies and for accelerating drug discovery for PD.
## 6 Author Contributions
Fatemeh Haghighi contributed to preparing the data, implementing the method, conducting experiments, analyzing the results, and writing the manuscript. Soumitra Ghosh contributed to supervision, conceptualization, data collection, data analysis, and writing and revising the manuscript. Hai Ngu contributed to data collection, data analysis, and revising the manuscript. Sarah Chu contributed to data preparation and analysis. Han Lin contributed to data preparation and conducting experiments. Mohsen Hejrati contributed to advising the project and revising the manuscript. Baris Bingol contributed to supervision and conceptualization. Somaye Hashemifar contributed to advising and leading the project, conception and design of the work, data preparation, and writing and revising the manuscript.
|
2302.11140 | Cosmic Birefringence from Neutrino and Dark Matter Asymmetries | In light of the recent measurement of the nonzero Cosmic Microwave Background
(CMB) polarization rotation angle from the Planck 2018 data, we explore the
possibility that such a cosmic birefringence effect is induced by coupling a
fermionic current with photons via a Chern-Simons-like term. We begin our
discussion by rederiving the general formulae of the cosmic birefringence angle
with correcting a mistake in the previous study. We then identify the fermions
in the current as the left-handed electron neutrinos and asymmetric dark matter
(ADM) particles, since the rotation angle is sourced by the number density
difference between particles and antiparticles. For the electron neutrino case,
with the value of the degeneracy parameter $\xi_{\nu_e}$ recently measured by
the EMPRESS survey, we find a large parameter space which can explain the CMB
photon polarization rotations. On the other hand, for the ADM solution, we
consider two benchmark cases with $M_\chi = 5$~GeV and 5~keV. The former is the
natural value of the ADM mass if the observed ADM and baryon asymmetry in the
Universe are produced by the same mechanism, while the latter provides a warm
DM candidate. In addition, we explore the experimental constraints from the CMB
power spectra and the DM direct detections. | Ren-Peng Zhou, Da Huang, Chao-Qiang Geng | 2023-02-22T04:31:36Z | http://arxiv.org/abs/2302.11140v2 | # Cosmic Birefringence from Neutrino and Dark Matter Asymmetries
###### Abstract
In light of the recent measurement of the nonzero Cosmic Microwave Background (CMB) polarization rotation angle from the Planck 2018 data, we explore the possibility that such a cosmic birefringence effect is induced by coupling a fermionic current with photons via a Chern-Simons-like term. We begin our discussion by rederiving the general formulae of the cosmic birefringence angle with correcting a mistake in the previous study. We then identify the fermions in the current as the left-handed electron neutrinos and asymmetric dark matter (ADM) particles, since the rotation angle is sourced by the number density difference between particles and antiparticles. For the electron neutrino case, with the value of the degeneracy parameter \(\xi_{\nu_{e}}\) recently measured by the EMPRESS survey, we find a large parameter space which can explain the CMB photon polarization rotations. On the other hand, for the ADM solution, we consider two benchmark cases with \(M_{\chi}=5\) GeV and 5 keV. The former is the natural value of the ADM mass if the observed ADM and baryon asymmetry in the Universe are produced by the same mechanism, while the latter provides a warm DM candidate. In addition, we explore the experimental constraints from the CMB power spectra and the DM direct detections.
Introduction
Cosmic birefringence is a remarkable parity-violating phenomenon in which the plane of a linearly polarized photon rotates along its propagation path in the astronomical scale, which is caused by the small distinction in the phase velocities between the left- and right-handed photon polarizations [1; 2; 3; 4] (see _e.g._ Ref. [5] for a recent review on this issue). This effect can be imprinted in the Cosmic Microwave Background (CMB) polarization data as the parity-odd cross correlation between \(E\) and \(B\) modes [6]. Recently, the analysis of the CMB data from the _Planck_ public release 4 (PR4) has shown a tantalizing nonzero value of the cosmic birefringence angle \(\Delta\alpha=0.30\pm 0.11\) deg1 at 68% confidence level (CL) [7]. By taking into account the Milky Way foreground \(EB\) cross correlations, the CMB photon rotation angle is improved to be \(\Delta\alpha=0.36\pm 0.11\) deg at 68% CL, which indicates that the statistical significance exceeds \(3\sigma\). Note that the new result given in Ref. [7] is an update of the measurement of the isotropic birefringence angle \(\Delta\alpha=0.35\pm 0.14\) deg at 68% CL from the _Planck_ PR3 data [8], in which a new technique has been proposed to address the long-standing degeneracy problem of a miscalibration angle of polarimeters [9; 10]. Later, the measurement of the birefringence angle \(\Delta\alpha\) has been improved by including the high-frequency instrumental data [11] and the _WMAP_ data [12]. If the above nonzero birefringence angle is confirmed in the future, it would provide us with a new evidence for new physics beyond the Standard Model (SM). In the literature, a typical explanation of this remarkable parity-violating effect usually involves the existence of a pseudoscalar axion-like field [13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34], which can play the role of dark energy or dark matter (DM) in the Universe (see _e.g._ Refs. [35; 36] for recent reviews and references therein). By coupling the axion-like field to the photon Chern-Simons (CS) term [37; 38], the two circularly-polarized photon modes can be distinguished due to their different phase velocities, which can source the cosmic birefringence in the CMB data.
Footnote 1: We hereinafter denote the CMB cosmic birefringence angle with \(\Delta\alpha\), rather than \(\beta\) in the convention used in other literature [7].
In this work, we would like to study an alternative origin of the observed cosmic birefringence [39; 40], which generates this parity-violating effect by allowing photons to couple to a fermionic current in a CS-like manner. In Refs. [39; 40], such a mechanism is thoroughly explored by identifying the fermion particles in the current as the active neutrinos in the SM. However, as shown below, the analysis in Ref. [39] contained some mistake in the final
results. In order to clarify the possible origin of the mistake, we first rederive the general formulae for the cosmic birefringence angle \(\Delta\alpha\) for an arbitrary fermionic current. We then use the obtained expression of \(\Delta\alpha\) to study two specific models in which the fermions are assumed to be the left-handed electron-type neutrinos and the DM particles, respectively. For the neutrino case, we would like to update the numerical results in Ref. [39] with the correct analytical formula of \(\Delta\alpha\). As for the DM candidates, by noticing that it is the number density difference between particles and anti-particles that induces the photon birefringence, we are led to consider the asymmetric DM (ADM) scenario [41; 42; 43; 44] (For reviews of ADM models, see Refs [45; 46] and references therein), in which all the observed DM density in the Universe is solely composed of fermionic DM particles without its anti-particle counterparts. Instead of specifying the concrete mechanism for the ADM production, we would like to explore the phenomenology of this model at two benchmark points with the ADM mass to be \(M_{\chi}=5\) GeV and \(5\) keV. The former case is the natural ADM mass value if the DM and baryon relics are generated via the same mechanism in order to explain their cosmological mass density ratio, while the latter is a legitimate warm DM candidate [47] which can help us to understand several small-scale structure problems [48]. We also take into account the experimental constraints on these two ADM cases, including the _Planck_ CMB power spectra and the DM direct detection (DD) bounds.
The paper is organized as follows. In Sec. II, we rederive the general formulae for the cosmic birefringence angle \(\Delta\alpha\) sourced by the Chern-Simons-like coupling of an arbitrary fermionic current to photons. Secs. III and IV are dedicated to the phenomenological studies by identifying the fermions in the above current as the left-handed electron neutrinos and ADM particles, respectively. Finally, we summarize in Sec. V.
## II General discussion of cosmic birefringence from a fermion current
In this section, we shall derive the formula of the isotropic birefringence angle induced by the coupling of a general fermion current \(J_{\mu}\) to the photon Chern-Simons term [39; 40]. Let us begin our discussion by writing down the following Lagrangian [39]
\[\mathcal{L}=\mathcal{L}_{\rm EM}+\mathcal{L}_{\rm CS}=-\frac{1}{4}\sqrt{g}F_{ \mu\nu}F^{\mu\nu}-\frac{1}{2}\sqrt{g}\frac{\beta}{M^{2}}J_{\mu}A_{\nu}\tilde{ F}^{\mu\nu}\,, \tag{1}\]
where \(g\equiv-{\rm det}(g_{\mu\nu})\) with \(g_{\mu\nu}\) as the metric tensor of the spacetime, and \(A_{\mu}\) denotes the electromagnetic field with its field strength as \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\). We have also defined the dual field strength tensor as
\[\tilde{F}^{\mu\nu}\equiv\frac{1}{2}\epsilon^{\mu\nu\rho\sigma}F_{ \rho\sigma}\,, \tag{2}\]
in which \(\epsilon\) is the Levi-Civita tensor defined as \(\epsilon^{\mu\nu\rho\sigma}\equiv g^{-1/2}e^{\mu\nu\rho\sigma}\) with \(e^{\mu\nu\rho\sigma}\) as the antisymmetric symbol normalized to \(e^{0123}=1\). Since the coupling of \(J_{\mu}\) to the photon Chern-Simons term is of six mass dimensions, we have introduced \(M\) as the cutoff scale to balance the dimension with \(\beta\) to be a constant of \(\mathcal{O}(1)\). Note that the term \(\mathcal{L}_{\rm CS}\) is not invariant under the electromagnetic \(U(1)\) gauge transformation. However, as shown in Refs. [39; 40], one could resort to the Stuckelberg mechanism or the anti-symmetric Kalb-Ramond field in order to maintain the gauge invariance.
It is well-known that our universe is flat, homogeneous and isotropic, so that it can be described by the following Friedman-Lamaitre-Robertson-Walker metric
\[ds^{2}=-dt^{2}+R^{2}(t)d{\bf x}^{2}\,, \tag{3}\]
where \(t\) is the physical proper time while \({\bf x}\) denotes the spatial three-dimensional comoving coordinates with \(R(t)\) being the scale factor. In this coordinate, the fermionic four-current is defined as \(J_{\mu}=(J_{t}\,,{\bf J})\), where \({\bf J}\) is the fermion flux and \(J_{t}\) is the number density difference between fermions and anti-fermions \(J_{t}=\Delta n=n-\bar{n}\), with \(n(\bar{n})\) the number density of (anti-)fermions. In the present work, we only focus on the isotropic birefringence generated by \(J_{t}\), and ignore the sub-leading anisotropic effects caused by the flux current \({\bf J}\). Thus, we fix \({\bf J}=0\) for simplicity.
We shall follow the procedure given in Refs. [1; 2] to derive the birefringence angle induced by the current \(J_{\mu}\). Firstly, we need to transform the coordinate metric into the following form
\[ds^{2}=R^{2}(\eta)(-d\eta^{2}+d{\bf x}^{2})\,, \tag{4}\]
where \(\eta\) is the conformal time with \(d\eta=dt/R\). Also, the fermion density is transformed into \(J_{\eta}=R(\eta)J_{t}=R(\eta)\Delta n\) while \({\bf J}\) keeps vanishing. By differentiating the Lagrangian in Eq. (1) with respect to \(A_{\mu}\), we can obtain the photon field equation
\[\nabla_{\mu}F^{\mu\nu}=\frac{\beta}{M^{2}}J_{\mu}\tilde{F}^{\mu \nu}\,, \tag{5}\]
together with the Bianchi identities given by
\[\nabla_{\mu}\tilde{F}^{\mu\nu}=0\,. \tag{6}\]
In order to proceed, we can represent \(F^{\mu\nu}\) and its dual \(\tilde{F}^{\mu\nu}\) by the corresponding physical electric and magnetic fields \(\mathbf{E}\) and \(\mathbf{B}\) as follows [49]
\[F^{\mu\nu}=R^{-2}\begin{pmatrix}0&E_{x}&E_{y}&E_{z}\\ -E_{x}&0&B_{z}&-B_{y}\\ -E_{y}&-B_{z}&0&B_{x}\\ -E_{z}&B_{y}&-B_{x}&0\end{pmatrix},\quad\tilde{F}^{\mu\nu}=R^{-2}\begin{pmatrix}0&B_{x}&B_{y}&B_{z}\\ -B_{x}&0&-E_{z}&E_{y}\\ -B_{y}&E_{z}&0&-E_{x}\\ -B_{z}&-E_{y}&E_{x}&0\end{pmatrix}. \tag{7}\]
As a result, the field equations in Eqs. (5) and (6) can be written as
\[\frac{\partial}{\partial\eta}\left(R^{2}\mathbf{E}\right)-\nabla \times\left(R^{2}\mathbf{B}\right)=\frac{\beta}{M^{2}}J_{\eta}\left(R^{2} \mathbf{B}\right)\,,\quad\nabla\cdot\mathbf{E}=0\,, \tag{8}\]
and
\[\frac{\partial}{\partial\eta}\left(R^{2}\mathbf{B}\right)+ \nabla\times\left(R^{2}\mathbf{E}\right)=0\,,\quad\nabla\cdot\mathbf{B}=0\,, \tag{9}\]
where \(\nabla\cdot\) and \(\nabla\times\) here denote the conventional differential operators in the three-dimensional Cartesian space. By combining the equations in Eqs. (8) and (9) so as to eliminate \(\mathbf{E}\), we can obtain
\[\frac{\partial^{2}}{\partial\eta^{2}}\left(R^{2}\mathbf{B}\right) -\nabla^{2}\left(R^{2}\mathbf{B}\right)=-\frac{\beta}{M^{2}}J_{\eta}\nabla \times\left(R^{2}\mathbf{B}\right)\,. \tag{10}\]
Now we consider the monochromatic wave solution to Eq. (10) of the following form
\[R^{2}\mathbf{B}(\mathbf{x},\eta)=e^{-i\mathbf{k}\cdot\mathbf{x}} R^{2}\mathbf{B}(\eta)\,, \tag{11}\]
and assume that the wave propagates along the \(z\) axis so that \(\mathbf{k}\cdot\mathbf{x}=kz\). In addition, we define the two independent transverse-polarized waves in terms of their circular polarizations
\[F_{\pm}\equiv R^{2}B_{\pm}(\eta)=R^{2}(B_{x}\pm iB_{y})\,, \tag{12}\]
which can simplify the wave equation in Eq. (10) into the following form
\[\frac{d^{2}F_{\pm}}{d\eta^{2}}+\left(k^{2}\pm\frac{\beta kJ_{\eta}}{M^{2}} \right)F_{\pm}=0\,. \tag{13}\]
By assuming that the fermion density \(J_{\eta}\) evolves very slowly over the photon propagation, we can apply the WKB method to obtain the following approximated solution to Eq. (13)
\[F_{\pm}(\eta)=\exp\left[ik\int\left(1\pm\frac{\beta}{M^{2}}\frac {J_{\eta}}{k}\right)^{1/2}d\eta\right]\,. \tag{14}\]
Consequently, the solution to the original electromagnetic wave equation in Eq. (10) is given by
\[R^{2}B_{\pm}(z,\eta)=e^{-ikz}F_{\pm}(\eta)=e^{i\sigma_{\pm}}\,, \tag{15}\]
where the phase \(\sigma_{\pm}\) is defined as
\[\sigma_{\pm}=k(\eta-z)\pm\frac{\beta}{2M^{2}}\int J_{\eta}d\eta- \frac{\beta^{2}}{8kM^{4}}\int J_{\eta}^{2}d\eta+\mathcal{O}(k^{-2})\,. \tag{16}\]
As a result, the birefringence polarization rotation angle induced by the Chern-Simons-like coupling \(\mathcal{L}_{\rm CS}\) in Eq. (1) is given by
\[\Delta\alpha=\frac{1}{2}\left(\sigma_{+}-\sigma_{-}\right)\approx \frac{1}{2}\frac{\beta}{M^{2}}\int J_{\eta}d\eta=\frac{1}{2}\frac{\beta}{M^{2 }}\int\Delta n\,dt\,, \tag{17}\]
where we have used the relations \(J_{\eta}=R(\eta)\Delta n\) and \(d\eta=dt/R\) in the last equality. Note that the formula for the angle of the polarization plane rotation in Eq. (17) is different from Eq. (10) in Ref. [39], in which there was an extra scale factor \(R\) in the denominator of the integrand. Such a distinction can be traced back to the mistreatment of the current density \(J_{t}=\Delta n\) in deriving the wave equation in Eq. (8) and the subsequent calculations, in which all quantities should be defined in terms of the conformal time, so that \(J_{\eta}\) should be employed.
We would like to emphasize that, in deriving the general formula of the birefringence angle \(\Delta\alpha\) in Eq. (17), we have not specified the origin of the fermionic current \(J_{\mu}\). In the following two sections, we shall identify it as the current of left-handed electron neutrinos and fermionic ADM particles, both of which are of phenomenological importance.
## III Neutrino current
In this section, we shall identify \(J_{\mu}\) as the active left-handed electron neutrino current as \(J_{\mu}^{\nu_{e}}=\overline{\left(\nu_{e}\right)}_{L}\gamma_{\mu}(\nu_{e})_{L}\) in the SM. Such a case has already been explored in Refs. [39; 40]. However, as mentioned before, the improper dependence of the birefringence angle on the scale factor \(R(t)\) given in Ref. [39] has made the analysis unreliable. Hence, here we would like to update the result of the neutrino-current-induced cosmic birefringence. According to our new \(\Delta\alpha\) formula in Eq. (17), the polarization angle rotation is given by
\[\Delta\alpha=\frac{1}{2}\frac{\beta}{M^{2}}\int J_{\eta}^{\nu_{e} }d\eta=\frac{1}{2}\frac{\beta}{M^{2}}\int\Delta n_{\nu_{e}}dt\,, \tag{18}\]
where \(\Delta n_{\nu_{e}}\equiv n_{\nu_{e}}-n_{\bar{\nu}_{e}}\) is the density difference between neutrinos and anti-neutrinos in the coordinate defined in Eq. (3). Note that the electron neutrino asymmetry is usually parameterized in the literature by the following parameter [50; 51]
\[\eta_{\nu_{e}}=\frac{\Delta n_{\nu_{e}}}{n_{\gamma}}=\frac{1}{12\zeta(3)}\left( \frac{T_{\nu}}{T_{\gamma}}\right)^{3}\left(\pi^{2}\xi_{\nu_{e}}+\xi_{\nu_{e}}^{ \ 3}\right)\,, \tag{19}\]
where \(n_{\gamma}\) is the number density of photons, \(T_{\nu(\gamma)}\) is the temperature of neutrinos (photons), and \(\xi_{\nu_{e}}=\mu_{\nu_{e}}/T_{\nu}\) is the degeneracy parameter with \(\mu_{\nu_{e}}\) denoting the electron neutrino chemical potential, respectively. Based on the standard cosmological evolution, the temperature ratio between neutrinos and photons can be estimated as \((T_{\nu}/T_{\gamma})^{3}=4/11\) for all neutrino types after the electron-positron annihilation. Thus, the electron neutrino asymmetry parameter can be estimated as [50; 51; 52]
\[\eta_{\nu_{e}}\simeq 0.249\xi_{\nu_{e}}\,, \tag{20}\]
where we only keep the leading-order term in \(\xi_{\nu_{e}}\). According to the standard statistical physics, the equilibrium photon number density is given by
\[n_{\gamma}=\left(\frac{2\zeta(3)}{\pi^{2}}\right)T_{\gamma}^{3}\,, \tag{21}\]
in which \(\zeta(z)\) is the Riemann function with \(\zeta(3)\simeq 1.202\). By combining Eqs. (19), (20) and (21), we can obtain the following electron neutrino asymmetry
\[\Delta n_{\nu_{e}}=\eta_{\nu_{e}}n_{\gamma}\simeq 0.061\xi_{\nu_{e}}T_{\gamma}^{3} \tag{22}\]
Predicted by the entropy conservation, the temperature of photons after their decoupling evolves as
\[T_{\gamma}R=T_{\gamma\,0}R_{0}=T_{\gamma\,D}R_{D}\,, \tag{23}\]
where \(T_{\gamma\,D}\) (\(T_{\gamma\,0}\)) and \(R_{D}\) (\(R_{0}\)) are the temperature and the scale factor at the time of recombination (at present). By defining the redshift \(z\) as \(R/R_{0}\equiv 1/(1+z)\), the photon temperature at any redshift is given by
\[T_{\gamma}=T_{\gamma\,0}(1+z). \tag{24}\]
By putting Eqs. (22) and (24) into the birefringence angle expression in Eq. (18) and transforming the integration variable from \(t\) to \(z\) with
\[dt=\frac{dR}{HR}=-\frac{dz}{(1+z)H}\,, \tag{25}\]
we can obtain
\[\Delta\alpha = 0.03\beta\left(\frac{\xi_{\nu_{e}}T_{\gamma\,0}^{3}}{M^{2}}\right) \int_{0}^{z_{D}}\frac{(1+z)^{2}}{H(z)}dz \tag{26}\] \[\approx 0.03\beta\left(\frac{\xi_{\nu_{e}}T_{\gamma\,0}^{3}}{M^{2}H_{0}} \right)\frac{2}{3}(1+z_{D})^{3/2}\,,\]
where \(z_{D}\simeq 1090\) denotes the redshift at the photon decoupling [53] and we have approximated the cosmological evolution afterward as a flat and matter-dominant Universe with the Hubble parameter given by
\[H(z)=H_{0}(1+z)^{3/2}\,, \tag{27}\]
in which the Hubble parameter \(H_{0}\) and the CMB temperature \(T_{\gamma\,0}\) at present [53] are
\[H_{0}=100h\,\text{km}\,\text{s}^{-1}\text{Mpc}^{-1}\simeq 2.1332\times 10^{-42}h \,\text{GeV}\,,\quad T_{\gamma\,0}\simeq 2.7255\,\text{K}\,. \tag{28}\]
with \(h\simeq 0.674\) given by the _Planck_ 2018 data.
Note that the angle rotated by the CMB photon polarization plane reported by _Planck_ PR4 is \(\Delta\alpha=0.30^{\circ}\pm 0.11^{\circ}=(5.24\pm 1.92)\times 10^{-3}\) rad at 68% confidence level (CL) [7], which updates the analysis based on the _Planck_ PR3 data in Ref. [8]. Moreover, the electron neutrino degeneracy parameter \(\xi_{\nu_{e}}\), reflecting the lepton asymmetry contained in \(\nu_{e}\), is usually measured and constrained by the CMB and BBN observations. In particular, the latest measurement of the primordial helium abundance in metal-poor galaxies by the EMPRESS survey [54] has indicated the existence of an exceptionally large nonzero electron neutrino asymmetry \(\xi_{\nu_{e}}=0.05\pm 0.03\)[54; 55; 56], which possibly hints at new physics beyond the SM. In the present work, we apply this latest value of \(\xi_{\nu_{e}}\) to estimate the CMB polarization rotation angle as follows
\[\Delta\alpha\simeq 5.24\times 10^{-3}\beta\left(\frac{\xi_{\nu_{e}}}{0.05} \right)\left(\frac{7.9\,\text{TeV}}{M}\right)^{2}\,. \tag{29}\]
We also plot in Fig. 1 the relevant parameter space in the \(\xi_{\nu_{e}}\)-\(\beta/M^{2}\) plane, in which the solid blue region can explain the CMB birefringence angle at \(2\sigma\) CL reported by _Planck_ PR4 while the red shaded area is the \(1\sigma\) interval of the electron neutrino degeneracy parameter measured by the latest EMPRESS survey.
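As a cross-check of Eq. (29), the short script below evaluates Eq. (26) numerically in natural units; the parameter values are the ones quoted above, and the script is only meant to reproduce the quoted order of magnitude.

```python
import numpy as np

# Numerical cross-check of Eq. (26)/(29) in natural units (GeV).
K_TO_GEV = 8.617e-14                 # 1 kelvin in GeV (Boltzmann constant)
T_gamma0 = 2.7255 * K_TO_GEV         # present CMB temperature
H0 = 2.1332e-42 * 0.674              # Hubble constant for h = 0.674, in GeV
z_D = 1090.0                         # redshift of photon decoupling
xi_nue = 0.05                        # EMPRESS central value of the degeneracy parameter
beta, M = 1.0, 7.9e3                 # O(1) coupling and cutoff scale in GeV

delta_alpha = (0.03 * beta * xi_nue * T_gamma0**3 / (M**2 * H0)
               * (2.0 / 3.0) * (1.0 + z_D)**1.5)
print(f"Delta alpha = {delta_alpha:.2e} rad = {np.degrees(delta_alpha):.3f} deg")
# -> about 5.2e-3 rad (0.30 deg), matching Eq. (29) for M ~ 7.9 TeV
```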
## IV Asymmetric dark matter current
More and more astrophysical and cosmological evidence has shown the existence of DM in our Universe [57], but its nature is still a great mystery. It is intriguing that the DM particle can be related to other beyond-SM physics, like the cosmic birefringence measured by _Planck_. In this section, we would like to interpret the birefringence angle in the CMB data as induced by the fermionic DM current \(J_{\mu}^{\chi}=\bar{\chi}\gamma_{\mu}\chi\) through the effective interaction \(\mathcal{L}_{\text{CS}}\) in Eq. (1). As shown in Sec. II, it is the number density excess of the DM particles over anti-DM ones that sources the CMB polarization plane rotation in this setup. In the present work, we do not specify the origin of such DM asymmetries, and assume that all the
Figure 1: The parameter space in the \(\xi_{\nu_{e}}\)-\(\beta/M^{2}\) plane, where the solid blue region explains the CMB birefringence angle at \(2\sigma\) CL reported by Planck PR4, while the red shaded area represents the \(1\sigma\) range of the electron neutrino asymmetry parameter \(\xi_{\nu_{e}}\) measured by the EMPRESS survey with the dotted line representing its central value.
DM density in the Universe is composed of the dark fermions without any corresponding anti-fermions, _i.e._, \(n_{\chi}=\Delta n_{\chi}=J_{0}^{\chi}\). Such a scenario is usually called the ADM model [41; 42; 43; 44; 45; 46; 58; 59; 60], in which, if the cosmological baryonic matter and DM densities originate from the same mechanism, it could help to explain the observed cosmological density ratio of the visible and dark matter when the ADM mass is about 5 times the proton/neutron mass, _i.e._, \(M_{\chi}\approx 5\) GeV.
### ADM Explanation of the Cosmic Birefringence
At present, the DM abundance is usually parametrized by the following density parameter
\[\Omega_{\chi\,0}=\frac{\rho_{\chi\,0}}{\rho_{c\,0}}=\frac{8\pi GM_{\chi}n_{ \chi\,0}}{3H_{0}^{2}}\,, \tag{30}\]
where \(M_{\chi}\) denotes the ADM particle mass, while \(H_{0}\), \(\rho_{c}\), \(\rho_{\chi\,0}\) and \(n_{\chi\,0}\) are the present-day Hubble parameter, critical density, ADM mass density and its number density, respectively. Here we have used the relations \(\rho_{\chi\,0}=M_{\chi}n_{\chi\,0}\) and \(\rho_{c\,0}=3H_{0}^{2}/(8\pi G)\) in the last equality with \(G\) the Newton constant. By further considering the evolution of the ADM density with the cosmological expansion \(n_{\chi}=(1+z)^{3}n_{\chi\,0}\), we can obtain the birefringence angle induced by ADM as follows
\[\Delta\alpha =\frac{1}{2}\frac{\beta}{M^{2}}\frac{\rho_{c\,0}\Omega_{\chi\,0}} {M_{\chi}}\int_{0}^{z_{D}}(1+z)^{3}\frac{dz}{H(1+z)}\] \[\approx\frac{1}{2}\frac{\beta}{M^{2}}\frac{3H_{0}\Omega_{\chi\,0 }}{8\pi GM_{\chi}}\frac{2}{3}(1+z_{D})^{3/2}\,, \tag{31}\]
where we also take into account the Hubble parameter evolution of \(H=(1+z)^{3/2}H_{0}\) in the matter-dominated era. By taking the experimental values of various cosmological parameters [53] into Eq. (31), the birefringence angle is given by
\[\Delta\alpha=5.24\times 10^{-3}\beta\left(\frac{1.77\,\text{GeV}}{M}\right)^{2 }\left(\frac{5\,\text{GeV}}{M_{\chi}}\right)\,, \tag{32}\]
where we have taken the benchmark ADM mass to be \(M_{\chi}\approx 5\) GeV which could explain the cosmological ratio of the DM to the ordinary baryonic matter [44].
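A corresponding numerical cross-check of Eq. (32) is sketched below; the value \(\Omega_{\chi 0}\simeq 0.264\) is an assumed Planck 2018 input (not quoted explicitly above), the Newton constant is the standard value in natural units, and the script only reproduces the order of magnitude of Eq. (32).

```python
import numpy as np

# Numerical cross-check of Eq. (31)/(32) in natural units (GeV).
G = 6.708e-39                        # Newton constant in GeV^-2
H0 = 2.1332e-42 * 0.674              # Hubble constant for h = 0.674, in GeV
Omega_chi0 = 0.264                   # present DM density parameter (assumed Planck 2018 value)
z_D = 1090.0                         # redshift of photon decoupling
beta = 1.0
M, M_chi = 1.77, 5.0                 # cutoff scale and ADM mass, both in GeV

adm_factor = 3.0 * H0 * Omega_chi0 / (8.0 * np.pi * G * M_chi)   # = n_chi0 / H0, in GeV^2
delta_alpha = 0.5 * beta / M**2 * adm_factor * (2.0 / 3.0) * (1.0 + z_D)**1.5
print(f"Delta alpha = {delta_alpha:.2e} rad = {np.degrees(delta_alpha):.3f} deg")
# -> about 5.2e-3 rad (0.30 deg), matching Eq. (32) for M ~ 1.8 GeV and M_chi = 5 GeV
```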
In light of the expression for the photon polarization rotation angle \(\Delta\alpha\) in Eq. (32), we would like to investigate the existing experimental constraints on our ADM explanation of the CMB cosmic birefringence. In fact, as shown in the following subsections, DM indirect and direct searches have already placed useful bounds on the relevant parameter space.
### Constraints From CMB Power Spectra
By identifying fermions in the current \(J_{\mu}=\bar{\chi}\gamma_{\mu}\chi\) as ADM particles, the effective CS interaction of \(\mathcal{L}_{\text{CS}}\) in Eq. (1) could generate ADM-photon elastic scatterings as shown in Fig. 2, which would leave imprints on the CMB angular spectrum and the large scale structure [61; 62; 63; 64; 65]. In particular, due to the collisional damping caused by the ADM-photon interaction, the obtained matter power spectrum would show significant suppression at small scales together with a series of damped oscillations. Moreover, such a scattering between ADM and photons would also manifest itself in the CMB power spectra as modifications of relative magnitudes and shifts of positions of the acoustic peaks. Therefore, we can use the CMB angular power spectra of temperature and polarizations to constrain our ADM model.
Note that the ADM-photon interaction in Eq. (1) gives rise to the following amplitude
\[i\mathscr{M}=\frac{1}{2}\frac{\beta}{M^{2}}\bar{u}_{\chi}(k_{1})\gamma_{\mu}u_ {\chi}(p_{1})\epsilon^{\mu\nu\rho\sigma}(k_{2}+p_{2})_{\rho}\epsilon_{\nu}(p_{ 2})\epsilon_{\sigma}^{*}(k_{2})\,, \tag{33}\]
which leads to the ADM-\(\gamma\) scattering cross section, given by
\[\sigma_{\chi\gamma}\approx\frac{\beta^{2}p_{1\,\text{cm}}^{2}}{8\pi M^{4}}\,, \tag{34}\]
Figure 2: The Feynman diagram for DM-photon scatterings, which can impose the constraint on the ADM model from the CMB measurements from the Planck Collaboration.
where \(p_{1\,{\rm cm}}=|{\bf p}_{1\,{\rm cm}}|\) stands for the incoming photon momentum in the center-of-mass (cm) frame. By assuming that the ADM has already become non-relativistic around the photon decoupling, the relation \(M_{\chi}\gg p_{1\,{\rm cm}}\sim T_{\gamma}\) holds so that we only keep the leading-order term in the expansion with respect to the small ratio of \(p_{1\,{\rm cm}}/M_{\chi}\) in Eq. (34).
It is shown in Refs. [66; 67] that the quantity controlling the cosmological evolution of ADM and photons in the Boltzmann equations is the following thermally averaged ADM-photon cross section
\[\left\langle\sigma v_{\rm M\rm ol}\right\rangle_{\chi\gamma}=\frac{\int \sigma_{\chi\gamma}v_{\rm M\rm ol}dn_{\gamma}dn_{\chi}}{\int dn_{\gamma}dn_{ \chi}}\,, \tag{35}\]
where \(v_{\rm M\rm ol}\) is the Moller velocity [67] and the differential density \(dn_{i}\) is defined by
\[dn_{i}=g_{i}\frac{d^{3}p_{i}}{(2\pi)^{3}}f_{i}(p_{i})\,, \tag{36}\]
in which \(g_{i}\) is the independent degrees of freedom of the particle \(i\) and \(f_{i}(p_{i})\) is the associated distribution. Here, the distributions for photons and ADM particles are defined in the cosmic comoving frame. Since photons are always kept in the thermal equilibrium state with temperature \(T_{\gamma}\), so that they should obey the Bose-Einstein distribution
\[f_{\gamma}(p)=\frac{1}{e^{p/T_{\gamma}}-1}\,, \tag{37}\]
where we have taken the Boltzmann constant to be \(k_{\rm B}=1\). For the ADM, we do not need the explicit form of its distribution function \(f_{\chi}(p)\) here. As argued in Ref. [67], due to the relation
\[\left\langle\sigma v_{\rm M\rm ol}\right\rangle=\left\langle \sigma v_{\rm lab}\right\rangle^{\rm lab}, \tag{38}\]
it is more convenient to compute the thermally averaged cross section in the lab frame, in which the ADM particle in the scattering is initially at rest. In Eq. (38), \(v_{\rm lab}\) refers to the relative velocity and the superscript "lab" on the bracket denotes the thermal average computed in the lab frame. Also, at the leading order in the small momentum expansion, the ADM-photon scattering cross section in the lab frame takes the same form as in Eq. (34) except for the photon momentum \(p_{1\,{\rm cm}}\) replaced by the counterpart \(p_{1\,{\rm lab}}\). Therefore, by taking Eqs. (37) and (34) into Eq. (38), the thermally averaged ADM-photon cross section is given by
\[\left\langle\sigma v_{\rm M\rm ol}\right\rangle_{\chi\gamma}\simeq\frac{3\zeta (5)}{2\pi\zeta(3)}\frac{\beta^{2}T_{\gamma}^{2}}{M^{4}}=0.412\left(\frac{ \beta^{2}T_{\gamma}^{2}}{M^{4}}\right)\,, \tag{39}\]
where we have only kept the dominant term when \(T_{\gamma}\ll M_{\chi}\). In the derivation of Eq. (39), we have factored out and cancelled the ADM density \(n_{\chi}\) between the numerator and denominator in Eq. (35) since \(\sigma v_{\rm lab}\) does not depend on the ADM momentum at all.
For the given ADM-photon scatterings with the cross section quadratically proportional to the photon temperature \(T_{\gamma}\), the best constraint is given in Ref. [62] as follows
\[\langle\sigma v_{\rm Mol}\rangle_{\chi\gamma}(T_{\gamma}^{0}) \lesssim 6\times 10^{-40}\left(\frac{M_{\chi}}{\rm GeV}\right)\text{cm}^{2} \,,\quad\text{at 68\% C.L.}\,, \tag{40}\]
where the cross section on the left-hand side takes the value at present when the CMB temperature is \(T_{\gamma}^{0}=2.73\) K. By comparing Eqs. (39) and (40), we can express the CMB constraint in terms of our model parameters as follows
\[\frac{\beta}{M^{2}}\lesssim 8.24\times 10^{6}\,\text{GeV}^{-2}\left( \frac{M_{\chi}}{\rm GeV}\right)^{1/2}\,. \tag{41}\]
As a result, given parameters in Eq. (32) required to explain the observed cosmic birefringence, the limit in Eq. (41) is too weak to place any useful constraint on the model parameters, especially when the ADM particle mass is taken to be \(M_{\chi}\lesssim 10\) GeV.
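For transparency, the unit conversion behind Eq. (41) can be reproduced with the short script below; the numerical constants (the zeta-function values and the cm-to-GeV\({}^{-1}\) conversion) are standard, and the reference ADM mass is set to 1 GeV.

```python
import numpy as np

# Translate the CMB bound of Eq. (40) into the limit on beta/M^2 of Eq. (41).
zeta3, zeta5 = 1.2021, 1.0369
prefactor = 3.0 * zeta5 / (2.0 * np.pi * zeta3)      # ~0.412, coefficient of Eq. (39)

K_TO_GEV = 8.617e-14                                 # 1 kelvin in GeV
CM_TO_GEV_INV = 5.068e13                             # 1 cm in GeV^-1
T_gamma0 = 2.7255 * K_TO_GEV                         # present photon temperature in GeV

M_chi = 1.0                                          # reference ADM mass in GeV
sigma_max = 6e-40 * M_chi * CM_TO_GEV_INV**2         # right-hand side of Eq. (40) in GeV^-2

# <sigma v>(T_0) = prefactor * (beta / M^2)^2 * T_0^2 <= sigma_max
beta_over_M2_max = np.sqrt(sigma_max / (prefactor * T_gamma0**2))
print(f"beta/M^2 < {beta_over_M2_max:.2e} GeV^-2")   # ~8.2e6 GeV^-2, as in Eq. (41)
```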
Finally, we would like to mention that the ADM-photon coupling upper bound presented in Eq. (40) was derived in Ref. [62] by using the _Planck_ 2013 data on the CMB \(TT\) and \(EE\) auto power spectra, which was somewhat out of date. In particular, the _Planck_ Collaboration has released their data on the CMB angular power spectra of temperature, polarization and lensing in 2015 and 2018. Moreover, as shown in Refs. [64; 65], the inclusion of the data from BAO and weak lensing experiments can further strengthen the constraining power. Therefore, we expect that the CMB constraint on the ADM-photon interaction in Eq. (41) can be further improved by updating the CMB data along with the BAO and weak lensing data. Unfortunately, such a goal has only been achieved in Refs. [63; 64; 65] for the case with a constant DM-photon scattering cross section. For the present ADM model with the photon scattering cross section proportional to \(T_{\gamma}^{2}\), there is not any new progress after the study in Ref. [62], which provides the best experimental limit up to now.
### Constraint From DM Direct Detections
The ADM-photon interaction in Eq. (1) can also induce effective couplings between the ADM particle \(\chi\) and SM quarks at the one-loop level as illustrated in Fig. 3, which can be probed by DM direct detection experiments. Note that the loop
diagram of Fig. 3 is logarithmically divergent due to the insertion of the nonrenormalizable ADM-photon effective operator \({\cal L}_{\rm CS}\). Therefore, it is expected that the ADM-quark scattering is dominated by the logarithmically divergent term, which can be expressed by the following effective ADM-quark interaction
\[{\cal L}_{\chi q}=-\sum_{q}\frac{1}{m_{V_{q}}^{2}}\bar{\chi}\gamma_{\mu}\chi\bar{ q}\gamma^{\mu}\gamma^{5}q\,, \tag{42}\]
where
\[\frac{1}{m_{V_{q}}^{2}}=\frac{3\alpha}{8\pi}\frac{\beta}{M^{2}}Q_{q}^{2}\ln \frac{\Lambda^{2}}{m_{q}^{2}}\,, \tag{43}\]
with \(m_{q}\) and \(Q_{q}\) denoting the mass and charge of the quark flavor \(q\), respectively, while other contributions are suppressed by small scales such as the momentum transfer or external particle momenta. The factor of \(\ln\Lambda^{2}/m_{q}^{2}\) comes from the logarithmic divergence with the UV cutoff scale identified as \(\Lambda\), which can be equal to \(M\) or not depending on model assumptions. Also, we have followed the convention in Ref. [68] to parametrize the ADM-quark couplings to be \(1/m_{V_{q}}^{2}\), as if there is a heavy vector particle \(V_{q}\) of mass \(m_{V_{q}}\) mediating the interaction between the flavor \(q\) and \(\chi\).
In order to connect the effective interactions in Eq. (42) with the observables in the DM direct searches, one can match \({\cal L}_{\chi q}\) to the nucleon-level non-relativistic (NR) operators
Figure 3: The Feynman diagram for ADM direct detections.
\({\cal O}_{7}^{\rm NR}\) and \({\cal O}_{9}^{\rm NR}\)[68; 69; 70; 71], both of which lead to velocity and momentum suppressed spin-dependent ADM-quark interactions. Hence, we expect naively that the DM DD constraints imposed on operators in Eq. (42) would be extremely weak. However, as pointed out in Refs. [68; 72; 73], the renormalization group (RG) running would cause the mixing among dimension-six DM-quark effective operators and, in particular, generate the couplings of the DM vector current to the quark vector current, such as \(\bar{\chi}\gamma_{\mu}\chi\bar{q}\gamma^{\mu}q\), which would further induce the spin-independent ADM-quark scatterings without any velocity or momentum suppression. The latter scattering channel would be even enhanced by the coherence of the large number of nucleons in the target heavy nucleus. As a result, such an operator mixing effect would significantly strengthen the DD constraints on our ADM model.
Currently, the best DM DD constraint comes from the LUX-ZEPLIN (LZ) Collaboration [74], which has excluded DM-nucleon elastic spin-independent cross sections larger than \(6.5\times 10^{-48}\,{\rm cm}^{2}\) for \(M_{\chi}=30\) GeV at 90% CL. The detailed computation of the DD exclusion limit on our ADM model parameters, as well as the relevant RG running and mixing of the dimension-six ADM-quark interaction operators, is well beyond the scope of the present work. Instead, we apply the estimated LZ constraint of Ref. [68], which presents lower bounds on the heavy mediator mass \(m_{V}\) as a function of the DM mass \(M_{\chi}\) in the lower left panel of Fig. 12. However, several differences between the model assumptions behind Fig. 1 of Ref. [68] and our present ADM setup must be taken into account. Firstly, the DM DD limit on the DM-quark operators of Eq. (42) in Sec. 4.1 of Ref. [68] was placed on the case in which all quark flavors share the same coupling, _i.e._, \(m_{V_{q}}=m_{V}\). In contrast, in our model each quark flavor has its own coupling \(1/m_{V_{q}}^{2}\), determined by the quark electric charge \(Q_{q}\) and mass \(m_{q}\) as expressed in Eq. (43). Thus, in general, we cannot apply the result in Fig. 1 of Ref. [68] directly to our ADM model. Nevertheless, since we are only interested in an order-of-magnitude estimate of the DM DD bounds, and the variations in quark electric charges and masses give rise to only \({\cal O}(1)\) corrections to the DD constraints (as illustrated by the similar result for the third-generation quark-DM coupling in the lower-left plot of Fig. 5 of Ref. [68]), we can simplify our analysis of the effective operators of Eq. (42) by approximating in Eq. (43) all quark charges to be \(Q_{q}=2/3\) and the logarithmic factor to be \(\ln\Lambda^{2}/m_{q}^{2}\sim 5\). A further complication comes from
the fact that the lower bounds on \(m_{V}\) in Ref. [68] were estimated by assuming an experimental exposure of \(\omega=5600\) ton yr [76], whereas the first LZ search results were based on 60 live-days of data with a 5.5 ton fiducial xenon mass [74], corresponding to an exposure of \(\omega_{0}=0.93\) ton yr. One can account for this by simply rescaling the original lower bounds as \(m_{V}^{\rm LZ\,0}=m_{V}^{\rm LZ}(\omega_{0}/\omega)^{1/4}\), since the reduced LZ exposure weakens the DD upper limits on the DM-nucleon cross section by a factor of \(\omega/\omega_{0}\). Finally, the DM DD bounds in Fig. 1 of Ref. [68] are given only for DM masses above 10 GeV, while real LZ or other xenon-based DD experiments can extend their sensitivity to even lower DM masses. Here we approximate the DM DD constraints for \(M_{\chi}\) below 10 GeV by extrapolating the exclusion limits on \(m_{V}^{\rm LZ\,0}\) to lower DM mass regions. With the above treatments and approximations, we obtain the estimated DM DD constraints on the ADM model, which will be imposed on the parameter space in the following numerical analysis.
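For orientation, a back-of-the-envelope evaluation of these approximations (our own estimate, taking \(\alpha\simeq 1/137\); not a number quoted in Ref. [68] or Ref. [74]) gives
\[\frac{1}{m_{V}^{2}}\approx\frac{3\alpha}{8\pi}\left(\frac{2}{3}\right)^{2}\times 5\times\frac{\beta}{M^{2}}\approx 1.9\times 10^{-3}\,\frac{\beta}{M^{2}}\,,\qquad\left(\frac{\omega_{0}}{\omega}\right)^{1/4}=\left(\frac{0.93}{5600}\right)^{1/4}\approx 0.11\,,\]
so the effective mediator mass entering the DD limits is roughly \(m_{V}\approx 23\,M/\sqrt{\beta}\), and the exposure rescaling weakens the quoted \(m_{V}^{\rm LZ}\) lower bounds by roughly an order of magnitude.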
### Benchmark Scenarios
Rather than scanning over the whole parameter space of the ADM model, we numerically explore two specific benchmark cases with ADM masses of \(M_{\chi}=5\) GeV and 5 keV. The former is motivated by the possible explanation of the measured DM-baryon mass density ratio in our Universe, whereas the latter would provide a potential warm DM candidate [47], which could solve the so-called "Missing Satellites Problem" [48].
In Fig. 4, we illustrate the numerical results for the ADM mass \(M_{\chi}=5\) GeV. The solid blue band represents the parameter region explaining the cosmic birefringence observed by _Planck_, while the yellow shaded area has been excluded by the LZ DD constraints. The red line denotes the ADM mass of \(M_{\chi}=5\) GeV. It is shown that, for a GeV-scale ADM particle, the region favored by the cosmic birefringence has been totally ruled out by current DM DD experiments. As mentioned in subsection IV.2, the CMB constraint in Eq. (41) is so weak that we do not show it here.
On the other hand, when we lower the ADM mass to the warm DM region, _e.g._, \(M_{\chi}=5\) keV, the interpretation of the cosmic birefringence signal in the _Planck_ PR4 data requires the effective coupling of \(\mathcal{L}_{\rm CS}\) to be around \(\beta/M^{2}=1/(1.77\) TeV\()^{2}\) based on Eq. (32). Note that the stringent DM DD constraint from the LZ experiment cannot be applied to the
present case since the DD technique loses its sensitivity in such a low DM mass region. In addition, the detection of CMB spectral distortions from the FIRAS data [77] could provide an even stronger upper bound on the ADM-photon scattering cross section [78]
\[\sigma^{0}_{\chi\gamma}\lesssim 2\times 10^{-37}{\rm cm}^{2}\left(\frac{M_{ \chi}}{{\rm MeV}}\right)\left(\frac{E_{0}}{E_{\gamma}}\right)^{2}\,, \tag{44}\]
for \(M_{\chi}\) in the range of about 1 keV to 100 keV, where \(\sigma^{0}_{\chi\gamma}\) denotes the cross section at the photon energy \(E_{0}=1\) keV. Using Eq. (34) with \(E_{\gamma}=p_{1\,{\rm cm}}\), the above constraints can be expressed by our ADM model parameters as follows
\[\frac{\beta}{M^{2}}\lesssim\left(\frac{M_{\chi}}{5\,{\rm keV}}\right)^{1/2}\left(\frac{1}{1.2\times 10^{-4}\,{\rm TeV}}\right)^{2}\,. \tag{45}\]
Obviously, such a constraint is still far too weak compared with the coupling required by the measured cosmic birefringence angle. Finally, the ADM mass of \(M_{\chi}=5\) keV is well above
Figure 4: The parameter space in the \(\log_{10}(M_{\chi}/{\rm GeV})\)-\(\log_{10}(\beta/M^{2}/({\rm GeV})^{-2})\) plane. The solid blue band illustrates the parameter region favored by the cosmic birefringence signal observed by _Planck_, while the yellow area has been excluded by the LZ experiment. The vertical red line denotes the ADM mass \(M_{\chi}=5\) GeV, which is preferred by the observed DM-baryon mass ratio in our Universe.
the lowest DM mass bound \(M_{\chi}\gtrsim 1\) keV derived from phase space density considerations in Ref. [79]. Note that there are other stringent constraints from observations of the Lyman-\(\alpha\) forest [80] and the matter power spectrum [81; 82; 83], which can also limit the DM mass and the ADM-photon interaction in Eq. (1). However, such constraints are rather indirect and involve many uncertainties from nonlinear matter evolution. Therefore, in the present work, we do not consider their impacts on our model parameters.
## V Conclusions
Motivated by the recent measurement of a potentially nonzero cosmic birefringence angle in the _Planck_ PR4 data, we have considered its possible origin from the CS-like coupling \(\mathcal{L}_{\rm CS}\) of photons to fermionic currents, which was previously studied in Ref. [39]. We have first revisited the derivation of the general formula for the cosmic birefringence angle \(\Delta\alpha\) induced by this photon-fermion effective operator, correcting a mistake in the corresponding expression in Ref. [39]. We have then identified the fermion in the current as either the left-handed electron neutrino or the DM, and discussed their respective phenomenology. For the electron neutrino case, with the updated value of the degeneracy parameter \(\xi_{\nu_{e}}\) from the EMPRESS survey [54], the explanation of the birefringence requires \(\beta/M^{2}\simeq 1/(7.9\text{ TeV})^{2}\). On the other hand, if the current is assumed to be composed of fermionic DM particles, the birefringence angle should be proportional to the abundance difference between DM and anti-DM particles. For simplicity, we have further assumed that only DM particles are left in the present-day universe; in other words, we need to consider the ADM model. We have also explored the experimental constraints from the CMB power spectra and DM direct searches. As a result, the CMB limit on the ADM model is too weak to be relevant in constraining the model parameter space, while the DM DD bounds from the LZ Collaboration have totally excluded the parameter region for \(M_{\chi}\sim 5\) GeV, which is the natural ADM mass value needed to explain the observed cosmic DM-baryon mass density ratio. In contrast, for the warm DM case with \(M_{\chi}\sim 5\) keV, the measured value of \(\Delta\alpha\) from the _Planck_ PR4 data restricts the effective coupling to be around \(\beta/M^{2}\simeq 1/(1.77\,\text{TeV})^{2}\), which is shown to satisfy all present experimental constraints.
###### Acknowledgements.
This work is supported in part by the National Natural Science Foundation of China (NSFC) (Grant No. 12005254 and No. 12147103) and the National Key Research and Development Program of China (Grant No. 2021YFC2203003 and No. 2020YFC2201501).
|
2303.11269 | Method Chaining Redux: An Empirical Study of Method Chaining in Java,
Kotlin, and Python | There are possible benefits and drawbacks to chaining methods together, as is
often done in fluent APIs. A prior study investigated how Java developers chain
methods in over 2.7k open-source projects. That study observed, for the dataset
analyzed, that the use of method chaining in Java is popular and seems to be
increasing over time. That study however was limited to a smaller sample of
Java projects, and it is also not clear if the results generalize to other
languages. In this work, we first replicate the prior results by building a
similar dataset and our own analysis scripts. We then extend those results by
analyzing a much larger dataset of 89k Java projects and generalizing to other
programming languages by analyzing 26k Kotlin projects and 98k Python projects.
The results show chaining is more popular in Java and Kotlin than Python,
chaining use in Kotlin is not growing, and Python sees more use in non-testing
code. | Ali M. Keshk, Robert Dyer | 2023-03-20T16:54:05Z | http://arxiv.org/abs/2303.11269v1 | # Method Chaining Redux: An Empirical Study of Method Chaining in Java, Kotlin, and Python
###### Abstract
There are possible benefits and drawbacks to chaining methods together, as is often done in fluent APIs. A prior study investigated how Java developers chain methods in over 2.7k open-source projects. That study observed, for the dataset analyzed, that the use of method chaining in Java is popular and seems to be increasing over time. That study however was limited to a smaller sample of Java projects, and it is also not clear if the results generalize to other languages. In this work, we first replicate the prior results by building a similar dataset and our own analysis scripts. We then extend those results by analyzing a much larger dataset of 89k Java projects and generalizing to other programming languages by analyzing 26k Kotlin projects and 98k Python projects. The results show chaining is more popular in Java and Kotlin than Python, chaining use in Kotlin is not growing, and Python sees more use in non-testing code.
method chaining, empirical study, replication, Java, Kotlin, Python
## I Introduction
Most object-oriented languages support _chaining_ method calls together, to avoid the need to temporarily store a returned object the programmer intends to use only immediately as the receiver for the next method call. Any number of method calls can be chained together, and the receiver object can either be the same for every call or different (if one call returns another object). For example, the following code builds a JSON string by chaining several methods together and then calling toString():
```
String json = new JSONObject()
        .put("Conference", "MSR")
        .put("Year", "2023")
        .toString();
```
Method chaining provides several potential benefits, including: eliminating the need to store temporary returned objects [8], improved readability of internal domain-specific languages (DSLs) [8], the ability to more easily skip optional arguments in methods with many arguments [26], and DSLs that build expressions reading more naturally from left to right [8, 19]. Method chaining is also commonly incorporated in fluent APIs [9], and is often used in design patterns such as the builder pattern [10] (shown above).
Despite the apparent usefulness of the approach, there has been a lot of discussion about it. Some people claim the use of method chaining is generally a bad practice [13], possibly causing readability/comprehension problems [3, 14, 27], making it more difficult to debug in some debuggers, or even breaking the Law of Demeter [3, 15]. Others claim method chaining leads to maintenance issues [4].
Knowledge of how developers use method chaining could benefit the research community by providing crucial evidence to lay the groundwork for future studies. For example, knowing if developers employ method chaining could help guide researchers interested in the reasons to use chaining. Does the type system (static vs. dynamic) play a role? Or maybe the language does not provide fluent style APIs?
Thus, knowing if method chaining is utilized by the programming community or not could be important for language designers. Such knowledge could help guide future language and API/library designs. If method chaining is popular in a language, the maintainers could optimize their compiler or virtual machine to account for method chains.
This study could also directly benefit practitioners, especially those who write APIs, as they could be made aware of the language(s) in which users seem to employ method chaining. E.g., if someone is writing a new Python API, they might consider a non-fluent design if Python developers shy away from chaining, whereas a Kotlin API designer might opt for a more fluent design if Kotlin developers utilize chaining.
Since it was not entirely clear if the programming community accepts the idea of method chaining or not, Nakamaru _et al._[21] studied how method chaining was (or was not) adopted in the Java programming language. They performed an empirical study on over 2.7k popular open-source Java projects from GitHub and looked at how often method chains were used. Their results indicated that method chaining is relatively popular among Java projects and that the use of method chaining was increasing over time. They also observed method chaining was more popular in testing files (vs non-testing files), and they proposed a set of language enhancements to Java to encourage additional uses of method chaining. But their study focused only on a single programming language.
For example, one of the perceived benefits of method chaining is the ability to more easily skip over optional arguments of a method with many arguments. In Java, for example, if a method takes 10 arguments but many are optional, the API designer has to either provide a large number of similar-looking overloads or fall back to a fluent API design. Here, the fluent design using method chains would be more flexible and (most likely) easier to comprehend.
But this is only a benefit in certain languages. For example, Kotlin and Python allow specifying default values for arguments and calling methods with named/keyword arguments and so do not have the same difficulty with skipping optional arguments as one might have in a language like Java. Thus it is important to understand how the use of method chaining might differ in other languages that provide additional features that might discourage the need to chain methods.
In this work, we first investigate if it is possible to replicate the results of Nakamaru _et al._[21] by building a similar dataset and writing our own analysis scripts. We built a dataset containing projects from their dataset (but cloned several years later) and discovered a few small inconsistencies when compared to their results. We confirmed with the original authors these were bugs in the analysis and our results indicate those bugs did not change the overall previous results.
We then extend the study to a much larger dataset of Java projects, with 89k (35x more) projects, to see if the results generalize to a larger population. Our results show they do: we observe similar trends in the larger dataset as the original study observed in their smaller sample. What was not clear was how the results generalize to other languages.
To investigate other languages, we also used datasets with 26k Kotlin projects and 98k Python projects. We chose Kotlin as it is one of the top-ten statically-typed languages [29], the default language for Android, and designed to interoperate well with Java. Thus we wondered if the trends might be similar to Java, despite any language differences, as many of the developers are also Java developers. We chose Python as it is a popular scripting language with some syntax and style differences like enforcing whitespace (developers need to either enclose the whole chain in parentheses or end each line with a backslash) that could affect the use of method chaining. Thus we suspected Python developers might behave differently compared to Java/Kotlin developers.
The results show chaining is more popular in Java and Kotlin than in Python, its use is not growing in Kotlin, and Python sees more chaining in non-testing than in testing code. Given the prevalence of chaining in Java and Kotlin, practitioners need to be made aware of what method chaining is, how best to utilize it, and what design patterns are built on top of it. Even if they themselves are not writing code using method chains, they are very likely to stumble across such code. We need to ensure new developers are properly trained and there is sufficient documentation to support them. Conversely, it seems Python developers could possibly avoid these issues, as chaining is almost three times less prevalent. These results also give evidence that API developers can feel comfortable utilizing fluent designs when targeting Java or Kotlin, while Python library developers may want to avoid a fluent design.
In the next section, we give background information on method chains and the prior study. In Section III, we discuss this study's research questions. The approach is overviewed in Section IV and results provided in Section V. In Section VI, we discuss threats to the validity of the study. Related work is discussed in Section VII, and we conclude in Section VIII.
## II Background
In this section we give background defining what a method chain is along with some clarifying examples. Then, we summarize the prior study and its main findings.
### _Method Chains and Chain Length_
A **method invocation** issues a call to a method and requires a receiver object, hence constructor calls or super() calls inside a constructor are not method chains. Similar to Nakamaru _et al._[21] (that we also refer to as the "original study" throughout this paper), we define a **method chain** as "a sequence of one or more method invocations joined by the "." symbol" [21] and also define the **length** of a method chain "as the number of [method] invocations in the sequence" [21].
In Figure 1 we show some example method chains in Java. The first two examples simply show that, despite their appearance, constructor calls (line 1) and calls to super constructors (line 2) are not method chains. The examples on lines 4-7 all show single method invocations (which we call chain length 1), on varying receivers. The example on line 9 is the first example of what most call a method chain, with length 2. Some trickier cases are shown in lines 11-12 where there can be more than one chain, if they are nested in the arguments of a call or separated by a field access.
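To make the definition concrete without Figure 1 at hand, the runnable Python snippet below gives analogous examples and their lengths; this illustration is ours (the figure's originals are written in Java).

```python
# Chain lengths under the definition above: only invocations with a receiver
# count, and a nested call or an intervening field access starts a new chain.
record = {"name": "  msr 2023  "}

record.get("name")                          # length 1: a single method invocation
record.get("name").strip()                  # length 2: two invocations joined by "."
record.get("name").strip().upper().split()  # length 4
print(record.get("name").strip())           # print() has no receiver; its argument holds a chain of length 2
dict()                                      # a plain constructor call is not a method chain
```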
### _Prior Study_
The original study looked at popular Java projects from GitHub, discovered in Nov/Dec of 2019. In total, they analyzed 2,756 Java projects and focused on the years 2010-2018, as they wanted full years of data (so they could not keep 2019) and they wanted enough files/projects, so started with 2010. They then took repository snapshots for each year studied.
\[f_{n}=\frac{m_{n}}{m_{1}}\qquad\text{(1)}\qquad\qquad r=\frac{\sum_{n\geq 2 }n\cdot m_{n}}{\sum_{n\geq 1}n\cdot m_{n}} \tag{2}\]
Their analysis relied on computing two metrics: \(f_{n}\) and \(r\). \(f_{n}\) (Equation 1) is the number of chains of length \(n\) over the number of not-chained method invocations (aka length 1) [21], where \(m_{n}\) is the number of chains of length \(n\). \(f_{n}\) is not an average, but instead is computed for each file in the dataset, on a per-year basis.
Fig. 1: Example method chains in Java and their lengths
The second metric they computed were the \(r\) values (Equation 2), the ratio of all chained method invocations to all method invocations (chained or not) [21], where this ratio is computed either per-project or over all files in the whole dataset, on a per-year basis.
The third metric was \(U_{n}\): the ratio of projects containing at least one chain whose length is longer than or equal to \(n\)[21].
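Read literally, the three metrics can be computed from a tally \(m_n\) of chain lengths as in the following Python sketch (our own illustration, not the analysis scripts of either study):

```python
from collections import Counter

def f_values(m: Counter) -> dict:
    """Eq. (1): f_n = m_n / m_1 for every observed length n (requires m_1 > 0)."""
    return {n: cnt / m[1] for n, cnt in sorted(m.items())} if m.get(1) else {}

def r_value(m: Counter) -> float:
    """Eq. (2): chained invocations (length >= 2) over all invocations."""
    total = sum(n * cnt for n, cnt in m.items() if n >= 1)
    chained = sum(n * cnt for n, cnt in m.items() if n >= 2)
    return chained / total if total else 0.0

def u_value(per_project: list, n: int) -> float:
    """U_n: fraction of projects with at least one chain of length >= n."""
    return sum(any(k >= n for k in m) for m in per_project) / len(per_project)

# A file with 90 plain method calls, 8 chains of length 2 and 2 of length 5:
m = Counter({1: 90, 2: 8, 5: 2})
print(f_values(m)[2], round(r_value(m), 3))   # 0.0888..., 0.224
```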
At a high level, their results showed the use of method chaining in Java increased from 2010 to 2018. The ratio of chained method invocations to all method invocations rose from 16.0% in 2010 to 23.1% in 2018. They also found that over half of the Java projects contained chains of length \(n\geq 8\), and less than 5% of projects contained chains of length \(n\geq 42\). When looking at extra-long chains, the three most common libraries were: Elasticsearch, Guava, and the Java standard library. They concluded that method chaining is most likely an accepted practice in Java due to the observed high and increasing number of uses.
## III Research Questions
Here we outline the study's research questions.
1. **Can we replicate the results of Nakamaru** _et al._**[**21**] on a similar dataset?** The prior study analyzed over 2.7k Java projects. We want to know if it is possible to replicate their results independently with our own analysis scripts and dataset (with as close to the same set of projects as possible).
2. **Do the observations and trends of Nakamaru** _et al._**[**21**] still hold when analyzing a much larger set of Java projects?** While the prior study looked at over 2.7k Java projects, do those trends still hold for a larger dataset with 89k Java projects (30x larger)?
3. **Do Kotlin programmers use method chaining in a way similar to Java programmers?** To see if the trends observed for Java generalize to other languages, we first look at another JVM-based language: Kotlin.
4. **Do Python programmers use method chaining in a way similar to Java or Kotlin programmers?** To see if the trends observed for Java or Kotlin generalize to other, non-JVM based, languages, we next look at Python-a popular scripting language.
5. **Can we support the language extensions proposed by Nakamaru** _et al._**[**21**]?** Their study proposed extensions to the Java language to better encourage and support the use of method chains. Are we able to support those recommendations with our larger and more diverse datasets?
In the next section, we discuss our approach to investigate each of these research questions.
## IV Approach
In this section we outline the approach used to answer our research questions. First we discuss the data used. Then we discuss how we query that data to find method chains. Finally, we discuss the methodology used to analyze the query results.
### _Datasets_
For each research question, we either built a new Boa [6, 25] dataset or used one of the existing datasets. In total we used 4 different Boa datasets. All datasets were built from public GitHub repositories marked as non-forks.
All repositories were located and cloned during the summer of 2021, thus 2020 is the last full year of data analyzed. Similar to the prior study, we use a starting year based on the dataset having at least 250 projects. An overview of the datasets is shown in Table I.
Duplicate files across projects, if they exist, are filtered out, retaining one of each duplicate set. Leaving duplicates in could bias the results, as any file(s) that are highly duplicated would contribute their method chains multiple times. Similar to Lopes _et al._[16], we filter duplicates by collecting the hash of each file's AST, which ignores whitespace and comment differences between files, and keeping one copy of each unique hash. The total number of duplicate files identified is shown in the last row of Table I. Note that for Java this is around 30% of the files, but that result matches prior studies of duplication in Java datasets [1, 16].
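The deduplication step can be sketched for Python sources with the standard library alone (our illustration; the study performs the equivalent AST hashing inside Boa for all three languages):

```python
import ast
import hashlib

def ast_fingerprint(source: str) -> str:
    # ast.parse() drops comments and ast.dump() ignores layout, so files that
    # differ only in whitespace or comments receive the same fingerprint.
    return hashlib.sha256(ast.dump(ast.parse(source)).encode()).hexdigest()

def deduplicate(files: dict) -> dict:
    """Keep one representative path per unique AST fingerprint."""
    kept, seen = {}, set()
    for path, src in files.items():
        fp = ast_fingerprint(src)
        if fp not in seen:
            seen.add(fp)
            kept[path] = src
    return kept

print(len(deduplicate({"a.py": "x = 1+1", "b.py": "x = 1 + 1  # same AST"})))  # 1
```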
The **Java Original** dataset ("2021 Method Chains" in Boa) was built by first collecting the list of all projects from the original study's data [22]. We then attempted to clone each from GitHub. During that process we determined some were forks, which we excluded. Renamed projects were cloned using their new names. There were also 19 projects not available on GitHub. We attempted to locate those on the Software Heritage Archive [5] and found 13. This gave us 2,659/2,756 (96.48%) of the projects from the original study.
The original study chose the projects based on being in the daily top-1000 most starred projects in a small time window in 2019. Thus, the projects actually have star counts ranging from 1 to over 100k. Since we want a direct comparison to the prior paper, we do not filter this dataset.
The **Java** dataset ("2022 Jan/Java" in Boa) contains projects indicating Java as their primary language (highest percentage of code is Java) on GitHub, sorted by star count before cloning. Cloning stopped when we ran out of space. Note that while we did not set any threshold, since the cloning was sorted based on star counts, all Java projects have at least 10 stars.
The **Kotlin** dataset ("2021 Aug/Kotlin") contains projects indicating Kotlin as primary language. At the time of crawling, this represented almost every Kotlin project on GitHub. Since the other three datasets were using a notion of popularity, we filtered this dataset based on star counts and kept projects with
at least 2 stars (to avoid projects with only self-stars). We considered using a higher cutoff such as 10 or 8 (to mirror the Java or Python datasets), but decided against it as filtering with these higher thresholds leads to a substantially smaller dataset. We opted to keep the dataset size the same magnitude, to avoid making the datasets imbalanced.
The **Python** dataset ("2022 Feb/Python") was built with projects indicating Python as primary language and then sorting based on star counts. Again, note that while we did not set any threshold, since the cloning was sorted based on star counts, all Python projects have at least 8 stars.
### _Finding Method Chains_
To mine method chains from the datasets, we needed to write several Boa queries. Figure 2 shows a helper function chainlen that, when given a method call expression, returns the length of that method chain.
Method chains in Boa are represented as nested AST nodes, meaning the tree root is the last call in the chain, and as you traverse down the tree you find the earlier call(s) in the chain.
Figure 3 shows the main query. For every expression found, a sub-visit is needed to locate the chain(s). This is because there could be a path in the AST with several method calls, but with other expressions following the dot operator, such as a field access. We also don't want to locate a chain and then move one call down the AST and accidentally report another (sub)chain. This is handled with the curlen counter, where 0 means we found a new chain.
The accuracy of the method-chain-locating queries was verified using manually developed test cases. This was especially important to verify in Python, where chains that span multiple lines require either backslashes or that the entire chain be enclosed in parentheses.1
Footnote 1: [https://stackoverflow.com/questions/48863091/pep8-chained-methods](https://stackoverflow.com/questions/48863091/pep8-chained-methods)
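Although the study's chain-locating logic is written as Boa queries (Figures 2 and 3), the same nested-representation idea can be sketched with Python's standard ast module, which also parses the parenthesised multi-line chains mentioned above. This sketch is ours and is not the query the paper ran:

```python
import ast
from collections import Counter

def chain_len(call: ast.Call) -> int:
    """Walk down the receivers: in x.m1().m2(), the call to m1() nests inside m2()."""
    length, node = 0, call
    while isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
        length += 1
        node = node.func.value              # receiver expression of this call
    return length                           # 0 for receiver-less calls such as foo()

def chain_lengths(source: str) -> Counter:
    tree = ast.parse(source)
    inner = set()                           # calls that are receivers inside a longer chain
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute) \
                and isinstance(node.func.value, ast.Call):
            inner.add(id(node.func.value))
    counts = Counter()
    for node in ast.walk(tree):             # count only maximal chains (like curlen == 0 in Boa)
        if isinstance(node, ast.Call) and id(node) not in inner:
            n = chain_len(node)
            if n >= 1:
                counts[n] += 1
    return counts

src = '''
text = (" msr 2023 "
        .strip()
        .upper()
        .replace(" ", "-"))
'''
print(chain_lengths(src))                   # Counter({3: 1})
```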
### _Analysis Approach_
We analyze the data with several Python scripts and utilize the Pandas library. We first convert the text output from Boa into a CSV format, then import into Pandas for further processing. File deduplication occurs after loading from CSV.
The scripts process each dataset and generate \(f_{n}\) and \(r\) values using equations 1 and 2. Similar to the prior study, we then scatter plot the \(f_{n}\) values for the first/last years for each dataset to visualize how the frequency of method chaining for different chain lengths has changed over time. We also show a bar plot of the overall \(r\) values over that time range, to see if there is an increasing or decreasing trend.
Similar to the original study, we then investigate the distribution of varying lengths of method chains, categorizing each chain as "short" (less than or equal to \(n\), where \(U_{n}\) is closest to 50%), "long" (where \(U_{n}\) is closest to 5%), or "extra long" (where \(U_{n}\) is less than 5%). The categories for the oldest year are used to compare the distribution of chain lengths across all years for each dataset.
Finally, similar to the original study, we look at testing vs non-testing code as our experience with Java code tells us testing code often looks different. We suspect testing behavior across languages may vary as well. Observing any differences also helps to understand if method chaining supports the specific goals of testing code. For this, we mark each file as a test based on the lowercase path containing a sub-string "test" or if the file imports one of the top testing libraries/modules. Here we present the results as both scatter plots of their \(f_{n}\) values and bar plots of their \(r\) values over time, so we can observe if the trends changed.
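A minimal version of this classification rule is shown below; the module list is purely illustrative, since the paper does not enumerate the exact testing libraries it matches on:

```python
# Hypothetical testing-framework list for illustration; the study's list may differ.
TESTING_MODULES = {"unittest", "pytest", "org.junit", "kotlin.test"}

def is_test_file(path: str, imports: set) -> bool:
    return "test" in path.lower() or bool(imports & TESTING_MODULES)

print(is_test_file("src/main/Parser.java", {"java.util"}))   # False
print(is_test_file("src/Test/ParserTest.java", set()))       # True (path match)
print(is_test_file("checks.py", {"pytest"}))                 # True (import match)
```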
## V Results
In this section we detail the results of our empirical study.
### _RQ1: Can we replicate prior results on a similar dataset?_
First, we wanted to verify the analysis scripts we created to generate tables/charts (similar to the original paper) worked as expected. To verify them, we used the original data files [22]
Fig. 3: Boa query snippet to locate method chains
Fig. 2: Boa functions to measure method chain lengths
provided by the prior study [21] and processing scripts they provided directly to us. We were able to successfully use their scripts and data to reproduce the results in their paper.
```
Finding 1: We were able to reproduce the prior paper's results using their data and scripts.
```
We then converted their data.txt results file by loading it into a Pandas DataFrame that our analysis scripts operate on. The results showed that our analysis scripts were able to correctly reproduce the graphs from their Figures 3, 4, 5, and 6 and their Table 1. Thus we feel confident our analysis scripts were written correctly.
During this process we did identify a single anomalous result in the test vs non-test scatter plot. After investigating, we determined that while we searched for "test" in the file path in a case-insensitive manner, they appeared to look only for lowercase matches and thus missed a single data point. However, when they calculated the percentage of extra-long chains in testing code out of all extra-long chains, they performed a case-insensitive search.
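The discrepancy comes down to the case sensitivity of the substring check, e.g. (our illustration of the two checks):

```python
path = "src/Test/FooTest.java"
print("test" in path)           # False - a lowercase-only match misses this path
print("test" in path.lower())   # True  - the case-insensitive check we applied
```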
Next, we aimed to replicate the prior study by using a new (but similar) dataset we built, which we call Java Original. This dataset was built using the same set of projects from the original study, but cloned at a different point in time (thus they are not 100% identical). We then used Boa queries to mine our dataset and identify method chains. We hoped the results would be very close to the ones observed in the prior steps, indicating the Boa queries correctly mined the method chains.
However we noticed quite a few differences between these results and the chains provided by the previous study. After quite a bit of manual analysis, we were able to determine the Boa queries were identifying a lot of chains that were not included in the original paper's data file. We were able to identify a few common patterns among the missed data and then confirmed with the original authors that there were some bugs in their script used to mine the method chains (that script is not provided in the replication package).
We also identified some inconsistencies in the reported number of projects, as while 2,814 projects were reported, only 2,756 had Java source files in the date range studied. Additionally, 21 of those projects were unintentionally kept despite being forks of other projects in the dataset (and thus, exact duplicates). We also identified 10,824 files that were duplicated across (non-forked) projects that were unintentionally included in their data. Finally, it seems like the method used to snapshot projects by year may have had some issue, as we were able to identify a file with method chains that was deleted in 2015, modified in 2014, but the mined data contains no chains past 2011 for some reason.
We were able to communicate with the lead author and confirm most of the problems identified [20]. Interestingly, since some of the errors resulted in fewer chains found and others resulted in more chains (from dupes), the total number of chains found was almost the same: 152,161,181 from them vs 152,783,246 from us. The analysis on the data also resulted in the same trends with very minor variations (we do not show them here - but full results are in our replication package).
```
Finding 2: Despite identifying several irregularities, we independently replicated the prior study's main results, showing that method chaining is (increasingly) popular in Java code.
```
We conclude by answering the research question: we were able to successfully replicate the prior study's results.
### _RQ2: Do the prior study's observations and trends still hold when analyzing a much larger set of Java projects?_
Since we were able to replicate the prior study's results, we wanted to first see if those results generalize to a larger set of Java projects. Figure 4 shows the \(f_{n}\) and \(r\) values over the total dataset for the years 2003-2020. Note the scatter plot uses log-scale for both axes. Similar to the results for the Java Original dataset (not shown), these results indicate increasing use of method chains in Java. This observation is further confirmed when viewing the histogram and quartiles plot in Figure 5. From 2003-2020, all three quartiles increase by 3-10%.
We do however observe some minor differences from the Java Original dataset. When looking at method chaining in testing code, Figure 6 shows the scatter plots of the \(f_{n}\) values, with the Java dataset on the right. Although the years differ (first/last year of each dataset), the non-testing plots were quite similar, but the testing code shows different behavior for the year 2003. This was confirmed by the \(r\) plots shown in Figure 7.
It appears that in older Java projects, there was more method chain use happening in non-testing than in testing code, a pattern that
Fig. 5: Distribution and trend of \(r\) values per project (Java)
was not observed in the original study, possibly because their data started in 2010 when the trends flipped. This seems to indicate the growth in the use of method chains in testing code is more pronounced than previously thought.
In Table II we group method chain uses into categories based on lengths, from short, to long, to extra long. Here we also observe differences with the prior results. Compared to the prior study, our larger dataset has a higher percentage of long and extra long chains.
When observing the ratios shown in Figure 8, where \(U_{n}\) is the ratio of repositories containing at least one chain of length \(n\), we observed some very large differences. For most values of \(n\), the ratios shown here are about 2-4x smaller than the ratios from the prior study. We do however observe increasing ratios from 2003-2020, similar to the prior study.
Finally, we looked at what some of the more popular libraries are that produced the extra long chains. First, we wrote a Boa query that attempts to infer the static type of every identifier in the code as well as the static type returned from method calls. We then ran a query to find what type the first method call in each extra long chain was and then grouped them by library. The results are shown in Table III.
The prior study had a small sample to deal with (only 71 extra long chains) while we analyzed almost 13k. Given that size difference, it is good to see all three of their most commonly identified libraries are in our results as well. In addition to Guava and the Java standard library, we also identified the Spring framework in our top 5 list.
We conclude by answering the research question: the prior study's observations and trends still hold on a larger dataset.
### _RQ3: Do Kotlin programmers use method chaining in a way similar to Java programmers?_
So far we have only investigated how method chaining was used in Java projects. Our next two research questions try to see if the trends are similar for other programming languages. First, we look at a JVM-based language called Kotlin that is Android's preferred programming language. Kotlin was designed to interoperate with Java, and many Kotlin projects actually contain both Kotlin and Java source files. It is thus
Fig. 8: Ratio of projects containing chains longer than or equal to \(n\) (Java)
Fig. 6: \(f_{n}\) of testing code (Java Original left, Java right)
Fig. 7: Comparing \(r\) values for non-testing vs. testing code (Java Original left, Java right)
important to know the distribution of those files over time, which we show in Figure 9.
As can be seen, the number of Kotlin files increases rapidly across the years, while the number of Java files starts increasing and then around 2018 starts decreasing. We suspect this is due to people becoming more comfortable with the Kotlin language and writing less and less code in Java over time, as 82% of the projects are Android projects (where the default language used to be Java and is now Kotlin). It is also worth mentioning that 37 projects switched their default language from Java to Kotlin.
Figure 10 shows the \(r\) values for the Kotlin dataset, broken down by Java files on the left and Kotlin files on the right. Both actually show similar trends in that they start increasing, then decrease in the later years. Both are also similar in terms of magnitude, meaning that at least within a Kotlin project people seem to utilize method chaining about equally in both languages. This is a bit surprising, given some of the additional language features Kotlin has that might avoid the need to chain. We investigate some language support later in Section V-E.
When observing how the \(r\) value quartiles changed in Kotlin projects, shown in Table IV, we can see differing trends. First, for Java source files we observe the first quartile going down and the third going up. This means that in 2020 there is more variance (wider spread of quartiles) of method chain use within Java files. We do not observe that for Kotlin files, where both the first and third quartiles show an overall increasing trend.
When looking at the ratios shown in Table V, we observe a very interesting result. First, for Kotlin files (on the right) we note that while the 2020 ratios are smaller but similar to the results we saw for the Java dataset, the 2014 ratios are much lower than the Java dataset's 2003 ratios. This implies that Kotlin projects adopted method chaining much more recently, compared to Java projects that adopted it early on.
Second, we note the behavior of the Java files (on the left) in the Kotlin projects that seem to be exhibiting the opposite trend as the Kotlin files. We suspect this might be due to some method chains moving from Java files over to their Kotlin replacements as people slowly replace existing Java code with newer Kotlin equivalents.
When looking at the method chains grouped as shown in Table VI, we see that Kotlin appears to have shorter overall method chains when compared to Java. However, it appears that Kotlin has a higher ratio of long/extra long chains out of the three categories compared to Java. For example, in 2020, 9.28% of chains are long chains in Kotlin while only 3.51% of chains are long chains in Java, and 0.75% of chains are extra long chains in Kotlin files while only 0.03% are in Java.
Next we look at the distribution of chains in testing vs non-testing code. Figure 11 shows the \(r\) values per year for testing
Fig. 10: Kotlin project \(r\) values (Java left, Kotlin right)
Fig. 9: Kotlin project file counts (Java left, Kotlin right)
vs. non-testing code with Java files on the left and Kotlin files on the right. For both languages, it appears starting from 2015, testing code has more method chains than non-testing code. For Kotlin files, 2015 appears to be the point where the majority of method chains switched from being in non-testing code to testing code.
When we observe the scatter plots for the Java files, shown in Figure 12, we see that despite there being more overall chains in the testing code it appears that the lengths of those method chains are shorter than the method chains in non-testing code.
However, when looking at the \(f_{n}\) values for Kotlin files shown in Figure 13, both testing and non-testing code have similar chain lengths, and both have significantly more extra long chains in 2020 than in 2014.
When looking at the scatter plots for the Kotlin files, shown in Figure 13, there is a gap in the right tail of the testing code plot, suggesting that the longest chain lengths in 2020 are between 50 and 100, with a few outliers greater than 100, while the chain lengths for non-testing code, for the most part, steadily increase to around 100. The fact that the \(f_{n}\) values are generally lower for the non-testing scatter plot indicates that testing code has a higher ratio of chain lengths greater than 1 to chain lengths equal to 1. The scarcity of dots for 2014 in the testing scatter plot suggests that many chains in testing code were of certain repeated lengths.
We conclude by answering the research question: Like Java, Kotlin sees more method chaining in testing code than in non-testing code. Unlike Java, however, the use of method chaining in Kotlin is not increasing, but remaining relatively constant. Therefore, we can say that developers use method chaining in Java and Kotlin for similar purposes, but method chaining is not becoming more popular in Kotlin, as it is in Java.
### _RQ4: Do Python programmers use method chaining in a way similar to Java or Kotlin programmers?_
Now that we have looked at the use of method chains in Java and Kotlin, two JVM-based languages, we want to investigate one more language. We chose Python, a popular scripting language that is not JVM-based. Our hypothesis was that Python programmers would behave differently compared to Java and Kotlin programmers.
The first thing we noticed was how different Python's \(r\) values are compared to the other languages. For example, you can see in Figure 14 with Java on the left and Python on the right, Python's \(r\) values are almost 3x lower than Java's.
In fact, we observe considerably less method chain use in Python than Java, which becomes even more obvious when we examine the histograms in Figure 15, with Java on the left and Python on the right. Keep in mind, there are 9k more Python projects in our dataset compared to the Java dataset. While we do see similar shapes, as would be expected, the Python histogram is skewed so far left that it is starting to almost look like a line.
Similar to the other languages we also wanted to categorize the chains in Python into short, long, and extra long categories.
Fig. 11: \(r\) per year of non-testing vs. testing in Kotlin projects (Java files (left) and Kotlin files (right))
Fig. 14: \(r\) values (Java left, Python right)
Fig. 12: \(f_{n}\) of non-testing vs testing code (Java files)
Fig. 13: \(f_{n}\) of non-testing vs testing code (Kotlin files)
The results are shown in Table VII. Here we observe that Python is similar to the other languages, in that over time there are more long and extra long chains.
When we observe the \(U_{n}\) values for Python, shown in Figure 16, we see that a very small percentage of the projects have chains of length 8 or more. This is in stark contrast to what we observed in Java and Kotlin, where around 15-20% of projects have chains of length 8 or more. So not only does Python have substantially fewer chains, but the chains it does contain are typically much shorter.
Finally, when looking at the behavior in testing vs non-testing code, we see additional differences when compared with the other two languages. For both the Java and Python datasets, around 25% of files were test files. We can see in the scatter plot of Figure 17 the \(f_{n}\) values, while lower than Java, follow similar trends. In Figure 18 we can see the \(r\) values and here we observe some differences to Java and Kotlin. It appears that over time, Python is following a different trend and the use of chains is increasing in non-testing code while staying relatively flat for testing code.
We conclude by answering the research question: method chaining in Python occurs less frequently than in Java or Kotlin and chains in Python tend to be shorter than in Java or Kotlin. However, unlike Java and Kotlin, method chaining is more common in non-testing code than in testing code.
### _RQ5: Do we see support for the proposed language extensions from the prior study?_
Nakamaru _et al._[22] also investigated four possible language extensions for Java. They then created a sample of 385 chains from their data (out of over 150 million), manually analyzed the sample, and estimated how often such a language extension could be applicable. Here we take another look at the code patterns they identified, but don't sample the data and instead use an automated mining approach to see if we can observe support for the proposed language extensions. Note that the data here is not deduplicated.
#### V-E1 NullExceptionAvoidance
The first pattern was looking for method chains where the user checks the return value of one (or more) of the calls to ensure it is not null before continuing on with the chain. The original study proposed a _safe call_ syntax, similar to what Kotlin provides:
m1()?.m2().m3(), where the whole expression evaluates to null (and m2 and m3 are not called) if the call to m1 returns null. Since we already have a very large dataset with almost 500k Kotlin projects, we investigated how frequently such a feature is used in the full dataset (not just the 26k projects we mined for method chains).
Fig. 16: Ratio of Python projects containing chains \(\geq\) to \(n\)
Fig. 17: \(f_{n}\) of non-testing vs testing code (Python)
Fig. 15: Distribution of \(r\) values per project across all studied languages
In total, we found that 257,114 projects (51.5%) used at least one safe call and there were a total of 4,141,922 safe calls. On average, projects contained 8.3 safe calls spanning 2.5 files. If we focus just on our 26k studied projects, there were 355,132 safe calls in 22,041 projects (83.9%), averaging 21.9 safe calls in 6.2 files. This shows this feature is widely used in Kotlin and supports the conclusion that Java could possibly benefit from its addition.
#### V-E2 RepeatedReceiver
The second pattern was looking for when two separate method chains that appear as neighboring statements both have their original call on the same receiver:
```
o.m1(); o.m2();
```
In such a case, these could be chained together if the API being called was modified to support chaining (e.g., returning the this object from m1()). We queried the Java Original dataset and found 6,515,461 instances of this pattern. This represents about 19.07% of the method chains in that dataset.
The original paper had estimated there were 8.57% with a 99% confidence interval of 4.9-12.24%. Thus the actual value appears to be higher and outside their confidence interval. There seems to be support for possibly refactoring APIs into a more fluent style.
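For illustration only, the same neighbouring-statement pattern could be located in Python source roughly as follows; this is our sketch (it only matches simple name receivers) and is not the Boa query used for the Java measurement:

```python
import ast

def _base_receiver(stmt):
    """Name the chain in an expression statement starts from, or None."""
    if isinstance(stmt, ast.Expr) and isinstance(stmt.value, ast.Call):
        node = stmt.value
        while isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
            node = node.func.value
        if isinstance(node, ast.Name):
            return node.id
    return None

def repeated_receiver_pairs(source: str) -> int:
    pairs = 0
    for node in ast.walk(ast.parse(source)):
        body = getattr(node, "body", None)
        if isinstance(body, list):                    # Module, FunctionDef, If, For, ...
            recv = [_base_receiver(s) for s in body]
            pairs += sum(1 for a, b in zip(recv, recv[1:]) if a and a == b)
    return pairs

print(repeated_receiver_pairs("o.m1()\no.m2()\nother()"))   # 1
```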
#### V-E3 DownCast
The third pattern they suggested looked for a chain that contains a cast operation:
```
((C)m1()).m2().m3();
```
The idea was that frequently this kind of down-cast is difficult to read and many developers split the chain to store the cast result into a local. An improved solution could be to support a method that performs the downcast for you, e.g.:
```
m1().asC().m2().m3();
```
This would lead to more readable code and enable chaining the methods together without splitting to store in a local.
In total, we found 16,136 (0.047%) chains matched this pattern and contained a cast in the Java Original dataset. This is considerably smaller than their estimate of 1.56%, but they were unable to compute a 99% confidence interval so this could fall within such margins.
#### V-E4 ConditionalExecution
The final pattern looked for method chains that were guarded with an if conditional:
```
if (o.m1()) o.m2();
```
and recommended a method that takes a lambda (the body of the conditional) and runs it if the condition is true. So in this example, it might look something like o.ifM1(x->x.m2()).
Again, when looking at the Java Original dataset we were able to mine for this pattern and found a total of 32,652 instances (0.0956%). The original paper had estimated there were 2.34% with a 99% confidence interval of 0.36-4.32%. Thus the actual value appears to be lower and outside their confidence interval.
```
Finding 14: While we were able to more thoroughly mine the method chains to see if the suggested patterns occurred, we found in half the cases they occur very infrequently, often lower than the 99% confidence interval suggested by the prior study. We did however find strong support for the NullExceptionAvoidance language feature recommended by the prior study.
```
We conclude by answering the research question: we see support for half the language extensions previously proposed.
## VI Threats to Validity
Some possible threats to the internal validity of this study are that we do not know what kinds of projects are in the dataset, and thus there may be toy or educational projects included that could potentially skew the results (for example if someone was practicing how to use fluent APIs, all of their code might have a lot of chains). Filtering by star counts helps avoid some projects, but may not catch them all.
There may also be partial clones (near-duplicate files) in the dataset. We were only able to easily remove exact duplicates, aka type-1 clones. There may be additional clones in the dataset and, depending on the quantity, they could be skewing the results.
For RQ2, we looked at commonly used libraries in extra long chains. This analysis relied on a complicated Boa query attempting to infer types in the code. While we believe this query is sound, it is possible the type inference was not complete and the list of popular libraries would be affected.
There are some threats to external validity. Similar to the prior study, the main filter applied to select projects was star counts. Thus, it is possible the trends we observe might be different for less popular projects. And while this study expanded the original to look at two additional languages, the results we see may not generalize to other languages.
## VII Related Works
Many researchers have studied the use of Java language features [7], especially the use of lambdas [17, 18, 23, 24, 30], but few have looked at the use of normal methods (not lambdas) or method chaining in particular. Tanaka et al. [28] did look at method chains, but from the viewpoint of functional idioms in Java.
Börstler and Paech [3] performed a study on the perceived readability of code that included method chains. Their results indicated no significant relationship, so it is not clear whether method chains increase or decrease readability; perhaps readability is not the right metric when looking at how chains impact comprehension. We did not investigate the impact of method chains on comprehension but rather observed whether people use method chaining in the wild.
Fig. 18: \(r\) per year of non-testing code vs. testing Code (Python)
Kasraee and Lin [14] performed an eye tracking study with participants reading code containing method chains. Their results indicate that code without method chains may be slightly more readable. If that is true, then the observations of our study indicate a lot of code could be made more readable by converting it to a non-chained form.
Grechanik et al. [11] performed an empirical evaluation on Java projects. They note that most methods have either one or zero arguments. This result could impact how frequently fluent APIs get created, as some of the use cases of such APIs rely on passing values in each call in the chain. It might be interesting in the future to see how method chain arguments are typically used.
Kabanov and Raudjärv [12] came to the conclusion that a combination of the fluent interface idiom, static functions, metadata, and closures is a better coding practice than method chaining. Their focus is on embedded DSLs for Java, and they do not empirically investigate how developers have used method chaining in the past to drive their decisions.
## VIII Conclusion and Future Work
The trends of method chaining were not well understood, outside of Java. A prior study looked at the use of chaining in 2.7k Java projects. It was not clear if those results generalized to more Java projects or other languages. In this work, we first replicated their prior results then generalized them to a larger Java dataset and observed similar trends: the use of method chains is popular and increasing. We then investigated if those trends held for two other languages: Kotlin and Python. While some of the trends were similar in Kotlin, it turns out the use of method chains in Python was quite different. In Python, method chains are used considerably less frequently and when they are, the chains are generally shorter. Additionally, while Java and Kotlin see more use of method chains in testing code, Python saw the opposite: more use in non-testing code.
Finding 11 showed that Python developers use substantially fewer method chains, compared to Java. The actionable result here is for API designers, as Python API designers may wish to avoid fluent APIs as it seems Python developers tend to not use chains as much. Conversely, Java developers seem comfortable using chains, and API designers for Java may wish to employ fluent designs.
Now that we know developers use chains, some future work may investigate what common patterns appear as chains and see if a more succinct API can be developed. Or perhaps IDE developers could provide code snippets for commonly occurring chains. Because we know that a large portion of extra long chains (1/3) come from only five libraries in Java (Finding 6), this feature would be especially helpful for programmers who have to write very long chains. Further research can be done into what method chain templates are most useful for programmers.
We also envision a followup study that is more qualitative in nature to try and determine why there are differences among the languages: is it a lack of available fluent APIs, or do developers actively avoid chains in Python? Perhaps the idiomatic style of Python discourages writing chains? An extension of the previous eye tracking study [14] to Python might show interesting differences.
## IX Data Availability
The Boa queries, their outputs, and all processing scripts are available in a replication package [2] on Zenodo.
## Acknowledgements
This work was partially funded by the UNL First Year Research Experience (FYRE) program. We thank Tomoki Nakamaru for many clarifications and sharing scripts from the original study.
|
2310.11351 | Entanglement phase transitions in non-Hermitian Floquet systems | The competition between unitary time-evolution and quantum measurements could
induce phase transitions in the entanglement characteristics of quantum
many-body dynamics. In this work, we reveal such entanglement transitions in
the context of non-Hermitian Floquet systems. Focusing on noninteracting
fermions in a representative bipartite lattice with balanced gain/loss and
under time-periodic quenches, we uncover rich patterns of entanglement
transitions due to the interplay between driving and non-Hermitian effects.
Specially, we find that the monotonic increase of quenched hopping amplitude
could flip the system between volume-law and area-law entangled Floquet phases,
yielding alternated entanglement transitions. Meanwhile, the raise of gain/loss
strength could trigger area-law to volume-law reentrant transitions in the
scaling behavior of steady-state entanglement entropy, which are abnormal and
highly unexpected in non-driven systems. Connections between entanglement
transitions and parity-time-reversal (PT) transitions in Floquet spectra are
further established. Our findings not only build a foundation for exploring
entanglement phase transitions in Floquet non-Hermitian setups, but also
provide efficient means to engineer and control such transitions by driving
fields. | Longwen Zhou | 2023-10-17T15:40:12Z | http://arxiv.org/abs/2310.11351v2 | # Entanglement phase transitions in non-Hermitian Floquet systems
###### Abstract
The competition between unitary time-evolution and quantum measurements could induce phase transitions in the entanglement characteristics of quantum many-body dynamics. In this work, we reveal such entanglement transitions in the context of non-Hermitian Floquet systems. Focusing on noninteracting fermions in a representative bipartite lattice with balanced gain/loss and under time-periodic quenches, we uncover rich patterns of entanglement transitions due to the interplay between driving and non-Hermitian effects. Specially, we find that the monotonic increase of quenched hopping amplitude could flip the system between volume-law and area-law entangled Floquet phases, yielding alternated entanglement transitions. Meanwhile, the raise of gain/loss strength could trigger area-law to volume-law reentrant transitions in the scaling behavior of steady-state entanglement entropy, which are abnormal and highly unexpected in non-driven systems. Connections between entanglement transitions and parity-time-reversal (PT) transitions in Floquet spectra are further established. Our findings not only build a foundation for exploring entanglement phase transitions in Floquet non-Hermitian setups, but also provide efficient means to engineer and control such transitions by driving fields.
## I Introduction
Non-Hermitian Floquet systems have attracted great interest in recent years (see Ref. [1] for a review). The interplay between periodic drivings and non-Hermitian effects was found to generate a great variety of phases and transitions with unique dynamical and topological features in insulating [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18], superconducting [19; 20; 21], semimetallic [22; 23; 24; 25; 26] and quasicrystalline [27; 28] systems. In experiments, non-Hermitian Floquet physics has also been explored in various setups including photonics [29; 30; 31; 32; 33; 34], acoustics [35; 36], electrical circuits [37; 38] and ultracold atoms [39; 40], yielding potential applications in stabilizing topological states and controlling material features in open systems.
Despite constant progress, the entanglement properties of non-Hermitian Floquet matter are much less explored. As an intriguing phenomenon of entanglement dynamics, the measurement-induced phase transitions have garnered increasing attention since 2018 [41; 42; 43; 44; 45; 46; 47]. It was found that with the increase of measurement rates, the steady-state entanglement entropy (EE) of a quantum many-body system could undergo a volume-law to area-law transition in its scaling behavior versus the system size, which originates from the competition between unitary dynamics and projective measurements [48; 49; 50]. Recently, such entanglement phase transitions have also been explored in the context of non-Hermitian physics [51; 52; 53; 54; 55; 56; 57; 58; 59], where the development of a dissipation gap and the presence of non-Hermitian skin effects (NHSEs) were identified as typical mechanisms for generating these transitions. As PT-symmetry breaking in the spectrum [2] and NHSEs [12] can both be flexibly controlled by time-periodic driving fields, much richer patterns of entanglement phase transitions are expected to emerge in Floquet non-Hermitian systems compared with the static case. Furthermore, the interplay between drivings and non-Hermitian effects may lead to unique types of entanglement transitions that are absent in non-driven situations, which have yet to be revealed.
In this work, we address these issues by exploring entanglement phase transitions in non-Hermitian Floquet systems. In Sec. II, we introduce a "minimal" model of a non-Hermitian Floquet system, which corresponds to a periodically quenched Su-Schrieffer-Heeger (SSH) [60; 61] model with balanced gain and loss on different sublattices. We analytically obtain the Floquet spectrum of our model and discover rich patterns of PT transitions induced by driving and non-Hermitian effects. In Sec. III, we reveal diversified entanglement phase transitions in our model and establish the entanglement phase diagrams by investigating the scaling law of steady-state EE versus the system size following a long-time stroboscopic evolution. In Sec. IV, we summarize our results, comment on issues related to experiments and discuss potential future directions. The numerical method we adopted to study the entanglement dynamics of non-Hermitian Floquet systems is sketched in Appendix A.
## II Model and Floquet spectrum
To demonstrate the entanglement transitions in non-Hermitian Floquet systems, we focus on a periodically quenched SSH model with balanced gain and loss. A schematic illustration of the model is shown in Fig. 1. The Floquet operator of the system, which describes its
evolution over a complete driving period, is given by
\[\hat{U}=e^{-i\hat{H}_{2}}e^{-i\hat{H}_{1}}, \tag{1}\]
where
\[\hat{H}_{1}=J_{1}\sum_{n}(\hat{c}_{n,A}^{\dagger}\hat{c}_{n,B}+\text{H.c.})+i \gamma\sum_{n}(\hat{c}_{n,A}^{\dagger}\hat{c}_{n,A}-\hat{c}_{n,B}^{\dagger}\hat{ c}_{n,B}), \tag{2}\]
\[\hat{H}_{2}=J_{2}\sum_{n}(\hat{c}_{n,B}^{\dagger}\hat{c}_{n+1,A}+\text{H.c.})+i \gamma\sum_{n}(\hat{c}_{n,A}^{\dagger}\hat{c}_{n,A}-\hat{c}_{n,B}^{\dagger}\hat {c}_{n,B}). \tag{3}\]
Here \(\hat{c}_{n,s}^{\dagger}\) (\(\hat{c}_{n,s}\)) creates (annihilates) a fermion on the sublattice \(s\) (\(=A,B\)) in the \(n\)th unit cell. Applying the Fourier transformations \(\hat{c}_{n,s}=\frac{1}{\sqrt{L}}\sum_{k}e^{ikn}\hat{c}_{k,s}\) for \(s=A,B\) to the system with \(L\) unit cells under the PBC, we can express \(\hat{H}_{1}\) and \(\hat{H}_{2}\) in the momentum space as \(\hat{H}_{1}=\sum_{k}\hat{C}_{k}^{\dagger}H_{1}(k)\hat{C}_{k}\) and \(\hat{H}_{2}=\sum_{k}\hat{C}_{k}^{\dagger}H_{2}(k)\hat{C}_{k}\), where \(\hat{C}_{k}^{\dagger}=(\hat{c}_{k,A}^{\dagger},\hat{c}_{k,B}^{\dagger})\),
\[H_{1}(k)=J_{1}\sigma_{x}+i\gamma\sigma_{z}, \tag{4}\]
\[H_{2}(k)=J_{2}\cos k\sigma_{x}+J_{2}\sin k\sigma_{y}+i\gamma\sigma_{z}. \tag{5}\]
\(\sigma_{x,y,z}\) are Pauli matrices in their usual representations. \(k\in[-\pi,\pi)\) denotes the quasimomentum. The associated Floquet operator then reads
\[\hat{U}=\sum_{k}\hat{C}_{k}^{\dagger}U(k)\hat{C}_{k},\qquad U(k)=e^{-iH_{2}(k )}e^{-iH_{1}(k)}. \tag{6}\]
It is not hard to justify that the Bloch Hamiltonians \(H_{1}(k)\) and \(H_{2}(k)\) both possess the PT symmetry, i.e., \([\mathcal{PT},H_{1}(k)]=[\mathcal{PT},H_{2}(k)]=0\), with the parity \(\mathcal{P}=\sigma_{x}\) and the time-reversal \(\mathcal{T}=\mathcal{K}\), where \(\mathcal{K}\) takes the complex conjugate. Therefore, the Bloch Floquet operator \(U(k)\), when expressed in a symmetric time frame as
\[\mathcal{U}(k)=e^{-\frac{i}{2}H_{1}(k)}e^{-iH_{2}(k)}e^{-\frac{i}{2}H_{1}(k)}, \tag{7}\]
also possesses the PT symmetry in the sense that
\[\mathcal{PTU}(k)\mathcal{PT}=\mathcal{U}^{-1}(k). \tag{8}\]
The quasienergy spectrum of \(\mathcal{U}(k)\) could thus be real in certain parameter domains even though \(\mathcal{U}(k)\) is not unitary. Since \(U(k)\) and \(\mathcal{U}(k)\) are related by a similarity transformation that does not affect the spectrum, the original system described by \(U(k)\) could also have a real quasienergy spectrum in the same parameter regions as \(\mathcal{U}(k)\). Therefore, our periodically quenched NHSSH model possesses the PT symmetry and its quasienergy spectrum may undergo real-to-complex transitions with the increase of the gain/loss strength \(\gamma\).
The Floquet spectrum of our system can be obtained by solving the eigenvalue equation \(U(k)|\psi\rangle=e^{-iE(k)}|\psi\rangle\). The resulting quasienergy dispersions take the form \(E_{\pm}(k)=\pm E(k)\), with
\[E(k)=\arccos(\cos E_{1}\cos E_{2}-\mathbf{n}_{1}\cdot\mathbf{n}_{2}\sin E_{1} \sin E_{2}). \tag{9}\]
Here the terms \(E_{1}=\sqrt{J_{1}^{2}-\gamma^{2}}\), \(E_{2}=\sqrt{J_{2}^{2}-\gamma^{2}}\), \(\mathbf{n}_{1}=(J_{1},0,i\gamma)/E_{1}\) and \(\mathbf{n}_{2}=(J_{2}\cos k,J_{2}\sin k,i\gamma)/E_{2}\). From Eq. (9), it is not hard to verify that \(\cos[E(k)]\) is always real. Therefore, when \(|\cos[E(k)]|<1\) for all \(k\), the quasienergy dispersions \(\pm E(k)\) must be purely real and the system resides in the PT-invariant regime. When \(|\cos[E(k)]|>1\) for certain \(k\), the \(\pm E(k)\) must be complex and the system goes into a PT-broken phase. A PT transition in the system is then expected to happen at \(|\cos[E(k)]|=1\), or
\[\cos E_{1}\cos E_{2}-\mathbf{n}_{1}\cdot\mathbf{n}_{2}\sin E_{1}\sin E_{2}= \pm 1. \tag{10}\]
Note that these are also the conditions for the two Floquet bands \(E_{\pm}(k)\) to touch with each other at the center (\(E=0\), \(\cos[E(k)]=1\)) and boundary (\(E=\pi\), \(\cos[E(k)]=-1\)) of the first quasienergy Brillouin zone (BZ) \(\text{Re}E\in(-\pi,\pi]\), respectively.
In Fig. 2, we present typical cases of the Floquet spectrum \(E_{\pm}(k)\) [Eq. (9)] for our periodically quenched NHSSH model under PBC. We find that with the change of the hopping or gain/loss parameter, the line quasienergy gap between the two Floquet bands could close at either \(E=0\) or \(E=\pi\), which is followed by the change of spectral compositions (real, purely complex or partially real). We thus expect to have both PT breaking and restoring transitions in the system, which are clearly illustrated by the panels at different \(J_{1}\) in Figs. 2(a) and 2(b). Moreover, we observe that with the increase of \(\gamma\), the Floquet spectrum does not change monotonically from real to purely complex. It could instead enter an intermediate phase with both real and complex quasienergies, as illustrated by the case with \(\gamma=1.3\) in Figs. 2(c) and 2(d). These rich spectral patterns, as identifiable from Fig. 2, clearly distinguish our Floquet model from its static non-Hermitian counterpart. They also underlie the alternated and reentrant entanglement transitions we are going to reveal in the next section.
Figure 1: Schematic diagram of a periodically quenched non-Hermitian SSH (NHSSH) model. The chains in the first and second rows denote the systems in the first and the second half of a driving period \(T\). \(\ell\in\mathbb{Z}\) and \(t\) denotes time. \(J\) and \(J^{\prime}\) are the intracell and intercell hopping amplitudes. Balanced gain (\(i\mu\)) and loss (\(-i\mu\)) act on the sublattices A and B, respectively. In our calculations, we introduce real dimensionless parameters \(J_{1}\equiv JT/(2\hbar)\), \(J_{2}\equiv J^{\prime}T/(2\hbar)\) and \(\gamma\equiv\mu T/(2\hbar)\) for the hopping amplitudes and gain/loss strength.
To further characterize the composition of Floquet spectrum and discriminate between the PT-invariant and PT-broken phases, we introduce the ratio of real quasienergies of \(U(k)\), which is defined as
\[R=\int_{-\pi}^{\pi}\frac{dk}{2\pi}\Theta(1-|\cos[E(k)]|), \tag{11}\]
where \(\Theta(x)\) is the step function. It is clear that we have \(R=1\) (\(R=0\)) if all the quasienergies of \(U(k)\) are real (complex). If \(R\in(0,1)\), real and complex quasienergies coexist in the Floquet spectrum of \(U(k)\). A PT-breaking transition then happens when the value of \(R\) starts to decrease from one.
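For readers who want to reproduce these spectral quantities, the following short Python sketch (ours, not part of the original derivation) constructs \(U(k)\) from Eqs. (4)-(6) by matrix exponentials and estimates the ratio \(R\) of Eq. (11) on a uniform momentum grid. The parameters \(J_{2}=0.1\pi\) and \(\gamma=0.5\pi\) follow Fig. 2(a); the chosen \(J_{1}\) values, grid size and tolerance are arbitrary.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def U_k(k, J1, J2, gamma):
    """Bloch Floquet operator U(k) = exp(-i H2(k)) exp(-i H1(k)), Eqs. (4)-(6)."""
    H1 = J1 * sx + 1j * gamma * sz
    H2 = J2 * np.cos(k) * sx + J2 * np.sin(k) * sy + 1j * gamma * sz
    return expm(-1j * H2) @ expm(-1j * H1)

def real_ratio(J1, J2, gamma, nk=401, tol=1e-7):
    """Eq. (11): fraction of momenta whose quasienergies E(k) are real."""
    ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    count = 0
    for k in ks:
        lam = np.linalg.eigvals(U_k(k, J1, J2, gamma))
        # E(k) is real iff the Floquet eigenvalues e^{-iE(k)} lie on the unit circle.
        count += np.all(np.abs(np.abs(lam) - 1.0) < tol)
    return count / nk

for J1 in (0.85 * np.pi, 1.7 * np.pi, 2.5 * np.pi):
    R = real_ratio(J1, J2=0.1 * np.pi, gamma=0.5 * np.pi)
    print(f"J1 = {J1 / np.pi:.2f} pi  ->  R = {R:.3f}")
```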
In Fig. 3, we present the quasienergy spectrum [Eq. (9)] and the real-quasienergy ratio [Eq. (11)] versus the hopping amplitude \(J_{1}\) and the gain/loss strength \(\gamma\) separately for our periodically quenched NHSSH model. A series of alternated PT-breaking (\(R=1\to R<1\)) and PT-restoring (\(R<1\to R=1\)) transitions can be observed with the variation of \(J_{1}\). These transitions are accompanied by band touchings at the quasienergy zero or \(\pi\). They are further mediated by critical phases with coexisting real and complex quasienergies (\(0<R<1\)) in the Floquet spectrum. Meanwhile, we notice that as \(\gamma\) is raised from zero, the system could first undergo a PT-breaking transition and its Floquet spectrum changes gradually from partially real to purely complex. However, real quasienergies could reappear in a region with larger \(\gamma\), which is rarely achievable by raising the gain/loss strength in static non-Hermitian systems. These reentrant PT transitions and gain/loss-induced real quasienergies both originate from the interplay between drivings and non-Hermitian effects. Their notable influences on entanglement phase transitions in our non-Hermitian Floquet system will be revealed in Sec. III.
In Fig. 4, we show the values of \(R\) versus \((J_{1},J_{2})\)
Figure 2: Floquet quasienergy spectrum of the periodically quenched NHSSH model under PBC. Other system parameters are \((J_{2},\gamma)=(0.1\pi,0.5\pi)\) for (a), (b) and \((J_{1},J_{2})=(2.2\pi,2\pi/3)\) for (c), (d). In (a) and (c), the blue solid and red dotted lines denote \(\pm\)Re\(E(k)/\pi\) and \(\pm\)Im\(E(k)\) vs the quasimomentum \(k\). In (b) and (d), the blue solid and red dotted lines represent the two Floquet bands \(\pm E\) on the complex quasienergy plane.
and \((J_{1},\gamma)\) as two typical cases of PT phase diagrams under the PBC. In both Figs. 4(a) and 4(b), we observe rich patterns of PT-invariant (in dark red), PT-broken (in dark blue) and intermediate (\(0<R<1\)) phases with different compositions of the Floquet spectrum. Moreover, the change of each system parameter could induce a series of alternated and reentrant PT transitions in the spectrum, which is usually unavailable in static non-Hermitian systems. Specially, we find that with the increase of gain/loss strength \(\gamma\), the spectrum could first change gradually from real (\(R=1\)) to purely complex (\(R=0\)), and then go back to a mixed case (\(0<R<1\)) with coexisting real and complex quasienergies [Fig. 4(b)]. This is again unexpected in static non-Hermitian systems, where stronger gain and loss usually favor a larger proportion of complex eigenenergies in the spectrum. These observations imply that time-periodic driving fields could not only induce rich PT phases and transitions, but also provide a mechanism to stabilize PT-symmetric non-Hermitian systems in stronger dissipation regions.
In the next section, we will characterize the entanglement nature of Floquet phases with different spectral properties in our periodically quenched NHSSH model. The rich and alternated spectrum transitions found here will be further related to reentrant entanglement transitions in our system.
## III Entanglement phase transitions
In static non-Hermitian systems, it has been identified that the opening of a dissipation gap along the imaginary-energy axis could lead to a volume-law to area-law phase transition in the EE of free fermions [55]. The non-Hermitian skin effect constitutes another mechanism of generating such entanglement phase transitions [54]. The presence of random or quasiperiodic disorder may further collaborate with non-Hermitian effects to generate anomalous log-law to area-law entanglement transitions [58]. Beyond these static situations, we will now demonstrate how entanglement phase transitions could be induced and controlled by time-periodic drivings in our non-Hermitian Floquet system.
We first outline the methodology of obtaining the EE and its stroboscopic dynamics for noninteracting fermions in a Floquet non-Hermitian lattice. Let us consider a system initialized in the state \(|\Psi_{0}\rangle\) at \(t=0\) and evolved by the Floquet operator \(\hat{U}\) with the driving period \(T\). The normalized state of the system after a number of \(\ell\) (\(\in\mathbb{N}\)) driving periods is given by
\[|\Psi(\ell T)\rangle=\frac{\hat{U}^{\ell}|\Psi_{0}\rangle}{\sqrt{\langle\Psi_{0}|(\hat{U}^{\dagger})^{\ell}\hat{U}^{\ell}|\Psi_{0}\rangle}}. \tag{12}\]
Note in passing that for a non-Hermitian Floquet system, we usually have \(\hat{U}^{\dagger}\hat{U}\neq 1\), and the resulting stroboscopic dynamics is not unitary. In our numerical calculations, we take the PBC and choose the initial state as
\[|\Psi_{0}\rangle=\prod_{n=1}^{L}\hat{c}^{\dagger}_{n,B}|\emptyset\rangle. \tag{13}\]
Here \(L\) is the total number of unit cells in the lattice and \(|\emptyset\rangle\) denotes the vacuum state. The state \(|\Psi_{0}\rangle\) in Eq. (13) thus describes a charge density wave at half-filling, with each sublattice B being populated by a single fermion. Other types of initial states yield consistent results concerning the entanglement transitions that will be studied below. At any stroboscopic time \(t=\ell T\), the matrix elements of single-particle correlator \(C(\ell T)\) in the lattice representation can now be expressed as
\[C_{ms,ns^{\prime}}(\ell T)=\langle\Psi(\ell T)|\hat{c}^{\dagger}_{m,s}\hat{c} _{n,s^{\prime}}|\Psi(\ell T)\rangle, \tag{14}\]
where \(m,n=1,...,L\) and \(s,s^{\prime}=A,B\) denote the unit
Figure 3: Floquet spectrum \(E\) [(a), (b)] and ratios of real quasienergies \(R\) [(c), (d)] vs the hopping amplitude \(J_{1}\) and non-Hermitian parameter \(\gamma\) under the PBC. Other system parameters are \((J_{2},\gamma)=(0.1\pi,0.5\pi)\) for (a), (c) and \((J_{1},J_{2})=(2.2\pi,2\pi/3)\) for (b), (d). The solid and dashed lines in (a) and (b) denote the real and imaginary parts of quasienergy.
Figure 4: Ratios of real quasienergies \(R\) [Eq. (11)] versus (a) \((J_{1},J_{2})\) at \(\gamma=\pi/2\) and (b) \((J_{1},\gamma)\) at \(J_{2}=2\pi/3\) for the periodically quenched NHSSH model [Eq. (1)] under PBC. Different colors represent different values of \(R\), as can be read out from the color bars.
cell and sublattice indices, respectively. Restricting the indices \(m,n\) of \(C(\ell T)\) to a subsystem X with \(l\) unit cells gives us a \(2l\times 2l\) block of \(C(\ell T)\). The eigenvalues of this block constitute the correlation-matrix spectrum \(\{\zeta_{j}(\ell T)|j=1,...2l\}\) of the subsystem X. Without interactions, the \(|\Psi(t)\rangle\) is a Gaussian state and the bipartite EE can be obtained from the spectrum of correlation matrix [62]. That is, at any given stroboscopic time \(t=\ell T\), we can find the EE between the subsystem X and remaining part Y of the whole system as
\[S(t)=-\sum_{j=1}^{2l}[\zeta_{j}\ln\zeta_{j}+(1-\zeta_{j})\ln(1-\zeta_{j})]. \tag{15}\]
Here we have suppressed the time-dependence in \(\zeta_{j}\) for brevity. The \(S(t)\) thus defined corresponds to the bipartite EE \(S(t)=-\text{Tr}[\rho_{\text{X}}(t)\ln\rho_{\text{X}}(t)]\), where the reduced density matrix \(\rho_{\text{X}}(t)\) of subsystem X can be obtained by tracing out all the degrees of freedom belonging to the remaining subsystem Y with \(2(L-l)\) sites, i.e., \(\rho_{\text{X}}(t)=\text{Tr}_{\text{Y}}[|\Psi(t)\rangle\langle\Psi(t)|]\). The numerical recipe of computing \(S(t)\) for our non-Hermitian Floquet system is summarized in the Appendix A.
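To make the recipe of Eqs. (14)-(15) concrete, here is a minimal NumPy sketch (ours): it extracts the bipartite EE from a sub-block of a correlation matrix. The correlation matrix below is a random placeholder of the isometric form \(\mathcal{U}\mathcal{U}^{\dagger}\) used in Appendix A, not one produced by the actual Floquet evolution.

```python
import numpy as np

def entanglement_entropy(C_sub, eps=1e-12):
    """Bipartite EE from a subsystem block of the correlation matrix, Eq. (15)."""
    zeta = np.linalg.eigvalsh(C_sub)            # correlation-matrix spectrum
    zeta = np.clip(zeta, eps, 1.0 - eps)        # guard the logarithms
    return float(-np.sum(zeta * np.log(zeta) + (1 - zeta) * np.log(1 - zeta)))

# Placeholder Gaussian state: C = V V^dagger with an isometric V (2L x N),
# mimicking the correlator produced by the scheme of Appendix A.
rng = np.random.default_rng(0)
L, N = 8, 8                                     # unit cells and particles (half filling)
A = rng.normal(size=(2 * L, N)) + 1j * rng.normal(size=(2 * L, N))
V, _ = np.linalg.qr(A)                          # V^dagger V = Id_N
C = V @ V.conj().T
l = L // 2                                      # subsystem X = first l unit cells
print("S =", entanglement_entropy(C[:2 * l, :2 * l]))
```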
We first investigate the scaling behaviors of steady-state EE \(S(L,l)\) vs the system size \(L\) for \(l=L/2\) (equal bipartition), at half-filling and under PBC. For a given \(L\) and \(l\), \(S(L,l)\) is obtained by averaging the stroboscopic EE \(S(t)\) [Eq. (15)] over a late-time domain \(t\in[\ell^{\prime}T,\ell T]\) with \(1\ll\ell^{\prime}<\ell\), where we take \(\ell^{\prime}=800\) and \(\ell=1000\) throughout our numerical calculations. In Fig. 5, we observe two drastically distinct scaling behaviors in \(S(L,L/2)\) at different strengths of hopping \(J_{1}\) [Fig. 5(a)] and gain/loss \(\gamma\) [Figs. 5(b)-(c)]. Referring to Fig. 2, we realize that whenever the Floquet spectrum of our periodically quenched NHSSH model forms a dissipation gap along the imaginary-quasienergy axis, the steady-state EE becomes independent of the system size \(L\) in Fig. 5, such that \(S(L,L/2)\sim L^{0}\) and area-law scalings are observed in associated cases. Instead, in cases when imaginary quasienergy gaps vanish in Fig. 2, the steady-state EE becomes proportional to the system size \(L\) in Fig. 5, such that volume-law entangled phases are reached with \(S(L,L/2)\sim L\) in these cases.
Note in passing that within the volume-law entangled phases, the gradients of \(S(L,L/2)\) versus \(L\) reach maximal values in PT-invariant cases with real quasienergy spectra [for \(J_{1}=0.85\pi,1.7\pi\) in Fig. 5(a) and \(\gamma=0.4\pi\) in Fig. 5(b)]. Meanwhile, volume-law scalings of the steady-state EE can be observed in both PT-invariant and PT-broken phases, so long as there are no dissipation gaps along the imaginary quasienergy axis. These observations indicate that PT transitions do not have one-to-one correspondences with entanglement transitions in non-Hermitian Floquet systems. As another notable result, the scaling behavior of \(S(L,L/2)\) changes from area-law to volume-law when the gain and loss strength \(\gamma\) is raised from \(0.9\pi\) to \(1.3\pi\) in Fig. 5(c), which goes beyond the situation normally expected in non-driven systems.
To further decode entanglement phase transitions in our non-Hermitian Floquet setting, we present in Fig. 6 the steady-state EE \(S(L,l)\) versus the subsystem size \(l\) with a fixed total number of unit cells \(L=320\). The system is still at half-filling and under PBC. In Figs. 6(a) and 6(c), we observe area-law scalings \(S(L,l)\sim l^{0}\) in the
Figure 5: Steady-state EE \(S(L,l)\) versus the system size \(L\) under PBC for an equal bipartition \(l=L/2\) and at half-filling. In (a), the legend gives the value of \(J_{1}\) for each curve, with other system parameters given by \((J_{2},\gamma)=(0.1\pi,0.5\pi)\) [same as those taken in Figs. 2(a) and 2(b)]. In (b) and (c), the legends show the value of \(\gamma\) for each curve, with other system parameters given by \((J_{1},J_{2})=(2.2\pi,2\pi/3)\) [same as those used in Figs. 2(c) and 2(d)].
Figure 6: Steady-state EE \(S(L,l)\) versus the subsystem size \(l\) under PBC for a fixed total system size \(L=320\) and at half-filling. (a)–(c) share the same legends with the corresponding panels (a)–(c) of Fig. 5. The curves marked by the same symbol in Figs. 5 and 6 have the same system parameters.
steady-state EE for the cases with finite dissipation gaps along imaginary quasienergy axes of the Floquet spectra in Fig. 2. When imaginary quasienergy gaps vanish in Fig. 2, we find that the \(S(L,l)\) in Figs. 6(a)-(c) might be fitted by the function \(g_{0}\sin(\pi l/L)+g_{1}\ln[\sin(\pi l/L)]+g_{2}\), which is usually expected in volume-law entangled phases. Therefore, the scaling behaviors of steady-state EE versus the subsystem size also suggest two possible phases with different entanglement nature in our periodically quenched NHSSH model, which are consistent with those observed in Fig. 5. We also notice that the appearance of these distinct entangling phases does not follow a monotonic sequence with the increase of either the quenched hopping amplitude \(J_{1}\) or the gain and loss strength \(\gamma\).
We are now ready to demonstrate entanglement phase transitions in our system. In Figs. 7(a) and 7(b), we present the steady-state EE \(S(L,l=L/2)\) of our periodically quenched NHSSH model versus \(J_{1}\) and \(\gamma\), respectively, for several different system sizes \(L\) under the PBC and at half-filling. Two clearly distinct regions are observed in both figures. Referring to the spectrum information shown in Fig. 3, we conclude that in the parameter regions with gapped Floquet spectra along the imaginary quasienergy axis and \(R=0\), the steady-state EE follows an area-law scaling \(S(L,L/2)\propto L^{0}\). Meanwhile, in the regions with \(R\in(0,1]\) and gapless Floquet spectra along the imaginary axis, the steady-state EE follows a volume-law \(S(L,L/2)\propto L\). Therefore, there should be entanglement transitions between volume-law entangled and area-law entangled phases with the change of hopping or gain/loss strength in our Floquet system. To confirm these entanglement transitions, we present in Figs. 7(c) and 7(d) the gradient \(g\) of steady-state EE, as obtained from the linear fitting \(S(L,L/2)\sim gL+s_{0}\), versus \(J_{1}\) and \(\gamma\). Multiple area-law to volume-law (\(g=0\to g>0\)) and volume-law to area-law (\(g>0\to g=0\)) entanglement phase transitions are now clearly observable. Two notable features deserve to be further emphasized.
Figure 7: Reentrant entanglement transitions versus the hopping amplitude \(J_{1}\) [(a), (c)] and gain/loss strength \(\gamma\) [(b), (d)]. System parameters are \((J_{2},\gamma)=(0.1\pi,0.5\pi)\) for (a), (c) and \((J_{1},J_{2})=(2.2\pi,2\pi/3)\) for (b), (d), which are the same as those chosen for the panels (a), (c) and (b), (d) of Fig. 3, respectively. (a) and (b) show the steady-state EE \(S(L,l)\), with \(l=L/2\), versus \(J_{1}\) and \(\gamma\) for different lattice sizes \(L\). (c) and (d) show the gradients \(g\) extracted from the linear fitting \(S(L,L/2)\sim gL+s_{0}\).
First, with the increase of quenched intracell hopping amplitude \(J_{1}\) from zero, we find a series of alternated transitions between volume-law entangled and area-law entangled phases [Fig. 7(c)]. Similar patterns of alternated entanglement transitions can be observed with the variation of intercell hopping amplitude \(J_{2}\) when \(J_{1}\) is fixed. Therefore, we could induce and even engineer entanglement phase transitions with high flexibility in non-Hermitian Floquet systems by tuning a single control parameter, which are hardly achievable in non-driven situations. The underlying physical picture is as follows. Since the real part of the quasienergy is a phase and is defined modulo \(2\pi\), two quasienergy bands of a non-Hermitian Floquet system could meet with each other and separate again at both \(E=0\) (center of the first quasienergy BZ) and \(E=\pi\) (edge of the first quasienergy BZ). Moreover, due to the \(2\pi\)-periodicity of \(E\), the values of Floquet quasienergies \(E\mod 2\pi\) in general could not change monotonically with the increase or decrease of a single system parameter. The combination of these two mechanisms then allows the Floquet bands of our system to touch and re-separate along the \(\text{Im}E\) axis sequentially at \(\text{Re}E=0\) and \(\text{Re}E=\pi\). The final results are alternated entanglement phase transitions triggered by a single driving parameter, as observed in Fig. 7(c).
Second, with the increase of gain/loss amplitude \(\gamma\), the system could first undergo a volume-law to area-law entanglement transition, which is followed by reentering a volume-law entangled phase through another entanglement transition at a larger \(\gamma\), and finally going back to an area-law entangled phase as \(\gamma\) is raised further [Fig. 7(d)]. Here, the non-Hermiticity-induced reentrant transition from area-law entangled to volume-law entangled phases is abnormal and usually not available in static non-Hermitian systems. As the real part of quasienergy also depends on \(\gamma\), the reentrant entanglement transition observed here is due to the presence of two possible gap-closing points at \(E=0,\pi\), the \(2\pi\) periodicity of \(\text{Re}E\) and the non-monotonic dependence of Floquet spectrum compositions on \(\gamma\). Assisted by Floquet drivings, the re-emerged volume-law entangled phase at strong dissipation rates may provide us with further room for protecting quantum information against decoherence.
For completeness, we present in Fig. 8 the gradient \(g\) extracted from the linear fitting \(S(L,L/2)\sim gL+s_{0}\) of the steady-state EE versus \(L\) at different system parameters, which constitutes the entanglement phase diagram of our periodically quenched NHSSH model. A comparison between Figs. 4 and 8 yields a nice consistency, i.e., the regions with \(R=0\) (fully complex Floquet spectra with finite dissipation gaps along \(\text{Im}E\)) and \(R>0\) (partially complex or real Floquet spectra with no dissipation gaps) are associated with area-law entangled and volume-law entangled phases, respectively. Moreover, rich patterns of entanglement phase transitions are observable over broad regions in the hopping and gain/loss parameter spaces. Therefore, we conclude that the interplay between periodic driving and non-Hermitian effects could not only generate rich phases with different entanglement nature, but also trigger alternated and reentrant entanglement transitions in non-Hermitian systems.
## IV Summary and discussion
By applying time-periodic quenches to the hopping amplitudes of fermions in a prototypical SSH model with balanced gain and loss, we found alternated and reentrant entanglement transitions due to the combined effects of Floquet driving and non-Hermitian effects. System-size scaling behaviors of steady-state EE were systematically analyzed and entanglement phase diagrams were formulated for our considered model. The alternated transitions between volume-law and area-law entangled phases are due to driving-induced consecutive closings and re-openings of Floquet dissipation gaps along the imaginary quasienergy axis. The driving field also allows the composition of the Floquet spectrum (real vs complex) to change non-monotonically with the increase of gain and loss strengths, yielding abnormal area-law to volume-law reentrant transitions in the steady-state EE with increasing non-Hermitian effects. The alternated and reentrant entanglement transitions found here are expected to be generic and observable in other driven non-Hermitian systems. Our work thus unveiled the diversity and richness of entanglement phase transitions in non-Hermitian Floquet systems. It further provided a flexible route to induce and control entanglement phase transitions in open systems via periodic driving fields.
In experiments, periodically quenched non-Hermitian lattices with internal degrees of freedom can be implemented in photonic waveguides and quantum walks [29; 30; 31; 32; 33]. One-dimensional arrays of periodically driven resonators have also been implemented to detect non-
Figure 8: Entanglement phase diagrams vs \((J_{1},J_{2})\) and \((J_{1},\gamma)\). Other system parameters are \(\gamma=0.5\pi\) for (a) and \(J_{2}=2\pi/3\) for (b). The gradient \(g\) is obtained from the linear-fitting \(S(L,l)\sim gL+s_{0}\) of the steady-state EE with \(l=L/2\) at half-filling and under PBC. The \(\max(g)\) denotes the maximum of \(g\) over the considered parameter space \((J_{1},J_{2})\in(-3\pi,3\pi)\times(-3\pi,3\pi)\) [\((J_{1},\gamma)\in(-3\pi,3\pi)\times(-2\pi,2\pi)\)] in (a) [(b)].
Hermitian Floquet band structures [34]. Photonic systems thus form promising candidates for the realization of our Floquet non-Hermitian SSH model. In cold atoms, the SSH model and its dynamical modulations have been realized through various different strategies [63; 64; 65; 66]. Non-Hermitian effects may further be created by laser-induced atom losses [67; 68; 69]. Therefore, our model and its entanglement dynamics may also be explored experimentally in cold atom setups.
In future work, it would be interesting to consider entanglement phase transitions in non-Hermitian Floquet systems beyond one spatial dimension, with impurities or disorder, under other driving protocols, and subject to many-body interactions. Systematic analyses regarding the critical behaviors of steady-state EE at entanglement transition points are highly desired in non-Hermitian Floquet systems. The experimental realization of our quenched non-Hermitian lattice and the detection of entanglement phase transitions therein also constitute interesting directions of future research.
###### Acknowledgements.
This work is supported by the National Natural Science Foundation of China (Grant Nos. 12275260 and 11905211), the Fundamental Research Funds for the Central Universities (Grant No. 202364008), and the Young Talents Project of Ocean University of China.
## Appendix A Numerical calculation of stroboscopic EE
Here we outline an approach that can be used to obtain the stroboscopic EE for our system, which follows the method introduced in Ref. [54].
Starting with the normalized state \(|\Psi(t)\rangle\) [Eq. (12)] at the stroboscopic time \(t=\ell T\), we obtain the state after one more evolution period as
\[|\Psi(t+T)\rangle\propto\hat{U}|\Psi(t)\rangle=\prod_{n=1}^{N}\left[\sum_{m=1}^{L}\sum_{s=A,B}[e^{-iH_{\rm eff}}\mathcal{U}(t)]_{ms,n}\,\hat{c}_{m,s}^{\dagger}\right]|\emptyset\rangle. \tag{26}\]
Here \(N\) counts the total number of fermions. \(L\) is the number of unit cells in the lattice. \(H_{\rm eff}\) is a \(2L\times 2L\) matrix of the Floquet effective Hamiltonian \(\hat{H}_{\rm eff}\equiv i\ln\hat{U}\) in the lattice representation. The normalized state at \(t+T\) can be obtained by performing the QR-decomposition, i.e.,
\[e^{-iH_{\rm eff}}\mathcal{U}=\mathcal{Q}\mathcal{R}. \tag{27}\]
Here \(\mathcal{Q}\) is a \(2L\times N\) matrix satisfying \(\mathcal{Q}^{\dagger}\mathcal{Q}=1\). \(\mathcal{R}\) is an \(N\times N\) upper triangular matrix. The \(2L\times N\) matrix \(\mathcal{U}\) is an isometry, i.e., it satisfies \(\mathcal{U}^{\dagger}\mathcal{U}=1\). At the latter time \(t+T\), the matrix \(\mathcal{U}(t+T)\) is given by
\[\mathcal{U}(t+T)=\mathcal{Q}. \tag{28}\]
Note in passing that the matrix \(\mathcal{U}(t=0)\) accounts for the initial distribution of fermions in the lattice. For the state \(|\Psi_{0}\rangle\) in Eq. (13), the \(\mathcal{U}(t=0)\) takes the explicit form
\[[\mathcal{U}(0)]_{j,j^{\prime}}=\delta_{j,2j^{\prime}},\qquad j,j^{\prime}=1,...,N, \tag{29}\]
where we have \(L=N\) in the half-filled case. Following this approach, we can find the \(\mathcal{U}(\ell T)\) at any stroboscopic time \(t=\ell T\). The matrix elements of single-particle correlator \(C(\ell T)\) can be obtained as
\[C_{ms,ns^{\prime}}(\ell T)=[\mathcal{U}(\ell T)\mathcal{U}^{ \dagger}(\ell T)]_{ns^{\prime},ms}. \tag{30}\]
The EE can be finally extracted from the spectrum of \(C(\ell T)\) according to Eq. (15) in the main text. This approach is very efficient for studying the long-time stroboscopic dynamics of EE. It is also applicable to both clean and disordered noninteracting fermionic systems.
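The bookkeeping of Eqs. (26)-(30) translates almost line by line into code. The sketch below (ours) uses a random non-Hermitian matrix as a stand-in for \(H_{\rm eff}\), so it only illustrates the QR-based propagation and the construction of \(C(\ell T)\), not the physics of the quenched NHSSH model.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
L = N = 6                                    # unit cells = particles (half filling)
dim = 2 * L

# Stand-in for the effective Hamiltonian i*log(U); non-Hermitian on purpose.
H_eff = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
one_period = expm(-1j * H_eff)

# Initial isometry of the charge-density-wave state, Eq. (29): B sublattice filled.
U_mat = np.zeros((dim, N), dtype=complex)
for jp in range(N):
    U_mat[2 * jp + 1, jp] = 1.0              # row 2*jp+1 = sublattice B of unit cell jp

for _ in range(100):                         # stroboscopic evolution, Eqs. (26)-(28)
    Q, _ = np.linalg.qr(one_period @ U_mat)  # QR re-orthonormalization
    U_mat = Q                                # normalized state after one more period

C = U_mat @ U_mat.conj().T                   # single-particle correlator, cf. Eq. (30)
print("Tr C =", C.trace().real, "(equals the particle number N =", N, ")")
```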
|
2302.08411 | Explicit Diffusion of Gaussian Mixture Model Based Image Priors | In this work we tackle the problem of estimating the density $f_X$ of a
random variable $X$ by successive smoothing, such that the smoothed random
variable $Y$ fulfills $(\partial_t - \Delta_1)f_Y(\,\cdot\,, t) = 0$,
$f_Y(\,\cdot\,, 0) = f_X$. With a focus on image processing, we propose a
product/fields of experts model with Gaussian mixture experts that admits an
analytic expression for $f_Y (\,\cdot\,, t)$ under an orthogonality constraint
on the filters. This construction naturally allows the model to be trained
simultaneously over the entire diffusion horizon using empirical Bayes. We show
preliminary results on image denoising where our model leads to competitive
results while being tractable, interpretable, and having only a small number of
learnable parameters. As a byproduct, our model can be used for reliable noise
estimation, allowing blind denoising of images corrupted by heteroscedastic
noise. | Martin Zach, Thomas Pock, Erich Kobler, Antonin Chambolle | 2023-02-16T16:39:13Z | http://arxiv.org/abs/2302.08411v1 | # Explicit Diffusion of Gaussian Mixture Model Based Image Priors
###### Abstract
In this work we tackle the problem of estimating the density \(f_{X}\) of a random variable \(X\) by successive smoothing, such that the smoothed random variable \(Y\) fulfills \((\partial_{t}-\Delta_{1})f_{Y}(\,\cdot\,,t)=0\), \(f_{Y}(\,\cdot\,,0)=f_{X}\). With a focus on image processing, we propose a product/fields of experts model with Gaussian mixture experts that admits an analytic expression for \(f_{Y}(\,\cdot\,,t)\) under an orthogonality constraint on the filters. This construction naturally allows the model to be trained simultaneously over the entire diffusion horizon using empirical Bayes. We show preliminary results on image denoising where our model leads to competitive results while being tractable, interpretable, and having only a small number of learnable parameters. As a byproduct, our model can be used for reliable noise estimation, allowing blind denoising of images corrupted by heteroscedastic noise.
## 1 Introduction
Consider the practical problem of estimating the probability density \(f_{X}:\mathcal{X}\rightarrow\mathbb{R}\) of a random variable \(X\) in \(\mathcal{X}\), given a set of data samples \(\{x_{i}\}_{i=1}^{N}\) drawn from \(f_{X}\).1 This is a challenging problem in high dimension (e.g. for images of size \(M\times N\), i.e. \(\mathcal{X}=\mathbb{R}^{M\times N}\)), due to extremely sparsely populated regions. A fruitful approach is to estimate the density at different times when undergoing a diffusion process. Intuitively, the diffusion equilibrates high- and low-density regions over time, thus easing the estimation problem.
Footnote 1: For notational convenience, throughout this article we do not make a distinction between the _distribution_ and _density_ of a random variable.
Let \(Y_{t}\) (carelessly) denote the random variable whose distribution is defined by diffusing \(f_{X}\) for some time \(t\). We denote the density of \(Y_{t}\) by \(f_{Y}(\,\cdot\,,t)\), which fulfills the diffusion equation \((\partial_{t}-\Delta_{1})f_{Y}(\,\cdot\,,t)=0\), \(f_{Y}(\,\cdot\,,0)=f_{X}\). The empirical Bayes theory [12] provides a machinery for reversing the diffusion process: Given an instantiation of the random variable \(Y_{t}\), the Bayesian least-squares estimate of \(X\) can be expressed solely using \(f_{Y}(\,\cdot\,,t)\). Importantly, this holds for all positive \(t\), as long as \(f_{Y}\) is properly constructed.
In practice we wish to have a parametrized, trainable model of \(f_{Y}\), say \(f_{\theta}\) where \(\theta\) is a parameter vector, such that \(f_{Y}(x,t)\approx f_{\theta}(x,t)\) for all \(x\in\mathcal{X}\) and all \(t\in[0,\infty)\). Recent choices for the family of functions \(f_{\theta}(\,\cdot\,,t)\) were of practical nature: Instead of an analytic expression for \(f_{\theta}\) at any time \(t\), authors proposed a time-conditioned network in the hope that it can learn to behave as if it had undergone a diffusion process. Further, instead of worrying about the normalization \(\int_{\mathcal{X}}f_{Y}(\,\cdot\,,t)=1\) for all \(t\in[0,\infty)\), usually they directly estimate the _score_ \(-\nabla_{1}\log f_{Y}(\,\cdot\,,t):\mathcal{X}\rightarrow\mathcal{X}\) with some network \(s_{\theta}(\,\cdot\,,t):\mathcal{X}\rightarrow\mathcal{X}\). This has the advantage that normalization constants vanish, but usually, the constraint \(\partial_{j}(s_{\theta}(\,\cdot\,,t))_{i}=\partial_{i}(s_{\theta}(\,\cdot\,,t))_{j}\) is not enforced in the architecture of \(s_{\theta}\). Thus, \(s_{\theta}(\,\cdot\,,t)\) is in general not the gradient of a scalar function (the negative-log-density it claims to model).
In this paper, we pursue a more principled approach. Specifically, we leverage Gaussian mixture models to represent the popular product/field of experts model [5, 14] and show that under an orthogonality constraint of the associated filters, the diffusion of the model can be expressed analytically.
## 2 Background
In this section, we first emphasize the importance of the diffusion process in density estimation (and sampling) in high dimensions. Then, we detail the relationship between diffusing the density function, empirical Bayes, and denoising score matching [18].
### Diffusion Eases Density Estimation and Sampling
Let \(f_{X}\) be a density on \(\mathcal{X}\subset\mathbb{R}^{d}\). A major difficulty in estimating \(f_{X}\) with parametric models is that \(f_{X}\) is extremely sparsely populated in high dimensional spaces2, i.e.,
\(d\gg 1\). This phenomenon has many names, e.g. the curse of dimensionality or the manifold hypothesis [1]. Thus, the learning problem is difficult, since meaningful gradients are rare. Conversely, let us for the moment assume we have a model \(\tilde{f}_{X}\) that approximates \(f_{X}\) well. In general, it is still very challenging to generate a set of points \(\{x_{i}\}_{i=1}^{I}\) such that we can confidently say that the associated empirical density \(\frac{1}{I}\sum_{i=1}^{I}\delta(\,\cdot\,-x_{i})\) approximates \(\tilde{f}_{X}\). This is because, in general, there does not exist a procedure to directly draw from \(\tilde{f}_{X}\), and (modern) Markov chain Monte Carlo (MCMC) relies on the estimated gradients of \(\tilde{f}_{X}\) and, in practice, only works well for unimodal distributions [17].
Footnote 2: The statement that \(f_{X}\) is sparsely populated is meant in an intuitive, not in a measure-theoretic, sense.
The isotropic diffusion process or heat equation
\[(\partial_{t}-\Delta_{1})f(\,\cdot\,,t)=0\mbox{ with initial condition }f(\,\cdot\,,0)=f_{X} \tag{1}\]
equilibrates the density in \(f_{X}\), thus mitigating the challenges outlined above. Here, \(\partial_{t}\) denotes \(\frac{\partial}{\partial t}\) and \(\Delta_{1}=\mbox{Tr}(\nabla_{1}^{2})\) is the Laplace operator, where the 1 indicates application to the first argument. We detail the evolution of \(f_{X}\) under this process and relations to empirical Bayes in Section 2.2.
_Learning \(f(\,\cdot\,,t)\)_ for \(t\geq 0\) is more stable since the diffusion "fills the space" with meaningful gradients [16]. Of course, this assumes that for different times \(t_{1}\) and \(t_{2}\), the models of \(f(\,\cdot\,,t_{1})\) and \(f(\,\cdot\,,t_{2})\) are somehow related to each other. As an example of this relation, the recently popularized noise-conditional score-network [17] shares convolution filters over time, but their input is transformed with a time-conditional instance normalization. In this work, we make this relation explicit by considering a family of functions \(f(\,\cdot\,,0)\) for which \(f(\,\cdot\,,t)\) can be expressed analytically.
For _sampling_, \(f(\,\cdot\,,t)\) for \(t>0\) can help by gradually moving samples towards high-density regions of \(f_{X}\), regardless of initialization. To utilize this, a very simple idea with relations to simulated annealing [7, 17] is to have a pre-defined time schedule \(t_{T}>t_{T-1}>\ldots>t_{1}>0\) and sample \(f(\,\cdot\,,t_{i})\), \(i=T,\ldots,0\) (e.g. with Langevin dynamics [13]) successively.
### Diffusion, Empirical Bayes, and Denoising Score Matching
In this section, similar to the introduction, we again adopt the interpretation that the evolution in Eq. (1) defines the density of a smoothed random variable \(Y_{t}\). That is, \(Y_{t}\) is a random variable with probability density \(f_{Y}(\,\cdot\,,t)\), which fulfills \((\partial_{t}-\Delta_{1})f_{Y}(\,\cdot\,,t)=0\) and \(f_{Y}(\,\cdot\,,0)=f_{X}\). It is well known that the Green's function of Eq. (1) is a Gaussian (see e.g. [2]) with zero mean and variance \(\sigma^{2}(t)=2t\mbox{Id}\). In other words, for \(t>0\) we can write \(f_{Y}(\,\cdot\,,t)=G_{0,2t\mbox{Id}}*f_{X}\), where
\[G_{\mu,\Sigma}(x)=|2\pi\Sigma|^{-1/2}\exp\left(-\frac{1}{2}(x-\mu)^{\top} \Sigma^{-1}(x-\mu)\right). \tag{2}\]
Thus, the diffusion process constructs a (linear) _scale space in the space of probability densities_. In terms of the random variables, \(Y_{t}=X+\sqrt{2t}N\) where \(N\sim\mathcal{N}(0,\mbox{Id})\). We next motivate how to estimate the corresponding instantiation of \(X\) which has "most likely" spawned an instantiation of \(Y_{t}\) using empirical Bayes.
In the school of empirical Bayes [12], we try to estimate a clean random variable given a corrupted instantiation, using only knowledge about the corrupted density. In particular, for our setup, Miyasawa [9] has shown that the Bayesian least-squares estimator \(x_{\rm EB}\) for an instantiation \(y_{t}\) of \(Y_{t}\) is
\[x_{\rm EB}(y_{t})=y_{t}+\sigma^{2}(t)\nabla_{1}\log f_{Y}(y_{t},t), \tag{3}\]
which is also known as Tweedie's formula [3]. Raphan and Simoncelli [11] extended the empirical Bayes framework to arbitrary corruptions and coined the term non-parametric empirical Bayes least-squares (NEBLS).
Recently, Eq. (3) has been used frequently for parameter estimation in diffusion-based models. Let \(\{x_{i}\}_{i=1}^{I}\) be a dataset of \(I\) samples drawn from \(f_{X}\) and \(Y_{t}\) governed by the considered diffusion process. Thus, both the left- and right-hand side of Eq. (3) are _known_ -- in expectation. This naturally leads to the loss function
\[\min_{\theta}\int_{(0,\infty)}\mathbb{E}_{(x,y_{t})\sim f_{X\times Y_{t}}}\|x-y_{t}-\sigma(t)^{2}\nabla_{1}\log f_{\theta}(y_{t},t)\|\,\mbox{d}t \tag{4}\]
for estimating \(\theta\) such that \(f_{\theta}\approx f_{Y}\). Here, \(f_{X\times Y_{t}}\) denotes the joint distribution of real and degraded points. This learning problem is known as denoising score matching [6, 17, 18].
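As a toy illustration (ours) of Eq. (3), consider a one-dimensional Gaussian mixture \(f_{X}\): the diffused density \(f_{Y}(\,\cdot\,,t)\) is then available in closed form, and Tweedie's formula pulls noisy observations back towards the mixture means. All mixture parameters below are arbitrary.

```python
import numpy as np

mu = np.array([-1.0, 0.5, 1.5])              # component means of f_X
w = np.array([0.3, 0.5, 0.2])                # component weights
s0sq = 0.05 ** 2                             # shared component variance

def log_fY(y, t):
    """log f_Y(y, t): diffusion adds 2t to every component variance."""
    var = s0sq + 2.0 * t
    comp = w * np.exp(-(y[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    return np.log(comp.sum(axis=1))

def score_fY(y, t, h=1e-5):
    """Central finite-difference score (an analytic form would do equally well)."""
    return (log_fY(y + h, t) - log_fY(y - h, t)) / (2 * h)

rng = np.random.default_rng(2)
t = 0.02                                     # so that sigma^2(t) = 2t = 0.04
idx = rng.choice(len(mu), size=5, p=w)
x = mu[idx] + np.sqrt(s0sq) * rng.normal(size=5)     # clean samples from f_X
y = x + np.sqrt(2 * t) * rng.normal(size=5)          # diffused observations of Y_t
x_eb = y + 2 * t * score_fY(y, t)                    # Tweedie's formula, Eq. (3)
print(np.round(np.column_stack([x, y, x_eb]), 3))
```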
## 3 Methods
In this section, we introduce a patch and convolutional model to approximate the prior distribution of natural images. For both models, we present conditions such that they obey the diffusion process.
### Patch Model
Patch-based prior models such as expected patch log likelihood (EPLL) [19] typically use Gaussian mixture models (GMMs) to approximately learn a prior for natural image patches. Throughout this section, we approximate the density of image patches \(p\in\mathbb{R}^{a}\) of size \(a=b\times b\) as a product of GMM experts, i.e.
\[\tilde{f}_{\theta}(p,t)=Z(\{k_{j}\}_{j=1}^{J})^{-1}\prod_{j=1}^{J}\psi_{j}( \langle k_{j},p\rangle,w_{j},t), \tag{5}\]
in analogy to the product-of-experts model [5]. \(Z(\{k_{j}\}_{j=1}^{J})\) is required such that \(\tilde{f}_{\theta}\) is properly normalized. Every expert \(\psi_{j}:(\mathbb{R}\times\triangle^{L}\times[0,\infty))\to\mathbb{R}^{+}\) for \(j=1,\ldots,J\) models the density of associated filters \(k_{j}\) for all diffusion times \(t\) by an one-dimensional GMM with \(L\) components of the form
\[\psi_{j}(x,w_{j},t)=\sum_{l=1}^{L}w_{jl}G_{\mu_{l},\sigma_{j}^{2}(t)}(x). \tag{6}\]
The weights of each expert \(w_{j}=(w_{j1},\ldots,w_{jL})^{\top}\) must satisfy the unit simplex constraint, i.e., \(w_{j}\in\triangle^{L}\), \(\triangle^{L}=\{x\in\mathbb{R}^{L}:x_{l}\geq 0,\sum_{l=1}^{L}x_{l}=1\}\). Although not necessary, we assume for simplicity that all \(\psi_{j}\) have the same number of components and each component has the same mean. The discretization of \(\mu_{l}\) over the real line is fixed a priori in a uniform way and detailed in Section 4.1. Further, the variances of all components of each expert are shared and are modeled as
\[\sigma_{j}^{2}(t)=\sigma_{0}^{2}+c_{j}2t,\]
where \(\sigma_{0}\) is chosen to support the uniform discretization of the means \(\mu_{l}\) and \(c_{j}\in\mathbb{R}_{++}\) are constants, to reflect the convolution effect of the diffusion process.
Next, we show how the diffusion process leads to the linear change of each expert's variance \(\sigma_{j}^{2}(t)\). In detail, we exploit two well-known properties of GMMs: First, the product of GMMs is again a GMM, see e.g. [15]. This allows us to work on highly expressive models that enable efficient _evaluations_ due to factorization. Second, we use the fact that there exists an analytical solution to the diffusion equation if \(f_{X}\) is a GMM: The Green's function is a Gaussian with isotropic covariance \(2t\text{Id}\). Hence, diffusion amounts to the convolution of two Gaussians for every component due to the linearity of convolution. Using previous notation, if \(X\) is a random variable with normal distribution \(\mathcal{N}(\mu_{X},\Sigma_{X})\), then \(Y_{t}\) follows the distribution \(\mathcal{N}(\mu_{X},\Sigma_{X}+2t\text{Id})\). In particular, the mean remains unchanged, thus it is sufficient to only adapt the variances in Eq. (5) linearly along with the diffusion time.
**Theorem 1**.: \(\tilde{f}(\,\cdot\,,0)\) _is a homoscedastic GMM on \(\mathbb{R}^{a}\) with \(L^{J}\) components, precision matrix_
\[(\Sigma_{a})^{-1}=\frac{1}{\sigma_{0}^{2}}\sum_{j=1}^{J}(k_{j}\otimes k_{j}). \tag{7}\]
_and means \(\mu_{a,\hat{l}}=\Sigma_{a}\sum_{j=1}^{J}k_{j}\mu_{l(j)}\)._
Proof.: By definition,
\[\begin{split}&\prod_{j=1}^{J}\psi(\langle k_{j},p\rangle,w_{j},0)= \\ &\prod_{j=1}^{J}\sum_{l=1}^{L}\frac{w_{jl}}{\sqrt{2\pi\sigma_{0}^ {2}}}\exp\left(-\frac{1}{2\sigma_{0}^{2}}(\langle k_{j},p\rangle-\mu_{l})^{2} \right).\end{split} \tag{8}\]
Let \(\hat{l}(j)\) be a fixed but arbitrary selection from the index set \(\{1,\ldots,L\}\) for each \(j\in\{1,\ldots,J\}\). The general component of the above reads as
\[(2\pi\sigma_{0}^{2})^{-\frac{J}{2}}\left(\prod_{j=1}^{J}w_{jl(j)}\right)\exp\left(-\frac{1}{2\sigma_{0}^{2}}\sum_{j=1}^{J}(\langle k_{j},p\rangle-\mu_{l(j)})^{2}\right). \tag{9}\]
To find \((\Sigma_{a})^{-1}\), we complete the square as follows: Motivated by \(\nabla_{\!p}\|p-\mu_{a}\|_{\Sigma_{a}^{-1}}^{2}/2=\Sigma_{a}^{-1}(p-\mu_{a})\) we find \(\nabla_{\!p}\big(\frac{1}{2\sigma_{0}^{2}}\sum_{j=1}^{J}(\langle k_{j},p\rangle-\mu_{l(j)})^{2}\big)=\frac{1}{\sigma_{0}^{2}}\sum_{j=1}^{J}\big[(k_{j}\otimes k_{j})p-k_{j}\mu_{l(j)}\big]\) and the theorem immediately follows.
To get a tractable analytical expression for the diffusion process, we assume that the filters \(k_{j}\) are pairwise orthogonal, i.e. for all \(i,j\in\{1,\ldots,J\}\)
\[\langle k_{j},k_{i}\rangle=\begin{cases}0&\text{if }i\neq j,\\ \|k_{j}\|^{2}&\text{else}.\end{cases} \tag{10}\]
**Theorem 2** (Patch diffusion).: _Under assumption Eq. (10), \(\tilde{f}(\,\cdot\,,t)\) satisfies the diffusion equation \((\partial_{t}-\Delta_{1})\tilde{f}(\,\cdot\,,t)=0\) if \(\sigma_{j}^{2}(t)=\sigma_{0}^{2}+\|k_{j}\|^{2}2t\)._
Proof.: Assuming Eq. (10), the eigendecomposition of the precision matrix can be trivially constructed. In particular, \((\Sigma_{a})^{-1}=\sum_{j=1}^{J}\frac{\|k_{j}\|^{2}}{\sigma_{0}^{2}}(\frac{k_{j}}{\|k_{j}\|}\otimes\frac{k_{j}}{\|k_{j}\|})\), hence \(\Sigma_{a}=\sum_{j=1}^{J}\frac{\sigma_{0}^{2}}{\|k_{j}\|^{2}}(\frac{k_{j}}{\|k_{j}\|}\otimes\frac{k_{j}}{\|k_{j}\|})\). As discussed in Section 2.2, \(\Sigma_{a}\) evolves as \(\Sigma_{a}\mapsto\Sigma_{a}+2t\text{Id}_{a}\) under diffusion. Equivalently, each eigenvalue evolves as \(\frac{\sigma_{0}^{2}}{\|k_{j}\|^{2}}\mapsto\frac{\sigma_{0}^{2}+2t\|k_{j}\|^{2}}{\|k_{j}\|^{2}}\) for \(j=1,\ldots,J\). Recall that \(\sigma_{0}^{2}\) is just \(\sigma_{j}^{2}(0)\). Thus, \(\tilde{f}(\,\cdot\,,t)\) satisfies the diffusion equation if \(\sigma_{j}^{2}(t)=\sigma_{0}^{2}+\|k_{j}\|^{2}2t\).
**Corollary 2.1**.: _With assumption (10) the potential functions \(\psi_{j}(\,\cdot\,,w_{j},t)\) in Eq. (5) model the marginal distribution of the random variable \(Z_{j,t}=\langle k_{j},Y_{t}\rangle\). In addition, Eq. (5) is normalized when \(Z(\{k_{j}\}_{j=1}^{J})^{-1}=\prod_{j=1}^{J}\|k_{j}\|^{2}\)._
Proof.: Consider one component of the resulting homoscedastic GMM: \(\hat{Y}_{t}\sim\mathcal{N}(\mu_{a,\hat{l}},\Sigma_{a}+2t\text{Id}_{a})\). The distribution of \(\hat{Z}_{j,t}=\langle k_{j},\hat{Y}_{t}\rangle\) is (see e.g. [4] for a proof) \(\hat{Z}_{j,t}\sim\mathcal{N}(k_{j}^{\top}\mu_{a,\hat{l}},\,k_{j}^{\top}(\Sigma_{a}+2t\text{Id}_{a})k_{j})=\mathcal{N}(\mu_{l(j)},\sigma_{0}^{2}+2t\|k_{j}\|^{2})\). The claim follows from the linear combination of the different components.
We note that Eq. (7) only specifies a covariance matrix if \(J=a\), otherwise the matrix is singular. In the case \(J<a\), we restrict the analysis to the subspace \(\text{span}(\{k_{1},\ldots,k_{J}\})\). In particular, we also assume that the diffusion process does not transport density out of this subspace.
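A small numerical sanity check (ours) of Theorems 1 and 2: for randomly generated pairwise-orthogonal, zero-mean filters, the eigenvalues of the diffused covariance along the filter directions equal \((\sigma_{0}^{2}+2t\|k_{j}\|^{2})/\|k_{j}\|^{2}\), exactly as used in the proof above. Patch size, filter norms and \(t\) are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
a, J = 9, 8                                   # 3x3 patches, J = a - 1 zero-mean filters
sigma0_sq = (2.0 / 124) ** 2                  # sigma_0 = 2*gamma/(L-1) as in Sec. 4.1
t = 0.03

# Pairwise-orthogonal zero-mean filters: orthonormalize random ones, then rescale.
M = rng.normal(size=(a, J))
M -= M.mean(axis=0, keepdims=True)            # zero-mean columns
Q, _ = np.linalg.qr(M)                        # orthonormal columns spanning the same space
norms = rng.uniform(0.5, 2.0, size=J)
K = Q * norms                                 # k_j = norms[j] * q_j
assert np.allclose(K.T @ K, np.diag(norms ** 2))   # Eq. (10) holds by construction

# Covariance restricted to span{k_j} (proof of Theorem 2) and its diffusion.
Sigma_a = sum(sigma0_sq / norms[j] ** 2 * np.outer(Q[:, j], Q[:, j]) for j in range(J))
evals = np.sort(np.linalg.eigvalsh(Sigma_a + 2 * t * np.eye(a)))[-J:]
predicted = np.sort((sigma0_sq + 2 * t * norms ** 2) / norms ** 2)
print(np.allclose(evals, predicted))          # True
```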
### Convolutional Model
To avoid the extraction and combination of patches in patch-based image priors and still account for the local nature of low-level image features, we describe a convolutional Gaussian mixture diffusion model (GMDM) next. The following analysis assumes vectorized images \(x\in\mathbb{R}^{n}\) with \(n\) pixels; the generalization to higher dimensions is straightforward. In analogy to the patch-based model of the previous section, we extend the fields-of-experts model [14] to our considered diffusion setting by accounting for the diffusion time \(t\) and
obtain3
Footnote 3: For simplicity, we discard the normalization constant \(Z\), which is independent of \(t\).
\[f_{\theta}(x,t)=\prod_{i=1}^{n}\prod_{j=1}^{J}\psi_{j}((K_{j}x)_{i},w_{j},t). \tag{11}\]
Here, each expert \(\psi_{j}\) models the density of convolution _features_ extracted by convolution kernels \(\{k_{j}\}_{j=1}^{J}\) of size \(a=b\times b\). \(\{K_{j}\}_{j=1}^{J}\subset\mathbb{R}^{n\times n}\) are the corresponding matrix representations and all convolutions are cyclic, i.e., \(K_{j}x\equiv k_{j}*_{n}x\), where \(*_{n}\) denotes a 2-dimensional convolution with cyclic boundary conditions. Further, \(w_{j}\in\triangle^{L}\) are used to weight the components of each expert \(\psi_{j}\) as in Eq. (6). As in the patch model, it is sufficient to adapt the variances \(\sigma_{j}^{2}(t)\) by the diffusion time as the following analysis shows.
By definition for \(t=0\), we have
\[f_{\theta}(x,0)=\prod_{i=1}^{n}\prod_{j=1}^{J}\sum_{l=1}^{L}\frac{w_{jl}}{ \sqrt{2\pi\sigma_{0}^{2}}}\exp\left(-\frac{((K_{j}x)_{i}-\mu_{l})^{2}}{2 \sigma_{0}^{2}}\right). \tag{12}\]
First, we expand the product over the pixels
\[\begin{split} f_{\theta}(x,0)&=\prod_{j=1}^{J} \sum_{l(i)=1}^{L^{n}}(2\pi\sigma_{0}^{2})^{-\frac{n}{2}}\\ &\overline{w}_{j\hat{l}(i)}\exp\left(-\frac{\|(K_{j}x)-\mu_{\hat {l}(i)}\|^{2}}{2\sigma_{0}^{2}}\right)\end{split} \tag{13}\]
using the index map \(\hat{l}(i)\) and \(\overline{w}_{j\hat{l}(i)}=\prod_{i=1}^{n}w_{j\hat{l}(i)}\). Further, expanding over the features results in
\[\begin{split} f_{\theta}(x,0)&=\sum_{\hat{l}(i,j)=1}^{(L^{n})^{J}}(2\pi\sigma_{0}^{2})^{-\frac{nJ}{2}}\\ &\overline{\overline{w}}_{\hat{l}(i,j)}\exp\left(-\frac{1}{2\sigma_{0}^{2}}\sum_{j=1}^{J}\|(K_{j}x)-\mu_{\hat{l}(i,j)}\|^{2}\right),\end{split} \tag{14}\]
where \(\overline{\overline{w}}_{\hat{l}(i,j)}=\prod_{j=1}^{J}\prod_{i=1}^{n}w_{j\hat{l}(i,j)}\). Observe that Eq. (14) again describes a homoscedastic GMM with precision \(\Sigma^{-1}=\frac{1}{\sigma_{0}^{2}}\sum_{j=1}^{J}K_{j}^{\top}K_{j}\) and means \(\tilde{\mu}_{\hat{l}(i,j)}=\Sigma\frac{1}{\sigma_{0}^{2}}\sum_{j=1}^{J}K_{j}^{\top}\mu_{\hat{l}(i,j)}\). Due to the assumed boundary conditions, the Fourier transform diagonalizes the convolution matrices: \(K_{j}=F^{*}\operatorname{diag}(Fk_{j})F\). Thus, the precision matrix can be expressed as
\[\Sigma^{-1}=F^{*}\operatorname{diag}\biggl{(}\sum_{j=1}^{J}\frac{|Fk_{j}|^{2} }{\sigma^{2}}\biggr{)}F \tag{15}\]
where we used \(FF^{*}=\text{Id}\), \(\bar{z}z=|z|^{2}\) and \(|\ \cdot\ |\) denotes the complex modulus acting element-wise on its argument. We assume that the spectra of \(k_{j}\) have disjoint support, i.e.
\[\Gamma_{i}\cap\Gamma_{j}=\emptyset\ \ \text{if}\ \ i\neq j, \tag{16}\]
where \(\Gamma_{j}=\operatorname{supp}Fk_{j}\). Note that, in analogy to the pair-wise orthogonality of the filters in the patch model Eq. (10), from this immediately follows that \(\langle Fk_{j},Fk_{i}\rangle=0\) when \(i\neq j\). In addition, we assume that the magnitude is constant over the support, i.e.
\[|Fk_{j}|=\xi_{j}\mathds{1}_{\Gamma_{j}}, \tag{17}\]
where \(\mathds{1}_{A}\) is the characteristic function of the set \(A\).
**Theorem 3** (Convolutional Diffusion).: _Under assumptions (16) and (17), \(f(\ \cdot,t)\) satisfies the diffusion equation \((\partial_{t}-\Delta_{1})f(\ \cdot\,t)=0\) if \(\bar{\sigma}_{j}^{2}(t)=\sigma_{0}^{2}+\xi_{j}^{2}2t\)._
Proof.: The proof is in analogy to Theorem 2. By Eq. (15), under diffusion, \(F^{*}\operatorname{diag}\bigl{(}\sum_{j=1}^{J}\frac{\sigma^{2}}{|Fk_{j}|^{2}} \bigr{)}F\mapsto F^{*}\operatorname{diag}\left(\frac{\sigma^{2}+2t\sum_{j=1}^ {J}|Fk_{j}|^{2}}{\sum_{j=1}^{J}|Fk_{j}|^{2}}\right)F\). Using Eq. (16) the inner sum decomposes as
\[\frac{\sigma_{0}^{2}+2t\sum_{j=1}^{J}|Fk_{j}|^{2}}{\sum_{j=1}^{J}|Fk_{j}|^{2}} =\sum_{j=1}^{J}\frac{\sigma_{0}^{2}+2t|Fk_{j}|^{2}}{|Fk_{j}|^{2}} \tag{18}\]
and with Eq. (17) the numerator reduces to \(\sigma_{0}^{2}+2t\xi_{j}^{2}\).
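Assumptions (16)-(17) and Theorem 3 can likewise be verified directly in Fourier space. The sketch below (ours) specifies the filters by their spectra on a one-dimensional cyclic lattice with arbitrary magnitudes \(\xi_{j}\) and phases; it checks that the diffused covariance carries the value \((\sigma_{0}^{2}+2t\xi_{j}^{2})/\xi_{j}^{2}\) on every frequency in \(\Gamma_{j}\).

```python
import numpy as np

rng = np.random.default_rng(4)
n, J = 16, 4
sigma0_sq, t = 0.1, 0.05

# Filters defined by their spectra: disjoint supports Gamma_j (Eq. (16)) of constant
# magnitude xi_j (Eq. (17)); the phases are irrelevant for this spectrum check.
xi = rng.uniform(0.5, 2.0, size=J)
blocks = np.array_split(np.arange(n), J)
Fk = np.zeros((J, n), dtype=complex)
for j, block in enumerate(blocks):
    Fk[j, block] = xi[j] * np.exp(1j * rng.uniform(0, 2 * np.pi, size=len(block)))

# Diagonal of the precision in Fourier space, Eq. (15), and its diffused covariance.
spec_prec = (np.abs(Fk) ** 2).sum(axis=0) / sigma0_sq
spec_cov_t = 1.0 / spec_prec + 2.0 * t
for j, block in enumerate(blocks):
    predicted = (sigma0_sq + 2 * t * xi[j] ** 2) / xi[j] ** 2    # Theorem 3
    assert np.allclose(spec_cov_t[block], predicted)
print("Theorem 3 spectral check passed.")
```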
## 4 Numerical Results
### Numerical Optimization
For all experiments, \(\psi_{j}\) is an \(L=125\)-component GMM, with equidistant means \(\mu_{l}\) in the interval \([-\gamma,\gamma]\), where we chose \(\gamma=1\). To support the uniform discretization of the means, the shared standard deviation of the experts is \(\sigma_{0}=\frac{2\gamma}{L-1}\). Assuming zero-mean filters of size \(b\times b\), we use \(J=b^{2}-1\) filters. Each component of the initial filters is independently drawn from a zero-mean Gaussian distribution with standard deviation \(b^{-1}\). We avoid simplex projections by replacing \(w_{j}\) with learnable parameters \(\zeta_{j}\), from which \(w_{j}\) are computed using a soft-argmax \(w_{jl}=\frac{\exp\zeta_{jl}}{\sum_{l^{\prime}=1}^{L}\exp\zeta_{jl^{\prime}}}\), and initialize \(\zeta_{jl}=\frac{0.1\sqrt{\alpha}}{1+a\mu_{l}^{2}}\), where \(\alpha=1000\).
For the numerical experiments, \(f_{X}\) reflects the distribution of rotated and flipped \(b\times b\) patches from the 400 gray-scale images in the BSDS 500 [8] training and test set, with each pixel in the interval \([0,1]\). We optimize the parameters \(\theta=\{(k_{j},\zeta_{j})\}_{j=1}^{J}\) in Eq. (4) using the iPALM algorithm [10] on randomly chosen batches of size \(3200\) for \(100\,000\) steps. We approximate the infinite-time diffusion process by uniformly drawing \(\sqrt{2t}\) from the interval \([0,0.4]\). We detail how we ensure the orthogonality of the filters during the iterations of iPALM in the next section.
#### 4.1.1 Enforcing Orthogonality
Let \(K=[k_{1},k_{2},\ldots,k_{J}]\in\mathbb{R}^{a\times J}\) denote the matrix obtained by horizontally stacking the filters. We are interested in finding
\[\operatorname{proj}_{\mathcal{O}}(K)=\operatorname*{arg\,min}_{M\in\mathcal{O}} \lVert M-K\rVert_{F}^{2} \tag{19}\]
where \(\mathcal{O}=\{X\in\mathbb{R}^{a\times J}:X^{\top}X=D^{2}\}\), \(D=\operatorname{diag}(\lambda_{1},\lambda_{2},\ldots,\lambda_{J})\) is diagonal, and \(\|\cdot\|_{F}\) is the Frobenius norm. Since \(\operatorname{proj}_{\mathcal{O}}(K)^{\top}\operatorname{proj}_{\mathcal{O}} (K)=D^{2}\) we can represent it as \(\operatorname{proj}_{\mathcal{O}}(K)=OD\) with \(O\) semi-unitary (\(O^{\top}O=\operatorname{Id}\)). Other than positivity, we do not place any restrictions on \(\lambda_{1},\ldots,\lambda_{J}\), as these are related to the precision in our model. Thus, we rewrite the objective
\[\begin{split}&\operatorname{proj}_{\mathcal{O}}(K)=\operatorname{ arg\,min}_{\begin{subarray}{c}O^{\top}O=\operatorname{Id}_{J}\\ D=\operatorname{diag}(\lambda_{1},\ldots,\lambda_{J})\end{subarray}}\\ &\left\{\|OD-K\|_{F}^{2}=\|K\|_{F}^{2}-2\langle K,OD\rangle_{F}+ \|D\|_{F}^{2}\right\}\end{split} \tag{20}\]
where \(\langle\,\cdot\,,\,\cdot\,\rangle_{F}\) is the Frobenius inner product.
We propose the following alternating minimization scheme for finding \(O\) and \(D\). The solution of the reduced subproblem in \(O\) can be computed by setting \(O=U^{\top}\), using the polar decomposition \(DK^{\top}=UP\), where \(U\in\mathbb{R}^{J\times a}\) is semi-unitary (\(UU^{\top}=\operatorname{Id}_{J}\)) and \(P=P^{\top}\in\mathbb{S}_{a}^{+}\). The sub-problem in \(D\) is solved by setting \(D_{i,i}=\left((O^{\top}K)_{i,i}\right)_{+}\). The algorithm is summarized in Algorithm 1, where we have empirically observed fast convergence; \(B=3\) steps already yielded satisfactory results. A theoretical analysis of the algorithm is presented in the supplemental material.
```
Input:  \(K=[k_{1},\ldots,k_{J}]\in\mathbb{R}^{a\times J}\), \(B\in\mathbb{N}\), \(D^{(1)}=\operatorname{Id}_{J}\)
Output: \(O^{(B)}D^{(B)}=\operatorname{proj}_{\mathcal{O}}(K)\)
for \(b\in 1,\ldots,B-1\) do
    \(U^{(b)}P^{(b)}=D^{(b)}K^{\top}\)    // polar decomposition
    \(O^{(b+1)}=(U^{(b)})^{\top}\)
    \(D_{i,i}^{(b+1)}=\left(((O^{(b+1)})^{\top}K)_{i,i}\right)_{+}\)
```
**Algorithm 1** Algorithm for orthogonalizing a set of filters \(K\).
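A minimal numerical sketch of Algorithm 1 follows; obtaining the polar factor from an SVD and the function names are our choices, not the authors' code (assumes \(B\geq 2\)).

```
import jax.numpy as jnp

def project_orthogonal(K, B=3):
    """Project K = [k_1, ..., k_J] in R^{a x J} onto {X : X^T X diagonal}, Eq. (19)."""
    a, J = K.shape
    d = jnp.ones(J)                                   # diagonal of D^(1) = Id_J
    for _ in range(B - 1):
        # The maximizer of <K, O D>_F over semi-unitary O is the polar factor of K D.
        U, _, Vt = jnp.linalg.svd(K * d, full_matrices=False)
        O = U @ Vt                                    # a x J with O^T O = Id_J
        d = jnp.maximum(jnp.einsum("ij,ij->j", O, K), 0.0)   # ((O^T K)_{ii})_+
    return O * d                                      # = O @ diag(d) = proj_O(K)
```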
To visually evaluate whether our learned model matches the empirical marginal densities for any diffusion time \(t\), we plot them in Fig. 1. At the top, the learned \(7\times 7\) orthogonal filters \(k_{j}\) are depicted; the associated learned potential functions \(-\log\psi_{j}\) are shown below. Indeed, they match the empirical marginal responses
\[h_{j}(z,t)=-\log\mathbb{E}_{p\sim f_{X}}\delta(z-\langle k_{j},p\rangle) \tag{21}\]
visualized at the bottom almost perfectly, even at the low-density tails. In accordance with Theorem 2, the potentials barely change with \(t\) when \(\|k_{j}\|\) is small. Conversely, when \(\|k_{j}\|\) is large, the change is much more drastic. We observe the same for \(15\times 15\) filters, as shown in the supplementary material.
### Sampling
A direct consequence of Corollary 2.1 is that our model admits a simple sampling procedure: the statistical independence of the components allows us to draw random patches via \(Y_{t}=\sum_{j=1}^{J}\frac{k_{j}}{\|k_{j}\|^{2}}Z_{j,t}\), where \(Z_{j,t}\) is sampled from the one-dimensional GMM \(\psi_{j}\). The samples in Fig. 2 indicate a good match over a wide range of \(t\). However, for small \(t\) the generated patches appear slightly noisy, which is due to an over-smooth approximation of the sharply peaked marginals around 0.
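As an illustrative sketch of this procedure (array shapes and names are assumptions; `sigma_t` stands for the per-filter standard deviation \(\bar{\sigma}_{j}(t)\) of the mixture components at time \(t\)):

```
import jax
import jax.numpy as jnp

def sample_patch(key, filters, weights, mu, sigma_t):
    """Draw one patch Y_t = sum_j k_j / ||k_j||^2 * Z_{j,t}.

    filters: (J, a) filters k_j, weights: (J, L) mixture weights of psi_j,
    mu: (L,) shared means, sigma_t: scalar or (J,) component std at time t.
    """
    J = filters.shape[0]
    key_c, key_n = jax.random.split(key)
    comp = jax.random.categorical(key_c, jnp.log(weights), axis=-1)   # (J,) component indices
    z = mu[comp] + sigma_t * jax.random.normal(key_n, (J,))           # Z_{j,t} ~ psi_j
    norms_sq = jnp.sum(filters**2, axis=1)                            # ||k_j||^2
    return jnp.sum(filters / norms_sq[:, None] * z[:, None], axis=0)  # patch of length a
```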
### Image Denoising
For the denoising experiments, we use the 68 test images from [8]. To exploit our prior for denoising, we employ empirical Bayes-patch averaging (EB-PA) and the half-quadratic splitting (HQS) algorithm [19]. In HQS, we approximate the solution to the inner MAP problem with one empirical Bayes step on \(\tilde{f}_{\theta}(\,\cdot\,,t)\), and set \(\beta=\frac{1}{2t}\), using a predefined schedule for \(t\). The quantitative analysis in Table 1 shows competitive performance, especially given the relatively small number of parameters in our model.
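The single empirical Bayes step can be sketched via Tweedie's formula; this is the standard identity for Gaussian noise of variance \(2t\), written for an assumed differentiable callable `log_density(x, t)` rather than the authors' implementation.

```
import jax

def empirical_bayes_step(log_density, y, t):
    """Tweedie step: E[x | y] = y + 2t * grad_y log f(y, t) for y = x + N(0, 2t)."""
    score = jax.grad(log_density, argnums=0)(y, t)
    return y + 2.0 * t * score
```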
### Noise Estimation and Blind Image Denoising
The construction of our model allows us to interpret \(\tilde{f}_{\theta}(\,\cdot\,,t)\) as a time-conditional likelihood density. Fig. 3 shows that we can utilize our model for heteroscedastic blind denoising by estimating pixel-wise noise and using EB-PA. Fig. 4 shows the expected negative-log density over a range of \(\sigma\) and \(\sqrt{2t}\). For visualization purposes, we normalized the negative-log density to have a minimum of zero over \(t\): \(l_{\theta}(x,t)=-\log\tilde{f}_{\theta}(x,t)+\max_{t^{\prime}}\log\tilde{f}_{\theta}(x,t^{\prime})\). The estimate \(\sigma\mapsto\operatorname{arg\,min}_{t}\mathbb{E}_{p\sim f_{X},\eta\sim\mathcal{N}(0,\operatorname{Id})}l_{\theta}(p+\sigma\eta,t)\) is a very good match to \(\sigma\mapsto\sqrt{2t}\).
## 5 Conclusion
In this paper, we introduced GMDMs as products of GMMs on filter responses that allow for an explicit solution of the diffusion equation of the associated density. Our explicit formulation enables learning of product/field-of-experts-like image priors simultaneously for all diffusion times using denoising score matching. Our numerical results demonstrated that GMDMs capture the statistics of natural image patches well for any noise level and hence are suitable for heteroscedastic (blind) image denoising. In future work, we plan to extend the numerical evaluation to the convolutional model and apply our framework to challenging inverse problems in medical imaging.
## Appendix A Preliminary Theoretical Analysis of Projection Algorithm
The problem is: given \(K\in\mathbb{R}^{m\times n}\), \(n\leq m\), find \(A\in\mathbb{R}^{m\times n}\) such that \(A^{T}A\) is a diagonal \(n\times n\) matrix and \(A\) minimizes \(\|A-K\|_{2}\). We decompose \(K=U^{K}\Sigma^{K}(V^{K})^{T}\) and
Figure 1: Learned filters \(k_{j}\) (top, the intervals show the values of black and white respectively, amplified by a factor of 10) and potential functions \(-\log\psi_{j}\) (middle). On the bottom the empirical marginal filter response histograms are drawn.
| \(255\sigma\) | FoE [14] | GMM-EPLL [19] | GMDM EB-PA (\(b=7\)) | GMDM EB-PA (\(b=15\)) | GMDM HQS (\(b=7\)) | GMDM HQS (\(b=15\)) |
| --- | --- | --- | --- | --- | --- | --- |
| 15 | 30.18 | 31.21 | 30.00 | 30.34 | 30.37 | 30.69 |
| 25 | 27.77 | 28.72 | 27.47 | 27.81 | 28.13 | 28.39 |
| 50 | 23.29 | 25.72 | 24.61 | 24.96 | 25.32 | 25.50 |
| 100 | 16.68 | 23.19 | 22.14 | 22.74 | 23.07 | 23.12 |
| #Params | 648 | 819200 | 8352 | 78400 | 8352 | 78400 |

Table 1: Quantitative denoising results. Reference numbers are taken from [19].
Figure 2: Samples from the random variable \(Y_{t}\) (top) and generated patches (bottom).
\(A=U\Sigma V^{T}\), with \(U,V\) in \(SO(m)\), \(SO(n)\) respectively, and \(\Sigma\) an \(m\times n\) matrix with \(\Sigma_{i,j}=0\) if \(i\neq j\) and \(\Sigma_{i,i}=\sigma_{i}\geq 0\) for \(i=1,\ldots,n\) (hence if \(m=n\), \(\Sigma\) is diagonal; if \(m>n\), it is made of an \(n\times n\) diagonal matrix "above" an \((m-n)\times n\) null matrix).
The constraint reads: \(A^{T}A=V\Sigma^{2}V^{T}\) must be diagonal, so that \(A\) should have an SVD representation with \(V=I\); we now assume it has the form \(U\Sigma\).
Then, \(\|A-K\|=\|U\Sigma-U^{K}\Sigma^{K}(V^{K})^{T}\|=\|(U^{K})^{T}U\Sigma-\Sigma^{K}( V^{K})^{T}\|\) so without loss of generality, we can assume that \(K\) has the form \(\Sigma^{K}(V^{K})^{T}\), and we consider the problem:
\[\min_{U,\Sigma}\|U\Sigma-\tilde{K}\| \tag{22}\]
where \(\tilde{K}=\Sigma^{K}(V^{K})^{T}\). Observe that
\[\|U\Sigma-\tilde{K}\|^{2}=\|\Sigma\|^{2}-2(U\Sigma)\cdot\tilde{K}+\|\Sigma^{K }\|^{2}\]
The above objective is easily minimized w.r.t. \(\Sigma\) or \(U\). For \(\Sigma\), since \((U\Sigma)\cdot\tilde{K}=\Sigma\cdot(U^{T}\tilde{K})\), the solution is
\[\sigma_{i}=(U^{T}\tilde{K})_{i,i}=\sum_{k=1}^{n}u_{k,i}\sigma_{k}^{K}v_{i,k}^{ K}\]
and the objective becomes (the first term below is constant and could be removed):
\[\min_{U}\|\Sigma^{K}\|^{2}-\sum_{i=1}^{n}(U^{T}\tilde{K})_{i,i}^{2}.\]
Remark (Frank-Wolfe type method). Letting \(f(U):=\|\Sigma^{K}\|^{2}-\sum_{i=1}^{n}(U^{T}\tilde{K})_{i,i}^{2}\), one has \(\nabla f(U)\cdot M=-2\sum_{j=1}^{n}(U^{T}\tilde{K})_{j,j}(\sum_{i=1}^{m}M_{i,j}\tilde{K}_{i,j})\), that is
\[(\nabla f(U))_{i,j}=-2(U^{T}\tilde{K})_{j,j}\tilde{K}_{i,j}=-2\sigma_{j} \tilde{K}_{i,j}.\]
for \(j\leq n\), and \(0\) for \(j>n\). This means that the alternating minimization algorithm can be viewed as a Frank-Wolfe type method on \(f\): starting from \(U^{0}=I\), one finds \(U^{n+1}\) by minimizing \(\nabla f(U^{n})\cdot U\) for \(U\in SO(m)\). A stationary point is clearly a local minimum, yet as \(f\) is concave, it is not clear that it is a global minimum.
On the other hand, minimizing w.r.t. \(U\) means solving
\[\max_{U^{T}U=I}U\cdot(\tilde{K}\Sigma^{T})\]
since \((U\Sigma)\cdot\tilde{K}=\operatorname{Tr}U\Sigma\tilde{K}^{T}=\operatorname{ Tr}U(\tilde{K}\Sigma^{T})^{T}=U\cdot(\tilde{K}\Sigma^{T})\).
Now, given an \(m\times m\) matrix \(M\), \(\max_{O^{T}O=I}O\cdot M=\|M\|_{1}\), the sum of the singular values. Indeed, if \(M=U\Sigma V^{T}\) with \(U,V\in SO(m)\), then \(O\cdot M=O\cdot(U\Sigma V^{T})=(U^{T}OV)\cdot\Sigma\) and the max is reached for \(O=UV^{T}\), with value \(\sum_{i}\sigma_{i}\).
Hence the problem above is solved for \(U=U^{\Sigma}(V^{\Sigma})^{T}\) where \(U^{\Sigma},V^{\Sigma}\) are the \(m\times m\) orthogonal matrices arising in the SVD representation of \(\tilde{K}\Sigma^{T}\). Then, the objective becomes:
\[\min_{\Sigma}\|\Sigma\|^{2}-2\|\tilde{K}\Sigma^{T}\|_{1}+\|\Sigma^{K}\|^{2}\]
Figure 3: Blind denoising: Image corrupted by heteroscedastic Gaussian noise in a checkerboard-pattern (standard deviation 0.1 and 0.2), noise estimate, EB-PA denoising result, and the difference to the reference image.
Now we show that, in fact, we can work only with smaller \(n\times n\) matrices, using the particular form of \(\tilde{K}\). Indeed, \((\tilde{K}\Sigma^{T})_{i,j}=\sum_{l=1}^{n}\sum_{k=1}^{n}\sigma_{i}^{K}\delta_{i,l}(V^{K})_{k,l}\sigma_{k}\delta_{j,k}\) is \(0\) if \(i>n\) or \(j>n\), and \(\sigma_{i}^{K}(v^{K})_{j,i}\sigma_{j}\) otherwise. Denoting by \((\tilde{K}\Sigma^{T})_{n}\) this reduced \(n\times n\) matrix, it is obvious that it has the same singular values as \(\tilde{K}\Sigma^{T}\) and hence the objective can be reduced to
\[\min_{(\sigma_{i})_{i}}\sum_{i=1}^{n}\sigma_{i}^{2}-2\|(\sigma_{i}\sigma_{j}^{ K}v_{i,j}^{K})_{i,j=1}^{n}\|_{1}+\|\Sigma^{K}\|^{2}\]
Equivalently, we can replace the problem (22) with the same problem, yet with smaller \(n\times n\) matrices, and in particular \(U\in SO(n)\) instead of \(SO(m)\).
Assume now that we iterate by alternately minimizing over \(U\) and \(\Sigma\), and suppose we reach a fixed point. Then, consider \(f(U)=\|\Sigma^{K}\|^{2}-\sum_{i=1}^{n}(U^{T}\tilde{K}_{n})_{i,i}^{2}\) where \(\tilde{K}_{n}=(\sigma_{i}^{K}v_{i,j}^{K})_{i=1}^{n}\). One has for small \(t\):
\[f(U+tM)=-\sum_{i}((U^{T}+tM^{T})\tilde{K}_{n})_{i,i}^{2}=-\sum_{i }(U^{T}\tilde{K}_{n})_{i,i}^{2} \tag{23}\] \[-2t\sum_{i}(U^{T}\tilde{K}_{n})_{i,i}\Big{(}\sum_{j}M_{j,i}( \tilde{K}_{n})_{j,i}\Big{)}+o(t)\]
showing that \((\nabla f(U))_{i,j}=-2(U^{T}\tilde{K}_{n})_{j,j}(\tilde{K}_{n})_{i,j}=-2\sigma_{j}(\tilde{K}_{n})_{i,j}\), i.e., \(\nabla f(U)=-2\tilde{K}_{n}\Sigma\) (where now \(\Sigma=\Sigma^{T}\) since it is an \(n\times n\) diagonal matrix).
Now if \(M\) is tangent to \(SO(n)\), one has \((U+tM)^{T}(U+tM)=I+t(M^{T}U+U^{T}M)+t^{2}M^{T}M=I+o(t)\), so that \(M^{T}U+U^{T}M=0\). Since \((U,\Sigma)\) is a fixed point, one has \(U=U^{\Sigma}(V^{\Sigma})^{T}\) where \(\tilde{K}_{n}\Sigma=U^{\Sigma}D(V^{\Sigma})^{T}\) for some diagonal matrix \(D\). Then,
\[\nabla f(U)\cdot M=-2\operatorname{Tr}(M^{T}U^{\Sigma}D(V^{\Sigma})^{T})\] \[=-2\operatorname{Tr}(M^{T}UV^{\Sigma}D(V^{\Sigma})^{T})=2\operatorname{Tr}(U^{T}MV^{\Sigma}D(V^{\Sigma})^{T})\] \[=2\operatorname{Tr}(MV^{\Sigma}D(U^{\Sigma})^{T})=2\operatorname{Tr}(M(\tilde{K}_{n}\Sigma)^{T})=-\nabla f(U)\cdot M\]
so that \(\nabla f(U)\cdot M=0\) and \(U\) is a critical point of \(f\) on \(SO(n)\).
## Appendix B Additional Experiments
To emphasize the importance of the diffusion for learning, we learn a model solely on \(\sigma=0.02\) and compare the learned potential functions to the empirical marginal filter responses. In detail, Fig. 6 shows the normalized mean-squared error \(\operatorname{NMSE}_{\kappa}:\sqrt{2t}\mapsto\sum_{j=1}^{J}\int_{\Omega_{j}(t,\kappa)}\frac{\bigl(-\log\psi_{j}(z,t)-h_{j}(z,t)\bigr)^{2}}{\max_{z\in\Omega_{j}(t,\kappa)}h_{j}^{2}(z,t)}\,\mathrm{d}z\). To avoid regions in which the empirical histogram is extremely unreliable, we define a "credible interval" \(\Omega_{j}(t,\kappa)=\{[\alpha,\beta]\subset\mathbb{R}:\int_{-\infty}^{\alpha}h_{j}(\,\cdot\,,t)=\int_{\beta}^{\infty}h_{j}(\,\cdot\,,t)=\kappa\}\), and show different \(\kappa\in\{0.005,0.01,0.02\}\). The results show that training for multiple noise scales improves the performance. In particular, larger diffusion times improve the performance also for (e.g.) \(\sigma=0.02\), _especially in low-density regions_, which is apparent when \(\kappa\) is small. When \(\kappa\) is large, we observe that the performance of the models only diverges when \(\sigma\) becomes relatively large.
We show that our model is also scalable by learning on \(15\times 15\) patches. The subset containing \(112\) of the \(15^{2}-1\) filters and potential functions shown in Fig. 7 indicates that the results and discussion from the main paper also apply to much larger filter sizes. The model shown in the figure was used in the denoising experiments in Table 1 in the main paper.
2301.11410 | Automatic differentiation as an effective tool in Electrical Impedance Tomography | Determining physical properties inside an object without access to direct measurements of target regions can be formulated as a specific type of _inverse problem_. One such problem arises in _Electrical Impedance Tomography_ (EIT). In general, EIT can be posed as a minimization problem and solved by iterative methods, which require knowledge of derivatives of the objective function. In practice, this can be challenging because analytical closed-form solutions for them are hard to derive and implement efficiently. In this paper, we study the effectiveness of _automatic differentiation (AD)_ to solve EIT in a minimization framework. We devise a case study where we compare solutions of the inverse problem obtained with AD methods and with the manually-derived formulation of the derivative against the true solution. Furthermore, we study the viability of AD for large scale inverse problems by checking the memory and load requirements of AD as the resolution of the model increases. With powerful infrastructure, AD can pave the way for faster and simpler inverse solvers and provide better results than classical methods. | Ivan Pombo, Luis Sarmento | 2023-01-26T20:51:54Z | http://arxiv.org/abs/2301.11410v1 |

# Automatic differentiation as an effective tool in Electrical Impedance Tomography
###### Abstract
Determining physical properties inside an object without access to direct measurements of target regions can be formulated as a specific type of _inverse problem_. One such problem arises in _Electrical Impedance Tomography_ (EIT).
In general, EIT can be posed as a minimization problem and solved by iterative methods, which require knowledge of derivatives of the objective function. In practice, this can be challenging because analytical closed-form solutions for them are hard to derive and implement efficiently.
In this paper, we study the effectiveness of _automatic differentiation (AD)_ to solve EIT in a minimization framework. We devise a case study where we compare solutions of the inverse problem obtained with AD methods and with the manually-derived formulation of the derivative against the true solution.
Furthermore, we study the viability of AD for large scale inverse problems by checking the memory and load requirements of AD as the resolution of the model increases. With powerful infrastructure, AD can pave the way for faster and simpler inverse solvers and provide better results than classical methods.
## 1 Introduction
Electrical Impedance Tomography (EIT) is a non-invasive imaging method that produces images by determining the electrical conductivity inside a subject using only electrical measurements obtained at its surface. More specifically, sinusoidal currents are applied to the subject through electrodes placed at certain locations on the surface of the object. The resulting voltages are then measured, making it possible to infer internal properties of the object. EIT is a low-cost method and harmless for human beings, since it only applies low-amplitude currents. Additionally, it allows for real-time monitoring of various subjects even in the most difficult conditions. This technology has medical applications in scenarios such as ventilation monitoring and the detection of brain hemorrhages and breast cancer. EIT is also used in geophysical imaging, flow analysis and other industrial purposes. For further insight into the applications see [14] and [1].
A particularly relevant application of EIT is the early detection of breast cancer, specifically for young women, for whom the risk of the ionizing X-rays of mammography outweighs the benefits of regular check-ups. Fig. 1 describes a simplified EIT scenario where the blue region represents cancer inside the breast, denoted as a domain \(\Omega\). The assumption is that healthy and cancerous tissue have different conductivity values \(\sigma_{1},\,\sigma_{2}\), respectively. The goal is to locate a potential region affected by cancer from measurements on the breast surface, which is the boundary of the domain \(\Omega\) and denoted as \(\partial\Omega\).
Figure 1: Example of a target conductivity over the domain \(\Omega\) that represents a simple model of breast cancer where tumors have higher conductivity than the background. The domain \(\Omega\) is represented by the black circumference which has a conductivity of \(\sigma_{\text{out}}\). In a blue circle it is represented a region with different conductivity \(\sigma_{\text{in}}\) from the background one \(\sigma_{\text{out}}\).
The measurements are obtained by injecting into the domain \(\Omega\) a fixed set of different _electrical current patterns_\(I_{j}\). Each \(I_{j}\) is defined by injecting electrical current through all electrodes in a particular manner, _i.e._, for \(L\) electrodes we have \(I_{j}=(I_{j,1},...,I_{j,L})\). Simultaneously, we measure the resulting voltages \(V_{j}\) for each current pattern, obtaining a voltage measurement at each electrode, denoted as \(V_{j}=(V_{j,1},...,V_{j,L})\). This leads to a set of _true measurements_ denoted by \(m_{j}=(I_{j},V_{j})\). Then, the corresponding _inverse problem_ is to determine the electrical conductivity over \(\Omega\) that leads to these measurements. In the particular case of Fig. 1 we want to determine the conductivity outside and inside the anomaly, \(\sigma_{\text{out}}\) and \(\sigma_{\text{in}}\), respectively, and the location of the anomaly (in blue).
This is a hard problem because in general there is no analytical expression that maps a set of electrical measurements back to the respective conductivity profile that generates them.
To solve this inverse problem we first need to understand how to solve the _direct problem_, that is, computing electrical measurements \(V_{j}\) for a given set of currents \(I_{j}\) and conductivity \(\sigma\). The direct problem has an easier solution, since the propagation of electrical current through the domain obeys the well-known Maxwell equations.
Many methods for solving the direct problem are described in the literature, _e.g._, Finite Element Method (FEM) [12], Boundary Element Method (BEM) [4], and, more recently Deep Learning methods (DL) [10].
Independently of the numerical method used to solve the direct problem, such a procedure is commonly designated as _simulation_. Hence, for a given conductivity profile we can obtain through a simulation method the electrical measurements denoted as \(m_{j}^{\text{Sim}}=(I_{j},V_{j}^{\text{Sim}})\), for each different current pattern with \(j=1,...,N\). We can thus define an operator that maps conductivity into voltage measurements, here termed the _direct operator_ and given as:
\[\textbf{Sim}:\sigma\mapsto V^{\textbf{Sim}}=(V_{1,1}^{\text{Sim}},..,V_{j,l}^{ \text{Sim}},...,V_{N,L}^{\text{Sim}})\in\mathbb{R}^{L\cdot N} \tag{1}\]
where \(V_{j,l}^{\text{Sim}}\) represent voltages measured at the \(l\)-th electrode for the \(j\)-th current pattern.
Our goal is to find a conductivity profile \(\sigma\) that matches measurements \(m=(m_{1},...,m_{N})\). Thus, we can formulate EIT as the following minimization problem by making use of the direct operator **Sim**:
\[\min_{\sigma}\frac{1}{2}\left\|\textbf{Sim}(\sigma)-m^{\text{true}}\right\|_{ 2}^{2}. \tag{2}\]
We use the \(L^{2}\)-norm here for simplicity, but, in general, we could use any other norm as long as it is differentiable.
Most classical methods for solving this minimization problem are based on iteratively improving the solution. The update requires computing the derivative of both the loss function in (2) and the **Sim** operator.
To solve the inverse problem under an optimization framework we opted for the _Levenberg-Marquardt algorithm_ [6, 7]. It is a simple quasi-Newton method that only requires the Jacobian of the **Sim** operator. Further details about the method are given in Appendix B.
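As a rough sketch of one such update (the damping strategy and the line search are simplified here; the configuration actually used is described in Appendix B and Section 6):

```
import jax.numpy as jnp

def levenberg_marquardt_step(sim, jac, sigma, m_true, lam=1e-2):
    """One Levenberg-Marquardt update for min 0.5 * ||Sim(sigma) - m_true||^2.

    sim: maps parameters to simulated measurements, jac: returns its Jacobian,
    lam: damping factor interpolating between Gauss-Newton and gradient descent.
    """
    r = sim(sigma) - m_true                         # residual vector
    J = jac(sigma)                                  # (num_measurements, num_params)
    H = J.T @ J + lam * jnp.eye(J.shape[1])         # damped Gauss-Newton Hessian
    delta = jnp.linalg.solve(H, -J.T @ r)           # parameter update direction
    return sigma + delta
```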
In essence, the main challenges to solve the minimization problem (2) with iterative classic methods are:
* to ensure that the simulator is once-differentiable with respect to a conductivity parameterization;
* to devise a method to compute the respective derivatives of the simulator.
Our study explores a simulation operator obtained through FEM, which is already well established for EIT, see [8].
When the **Sim** operator is given by FEM we can deduce an analytical closed-form of the derivative with respect to the conductivity variation. It is simply obtained with respect to a conductivity discretization over the FEM mesh, see Fig. 2. As such, it requires derivative computations with respect to the conductivity values over _all elements of the mesh_. If the conductivity is defined through a different parameterization, we can obtain the respective derivatives by the chain rule of differentiation. For this, the analytical formulation needs to be adapted and re-derived for each particular parameterization of the conductivity. As a result, this formula is hard to derive and implement, see [5] and [13].
Automatic differentiation (AD) is a method that automatically evaluates exact derivatives of complex programs. It exploits the simple mathematical operations the programs are built on to compute the derivative automatically through the chain rule. While the initial concept was developed in the sixties [15], only recently, with advances in hardware and efficient implementations like JAX [2], has it gained traction for application to general problems.
Figure 2: Circular anomaly defined over a triangular FEM mesh in 2D. Electrodes are attached to the boundary, black lines.
In this paper, we explore automatic differentiation as an alternative to manual methods for computing the Jacobian of differentiable simulators. In particular, the goal is to validate its effectiveness in solving the EIT inverse problem. By effectiveness we mean that it is as successful in solving the inverse problem as previous methods, namely the analytical formulation. By doing so we show its versatility compared with the analytical formulation and, moreover, verify its viability for high-resolution models.
The validation is done by comparing the error between the solutions obtained by solving the minimization problem with each of the two derivative methods, as well as their errors with respect to the true solution. In particular, we evaluate the maximum difference between both Jacobian computations to check that they evaluate to the same result. As a second set of checks, we explore the memory consumption of AD and show that it remains reasonable as the problem scales to higher resolutions.
Our end goal is to show the feasibility and practicality of AD as a tool for lowering the entry barrier to other inverse problems in partial differential equations, where AD can also be applied.
In the following, we first introduce the EIT case study we are using for comparison. In Section 3, we explain how the required derivatives are computed with both the analytical and AD method. In Sec. 4, we introduce our experimental setup. Results comparing the effectiveness of both methods and viability of AD are given in Sec. 5, and conclusions are drawn out in Sec. 6.
## 2 Establishing a case study
In this section we establish a case study in order to make a clear comparison between both methods for computing derivatives.
### EIT scenario
To demonstrate our claims we focus on a two-dimensional setup. We remark that this is not physically accurate, since electrical current propagates in three dimensions. However, it simplifies the construction of our case study.
EIT is an _ill-posed_ inverse problem [8] and thus we need to take into account its possible instability, _i.e._, small variations in the measurements may imply large variations in the reconstructed parameters. In practice, this makes the inverse problem hard to solve, since true measurements, captured with real-world measuring devices, always contain noise. Therefore, solutions obtained from noisy input data can be drastically different from the true solution.
Due to this, it becomes hard to accurately determine a very large number of parameters from a small number of measurements, _e.g._, the value of the conductivity at every element of a fine mesh (see Fig. 2).
To mitigate this problem we want to make as many measurements as possible. However, the number of distinct measurements is constrained by the number of electrodes: for \(L\) electrodes there are only \(L-1\) linearly independent current patterns for which the voltage measurements yield independent information about the conductivity, see [8].
The best way to mitigate this issue is to work on simpler cases. By doing so we can reduce the parameter space and have less variability in the solutions, as in Fig. 1. Even though the instability issues are not completely fixed, the problem then has a lower, more tractable dimensionality.
For the comparison we wish to make, it is enough to focus on conductivity profiles with a circular region whose conductivity value differs from the background, see Figs. 1 and 2. These anomalies are parameterized by their center \((c_{x},c_{y})\) inside the domain \(\Omega\), radius \(r\), and conductivity values inside and outside, \(\sigma_{\text{in}}\) and \(\sigma_{\text{out}}\), respectively.
We work with this simplification because the region is described by only a few parameters, which makes it easier to obtain a solution to the inverse problem. Further, we remark that we need to make sure the parameterization is differentiable. Our choice of circular regions is based on this, since it is easy to define a smooth parameterization. For regions with corners, two smoothing procedures would be required: one to smooth the corners and another to smooth the parameterization.
For the reasons above, in our experiments we assume the existence of a _single_ circular anomaly with conductivity value different from the background, as in Fig. 1, and denote the parameterization variables as
\[\sigma=(r,c_{x},c_{y},\sigma_{\text{in}},\sigma_{\text{out}}). \tag{3}\]
We now introduce the EIT model, the conductivity parameterization and the measurement setup we use to proceed with our comparison.
### Voltage measuring setup
We introduce here the measuring setup that is applied for the direct problem.
In this simple 2D setup, we define the **Sim** operator in (1) according to the case study and the measurement setup.
Recall that with \(L\) electrodes at the surface \(\partial\Omega\), we can apply at most \(L-1\) linearly independent current patterns \(I_{j}\in\mathbb{R}^{L}\) with \(j=1,...,L-1\). The **Sim** operator is obtained by solving the direct problem for each \(I_{j}\) and determining the respective voltages \(V_{j}\in\mathbb{R}^{L}\) over the electrodes.
The more measurements we can perform the better we are able to potentially reconstruct the conductivity. Therefore, we need to choose \(L-1\) linearly independent
current patterns. This choice is non-trivial. One possibility presented in the literature [8] is obtained by injecting currents in a wave pattern through the electrodes according to
\[I_{j,l}=\begin{cases}A\cos(j\theta_{l}),&j=1,...,\frac{L}{2},\\ A\sin\left((j-\frac{L}{2})\theta_{l}\right),&j=\frac{L}{2}+1,...,L-1\end{cases} \tag{4}\]
with \(\theta_{l}=\frac{2\pi}{L}l\) and \(A\) the constant current amplitude. These patterns have been shown to obtain the best results in detecting conductivity profiles with small anomalies in the regions furthest from the boundary [8].
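For reference, a small sketch that assembles these patterns (the array layout, one pattern per row, is our choice):

```
import jax.numpy as jnp

def trigonometric_patterns(L=16, A=3.0):
    """Return the L-1 current patterns of Eq. (4) as an array of shape (L-1, L)."""
    theta = 2.0 * jnp.pi * jnp.arange(1, L + 1) / L            # theta_l, l = 1, ..., L
    j_cos = jnp.arange(1, L // 2 + 1)[:, None]                  # j = 1, ..., L/2
    j_sin = jnp.arange(1, L // 2)[:, None]                      # j - L/2 = 1, ..., L/2 - 1
    return jnp.concatenate(
        [A * jnp.cos(j_cos * theta), A * jnp.sin(j_sin * theta)], axis=0
    )
```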
The experiments are performed in the following setting:
* \(\Omega\) is a circular domain with radius \(r_{\Omega}=10\) cm;
* Current amplitude of \(A=3\) mA, which is a reasonable value for human subjects; the voltages are measured in mV;
* Attach \(L=16\) electrodes equally spaced at the boundary with each having fixed length \(\pi/64\).
We refer to Figure 2 for a visual representation of the setting.
Under the above setup, the simulator in equation (1) is given as
\[\mathbf{Sim}:\mathbb{R}^{5} \rightarrow\mathbb{R}^{L(L-1)} \tag{5}\] \[(r,c_{x},c_{y},\sigma_{in},\sigma_{out}) \mapsto(V_{1}^{\mathbf{Sim}},...,V_{j,l}^{\mathbf{Sim}},...,V_{ L-1,L}^{\mathbf{Sim}})\]
with \(V_{j,l}^{\mathbf{Sim}}\in\mathbb{R}\) being the voltage measurement on the \(l\)-th electrode obtained by the direct problem solution for the trigonometric current pattern \(I_{j}\).
## 3 Modeling EIT
### Direct problem
Currents propagating in human tissues and organs can be satisfactorily modeled by the Complete Electrode Model [3]. It accounts for the finite nature of electrodes, for the current injection through them and for the electrochemical effects happening between skin and electrode surface.
Let \(\Omega\) describe the subject region we are evaluating. To establish a measurement setup, we attach \(L\) electrodes at the subject boundary \(\partial\Omega\). Through them we apply an electrical _current pattern_\(I=(I_{1},...,I_{L})\) into \(\Omega\). The objective is to find the electrical potential \(u\) inside and the voltages at electrodes \(V=(V_{1},...,V_{L})\) that fulfill the system of equations describing the Complete Electrode Model:
\[\begin{cases}\nabla\cdot(\sigma\nabla u)=0,&\text{in }\Omega,\\ \int_{E_{l}}\sigma\frac{\partial u}{\partial\nu}\,dS=I_{l},&l=1,2,...,L\\ \sigma\frac{\partial u}{\partial\nu}=0,&\text{in }\partial\Omega\setminus\cup_{l=1}^{L}E_{l}\\ u+z_{l}\sigma\frac{\partial u}{\partial\nu}\big{|}_{E_{l}}=V_{l},&l=1,2,...,L \end{cases} \tag{6}\]
where \(\nu\) is the outward pointing normal vector at \(\partial\Omega\), \(dS\) is the length measure on the boundary and \(\sigma\) is the conductivity distribution.
The first equation represents electrical _current diffusion_. The second and third define the insertion of current through the electrodes, meaning that the current spreads over the whole electrode before entering the domain and that no current flows through boundary regions without electrodes. Finally, the last equation models the electrochemical effects at the skin-electrode interface, with \(z_{l}\), termed the _contact impedance_, representing the resistance at that interface.
To ensure the existence and uniqueness of a solution, the current pattern must satisfy Kirchhoff's law and we fix a _reference voltage condition_:
\[\sum_{l=1}^{L}I_{l}=0,\quad\text{ and }\quad\sum_{l=1}^{L}V_{l}=0. \tag{7}\]
### Modeling the circular anomaly
In this section, we define the conductivity parameterization formally introduced in Section 2.
The parameterization is done through a _level-set_, _i.e._, a function that has positive sign inside the region it describes, negative on the outside and equal to zero on the region boundary. In particular, a _circle level-set_\(\text{LS}(x,y)\) can be defined through a center \(c=(c_{x},c_{y})\) and a radius \(r\) as follows
\[\text{LS}(x,y)=r^{2}-\left[(x-c_{x})^{2}+(y-c_{y})^{2}\right]. \tag{8}\]
The level-set function is positive if the point \((x,y)\) is inside the circular anomaly, negative if it is outside and zero if it is precisely at the boundary of the anomaly.
As such, we can use the _Heaviside function_\(H(z)\) that equals \(1\) if \(z>0\) and \(0\) otherwise, to fully describe the conductivity profile of interest through
\[\sigma(x,y)=\sigma_{in}H(\text{LS}(x,y))+\sigma_{out}\left(1-H(\text{LS}(x,y) )\right). \tag{9}\]
Under this formulation \(\sigma\) is not differentiable due to the discontinuity of \(H\) at \(z=0\). In order to attain differentiability, we use a smooth approximation of the Heaviside function given as
\[H^{\epsilon}(z)=\frac{1}{\pi}\arctan\left(\frac{z}{\epsilon}\right)+\frac{1}{ 2}.\]
The conductivity \(\sigma\) is instead established in terms of \(H^{\epsilon}\), where \(\epsilon>0\) works as a smoothing parameter. The smaller it is the closer \(H^{\epsilon}\) is to \(H\).
This smoothing procedure is necessary both for the analytical computation and for AD. In fact, we need to take mathematical differentiability into account for a proper implementation of derivatives through AD. For example, if \(H\) is implemented with if-else conditionals, JAX AD differentiates along the branch that is taken, which implies a derivative of \(0\) everywhere; this is not correct at \(z=0\).
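A minimal JAX sketch of this parameterization (evaluation over the FEM mesh is omitted and `eps` is the smoothing parameter \(\epsilon\)):

```
import jax.numpy as jnp

def conductivity(x, y, params, eps=1e-2):
    """Smoothed circular-anomaly conductivity of Eqs. (8)-(9).

    params = (r, c_x, c_y, sigma_in, sigma_out); differentiable in all of them.
    """
    r, cx, cy, sigma_in, sigma_out = params
    ls = r**2 - ((x - cx) ** 2 + (y - cy) ** 2)      # circle level-set, Eq. (8)
    h = jnp.arctan(ls / eps) / jnp.pi + 0.5          # smooth Heaviside H^eps
    return sigma_in * h + sigma_out * (1.0 - h)      # Eq. (9)
```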
## 4 Derivatives computation
In order to solve the inverse problem in a minimization framework, we need to compute derivatives of the **Sim** operator. In this section, we deduce the analytical formula and explain how to apply AD to **Sim**, in order to obtain the derivatives with respect to the parameters of interest.
We recall that the direct solver and **Sim** are independent of the derivative computation method.
### Analytical Computation
We recall that by Eq. (5) we have that the FEM simulator operator is given by
\[\textbf{Sim}:\mathbb{R}^{5} \rightarrow\mathbb{R}^{L(L-1)}\] \[(c_{x},c_{y},r,\sigma_{in},\sigma_{out}) \mapsto\left(V_{1}^{\textbf{Sim}},...,V_{j,l}^{\textbf{Sim}},..., V_{L-1,L}^{\textbf{Sim}}\right). \tag{10}\]
To avoid heavy notation, we denote the vector of voltage measurements by \(V^{\textbf{Sim}}\in\mathbb{R}^{L(L-1)}\) and by \(V_{n}\in\mathbb{R}^{L}\) the voltages measured for the \(n\)-th current pattern.
The Jacobian matrix \(J\in\mathbb{R}^{L(L-1)\times 5}\) is given by
\[J=\begin{pmatrix}\frac{\partial V^{\textbf{Sim}}}{\partial c_{x}}&\frac{\partial V^{\textbf{Sim}}}{\partial c_{y}}&\frac{\partial V^{\textbf{Sim}}}{\partial r}&\frac{\partial V^{\textbf{Sim}}}{\partial\sigma_{in}}&\frac{\partial V^{\textbf{Sim}}}{\partial\sigma_{out}}\end{pmatrix} \tag{11}\]
In order to provide an analytical formulation, we specifically focus on the computation of derivatives for each \(V_{n}\) with respect to a single parameter, which if done for all \(n=1,...,L-1\) determines one column of the Jacobian.
Furthermore, we need to specify a method to simulate the measurements.
In this paper, we have used FEM applied to the Complete Electrode Model described before. The FEM solution is \(\theta=(\alpha,\beta)\in\mathbb{R}^{N+L-1}\), where \(\alpha\) describes the electrical potential inside \(\Omega\) and \(\beta\) the voltages at the electrodes. Accordingly, we denote for each current pattern \(I_{j}\) the FEM solution by \(\theta_{j}=[\alpha_{j},\beta_{j}]\in\mathbb{R}^{N+L-1}\) with respect to \(\tilde{I}_{j}\) on the right-hand side of the FEM system of equations (a variation of \(I_{j}\)).
With this in mind, the voltages are computed by \(V_{j}=M\beta_{j}\) where \(M\) is a matrix defining the basis functions used by FEM at the electrodes. For further detail about the FEM solution we point to Appendix A.
Now, if we define \(\tilde{M}=[\hat{0}\ M]\in\mathbb{R}^{L\times(N+L-1)}\) then we have
\[V_{n}=\tilde{M}\theta_{n}=\tilde{M}A^{-1}\tilde{I}_{n}. \tag{12}\]
As such, it holds for any parameter \(w\) of \(\{c_{x},c_{y},r,\sigma_{in},\sigma_{out}\}\) that:
\[\frac{\partial V_{n}}{\partial w}=\frac{\partial\left(\tilde{M}A^{-1}\tilde{I }_{n}\right)}{\partial w}.\]
Since neither \(\tilde{M}\) nor \(\tilde{I}_{n}\) depends on the conductivity \(\sigma\), and therefore on any of the parameters, it holds that
\[\frac{\partial V_{n}}{\partial w}=\tilde{M}\frac{\partial A^{-1}}{\partial w} \tilde{I}_{n}=-\tilde{M}A^{-1}\frac{\partial A}{\partial w}A^{-1}\tilde{I}_{n} \tag{13}\]
with the last equality following from matrix calculus properties.
Thus, in essence, the computation reduces to the stiffness matrix derivative and to noticing that \(A^{-1}\tilde{I}_{n}=\theta_{n}\). Setting \(\gamma^{T}=\tilde{M}A^{-1}\), the computation of the derivative in Eq. (13) simplifies to
\[\frac{\partial V_{n}}{\partial w}=-\gamma^{T}\frac{\partial A}{\partial w} \theta_{n}. \tag{14}\]
As such, the focus is on the computation of \(\frac{\partial A}{\partial w}\). The stiffness matrix \(A\) is composed of four blocks:
\[\begin{bmatrix}B^{1}+B^{2}&C\\ C^{T}&D\end{bmatrix}.\]
The block \(B^{1}\) is the only one depending on the conductivity. Due to its definition there is a clear way of computing the derivatives of \(B^{1}\) with respect to the conductivity value \(\sigma_{k}\) over each mesh element (see the Appendix for further details on its definition):
\[\frac{\partial B^{1}_{ij}}{\partial\sigma_{k}}=\begin{cases}\int_{T_{k}} \nabla\phi_{i}\cdot\nabla\phi_{j}\,dx,\text{ if }i,j\in T_{k}\\ 0,\text{ otherwise}.\end{cases} \tag{15}\]
Furthermore, the resulting matrix is independent of \(\sigma\); therefore, it can be precomputed at the start and re-used.
Through the chain rule we have that
\[\frac{\partial B^{1}_{ij}}{\partial w}=\sum_{k=0}^{K}\frac{\partial B^{1}_{ij}} {\partial\sigma_{k}}\frac{\partial\sigma_{k}}{\partial w}. \tag{16}\]
We note that due to the sparsity of the matrix defined in Eq. (15) it can be assembled very efficiently. However, this optimization is an extra layer of complexity that needs to be handled manually, whereas AD takes care of it automatically.
The remaining object to be computed from Eq. (14) is \(\gamma\). Since \(A\) is a very large sparse matrix, the best way to determine it is by solving the adjoint system equivalent to \(\gamma^{T}=\tilde{M}A^{-1}\), given as
\[A^{T}\gamma=\tilde{M}^{T}\text{ with }\gamma\in\mathbb{R}^{(N+L-1)\times L}. \tag{17}\]
Since \(A\) depends on the conductivity \(\sigma\), this system needs to be solved once at each iteration of the inverse solver.
Finally, a formula for the derivatives in Eq. (11) is obtained after solving the adjoint system (17) and computing the derivative of \(B^{1}\) as in (16). The derivatives are compactly given through the formula
\[\frac{\partial V_{n}}{\partial w}=-\gamma^{T}\begin{bmatrix}\frac{\partial B^{ 1}}{\partial w}&0\\ 0&0\end{bmatrix}\theta_{n}. \tag{18}\]
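To summarize the analytic pipeline, a dense-matrix sketch is given below; the inputs (stiffness matrix, per-element derivatives of \(B^{1}\), chain-rule factors, padded electrode matrix and FEM solutions) are assumed to be precomputed, and sparse-matrix handling is deliberately ignored.

```
import numpy as np

def analytic_jacobian_column(A, dB1_dsigma, dsigma_dw, M_tilde, thetas):
    """Columns dV_n/dw of the Jacobian via Eqs. (14)-(18), for one parameter w.

    A: (N+L-1, N+L-1) stiffness matrix; dB1_dsigma: (K, N, N) matrices of Eq. (15);
    dsigma_dw: (K,) derivatives of the element conductivities w.r.t. w;
    M_tilde: (L, N+L-1); thetas: (N+L-1, num_patterns) FEM solutions.
    """
    gamma = np.linalg.solve(A.T, M_tilde.T)               # adjoint system (17)
    dB1_dw = np.tensordot(dsigma_dw, dB1_dsigma, axes=1)  # chain rule (16), (N, N)
    dA_dw = np.zeros_like(A)
    dA_dw[: dB1_dw.shape[0], : dB1_dw.shape[1]] = dB1_dw  # only the B^1 block varies
    return -(gamma.T @ dA_dw @ thetas)                    # Eq. (18), shape (L, num_patterns)
```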
Through this derivation, we have seen that it can be very tedious to deduce and implement the analytical derivatives for complex problems like ours. For simple functions, an analytical derivative in compact form takes the lead in efficiency; however, we want to experiment with the case of more complex functions.
### Automatic differentiation method
In this section, we introduce how to apply JAX automatic differentiation toolbox [2] to obtain the Jacobian. Further details about the inner workings of AD and JAX are explained in Appendix C.
Since our direct operator **Sim** has more output variables than input variables, we note that the most efficient AD mode is forward mode.
The implementation of a differentiable simulator **Sim** means we can simply use JAX AD to compute the derivatives. Our **Sim** operator is differentiable with respect to the parameterization variables \((r,c_{x},c_{y},\sigma_{in},\sigma_{out})\) that define the anomaly, as introduced in section 3.2.
These preparations are a requirement for both derivative methods; with them in place, the derivative computation with AD is simply implemented through JAX.
To do so, we implement a routine that defines the direct operator **Sim** given in Eq. (5). The implementation relies on the solution of the direct problem through FEM, which we here hide behind the _simulator_ method. Listing 1 provides the routine with all of this in mind.
```
import jax


def direct_operator(anomaly_parameters):
    """Simulate measurements for a given input function with JAX.

    Args:
        anomaly_parameters: Array of shape (5,) with parametrization
            variables of circular anomalies.

    Returns:
        measurements: Array of shape
            (nmb_electrodes * (nmb_electrodes - 1),) that contains the
            voltage measurements for all current patterns.
    """
    # Compute measurements
    measurements = simulator(anomaly_parameters)
    return measurements
```
Listing 1: Definition of the direct operator through a general simulator method.
In order to compute the Jacobian defined in Eq. (11) with JAX one only needs to call jax.jacfwd(direct_operator) for our direct operator as in Listing 2.
To establish the inverse solver these function definitions are redundant: we can call simulator and jax.jacfwd(direct_operator) directly in the inverse solver routine. The definitions here are for visualization purposes only.
```
def jacobian(anomaly_parameters):
    """Compute the Jacobian with JAX AD.

    Args:
        anomaly_parameters: 1d array of shape (5,) with parametrization
            variables of circular anomalies.

    Returns:
        Jacobian matrix of shape
        (nmb_electrodes * (nmb_electrodes - 1), 5).
    """
    # Define the Jacobian through forward-mode AD
    jacobian = jax.jacfwd(direct_operator)
    return jacobian(anomaly_parameters)
```
Listing 2: Computation of the Jacobian matrix through JAX automatic differentiation toolbox.
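For illustration, a call to this routine might look as follows (the parameter values are arbitrary and assume the definitions above are in scope):

```
import jax.numpy as jnp

anomaly = jnp.array([0.3, 0.1, -0.2, 1.4, 0.7])  # (r, c_x, c_y, sigma_in, sigma_out)
J = jacobian(anomaly)                            # shape (L * (L - 1), 5), i.e. (240, 5) for L = 16
```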
## 5 Experimental setup
To compare the analytic and automatic differentiation methods, we explore their evaluation at different conductivities and how they fit into solving the inverse problem. For the latter, we consider two particular cases of the inverse problem. The first case, which we label **fixed conductivities**, is simpler: we want to determine only the location parameters \((r,c_{x},c_{y})\) and we assume the conductivity values inside \(\sigma_{\text{in}}\) and outside \(\sigma_{\text{out}}\) are fixed. This scenario can represent breast cancer, for example, where we know _a priori_ the conductivity values of the different tissues and are only concerned with determining the anomaly location.
In the second case, which we label **general conductivities**, we want to determine all parameters \((r,c_{x},c_{y},\sigma_{in},\sigma_{out})\). This is a more general scenario where we only know there is a circular anomaly and want to characterize it in terms of location, radius and conductivity.
Recall that we fix a voltage measurement setup to simplify the comparison. Our only interest is to show that AD is as good as analytical methods in terms of solution accuracy. Further, we show that the memory requirements for AD scale reasonably well with the mesh resolution, to show that AD can be effectively implemented in more realistic cases involving more complex scenarios and 3D meshes.
All of the experiments have been run on a machine with the following hardware specifications:
* CPU _Intel Core i5-12400F_ (released in Q1 2022, 12th gen., 4.4 GHz, 6 cores, 12 threads, 64 GB RAM);
* GPU _NVIDIA GeForce RTX 3070_ (released in Q4 2020, 6144 CUDA cores, 8 GB memory).
We chose this machine because it has typical mid-range specs and can be considered a good example of an affordable solution for the numerical computation, compatible with the low cost of EIT. We remark that, besides automatic differentiation, JAX excels at optimizing performance for the given hardware. Therefore, we have not performed any specific optimization, but appropriate care has been taken throughout the implementation.
### Establishing a ground truth
In order to have a "lab" setup, _i.e._, one where we can control the experiment from start to finish, we define a voltage measurement dataset through simulation. For this, we randomly initialize our conductivity parameterization within a certain range of parameters and determine the respective voltage measurements \(m\).
To test new inverse solvers we need to generate measurements with the highest resolution possible in order to avoid so-called _inverse crimes_. Such crimes occur when the same resolution is used to compute both \(m\) and the **Sim** operator. By doing so, we would not account for the errors arising from the approximate nature of the direct solver, which do occur when using true measurements obtained by a real-world measuring device, which we can think of as having infinite resolution. As such, since both are obtained through FEM, we need to choose a higher mesh resolution for \(m\) than for the **Sim** operator.
With this in mind we generate our ground-truth dataset of voltage measurements with the highest resolution possible for our hardware specifications. In our work, it was established with a FEM mesh of 5815 elements, set so that each element has an edge length of \(h=0.035\) relative to the domain size.
Furthermore, we generate the dataset through the following random initialization of the anomaly parameters:
* Uniformly generate conductivity centers anywhere inside the disk domain \(\Omega=B_{1}(0)\) with radius \(1\). Hence, we use polar coordinates to generate the centers. To start, we uniformly generate an angle in \([0,2\pi]\). Then, we uniformly generate a value in \([0,1]\) and obtain a radius sample by taking its square root. Joining both through polar coordinates gives an almost uniformly sampled set of 2D points inside \(\Omega\);
* Uniformly generate an anomaly radius, taking into consideration the center position generated on the previous point, so that anomalies are strictly in \(\Omega\). As such, for each center we select the anomaly radius uniformly from \([0.1,1-|c|]\), where \(|c|\) is the distance from center to origin;
* Uniformly generate conductivity values inside \(\sigma_{\text{in}}\) from \([1,1.6]\) S/m and outside \(\sigma_{\text{out}}\) from \([0.6,1.]\) S/m. Such values do not encapsulate any particular medical or industrial scenario.
Our model assumes that contact impedances on each electrode are fixed and have value \(z=5\times 10^{-6}\Omega\).
In fact, we generate two separate datasets, each with 1000 cases: one for the case of **fixed conductivities**, where we randomly generate 1000 anomalies and compute the respective measurements with fixed conductivity values \(\sigma_{\text{in}}=1.4\) S/m inside and \(\sigma_{\text{out}}=0.7\) S/m outside, and another for the case of **general conductivities**, where we randomly generate 1000 anomalies and compute their measurements as described above.
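A sketch of this sampling procedure (function and variable names are illustrative, using numpy's random generator):

```
import numpy as np

def sample_anomaly(rng, fixed_conductivities=False):
    """Draw one circular anomaly (r, c_x, c_y, sigma_in, sigma_out) as described above."""
    angle = rng.uniform(0.0, 2.0 * np.pi)
    c_norm = np.sqrt(rng.uniform(0.0, 1.0))              # ~uniform centers inside the unit disk
    cx, cy = c_norm * np.cos(angle), c_norm * np.sin(angle)
    r = rng.uniform(0.1, 1.0 - c_norm)                   # anomaly stays strictly inside Omega
    if fixed_conductivities:
        sigma_in, sigma_out = 1.4, 0.7
    else:
        sigma_in, sigma_out = rng.uniform(1.0, 1.6), rng.uniform(0.6, 1.0)
    return np.array([r, cx, cy, sigma_in, sigma_out])

# Example: rng = np.random.default_rng(0); dataset = [sample_anomaly(rng) for _ in range(1000)]
```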
Furthermore, we provide an initial sanity check for the general dataset. We verified that the Jacobians computed through both methods match up to a minimal error margin, which may arise due to round-off errors. This analysis is presented in Appendix D.
## 6 Results
In order to solve the inverse problem for the two cases described above, we use a FEM mesh with 5210 elements set by \(h=0.037\) to define the **Sim** operator, in order to avoid inverse crimes. Our chosen inverse solver is the Levenberg-Marquardt method with a line search algorithm on each iteration. Further, we establish two stopping criteria based on a maximum number of iterations equal to 20 and a relative mean squared loss
\[\frac{1}{2}\frac{\|\textbf{Sim}(\sigma)-m^{\text{true}}\|_{2}^{2}}{\|m^{\text{ true}}\|_{2}^{2}}<\xi \tag{19}\]
with a feasible threshold of \(\xi=0.001\). This choice was established empirically, since after that it becomes hard to improve the anomaly reconstruction.
Let \(\sigma^{\text{AD}}\) and \(\sigma^{\text{AN}}\) be the solutions obtained through the inverse solver with the two different methods to compute the derivative. In order to verify the effectiveness of AD in solving the EIT inverse problem we evaluate how \(\sigma^{\text{AD}}\) and \(\sigma^{\text{AN}}\) compare with the true solution \(\sigma^{\text{true}}\) and how they compare with each other. This evaluation is based on the mean squared error between the anomalies, _i.e._, for two different anomaly parameterizations \(\sigma_{1},\sigma_{2}\) we evaluate
\[\text{MSE}(\sigma_{1},\,\sigma_{2}):=\|\sigma_{1}-\sigma_{2}\|_{2}.\]
In essence, we compute \(\text{MSE}(\sigma^{\text{true}},\,\sigma^{\text{AD}})\), \(\text{MSE}(\sigma^{\text{true}},\sigma^{\text{AN}})\), \(\text{MSE}(\sigma^{\text{AD}},\,\sigma^{\text{AN}})\). Then, we perform an analysis of the mean squared errors by computing simple statistics of the mean, variance, maximum and minimum error, and by plotting the histogram with a logarithmic scale in the x-axis.
We remark that the following analysis focuses on a general assessment of the reconstructions obtained through the different methods and does not verify the nature of the errors obtained, _i.e._, we do not check if the errors occur for one specific parameter or for small/large values of those same parameters.
### Case 1: Fixed Conductivities
In this case our goal is to determine the anomaly parameterized by \(\sigma^{\rm true}=(r,c_{x},c_{y})\), since we know _a priori_ that the conductivities inside and outside are \(\sigma_{in}=1.4\) S/m and \(\sigma_{out}=0.7\) S/m, respectively. Here, we denote by \(\sigma^{\rm true}\) the conductivity we aim to discover and by \(m^{\rm true}\) the respective measurements.
We start from our measurements dataset for the **fixed conductivities** with the set of 1000 voltage measurements corresponding to different anomalies. This number of experiments was constrained by time and hardware capabilities.
The statistical analysis for this case is given in Table 1 and the histograms for the different mean squared errors are in Figs. 3 and 4.
The histogram presented in Fig. 3 shows that the distributions of the mean squared errors \(\mathrm{MSE}(\sigma^{\rm true},\sigma^{\rm AD})\) and \(\mathrm{MSE}(\sigma^{\rm true},\sigma^{\rm AN})\) are similar. Notice that the mean squared errors in both cases are concentrated around \(10^{-2}\), with a set of outliers with error higher than \(0.1\). However, these outliers occur in the same proportion for both methods. This shows that the inverse solver with automatic differentiation matches the one with the analytic derivative.
The histogram in Fig. 4 presents the distribution of the mean squared errors between reconstructions, \(\mathrm{MSE}(\sigma^{\rm AD},\sigma^{\rm AN})\), and one can see that it is highly concentrated around \(10^{-3}\). There are some differing reconstructions between the methods, but their error is on the order of \(0.1\). Again, this highlights the effectiveness of AD compared with the analytic method. However, there are some outliers that show divergence in the reconstructions between both methods. Combining this analysis with the sanity check for the Jacobian, these errors seem to be related to round-off errors.
To complete the discussion of this case, we refer to the statistics in Table 1, in particular the mean and variance of the different mean squared errors. They show that on average the reconstruction obtained with AD is much closer to the analytic one than to the true anomalies. Furthermore, the variance between these reconstructions is very small. Once again this shows the effectiveness of AD in matching the analytic derivative method, and that the inverse solver methods themselves need to be improved in order to obtain better reconstruction results.
### Case 2: General Conductivities
For this case the objective is to determine the general anomaly parameterization given by \(\sigma^{\rm true}=(r,c_{x},c_{y},\sigma_{\rm in},\sigma_{\rm out})\). Again, we denote \(\sigma^{\rm true}\) as the conductivity we aim to discover and \(m^{\rm true}\) for the respective measurements.
We start from the measurements dataset for the **general conductivities** with the set of 1000 voltage measurements corresponding to the different anomalies. Recall that in this generation we have assumed that \(\sigma_{\rm in}\) is always greater than \(\sigma_{\rm out}\).
The statistical analysis for this case is given in Table 2 and the histograms for the different mean squared errors are in Figs. 5 and 6.
The histogram presented in Fig. 5 shows that the distributions of the mean squared errors \(\mathrm{MSE}(\sigma^{\rm true},\sigma^{\rm AD})\) and \(\mathrm{MSE}(\sigma^{\rm true},\sigma^{\rm AN})\) are similar. This shows that the inverse solver with automatic differentiation matches the one with the analytic derivative.
| | Mean | \(S^{2}\) | Max. | Min. |
| --- | --- | --- | --- | --- |
| \(\mathrm{MSE}(\sigma^{\rm true},\sigma^{\rm AD})\) | 0.0456 | 0.0059 | 0.4177 | 0.0020 |
| \(\mathrm{MSE}(\sigma^{\rm true},\sigma^{\rm AN})\) | 0.0455 | 0.0057 | 0.4007 | 0.0020 |
| \(\mathrm{MSE}(\sigma^{\rm AD},\sigma^{\rm AN})\) | 0.002 | 2.64e-4 | 0.2702 | 1.51e-5 |

Table 1: Statistics of mean squared errors for fixed conductivities, case 1, comparing the reconstructed conductivities obtained through the different derivative methods with the true anomalies.
Figure 4: Histogram of the mean squared errors of fixed conductivities, case 1, comparing the reconstructed anomalies.
Figure 3: Histogram of the mean squared errors of fixed conductivities, case 1, comparing the reconstructed anomalies obtained through the different derivative methods with the true anomalies.
Further, notice that the mean squared errors in both cases are concentrated around \(10^{-1}\). In fact, by setting a threshold, we verified that there are at most 50 reconstructions for both methods where the mean squared error with respect to the true anomaly is higher than 0.5, which, together with the histograms, shows that the vast majority of reconstructions are successful.
Furthermore, the histogram of \(\text{MSE}(\sigma^{\text{AD}},\sigma^{\text{AN}})\) in Fig. 6 shows that the errors between reconstructions are concentrated in the interval \([10^{-4},10^{-2}]\). Again, this highlights the equivalence of AD with the analytic method. However, there are some outliers that show divergence in the reconstructions between both methods. Combining this analysis with the sanity check for the Jacobian reveals that this might occur due to round-off errors.
To complete the discussion of this case, we refer to the statistics in Table 2. The only aspect we would like to point out here is the mean of the different mean squared errors. It shows that on average the reconstruction obtained with AD is much closer to the analytic one than to the true anomalies. Once again this shows the effectiveness of AD in matching the analytic derivative method, and that the inverse solver methods themselves need to be improved in order to obtain better reconstruction results.
### Computational performance of AD
The viability of AD also depends on its scaling capabilities. Namely, we want to understand if increasing the number of mesh elements, and therefore the resolution and accuracy of the FEM, makes AD unfeasible. This is relevant because AD requires the construction of a computational graph for the direct problem and then applies the chain rule throughout the nodes of the graph to compute the derivatives. As the number of mesh elements increases, the computational graph becomes larger and can become unfeasible to use for computing the derivatives.
In order to understand this behavior, we compute, for ten different mesh sizes, the Jacobian for 100 distinct general anomalies, randomly generated as described before. For each mesh size we measure the average GPU memory and load usage through the Python package GPUtil. In Fig. 7 we plot the average GPU load and memory usage percentages for each of the different mesh resolutions, and in Fig. 8 we plot the time taken to compute the Jacobian matrices with respect to each mesh resolution.
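A minimal sketch of how such measurements could be gathered is shown below. The `compute_jacobian` callable is a placeholder for the differentiable solver's Jacobian routine and is not part of GPUtil; only the `GPUtil.getGPUs()` query and the sampling loop reflect the monitoring setup described here.

```python
import time
import threading
import GPUtil

def monitor_gpu(samples, stop_event, interval=0.5):
    """Append (load %, memory %) of the first GPU at a fixed sampling interval."""
    while not stop_event.is_set():
        gpu = GPUtil.getGPUs()[0]
        samples.append((gpu.load * 100.0, gpu.memoryUtil * 100.0))
        time.sleep(interval)

def profile_jacobian(compute_jacobian, anomalies):
    """Run the Jacobian computation for each anomaly while sampling GPU usage.

    `compute_jacobian` is a user-supplied placeholder for the differentiable
    forward solver's Jacobian routine; it is not provided by GPUtil.
    """
    samples, stop_event = [], threading.Event()
    monitor = threading.Thread(target=monitor_gpu, args=(samples, stop_event))
    monitor.start()
    start = time.time()
    for sigma in anomalies:
        compute_jacobian(sigma)
    elapsed = time.time() - start
    stop_event.set()
    monitor.join()
    n = max(len(samples), 1)
    avg_load = sum(s[0] for s in samples) / n
    avg_mem = sum(s[1] for s in samples) / n
    return elapsed, avg_load, avg_mem
```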
Both figures clearly show the growth in GPU memory usage and in the time needed to execute this experiment. Moreover, for meshes with more than 15000 elements we require more than 8 GB of GPU memory. As of now, we cannot determine the order of growth, and further experiments with finer resolutions are needed.
\begin{table}
\begin{tabular}{l|c c c c} & Mean & \(S^{2}\) & Max. & Min. \\ \hline \(\text{MSE}(\sigma^{\text{true}},\sigma^{\text{AD}})\) & 0.2264 & 0.0292 & 0.9698 & 0.0042 \\ \(\text{MSE}(\sigma^{\text{true}},\sigma^{\text{AN}})\) & 0.2215 & 0.0273 & 0.9706 & 0.0042 \\ \(\text{MSE}(\sigma^{\text{AD}},\sigma^{\text{AN}})\) & 0.039 & 0.0134 & 0.8838 & 4.4e-6 \\ \hline \end{tabular}
\end{table}
Table 2: Statistics of mean squared errors of general conductivities, case 2, that compares the reconstructed conductivities obtained through the different derivative methods with the true anomalies.
Figure 5: Histogram of the mean squared errors of general conductivities, case 2, that compares the reconstructed anomalies obtained through the different derivative methods with the true anomalies.
Figure 6: Histogram of the mean squared errors of general conductivities, case 2, comparing the reconstructed anomalies.
Figure 7: Percentage of GPU load and memory usage with respect to the number of mesh elements.
## 7 Conclusion
In this paper we have compared the effectiveness of AD for solving inverse problems against classical methods with analytical formulations of the derivative. We have shown how to adequately construct a differentiable FEM simulator in the context of inverse problems. We successfully introduced automatic differentiation for solving inverse problems in an optimization framework, in particular Electrical Impedance Tomography. We have shown that AD provides a simple way of computing derivatives of complex operators, for example those arising from solutions of PDEs, with respect to a set of parameters.
We have shown that AD is indeed effective for computing the derivatives, since it matches the analytical computation up to minimal error. Further, it was used to solve the Electrical Impedance Tomography inverse problem, and we have shown that it is even superior to analytical methods in terms of time and resources.
The analytical formulation is nothing more than an application of differentiation rules to the FEM formulation of the direct operator. By construction, AD essentially executes the same process, but automatically. As such, AD and the analytical formulation may even perform the same operations, but the fact that AD is a plug-and-play tool makes it advantageous to use for complex operators.
Moreover, it has proven more efficient, since it takes less time on average to solve any particular EIT problem than the analytical formulation in our case study, and it scales well with the mesh resolution. This indicates that with the right hardware AD can be efficiently executed for large-scale problems.
With this tool, we can shift our focus to an efficient implementation of the direct problem solvers, which is much better understood in the literature, and to the methods for solving the inverse problem. It allows freedom to experiment and deal with difficult equations, without much extra effort, bringing the focus to the practical application at hand.
Further, we expect that AD extends nicely to higher dimensions, while the analytic formulation will require some re-implementation to accommodate the three-dimensional shapes of anomalies.
Future studies will test how easily AD handles different shapes of anomalies, as well as 3D models.
**Acknowledgements**.: _The work of I. Pombo was supported by FCT through CIDMA and projects UIDB/04106/2020, UIDP/04106/2020 and the PhD Scholarship SFRH/BD/143523/2019._
_This work was developed during a research internship at Inductiva Research Labs from March 2022 to Jan 2023. The first author would like to thank the entire Inductiva team for the continuous support and encouragement provided during the entire period of the internship and in particular thank Hugo Penedones, Fabio Cruz, David Lima and David Carvalho for their comments and constructive feedback given over the several versions of this manuscript._
|
2307.02084 | Parameter estimation for Einstein-dilaton-Gauss-Bonnet gravity with
ringdown signals | Future space-based gravitational-wave detectors will detect gravitational
waves with high sensitivity in the millihertz frequency band, which provides
more opportunities to test theories of gravity than ground-based ones. The
study of quasinormal modes (QNMs) and their application to testing gravity
theories have been an important aspect in the field of gravitational physics.
In this study, we investigate the capability of future space-based
gravitational wave detectors such as LISA, TaiJi, and TianQin to constrain the
dimensionless deviating parameter for Einstein-dilaton-Gauss-Bonnet (EdGB)
gravity with ringdown signals from the merger of binary black holes. The
ringdown signal is modeled by the two strongest QNMs in EdGB gravity. Taking
into account time-delay interferometry, we calculate the signal-to-noise ratio
(SNR) of different space-based detectors for ringdown signals to analyze their
capabilities. The Fisher information matrix is employed to analyze the accuracy
of parameter estimation, with particular focus on the dimensionless deviating
parameter for EdGB gravity. The impact of the parameters of gravitational wave
sources on the estimation accuracy of the dimensionless deviating parameter has
also been studied. We find that the constraint ability of EdGB gravity is
limited because the uncertainty of the dimensionless deviating parameter
increases with the decrease of the dimensionless deviating parameter. LISA and
TaiJi have more advantages to constrain the dimensionless deviating parameter to
a more accurate level for the massive black hole, while TianQin is more
suitable for less massive black holes. Bayesian inference method is used to
perform parameter estimation on simulated data, which verifies the reliability
of the conclusion. | Cai-Ying Shao, Yu Hu, Cheng-Gang Shao | 2023-07-05T07:47:33Z | http://arxiv.org/abs/2307.02084v1 | # Parameter estimation for Einstein-dilaton-Gauss-Bonnet gravity with ringdown signals
###### Abstract
Future space-based gravitational-wave detectors will detect gravitational waves with high sensitivity in the millihertz frequency band, which provides more opportunities to test theories of gravity than ground-based ones. The study of quasinormal modes (QNMs) and their application to testing gravity theories have been an important aspect in the field of gravitational physics. In this study, we investigate the capability of future space-based gravitational wave detectors such as LISA, TaiJi, and TianQin to constrain the dimensionless deviating parameter for Einstein-dilaton-Gauss-Bonnet (EdGB) gravity with ringdown signals from the merger of binary black holes. The ringdown signal is modeled by the two strongest QNMs in EdGB gravity. Taking into account time-delay interferometry, we calculate the signal-to-noise ratio (SNR) of different space-based detectors for ringdown signals to analyze their capabilities. The Fisher information matrix is employed to analyze the accuracy of parameter estimation, with particular focus on the dimensionless deviating parameter for EdGB gravity. The impact of the parameters of gravitational wave sources on the estimation accuracy of the dimensionless deviating parameter has also been studied. We find that the ability to constrain EdGB gravity is limited because the uncertainty of the dimensionless deviating parameter increases with the decrease of the dimensionless deviating parameter. LISA and TaiJi have more advantages in constraining the dimensionless deviating parameter to a more accurate level for massive black holes, while TianQin is more suitable for less massive black holes. The Bayesian inference method is used to perform parameter estimation on simulated data, which verifies the reliability of the conclusion.
## I Introduction
Gravitational wave signals from the coalescence of compact binaries were detected by the LIGO Scientific Collaboration and Virgo Collaboration [1] for the first time in 2015. This landmark detection has ushered in a new frontier of testing General Relativity and exploring our universe [2; 3; 4; 5; 6; 7]. One of the promising prospects is analyzing the QNMs in a ringdown waveform. These oscillation frequencies are determined by the mass and angular momentum of the remnant black hole. Empirical detection of QNMs would not only provide us an opportunity to test the no-hair theorem [8; 9; 10; 11] but also allow us to constrain alternative theories [12; 13; 14; 15]. To this end, technological conditions providing a larger SNR are required. The ground-based gravitational wave detectors (such as LIGO, Virgo, and KAGRA) operate in frequency bands above 10 Hz and suffer from the influence of gravity-gradient and seismic noise, which leads to the short-duration ringdown signal being neglected. For the proposed space-based gravitational wave detectors, the sensitive band extends down to the millihertz frequency band, where more plentiful sources are waiting to be probed. It is possible for future space-based observatories such as LISA, TaiJi, and TianQin to detect ringdown signals from intermediate-mass and supermassive sources with rather large SNR. On this basis, the window for testing the nature of gravity will be opened, which is significant.
Einstein's theory of general relativity, as the cornerstone of modern gravitational theories, describes the physical world at macroscopic scales with remarkable success. However, such a prevailing theory is plagued by the black hole singularity, dark matter and dark energy, and the non-renormalization problem, which indicate that general relativity is not a complete theory of gravity. A variety of alternative theories of gravity have been proposed, built on different treatments of curvature than General Relativity, such as Lovelock [16], \(f(R)\)[17] and \(f(T)\)[18] theories. As a higher-curvature gravity theory motivated by the low-energy limit of string gravity [19], EdGB gravity was formulated with a dilaton scalar field coupled to the Gauss-Bonnet invariant through the coupling parameter \(\alpha_{\rm GB}\) in the action [20; 21]. As a consequence, the field equations are always of second order and this gravity is ghost-free. Furthermore, black holes and neutron stars can become scalarized spontaneously in this theory [22; 23; 24]. The presence of a nontrivial scalar field outside the horizon leads to the violation of the classical "no-hair" theorems [25; 26], and thus the resultant hairy black hole cannot be described by the Kerr hypothesis. It is exactly these attractive features of EdGB gravity that have led to observable implications [4; 27].
At present, with the development and gradual maturity of technology, EdGB gravity is constrained not only by astronomical observations [28; 29; 30] but also by gravitational wave data from binary black holes [31; 32; 33] and numerical relativity simulations [34; 35]. Here, inspired by the significance of quasinormal mode corrections for a massive black hole in EdGB gravity [13; 36] and the high SNR achievable with space-based gravitational wave detectors, we investigate the accuracy with which the dimensionless deviating parameter can be estimated with LISA, TaiJi and TianQin from a parameterized ringdown signal. According to standard dimensional analysis, it seems that the dimensionful coupling parameter \(\alpha_{\rm GB}\) leads to negligible deviations in the low-energy regime, equivalently at the scale of supermassive black holes. However, in the absence of a complete quantum theory of gravity, it is better to resort to experimental data to constrain the deviation from general relativity independently at the supermassive black hole scale and the stellar-origin black hole scale. With this in mind, we focus mainly on space-based gravitational wave detectors, with the supermassive black hole as the target. First, we build the ringdown waveform with the two strongest QNMs in EdGB gravity. The black hole in question is extremely slowly rotating, treated to first order in rotation, and its QNMs for gravitational perturbations naturally reduce to those of Kerr spacetime as the dimensionless deviating parameter vanishes [37]. In order to further improve the SNR, it is essential to suppress the different noise sources.
The laser frequency noise in an unequal-arm interferometer is not negligible [38]. To suppress laser noise, the time-delay interferometry combination has been proposed [39]. In this paper, we choose the first-generation TDI Michelson combination X [39] to obtain response functions and noise power spectral density. Then the SNR is calculated to assess the scientific performance of LISA, TaiJi, and TianQin. Furthermore, the effects of the arm length of the detector, the mass, the luminosity distance, the spin of the remnant black hole, the symmetric mass ratio, and the dimensionless deviating parameter on the measurement errors of the dimensionless deviating parameter are further probed. In particular, we present the maximum constraint on the dimensionless deviating parameter for EdGB gravity with ringdown signals from massive binary black holes. In order to verify the conclusions, we also performed simulations of Bayesian inference to obtain probability distributions of the dimensionless deviating parameter.
The remainder of our paper is organized as follows. In the next section, we give a brief introduction to EdGB gravity, including the black hole solution and QNMs of gravitational field perturbation for slowly-rotating black holes. In Section III, we present the ringdown signals of time-delay interferometry Michelson combination X and the sensitivity curves of space-based gravitational wave detectors. In Section IV, we calculate the SNR and use Fisher information matrix to analyze how measurement errors of the dimensionless deviating parameter can be affected by the arm length of the detector, the mass, the luminosity distance, the spin of the remnant black hole, the symmetric mass ratio and the dimensionless deviating parameter. The maximum capacity of constraint on the dimensionless deviating parameter for EdGB gravity with LISA and TianQin is also presented. Then in Section V, Bayesian parameter estimation is performed on simulated data to verify the reliability of the conclusion. The concluding remarks are provided in the last section.
## II Einstein-Dilaton-Gauss-Bonnet gravity
Let us start with the action for EdGB gravity [20; 21]
\[S = \int d^{4}x\frac{\sqrt{-g}}{16\pi}\left(R-\frac{1}{2}\partial_{ \mu}\phi\partial^{\mu}\phi+\frac{\alpha_{GB}}{4}e^{\phi}R_{GB}^{2}\right)+S_{m}, \tag{1}\]
where \(S_{m}\) is the matter sector, \(g\) denotes the determinant of the metric, \(R\) is the Ricci scalar, \(\phi\) is a dynamical scalar field, \(\alpha_{\rm GB}\) is the coupling parameter with the dimensions of a quadratic length, and \(\mathcal{R}_{\rm GB}^{2}\) is the Gauss-Bonnet invariant expressed as
\[\mathcal{R}_{\rm GB}^{2}=R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}-4R_{\mu\nu}R^{\mu\nu}+R^{2}. \tag{2}\]
Since EdGB gravity was proposed, the construction of black hole solutions has aroused considerable interest [40; 41; 42; 25; 43]. In general, a small-coupling regime [42; 20] is adopted to simplify the calculation, since one otherwise confronts unwieldy equations in the analytical solution. For spinning EdGB black holes, the field equations can be solved by expanding in the spin \(\chi_{f}\ll 1\) and the dimensionless deviating parameter \(\frac{\alpha_{\rm GB}}{M^{2}}\ll 1\) to the desired order. In comparison with Kerr black holes, they possess a minimal mass and larger angular momentum [44; 45; 46; 41].
To simplify the notation, the dimensionless deviating parameter is given by
\[\zeta=\frac{\alpha_{\rm GB}}{M^{2}}, \tag{3}\]
where \(M\) is the mass of the black hole.
Furthermore, the QNMs have been studied extensively in EdGB gravity [37; 47; 48; 49]. For our purpose, we focus only on the gravitational-led QNMs of extremely slowly-rotating black holes to first order in the spin. The resultant QNMs can be modeled as [37]
\[\omega^{nlm}(\chi_{f},\zeta)=\omega_{0}^{nl}(\zeta)+\chi_{f}m\omega_{1}^{nl}( \zeta). \tag{4}\]
Here, \(\omega_{0}^{nl}\) is the QNM of a non-rotating black hole in the polar sector [47] and \(\omega_{1}^{nl}\) is the spin correction to the QNM. \(\chi_{f}\) is the dimensionless spin, \(\chi_{f}=J/M^{2}\), where \(J\) is the spin angular momentum of the black hole. \(m\) is the azimuthal number. Specifically, for the lowest-lying QNMs with multipole numbers \(l=2,3\), the analytic fitting formulae read [37; 47]
\[\begin{array}{l}M\omega_{0}^{nl}=(1+f_{1}\zeta^{2}+f_{2}\zeta^{3}+f_{3} \zeta^{4})\omega_{s}^{0l},\\ M\omega_{1}^{0l}=q_{1}+q_{2}\zeta^{2}+q_{3}\zeta^{3}+q_{4}\zeta^{4}+q_{5} \zeta^{5}+q_{6}\zeta^{6}.\end{array} \tag{5}\]
Here, \(\omega_{s}^{0l}\) is the QNM of Schwarzschild spacetime. For the mode with \(l=2\), \(M\omega_{s}^{02}\approx 0.37370-0.08896i\). For the mode with \(l=3\), \(M\omega_{s}^{03}\approx 0.5994-0.0927i\). The coefficients \(f_{i}\) and \(q_{i}\) are listed in Table 1. As the dimensionless deviating parameter tends to zero, the QNMs of EdGB gravity reduce to those of Kerr spacetime [37; 49; 50].
The dependence of the relative departures between the QNMs of Kerr black hole on the dimensionless deviating parameter \(\zeta\) for different spins \(\chi_{f}\) is illustrated in Fig. 1, where
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \((l,m)\) & QNMs & \(f_{1}\) & \(f_{2}\) & \(f_{3}\) & \(q_{1}\) & \(q_{2}\) & \(q_{3}\) & \(q_{4}\) & \(q_{5}\) & \(q_{6}\) \\ \hline \multirow{3}{*}{(2,2)} & \(\omega_{R}\) & -0.03135 & -0.09674 & 0.23750 & 0.06290 & -0.01560 & -0.00758 & -0.06440 & 0.26800 & -0.60300 \\ \cline{2-10} & \(\omega_{I}\) & 0.04371 & 0.17940 & -0.29470 & 0.00099 & -0.00110 & 0.01864 & -0.17271 & 0.56422 & -0.81190 \\ \hline \multirow{3}{*}{(3,3)} & \(\omega_{R}\) & -0.09911 & -0.04907 & 0.09286 & 0.06740 & -0.02910 & 0.02510 & -0.32090 & 1.17030 & -1.33410 \\ \cline{2-10} & \(\omega_{I}\) & 0.07710 & 0.13990 & -0.34500 & 0.00065 & 0.00023 & 0.02330 & -0.28320 & 1.32300 & -2.44200 \\ \hline \end{tabular}
\end{table}
Table 1: The coefficients for Eq. (5), where \(\omega_{R}\) and \(\omega_{I}\) denote the real and imaginary parts of the QNMs, respectively. The data is taken from [37; 47].
Figure 1: The dependence of the relative departures for the real and imaginary parts of QNMs with \(l=m=2\) on the dimensionless deviating parameter \(\zeta\). The darker curves represent the greater spin \(\chi_{f}\).
\(\left(\omega_{R,I}^{nlm}(\zeta)-\omega_{R,I}^{nlm}(0)\right)/\omega_{R,I}^{nlm}(0)\). We note that as the spin increases, the relative departures of the real and imaginary parts of the QNMs increase, which implies that the corrections to the QNMs in EdGB gravity can be magnified by the spin.
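As an illustration of Eqs. (4)-(5), the dominant \((2,2)\) EdGB QNM frequency can be evaluated numerically from the Table 1 coefficients. The sketch below is ours: it assumes that the real- and imaginary-part fits are applied separately to the real and imaginary parts of the Schwarzschild frequency, and it converts to physical units with the standard value \(GM_{\odot}/c^{3}\approx 4.925\times 10^{-6}\) s; the function name is illustrative.

```python
import numpy as np

# (2,2) fit coefficients from Table 1: separate sets for Re(omega) and Im(omega)
F_22 = {"re": (-0.03135, -0.09674, 0.23750), "im": (0.04371, 0.17940, -0.29470)}
Q_22 = {"re": (0.06290, -0.01560, -0.00758, -0.06440, 0.26800, -0.60300),
        "im": (0.00099, -0.00110, 0.01864, -0.17271, 0.56422, -0.81190)}
M_OMEGA_S_22 = 0.37370 - 0.08896j   # Schwarzschild (2,2) fundamental mode

def edgb_qnm_22(zeta, chi_f):
    """Dimensionless (2,2) QNM frequency M*omega in EdGB gravity, Eqs. (4)-(5)."""
    def poly_f(c):   # 1 + f1*z^2 + f2*z^3 + f3*z^4
        return 1.0 + c[0]*zeta**2 + c[1]*zeta**3 + c[2]*zeta**4
    def poly_q(c):   # q1 + q2*z^2 + ... + q6*z^6
        return c[0] + sum(ci*zeta**(k+2) for k, ci in enumerate(c[1:]))
    omega0 = (poly_f(F_22["re"]) * M_OMEGA_S_22.real
              + 1j*poly_f(F_22["im"]) * M_OMEGA_S_22.imag)
    omega1 = poly_q(Q_22["re"]) + 1j*poly_q(Q_22["im"])
    m = 2
    return omega0 + chi_f * m * omega1

# Example: frequency and damping time of the (2,2) mode for a 1e7 solar-mass remnant
M_SUN_SEC = 4.925e-6                      # G*M_sun/c^3 in seconds
M_omega = edgb_qnm_22(zeta=0.2, chi_f=0.01)
M_sec = 1e7 * M_SUN_SEC
f_22 = M_omega.real / (2*np.pi*M_sec)     # oscillation frequency in Hz
tau_22 = -M_sec / M_omega.imag            # damping time in seconds
```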
## III Ringdown waveform and detector response
The ringdown waves from a distorted black hole consist of the plus component \(h_{+}\) and the cross component \(h_{\times}\), which are dominated by the form
\[\begin{split}& h_{+}(t)=\frac{M}{D_{L}}\sum\limits_{l,m}A_{lm}Y_{+} ^{lm}(\iota)e^{-t/\tau_{lm}}\cos\left(\omega_{lm}t-m\phi_{0}\right),\\ & h_{\times}(t)=-\frac{M}{D_{L}}\sum\limits_{l,m}A_{lm}Y_{\times} ^{lm}(\iota)e^{-t/\tau_{lm}}\sin\left(\omega_{lm}t-m\phi_{0}\right),\end{split} \tag{6}\]
for \(t\geq t_{0}\) and \(h_{+,\times}(t)=0\) for \(t<t_{0}\), where \(t_{0}\) is the initial time of the ringdown. \(M\) is the mass of the final black hole. \(D_{L}\) is the luminosity distance to the source. \(l\) and \(m\) are the harmonic indices. \(A_{lm},\phi_{0}\) are the amplitude and initial phase of the QNMs. \(\tau_{lm},\omega_{lm}\) are the damping time and the oscillation frequency of the QNMs, determined by Eq. (4) in EdGB gravity. \(\iota\in[0,\pi]\) is the inclination angle of the remnant. The functions \(Y_{+,\times}^{lm}(\iota)\) denote combinations of the spin-weight \(-2\) spheroidal harmonics, which can be expressed as [51]
\[\begin{split}& Y_{+}^{lm}(\iota){=}_{-2}Y^{lm}(\iota,0)+(-1)^{l} _{-2}Y^{l-m}(\iota,0),\\ & Y_{\times}^{lm}(\iota){=}_{-2}Y^{lm}(\iota,0)-(-1)^{l}_{-2}Y^{ l-m}(\iota,0).\end{split} \tag{7}\]
In order to build the ringdown waveform, we focus only on two dominant modes \(l=m=2,3\) in EdGB gravity. More specifically,
\[\begin{split}& Y_{+}^{22}(\iota)=\sqrt{\frac{5}{4\pi}\frac{1+ \cos^{2}\iota}{2}},Y_{\times}^{22}(\iota)=\sqrt{\frac{5}{4\pi}}\cos\iota,\\ & Y_{+}^{33}(\iota)=-\sqrt{\frac{21}{8\pi}\frac{1+\cos^{2}\iota }{2}}\sin\iota,Y_{\times}^{33}(\iota)=-\sqrt{\frac{21}{8\pi}}\sin\iota\cos \iota.\end{split} \tag{8}\]
Moreover, \(A_{lm}\) is well fitted as [52; 53]
\[\begin{split}& A_{22}(\nu)=0.864\nu,\\ & A_{33}(\nu)=0.44(1-4\nu)^{0.45}A_{22}(\nu).\end{split} \tag{9}\]
Here \(\nu=m_{1}m_{2}/(m_{1}+m_{2})^{2}\) is the symmetric mass ratio and \(m_{1},m_{2}\) are masses of two separated black holes before coalescence.
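A compact sketch of the time-domain polarizations in Eq. (6) is given below. The interface is our own choice: the angular factors of Eq. (8) and the QNM frequencies and damping times are passed in per mode, and masses and distances are assumed to be pre-converted to seconds (geometric units).

```python
import numpy as np

def ringdown_polarizations(t, M_sec, D_L_sec, modes, phi0=0.0):
    """Plus/cross ringdown polarizations of Eq. (6) for a set of (l, m) modes.

    `modes` maps (l, m) -> (A_lm, Y_plus, Y_cross, omega_lm, tau_lm), where the
    angular factors come from Eq. (8), omega_lm is in rad/s and tau_lm in s.
    M_sec and D_L_sec are the remnant mass and luminosity distance in seconds.
    """
    t = np.asarray(t, dtype=float)
    h_plus = np.zeros_like(t)
    h_cross = np.zeros_like(t)
    for (l, m), (A, Yp, Yc, omega, tau) in modes.items():
        envelope = (M_sec / D_L_sec) * A * np.exp(-t / tau)
        h_plus += envelope * Yp * np.cos(omega * t - m * phi0)
        h_cross -= envelope * Yc * np.sin(omega * t - m * phi0)
    return h_plus, h_cross

def mode_amplitudes(nu):
    """Mode amplitudes of Eq. (9) for a given symmetric mass ratio nu."""
    A22 = 0.864 * nu
    A33 = 0.44 * (1 - 4*nu)**0.45 * A22
    return A22, A33
```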
Now, we exploit the first generation time-delay interferometry Michelson combination X [39] to obtain the frequency-domain ringdown signals, which can be written as
\[h(f)=\sum\limits_{A=+,\times}\frac{1}{2}(1-e^{-2i\iota})(D_{u}^{A}\mathcal{T} (u,\hat{n}\cdot\hat{u})-D_{v}^{A}\mathcal{T}(u,\hat{n}\cdot\hat{v}))h_{A}(f), \tag{10}\]
where \(h_{+,\times}(f)\) are the frequency-domain ringdown signals obtained from the Fourier transformation of \(h_{+,\times}(t)\),
\[\begin{split}& D_{u}^{+}=\left[\cos^{2}\!\theta\!\cos^{2}\!(\phi- \pi/6)-\sin^{2}\!(\phi-\pi/6)\right]\cos 2\psi-\cos\theta\sin(2\phi-\pi/3)\sin 2 \psi\\ & D_{u}^{\times}=-\cos\theta\sin(2\phi-\pi/3)\cos 2\psi-\left[\cos^{2}\! \theta\!\cos^{2}\!(\phi-\pi/6)-\sin^{2}\!(\phi-\pi/6)\right]\sin 2\psi\\ & D_{v}^{+}=\left[\cos^{2}\!\theta\!\cos^{2}\!(\phi+\pi/6)-\sin^{2} \!(\phi+\pi/6)\right]\cos 2\psi-\cos\theta\sin(2\phi+\pi/3)\sin 2\psi\\ & D_{v}^{\times}=-\cos\theta\sin(2\phi+\pi/3)\cos 2\psi-\left[\cos^{2}\!\theta\!\cos^{2}\!(\phi+\pi/6)-\sin^{2}\!(\phi+\pi/6) \right]\sin 2\psi.\end{split} \tag{11}\]
The frequency-dependent transfer function \(\mathcal{T}\) is
\[\mathcal{T}(u,\hat{n}\cdot\hat{u})=\frac{1}{2}e^{-iu}\left[e^{-iu(1-\hat{n}\cdot \hat{u})/2}\text{sinc}\left(u(1+\hat{n}\cdot\hat{u})/2\right)+e^{iu(1+\hat{n} \cdot\hat{u})/2}\text{sinc}\left(u(1-\hat{n}\cdot\hat{u})/2\right)\right]. \tag{12}\]
Here, \(u=\frac{2\pi fL}{c}\), where \(L\) is the arm length of the detector and \(c\) is the speed of light, \(\text{sinc}(z)=\frac{\sin z}{z}\), \(\hat{n}=(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)\) is the orientation of the source and the unit vectors with respect to detector's arms \(\hat{u},\hat{v}\) are
\[\hat{u}=\left(\cos\frac{\pi}{6},\sin\frac{\pi}{6},0\right),\hat{v}=\left(\cos \frac{\pi}{6},-\sin\frac{\pi}{6},0\right). \tag{13}\]
For convenience, Fig. 2 shows the detector coordinate system adopted in this paper. The origin is placed at spacecraft 1. \((\hat{p},\hat{q},\hat{k})\) are basis vectors of the canonical reference frame, where \(\hat{k}\) denotes the direction of propagation of gravitational waves. \((\hat{\phi},\hat{\theta},\hat{n})\) are basis vectors of the observational reference frame and \(\psi\) is the polarization angle.
In order to test EdGB gravity with space-based gravitational wave detectors, it is necessary to evaluate the capability of LISA, TaiJi, and TianQin. Here, we adopt the noise power spectral density and average response functions of tensor polarizations for the Michelson combination X [54]:
\[S_{N}(u)_{X}=\frac{4\sin^{2}\!u}{u^{2}}\left[\frac{s_{a}^{2}L^{2}}{u^{2}c^{4}}( 3+\cos 2u)+\frac{u^{2}s_{x}^{2}}{L^{2}}\right], \tag{14}\]
\[\begin{array}{c}R(u)_{X}=\frac{\sin^{2}\!u}{2u^{2}}[(-7\sin u+2\sin 2u)/u+(-4+5\cos u-4\cos 2u)/u^{2}+(-5\sin u+4\sin 2u)/u^{3}\\ +(5+\cos 2u)/3-6\cos 2u\,(\mathrm{Ci}(u)-2\,\mathrm{Ci}(2u)+\mathrm{Ci}(3u)+\ln(4/3))+4(\mathrm{Ci}(u)-\mathrm{Ci}(2u)+\ln 2)\\ -6\sin 2u\,(\mathrm{Si}(u)-2\,\mathrm{Si}(2u)+\mathrm{Si}(3u))],\end{array} \tag{15}\]
where SinIntegral \(\text{Si}(z)=\int_{0}^{z}(\sin t/t)dt\), CosIntegral \(\text{Ci}(z)=-\int_{z}^{\infty}(\cos t/t)dt\), \(S_{a}\) is the residual acceleration noise, \(S_{x}\) is the displacement noise. For LISA, \(S_{a}=3\times 10^{-15}\) ms\({}^{-2}\)\(/\sqrt{\text{Hz}}\), \(S_{x}=1.5\times 10^{-11}\text{m}/\sqrt{\text{Hz}}\) and \(L=2.5\times 10^{9}\) m [55]. For TaiJi, \(S_{a}=3\times 10^{-15}\) ms\({}^{-2}\)\(/\sqrt{\text{Hz}}\),
Figure 2: The detector coordinate system adopted in this paper.
\(S_{x}=8\times 10^{-12}\rm{m}/\sqrt{Hz}\) and \(L=3\times 10^{9}\) m [56]. For TianQin, \(S_{a}=1\times 10^{-15}\) ms\({}^{-2}\) / \(\sqrt{Hz}\), \(S_{x}=1\times 10^{-12}\rm{m}/\sqrt{Hz}\) and \(L=\sqrt{3}\times 10^{8}\) m [57]. The sky-averaged sensitivity is defined to read [58]
\[S_{n}(f)=\frac{S_{N}(f)}{R(f)}. \tag{16}\]
In particular, the galactic confusion noise, mainly generated by the abundant double white dwarf binaries, plays a non-negligible role in the detection of gravitational waves. For LISA and TaiJi, the galactic confusion noise takes the form [58]
\[S_{c}(f)=\alpha f^{-7/3}e^{-f^{\beta}+\gamma f\sin(\eta f)}\left[1+\tanh\left(\lambda\left(f_{c}-f\right)\right)\right]\rm{Hz}^{-1}. \tag{17}\]
For TianQin, the galactic confusion noise can be modeled as [59]
\[S_{\rm{ctq}}(f)=10^{\frac{6}{20}a_{(Log\{\frac{f}{10^{4}}\})^{l}}}. \tag{18}\]
Provided that the detector scenario is operated for 4 years, the corresponding coefficients are \(\alpha=9\times 10^{-45}\), \(\beta=0.138\), \(\gamma=-221\), \(\eta=521\), \(\lambda=1680\), \(f_{c}=0.0013\), \(a_{0}=-18.6,a_{1}=-1.43,a_{2}=-0.687,a_{3}=0.24,a_{4}=-0.15,a_{5}=-1.8\) and \(a_{6}=-3.2\). Thus, the full sensitivity curve is derived by adding the galactic confusion noise to \(S_{n}(f)\).
Fig. 3 shows the sensitivity curves for LISA, TaiJi, and TianQin, from which we can see that TianQin is more sensitive to gravitational wave signals at higher frequencies, while TaiJi and LISA are more reliable for detecting signals at lower frequencies. TaiJi is better than LISA in detecting gravitational wave signals at higher frequencies, because the target displacement noise of TaiJi is better than that of LISA, while the residual acceleration noise and arm length of the two detectors are similar. Obviously, the galactic confusion noise provokes a small rise of the sensitivity curve in the low-frequency range, and TianQin is less affected than TaiJi and LISA.
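A sketch of how the instrumental part of these sensitivity curves could be reproduced from Eqs. (14)-(16) is given below, with the cosine- and sine-integral functions taken from `scipy.special.sici` and with Eq. (15) interpreted as containing \(\mathrm{Ci}(u)\), \(\mathrm{Ci}(2u)\), etc.; the detector parameters are those quoted above, and the galactic confusion noise of Eqs. (17)-(18) would be added on top of \(S_{n}(f)\).

```python
import numpy as np
from scipy.special import sici

C = 299792458.0  # speed of light, m/s

def psd_noise_X(f, L, S_a, S_x):
    """Instrumental noise PSD of the Michelson X combination, Eq. (14)."""
    u = 2*np.pi*f*L/C
    return (4*np.sin(u)**2/u**2) * (S_a**2*L**2/(u**2*C**4)*(3+np.cos(2*u))
                                    + u**2*S_x**2/L**2)

def response_X(f, L):
    """Sky-averaged tensor response of the X combination, Eq. (15)."""
    u = 2*np.pi*f*L/C
    Si1, Ci1 = sici(u)
    Si2, Ci2 = sici(2*u)
    Si3, Ci3 = sici(3*u)
    bracket = ((-7*np.sin(u) + 2*np.sin(2*u))/u
               + (-4 + 5*np.cos(u) - 4*np.cos(2*u))/u**2
               + (-5*np.sin(u) + 4*np.sin(2*u))/u**3
               + (5 + np.cos(2*u))/3
               - 6*np.cos(2*u)*(Ci1 - 2*Ci2 + Ci3 + np.log(4/3))
               + 4*(Ci1 - Ci2 + np.log(2))
               - 6*np.sin(2*u)*(Si1 - 2*Si2 + Si3))
    return np.sin(u)**2/(2*u**2) * bracket

def sensitivity(f, L, S_a, S_x):
    """Sky-averaged sensitivity S_n(f) = S_N(f)/R(f), Eq. (16)."""
    return psd_noise_X(f, L, S_a, S_x) / response_X(f, L)

# Example: LISA parameters quoted in the text, over the millihertz band
f = np.logspace(-4, 0, 500)
Sn_lisa = sensitivity(f, L=2.5e9, S_a=3e-15, S_x=1.5e-11)
```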
Figure 3: The sensitivity curves for LISA, TaiJi, and TianQin.
## IV The SNR and the uncertainty of parameter estimation for EdGB gravity
The inner product weighted by the detector noise spectral density of two frequency-domain signals \(h_{1}(f),h_{2}(f)\) is defined as
\[(h_{1}|h_{2})=2\int_{f_{low}}^{f_{high}}\frac{{h_{1}}^{*}(f)h_{2}(f)+h_{1}(f){h_{ 2}}^{*}(f)}{S_{N}(f)}df, \tag{19}\]
where we choose \(f_{low}\) to be half of the smallest oscillation frequency and \(f_{high}\) to be twice the highest oscillation frequency to prevent the "junk" radiation in the Fourier transformation [9]. The sky-averaged SNR (denoted by \(\rho\)) based on the definition of the inner product can be expressed as
\[\rho=\sqrt{(h|h)}. \tag{20}\]
Supposing that the probability distribution for the measurement errors of parameters is Gaussian in the limit of large SNR [60; 61], the measurement errors on parameters \(\theta_{i}\) can be derived from Fisher information matrix:
\[\Delta\theta_{i}\approx\sqrt{(\Gamma^{-1})_{ii}}. \tag{21}\]
Here the Fisher information matrix is given by
\[\Gamma_{ij}=\left(\frac{\partial h}{\partial\theta_{i}}\mid\frac{\partial h} {\partial\theta_{j}}\right), \tag{22}\]
where \(\vec{\theta}\) is a 7-dimensional parameter space in the ringdown signals, namely \(\vec{\theta}=\left\{M,\chi_{f},D_{L},\nu,\phi_{0},\iota,\zeta\right\}\).
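Eqs. (19)-(22) translate directly into a numerical recipe: an inner product on a frequency grid, the SNR, and a Fisher matrix built from waveform derivatives. The sketch below uses central finite differences and a user-supplied `waveform(theta)` model standing in for the TDI ringdown signal of Eq. (10); it illustrates the procedure rather than reproducing the authors' code.

```python
import numpy as np

def inner_product(h1, h2, freqs, Sn):
    """Noise-weighted inner product of Eq. (19) on a common frequency grid."""
    integrand = (np.conj(h1)*h2 + h1*np.conj(h2)).real / Sn
    return 2.0 * np.trapz(integrand, freqs)

def snr(h, freqs, Sn):
    """Sky-averaged SNR of Eq. (20)."""
    return np.sqrt(inner_product(h, h, freqs, Sn))

def fisher_matrix(waveform, theta, freqs, Sn, rel_step=1e-6):
    """Fisher matrix of Eq. (22) with central finite-difference derivatives.

    `waveform(theta)` returns the frequency-domain signal sampled on `freqs`;
    it is a placeholder for the TDI ringdown model and must be supplied by the user.
    """
    theta = np.asarray(theta, dtype=float)
    derivs = []
    for i in range(len(theta)):
        step = rel_step * max(abs(theta[i]), 1.0)
        tp, tm = theta.copy(), theta.copy()
        tp[i] += step
        tm[i] -= step
        derivs.append((waveform(tp) - waveform(tm)) / (2*step))
    n = len(theta)
    gamma = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            gamma[i, j] = inner_product(derivs[i], derivs[j], freqs, Sn)
    return gamma

# 1-sigma errors from Eq. (21): errors = np.sqrt(np.diag(np.linalg.inv(gamma)))
```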
First, the SNR as a function of the black hole mass is calculated for LISA, TaiJi, and TianQin in Fig. 4, where the two dominant QNMs in the ringdown signal are plotted separately. As one can see, the total SNR is heavily dominated by the strongest \((2,2)\) mode. On the whole, as the mass increases, the SNR grows until reaching a maximum and then decreases gradually for all detectors. Comparing the SNR of LISA, TaiJi, and TianQin, it is found that TianQin is more sensitive to gravitational signals from smaller masses, while TaiJi and LISA are more reliable for detecting signals from more massive black holes. In particular, the galactic confusion noise causes a small dip to appear for masses within \(6\times 10^{6}M_{\odot}\leq M\leq 10^{8}M_{\odot}\) for TaiJi and LISA, while for TianQin this effect is trivial. Besides, its implications are negligible for less massive black holes.
Next, we consider estimating the measurement errors for the dimensionless deviating parameter \(\zeta\) (denoted by \(\Delta\zeta\)) via the available tools. Before introducing the standard parameters of the space-based gravitational wave detectors, we roughly evaluate the influence of the arm length of the detector on the dimensionless deviating parameter, where the residual acceleration noise is fixed and the displacement noise is proportional to the arm length. Fig. 5 shows the measurement errors as a function of arm length for different sources of gravitational waves, where the maximum error of \(\zeta\) is denoted with a dotted cyan horizontal line and the real arm lengths of LISA, TaiJi, and TianQin are denoted with dashed green vertical lines. The test of EdGB gravity will be affected by sources of different masses. The arm lengths of TaiJi and LISA are more appropriate to test EdGB gravity for more massive black holes, while for less massive black holes a more practical arm length is provided by TianQin. Because the above is a qualitative analysis, only the parameters related to LISA are used here to evaluate the ability to test EdGB gravity as a
whole. For the specific calculation below, the standard parameters of the space-based gravitational wave detector are considered.
In addition, in order to explore how several source parameters, such as the mass \(M\), the luminosity distance \(D_{L}\), the spin of the remnant black hole \(\chi_{f}\) and the symmetric mass ratio \(\nu\), affect \(\Delta\zeta\), we display the variation of \(\Delta\zeta\) with related parameters in Fig. 6. The top left plot of Fig. 6 shows the measurement errors as a function of the mass, where we have assumed
Figure 4: The SNR of LISA, TaiJi, and TianQin with the change of the mass for the ringdown signal. The calculations are carried out with \(\chi_{f}=0.01,D_{L}=2.5Gpc,\nu=2/9,\phi_{0}=0,\iota=\pi/3,\zeta=0.2\).
Figure 5: The dependence of parameter estimation accuracy \(\Delta\zeta\) on the arm length of the detector \(L\) for different sources of gravitational waves. Here, we choose the basic parameters of LISA as a reference. The residual acceleration noise is fixed as \(S_{a}=3\times 10^{-15}\ ms^{-2}\ /\sqrt{Hz}\). The displacement noise is proportional to the arm length \(S_{x}\sim\alpha L\), where \(\alpha=\frac{1.5\times 10^{-11}}{2.5\times 10^{9}}\). The dotted cyan horizontal line represents the maximum error of \(\zeta\) and the dashed green vertical line denotes the real arm length of LISA, TaiJi, and TianQin. The calculations are carried out with \(\chi_{f}=0.01,\nu=2/9,\phi_{0}=0,\iota=\pi/3,\zeta=0.2\).
\(\chi_{f}=0.01,D_{L}=2.5Gpc,\nu=2/9\). As expected, \(\Delta\zeta\) at first decreases with increasing \(M\) until reaching a minimum value, but then increases as \(M\) grows further for all detectors. For certain massive black holes, a slight bulge emerges in the measurement errors owing to the galactic confusion noise, which implies that the galactic confusion noise plays a negative role in constraining EdGB gravity. It is clear that TianQin can constrain \(\zeta\) more accurately for smaller masses, but for more massive black holes TaiJi and LISA perform well, which is also reflected by the sensitivity of the detectors. The measurement errors as a function of the luminosity distance are plotted in the top right plot of Fig. 6, where we fix \(M=10^{7}M_{\odot},\chi_{f}=0.01,\nu=2/9\). As one can see, \(\Delta\zeta\) increases with the increase of \(D_{L}\), which is obvious because the SNR increases monotonically with decreasing distance. This is intuitively reflected in the ringdown waveform containing the term \(1/D_{L}\), so there is a \(1/{D_{L}}^{2}\) factor in the error function after the Fisher analysis. The bottom left plot of Fig. 6 presents the dependence of the measurement errors on the spin of the remnant black hole, where we set \(M=10^{7}M_{\odot},D_{L}=2.5Gpc,\nu=2/9\). We note that \(\Delta\zeta\) decreases extremely slowly as \(\chi_{f}\) increases, which implies that the spin parameter of the black hole has little effect on the measurement errors for EdGB gravity. The dependence of the measurement errors on the symmetric mass ratio is illustrated in the bottom right plot of Fig. 6, where we choose \(M=10^{7}M_{\odot},D_{L}=2.5Gpc,\chi_{f}=0.01\). Observe first that \(\Delta\zeta\) decreases with the increase of \(\nu\) and then grows abruptly as \(\nu\) approaches \(0.25\). That is because more energy is radiated for a larger symmetric mass ratio, while as \(\nu\) approaches \(0.25\) the amplitude of the \((3,3)\) QNM goes to zero. In order to estimate the associated parameters better, it is necessary to avoid selecting \(\nu\) in this limit or to replace the \((3,3)\) mode with other dominant modes.
In particular, we pay close attention to how the measurement error \(\Delta\zeta\) varies with \(\zeta\), which is illustrated in Fig. 7. One can see that the dependence of \(\Delta\zeta\) on \(\zeta\) is nontrivial. That is
Figure 6: The dependence of parameter estimation accuracy \(\Delta\zeta\) on the mass \(M\) (top left), the luminosity distance \(D_{L}\) (top right), the spin of the remnant black hole \(\chi_{f}\) (bottom left) and the symmetric mass ratio \(\nu\) (bottom right). The black, magenta, and blue curves represent the errors detected by LISA, TaiJi, and TianQin respectively. Other parameters used are \(\phi_{0}=0,\iota=\pi/3,\zeta=0.2\).
because \(\zeta\) is nonlinear in Eq. (4) and Eq. (5). Owing to relatively large coefficients in Table 1, higher-order corrections are not discarded here. Hence, the result of the covariance matrix contains variable \(\zeta\), and specifically, \(\Delta\zeta\) decreases with the increase of \(\zeta\). When the dimensionless deviating parameter is small, it seems difficult to distinguish EdGB gravity from General Relativity due to the larger uncertainty of \(\zeta\). We emphasize this problem by tracing the maximum error of \(\zeta\) in EdGB
Figure 8: The maximum constraint on the dimensionless deviating parameter for EdGB gravity with LISA and TianQin in the \(Log(D_{L}/Gpc)-Log(M/M_{\odot})\) plane. The calculations are carried out for the case of \(\phi_{0}=0,\iota=\pi/3,\nu=2/9,\chi_{f}=0.01\).
Figure 7: The dependence of parameter estimation accuracy \(\Delta\zeta\) on the dimensionless deviating parameter \(\zeta\). The dashed green line denotes \(\Delta\zeta=\zeta\). The red intersections are \(\Delta\zeta_{\rm max}\) for each detector. The profiles are obtained with given \(M=10^{7}M_{\odot},\chi_{f}=0.01,D_{L}=2.5Gpc,\nu=2/9,\phi_{0}=0,\iota=\pi/3\) for different space-based gravitational wave detectors.
gravity (denoted by \(\Delta\zeta_{\rm max}\)), namely the solution of the equation \(\Delta\zeta=\zeta\), which is treated as the maximum constraint of the detector for a particular wave source and marked as red dots. Only in the area below this value do the space-based gravitational wave detectors have the potential to tell the difference between EdGB gravity and General Relativity. Since there is no significant difference in the standard parameters of the space-based gravitational wave detectors between TaiJi and LISA, for comparison purposes the remainder of our paper focuses only on the results of LISA and TianQin. Furthermore, to investigate the maximum constraint on the dimensionless deviating parameter for EdGB gravity with LISA and TianQin, we display the density plots of \(\Delta\zeta_{\rm max}\) in the \(Log_{10}(D_{L}/Gpc)-Log_{10}(M/M_{\odot})\) plane in Fig. 8. We observe that \(\zeta\) can be best constrained with LISA for \(M\sim 5.5\times 10^{6}M_{\odot}\) and with TianQin for \(M\sim 3\times 10^{6}M_{\odot}\). What is more, \(\Delta\zeta\) decreases with the increase of SNR, which is demonstrated in Fig. 9. For a much smaller deviation from General Relativity, we need to count on detectors with rather larger SNR to obtain the maximum constraint. The growth of the required SNR is not linear, and as the accuracy of the constraints increases, the SNR increases dramatically. It is noticeable that, compared with TianQin, LISA is more likely to give an optimal constraint in the future.
## V Bayesian inference
To verify the results calculated by the Fisher information matrix analysis, we employ the Bayesian inference method to analyze the simulated source. Bayesian inference is widely used to estimate the probability distribution of unknown parameters from sampled data containing signals and noise, which is instrumental in astrophysical and cosmological analysis. Unlike Fisher information matrix analysis, which is limited to large SNR, Bayesian analysis is applicable to a wider range of situations. Furthermore, Bayesian posterior probability distributions give more information. The disadvantage is that it is more computationally intensive. In essence, Bayesian statistics creates a likelihood that relates the unknown parameters to the measured data. Then the probability distribution of the unknown parameters is updated through
the distribution of the measurement data. This process is based on Bayes' theorem:
\[P(\vec{\theta}|d)=\frac{\pi(\vec{\theta})\mathcal{L}(d|\vec{\theta})}{p(d)}, \tag{23}\]
where \(P(\vec{\theta}|d)\) is the posterior probability of the set of free parameters. \(d=h(\vec{\theta}_{0})+n\) represents the measurement data, which consists of the gravitational-wave signal modeled by the true parameters \(\vec{\theta}_{0}\) and the detector noise \(n\) modeled by the noise power spectra. \(\pi(\vec{\theta})\) is the prior probability of \(\vec{\theta}\). \(p(d)\) is a normalization constant, which is also called the evidence of \(d\). \(\mathcal{L}(d|\vec{\theta})\) is the likelihood, which can be written as
\[\mathcal{L}(d|\vec{\theta})=\exp\left[-\frac{1}{2}(h(\vec{\theta})-d|h(\vec{ \theta})-d)\right]. \tag{24}\]
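With the inner product of Eq. (19), the log-likelihood of Eq. (24) reduces to a noise-weighted residual integral. A minimal sketch is shown below (the `waveform` model and frequency grid are assumptions of the example, not part of the paper); in practice such a function would be passed to a stochastic sampler to produce the posterior distributions discussed next.

```python
import numpy as np

def log_likelihood(theta, data, waveform, freqs, Sn):
    """Gaussian log-likelihood of Eq. (24), up to a theta-independent constant.

    `waveform(theta)` is a user-supplied frequency-domain ringdown model and
    `data` the simulated strain; both live on the common grid `freqs`.
    """
    residual = waveform(theta) - data
    integrand = np.abs(residual)**2 / Sn
    return -2.0 * np.trapz(integrand, freqs)
```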
The amplitudes of the strain data for the ringdown and the noise power spectra for LISA and TianQin are illustrated in Fig. 10. As one can see, the highest peak corresponds to the \((2,2)\) mode. By comparison, the noise power spectrum of TianQin reaches the lowest level in the present frequency band.
After generating the simulated data, we utilize the Bayesian inference method to obtain the probability distributions of the source parameters, including \(Log_{10}(M/M_{\odot})\), \(\chi_{f}\), \(D_{L}/Gpc\), \(\nu\), \(\phi_{0}\), \(\iota\) and \(\zeta\). Fig. 11 shows the posterior distribution for the ringdown signal with LISA, where the true parameters for the ringdown signal are set to be \(M=10^{6.5}M_{\odot}\), \(\chi_{f}=0.1,D_{L}=10Gpc,\nu=2/9,\phi_{0}=0,\iota=\pi/3,\zeta=0.2\). The priors of the corresponding parameters are respectively set to be uniform distributions within the ranges \([6,7]\), \([0.001,0.2]\), \([9,14]\), \([0,1/4]\), \([0,2\pi]\), \([0,\pi]\), \([0,0.4]\). For comparison, the results for the same wave source with TianQin are shown in Fig. 12. As we can see, the probability distribution for \(D_{L}\) is relatively poor, even though we have set a narrower prior. If a broad prior is set, the estimation accuracy of \(D_{L}\) will be even worse. More seriously, its estimation will affect the estimation accuracy of the other parameters. Fortunately, the ringdown signal
Figure 10: The amplitudes of the strain data \(d\) for ringdown and the noise power spectra for LISA and TianQin. The results are obtained using the parameters \(M=6\times 10^{6}M_{\odot}\),\(\chi_{f}=0.01,D_{L}=2.5Gpc,\nu=2/9,\phi_{0}=0,\iota=\pi/3,\zeta=0.2\).
is usually spotted after the inspiral and merger, which will provide an estimate of the luminosity distance. Since we focus only on the probability distribution for \(\zeta\), Fig. 13 displays the posterior probability of \(\zeta\) for LISA and TianQin with different luminosity distances to the source. As shown in the left side of Fig. 13, the luminosity distance is \(10Gpc\), and the dimensionless deviating parameter cannot be distinguished from \(0\). The \(95\%\) credible upper limits given by the different detectors are \(0.385\) for LISA and \(0.387\) for TianQin. For the right subplot, all parameters remain the same, except that the luminosity distance is reduced to \(0.5Gpc\). For such a signal with an extremely large SNR (over \(1000\)), the space-based detectors can distinguish the dimensionless deviating parameter from \(0\) by the ringdown signal. The estimates for the dimensionless deviating parameter with the two detectors are \(0.1943^{+0.0287}_{-0.0216}\) for LISA and \(0.1727^{+0.0402}_{-0.0292}\) for TianQin. It is obvious that LISA can give the best limit. It is found that the posterior probability of \(\zeta\) at shorter luminosity distances is better than that at longer luminosity distances, which implies that possible constraints on \(\zeta\) become more accurate at high SNR. The uncertainty of the dimensionless deviating parameter for the different detectors matches the results of the Fisher information matrix.
Figure 11: The posterior distribution for the ringdown signals with LISA. The true parameters are set with \(M=10^{6.5}M_{\odot}\),\(\chi_{f}=0.1,D_{L}=10Gpc,\nu=2/9,\phi_{0}=0,\iota=\pi/3,\zeta=0.2\).
## VI Concluding remarks
In this paper, we analyze the ability of the space-based gravitational-wave detectors LISA and TianQin to constrain the dimensionless deviating parameter for EdGB gravity by detecting ringdown signals. The detection capabilities of the different detectors for ringdown signals are first evaluated and compared. The ringdown waveform is parameterized by several of the strongest modes in EdGB gravity. We adopt the time-delay interferometry Michelson combination X to make scientific performance evaluations for the space-based detectors. According to the SNR distribution of LISA, TaiJi, and TianQin, it is found that TianQin is more sensitive to the ringdown signals of less massive black holes, while TaiJi and LISA are more reliable for detecting signals from more massive black holes. For specific massive black holes, the galactic confusion noise plays an important role in the detected signal, and this effect is insignificant for TianQin compared to TaiJi and LISA.
In order to estimate the measurement accuracy of the dimensionless deviating parameter, we
Figure 12: The posterior distribution for the ringdown signals with TianQin. The true parameters are set with \(M=10^{6.5}M_{\odot},\chi_{f}=0.1,D_{L}=10Gpc,\nu=2/9,\phi_{0}=0,\iota=\pi/3, \zeta=0.2\).
first used Fisher information matrix analysis to study qualitatively the influence of the arm length of the detector on the measurement errors and then explore how the constraints on the dimensionless deviating parameter are affected by the source parameters, such as the mass, the luminosity distance, the spin of the remnant black hole and the symmetric mass ratio. In particular, we have found that the measurement errors of the dimensionless deviating parameter increase as the dimensionless deviating parameter decreases. To distinguish between EdGB gravity and General Relativity, it is necessary to determine whether the dimensionless deviating parameter is 0, which requires that its uncertainty is less than the value of the parameter itself. For a given source, the critical value at which the uncertainty is equal to the dimensionless deviating parameter is the upper limit of the detector's ability to constrain the dimensionless deviating parameter for EdGB gravity. We present the maximum capability of the detectors to constrain the dimensionless deviating parameter with the distribution of the wave source. LISA has more potential to constrain the dimensionless deviating parameter to an accurate level for massive binary black hole mergers, while TianQin performs better for smaller black holes.
In addition, to verify the conclusion obtained through the Fisher information matrix analysis, we have performed Bayesian parameter estimation on the simulated data. It is found that the posterior probability of \(\zeta\) becomes better as the luminosity distance decreases. By comparison, LISA might be better able to constrain the dimensionless deviating parameter for massive black holes, because its sensitivity curve in this frequency band is lower than that of the other detectors. These results may be helpful in testing EdGB gravity with future space-based gravitational wave detectors. For more realistic black holes with larger spins, it would be interesting to generalize this research to explore the SNR and estimate the deviating parameters.
###### Acknowledgements.
This work is supported by the National Key R&D Program of China under Grant No.2022YFC2204602, the Natural Science Foundation of China Grant No.11925503. |
2303.08738 | The Quantum Density Matrix and its many uses: From quantum structure to
quantum chaos and noisy simulators | The quantum density matrix generalises the classical concept of probability
distribution to quantum theory. It gives the complete description of a quantum
state as well as the observable quantities that can be extracted from it. Its
mathematical structure is described, with applications to understanding quantum
correlations, illustrating quantum chaos and its unravelling, and developing
software simulators for noisy quantum systems with efficient quantum state
tomography. | Apoorva D. Patel | 2023-03-15T16:25:07Z | http://arxiv.org/abs/2303.08738v2 | # The Quantum Density Matrix
###### Abstract
The quantum density matrix generalises the classical concept of probability distribution to quantum theory. It gives the complete description of a quantum state as well as the observable quantities that can be extracted from it. Its mathematical structure is described, with applications to understanding quantum correlations, illustrating quantum chaos and its unravelling, and developing software simulators for noisy quantum systems with efficient quantum state tomography.
Computational Complexity, Density Matrix, Hilbert Space,
Machine Learning, Quantum Chaos, Simulator, Wigner Function
In the textbook formulation of quantum theory, the quantum states are first introduced as objects belonging to a Hilbert space, i.e. a complete vector space with complex coefficients and inner product. This description is convenient because it keeps the superposition principle of quantum dynamics manifest, at the cost of keeping around an overall unobservable phase. The concept of density matrix is developed later, from the outer products of the quantum state vectors. The overall unobservable phase disappears from it, while all the physical degrees of freedom of the quantum state are retained. The probabilistic nature of quantum theory is easily expressed in the density matrix formalism, and the formalism is particularly useful in describing the behaviour of open quantum systems.
In this article, Section 1 reviews the basic structure of the quantum density matrix [1; 2]. Afterwards, several applications of the density matrix are presented. They include the Wigner function in Section 2, quantum chaos and its unravelling using quantum machine learning in Section 3, and quantum simulator for an open quantum system in Section 4, with future outlook in Section 5. The standard Dirac notation is used throughout.
## 1 Basic Structure
A quantum state is a unit _ray_ in the Hilbert space. So \(\langle\psi|\psi\rangle=1\), and the whole class of vectors of the form \(e^{i\delta}|\psi\rangle\) is identified with the same quantum state. The overall global phase of a quantum state is unobservable, although relative phases between quantum states can be observed in interference experiments. (Rays form a projective manifold, in contrast to the Hilbert space, which is simpler to work with; this is why quantum states are described in the vector space language with an overall redundant phase.) Because of the constraints, a quantum state in an \(n\)-dimensional Hilbert space is described by \(2n-2\) real parameters.
The density matrix is a quantum generalisation of the concept of the probability distribution in statistical physics. In addition to covering all the quantum properties that can be described in the vector space language, it also accommodates the concept of a probabilistic ensemble.
### Pure states
Let \(\{|i\rangle\}\) be a complete orthonormal basis in the Hilbert space. For a quantum state \(|\psi\rangle=\sum_{i}c_{i}|i\rangle\), \(\sum_{i}|c_{i}|^{2}=1\), also refered to as a pure state, the density matrix is the outer product \(\rho=|\psi\rangle\langle\psi|=\sum_{ij}c_{i}c_{j}^{*}|i\rangle\langle j|\). \(\rho\) is Hermitian, and the normalisation condition is linear, \(Tr(\rho)=1\). Also, the overall global phase is eliminated from \(\rho\) by construction, i.e. each ray in the Hilbert space specifies a unique density matrix. In addition, \(\rho\) is also a projection operator, i.e. \(\rho^{2}=\rho\), which implies that its eigenvalues can be only 0 or 1. Taking into account the constraint \(Tr(\rho)=1\), only one eigenvalue of \(\rho\) is one, while all other eigenvalues are zero. Such a \(\rho\) is also positive, because its projection along any direction is positive, \(Tr(\rho|i\rangle\langle i|)=|\langle\psi|i\rangle|^{2}\geq 0\). Altogether, a pure state density matrix for an \(n\)-dimensional quantum state is described by \(2n-2\) real parameters.
Upon measurement, the eigenstate \(|i\rangle\) of the measured observable is detected with the probability \(\rho_{ii}=|c_{i}|^{2}\). Using the cyclic property of trace, the expectation value of an observable can be expressed as
\[\langle O\rangle\equiv\langle\psi|O|\psi\rangle=Tr(\rho O). \tag{1}\]
Furthermore, the post-measurement ensemble of quantum states can be expressed as the transformation:
\[\rho\longrightarrow\sum_{i}P_{i}\rho P_{i}=\sum_{i}|c_{i}|^{2}P_{i}\,\ \ P_{i}=|i\rangle\langle i|. \tag{2}\]
In other words, measurement of an observable makes the density matrix diagonal in the eigenbasis of the observable, erasing all the off-diagonal elements that carry the superposition information. It also leads to the ensemble interpretation of the density matrix, where multiple measurement outcomes occur with their associated probabilities.
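A small numerical sketch of Eqs. (1)-(2) for a finite-dimensional system is given below; the function names are illustrative, and the measurement is taken in the computational basis, which plays the role of the eigenbasis of the measured observable.

```python
import numpy as np

def density_matrix(psi):
    """Pure-state density matrix rho = |psi><psi| for a normalized state vector."""
    psi = np.asarray(psi, dtype=complex)
    psi = psi / np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

def expectation(rho, O):
    """Expectation value <O> = Tr(rho O), Eq. (1)."""
    return np.trace(rho @ O).real

def measure_in_basis(rho):
    """Post-measurement ensemble of Eq. (2): erase off-diagonal coherences
    in the eigenbasis of the measured observable (here the computational basis)."""
    return np.diag(np.diag(rho))

# Example: a qubit state (|0> + |1>)/sqrt(2) measured in the sigma_z basis
sigma_z = np.diag([1.0, -1.0])
rho = density_matrix([1.0, 1.0])
print(expectation(rho, sigma_z))   # 0.0
print(measure_in_basis(rho))       # diag(0.5, 0.5)
```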
The Schrodinger evolution of the density matrix is given by:
\[i\frac{d}{dt}\rho(t) = \Big{(}i\frac{d}{dt}|\psi(t)\rangle\Big{)}\langle\psi(t)|+|\psi(t)\rangle\Big{(}i\frac{d}{dt}\langle\psi(t)|\Big{)} \tag{3}\] \[= H(t)|\psi(t)\rangle\langle\psi(t)|-|\psi(t)\rangle\langle\psi(t)|H(t)\] \[\equiv [H(t),\rho(t)]\.\]
It has the formal solution: \(\rho(t)=U(t,0)\ \rho(0)\ U^{\dagger}(t,0)\), with the path-ordered evolution operator \(U(t,0)=\mathcal{P}(\exp(-i\int_{0}^{t}Hdt))\). This unitary evolution preserves the Hermiticity, trace and projection nature of \(\rho\).
An important property of the pure state density matrix is that it has an inherent symplectic (i.e. phase space) structure, describable using pairs of conjugate coordinates (see for example, Ref.[3], Section 7). This means that quantum states are never fully localised; they are smeared objects over an area of the size of the Planck constant, for each conjugate pair of coordinates.
### Mixed states
The properties specified by Eqs.(1-3) are linear in \(\rho\), in addition to the Hermiticity, trace and positivity conditions. Hence they hold for a linear combination of pure state density matrices as well. Since the density matrix is a quadratic function of \(|\psi\rangle\), this linear combination is not a superposition of quantum states. Rather it describes a mixture of quantum states in a statistical ensemble. The normalisation is retained by choosing this mixture to be a probabilistic one:
\[\rho_{\rm mixed}=\sum_{k}p_{k}\rho^{(k)}\,\ \ p_{k}\in[0,1]\,\ \ \sum_{k}p_{k}=1. \tag{4}\]
The post-measurement ensemble of quantum states is such a mixture. The probabilistic mixture nature of \(\rho_{\rm mixed}\) makes it very useful in the analysis of open quantum systems, i.e. quantum systems that are not isolated but interact with their environments. A general \(\rho_{\rm mixed}\) for an \(n\)-dimensional quantum state is described by \(n^{2}-1\) real parameters.
\(\rho_{\rm mixed}\) is a linear interpolation of pure state density matrices \(\rho^{(k)}\). The collection of possible \(\rho_{\rm mixed}\) hence forms a _convex set_ with \(\rho^{(k)}\) on the boundary. (In a convex set, the complete linear interpolation between any two points of the set belongs to the set.) \(\rho_{\rm mixed}\) is positive, but it is not necessarily a projection operator. Its eigenvalues lie in the interval \([0,1]\), so one can write \(\rho_{\rm mixed}^{2}\preceq\rho_{\rm mixed}\). In general, a particular \(\rho_{\rm mixed}\) can be prepared by (infinitely) many different combinations of \(\rho^{(k)}\). Quantum theory provides no information
at all about the method of such a preparation. (This is analogous to the situation in classical physics, where an equilibrium state can be arrived at in many different ways, and no information about the direction of arrival survives in the description of the equilibrium state.)
### Qubit
A qubit is a quantum state in a two-dimensional Hilbert space. Starting with \(|\psi\rangle=e^{i\delta}(\cos(\theta/2)|0\rangle+e^{i\phi}\sin(\theta/2)|1\rangle)\), \(\theta\in[0,\pi]\), \(\phi\in[0,2\pi]\), the density matrix for a single pure qubit is:
\[\rho = \left(\begin{matrix}\cos^{2}(\theta/2)&e^{-i\phi}\sin(\theta/2) \cos(\theta/2)\\ e^{i\phi}\sin(\theta/2)\cos(\theta/2)&\sin^{2}(\theta/2)\end{matrix}\right) \tag{5}\] \[= \frac{1}{2}\left(\begin{matrix}1+\cos\theta&e^{-i\phi}\sin\theta \\ e^{i\phi}\sin\theta&1-\cos\theta\end{matrix}\right)\ =\ \frac{1}{2}(I+\hat{n}\cdot\vec{\sigma})\.\]
Here \(\hat{n}\) is the unit vector specifying the location \((\theta,\phi)\) on the Bloch sphere: \(\hat{n}=(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)\), and \(\vec{\sigma}\) are the Pauli matrices. The geometry is illustrated in Fig. 1. The general density matrix for a mixed state qubit lies in the interior of the Bloch sphere, specified by the spherical polar coordinates \((r,\theta,\phi)\). It has the form \(\rho_{\rm mixed}=\frac{1}{2}(I+\vec{r}\cdot\vec{\sigma})\) with \(r\in[0,1]\).
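The correspondence between the Bloch coordinates \((r,\theta,\phi)\) and the qubit density matrix of Eq. (5) can be made concrete with a few lines of linear algebra; the sketch below is illustrative and uses \(r_{i}=Tr(\rho\sigma_{i})\) to invert the map.

```python
import numpy as np

# Pauli matrices sigma_x, sigma_y, sigma_z
SIGMA = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

def bloch_to_rho(r, theta, phi):
    """Qubit density matrix rho = (I + r_vec . sigma)/2 from polar Bloch coordinates."""
    r_vec = r * np.array([np.sin(theta)*np.cos(phi),
                          np.sin(theta)*np.sin(phi),
                          np.cos(theta)])
    return 0.5 * (np.eye(2, dtype=complex) + np.einsum('i,ijk->jk', r_vec, SIGMA))

def rho_to_bloch(rho):
    """Recover the Bloch vector components r_i = Tr(rho sigma_i)."""
    return np.real([np.trace(rho @ SIGMA[i]) for i in range(3)])

# A pure state (r = 1) sits on the Bloch sphere and is a projector; r < 1 is mixed
rho_pure = bloch_to_rho(1.0, np.pi/3, 0.0)
print(np.allclose(rho_pure @ rho_pure, rho_pure))   # True
print(rho_to_bloch(bloch_to_rho(0.5, np.pi/4, 1.0)))
```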
### Reduced density matrix
Bipartite quantum systems provide a setting to illustrate many quantum puzzles, arising from the unusual properties of quantum correlations. The setting is versatile enough to tackle a variety of situations: two qubits, two quantum
Figure 1: The Bloch sphere representation of a qubit. The basis states \(|0\rangle\) and \(|1\rangle\) correspond to the north and the south poles respectively. A rotation by angle \(\theta\) around the direction \(\hat{m}\) takes the state \(|0\rangle\) to the state \(|\psi\rangle\). The state \(|\psi_{\perp}\rangle\) orthogonal to \(|\psi\rangle\) corresponds to its diametrically opposite point.
registers, a quantum system and its measurement apparatus, and a quantum system and its environment (i.e. the rest of the universe). It must be kept in mind that the correlations depend on how the whole system is divided into two parts.
Many questions about bipartite systems concern what can be learned from transforming or observing only one part. In a probabilistic framework, the calculations for such instances can be simplified by summing over all possibilities for the part that remains unobserved. Let the whole Hilbert space be \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\), transformations that act only on part \(B\) have the form \(I_{A}\otimes U_{B}\), and operators that measure properties of only part \(B\) have the structure \(I_{A}\otimes O_{B}\). On the other hand, the density matrix describing the state of the whole system, \(\rho_{AB}\), may not factorise due to correlations.
The generic density matrix for the whole system can be expanded in terms of orthonormal bases for parts \(A\) and \(B\) as: \(\rho_{AB}=\sum_{ijkl}\rho_{ik,jl}|i_{A}\rangle|j_{B}\rangle\langle k_{A}| \langle l_{B}|\). Then the reduced density matrix for the part \(B\), corresponding to a sum over all the unobserved possibilities of the part \(A\), is the partial trace of \(\rho_{AB}\) over part \(A\):
\[\rho_{B}=Tr_{A}(\rho_{AB})=\sum_{i,j,l}\rho_{ii,jl}|j_{B}\rangle\langle l_{B}|. \tag{6}\]
By construction, \(\rho_{B}\) is Hermitian, positive and \(Tr_{B}(\rho_{B})=1\). Under a local transformation of part \(B\), it evolves to \(U_{B}\rho_{B}U_{B}^{\dagger}\). The expectation value for observing a local operator on part \(B\) is:
\[\langle O_{B}\rangle=Tr_{AB}(\rho_{AB}(I_{A}\otimes O_{B}))=Tr_{B}(\rho_{B}O_ {B}). \tag{7}\]
\(\rho_{B}\) is indeed a mixed state density matrix, and it is not always a projection operator. This is a generic result--the change in the nature of the density matrix is not a dynamical process, but it is a consequence of ignoring some degrees of freedom of the whole system.
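The partial trace and Eq. (7) are easy to verify numerically. The sketch below, with illustrative dimensions and a random bipartite pure state, is one possible numpy implementation of Eq. (6):

```python
import numpy as np

dA, dB = 3, 2
rng = np.random.default_rng(0)

# Random pure state of the bipartite system A x B
psi = rng.normal(size=dA * dB) + 1j * rng.normal(size=dA * dB)
psi /= np.linalg.norm(psi)
rho_AB = np.outer(psi, psi.conj())

def partial_trace_A(rho, dA, dB):
    """Trace out part A: rho_B[j, l] = sum_i rho[(i,j), (i,l)], Eq. (6)."""
    r = rho.reshape(dA, dB, dA, dB)
    return np.einsum('ijil->jl', r)

rho_B = partial_trace_A(rho_AB, dA, dB)
assert np.isclose(np.trace(rho_B).real, 1.0)

# A local observable on part B; compare the two sides of Eq. (7)
OB = rng.normal(size=(dB, dB))
OB = OB + OB.T                                   # make it Hermitian
lhs = np.trace(rho_AB @ np.kron(np.eye(dA), OB))
rhs = np.trace(rho_B @ OB)
assert np.isclose(lhs, rhs)

# rho_B is generically mixed: its purity Tr(rho_B^2) is below 1
print(np.trace(rho_B @ rho_B).real)
```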
The off-diagonal elements of a density matrix are complex numbers in general, and they can interfere destructively when summed over different possibilities. The off-diagonal elements are called coherences, and their suppression is known as _decoherence_. When a quantum system interacts with its environment, it is impossible to keep track of the environmental degrees of freedom and hence they are summed over, which suppresses the coherences and drives the density matrix towards a diagonal form. A diagonal density matrix corresponds to a classical probability distribution, and decoherence provides a means to understand how a quantum system can reduce to a classical one. It should be kept in mind that the nature of the system-environment interaction determines what would be the appropriate diagonal basis, which is also referred to as the preferred basis.
### Gleason's theorem
The axiomatic formulation of quantum mechanics assumes a description of states and operators, and then provides a prescription for probabilistic outcomes of operator measurements. Gleason's theorem provides a powerful counter-statement that any theory obeying certain rules of probabilistic observations (which may be quantum or classical) for all its states must have a description in terms of a density matrix with specific properties. Together, they make the density matrix formalism of quantum mechanics both necessary and sufficient.
Consider the situation where independent measurement settings \(\{M_{i}\}\) for a system produce outcomes with probabilities \(\{p_{\phi}(M_{i})\}\) for the state \(\phi\). Then it is logical to assume that:
(i) Probability that no outcome is measured is zero, i.e. \(p_{\phi}(0)=0\).
(ii) Probability of all possible outcomes is one, i.e. \(p_{\phi}(\sum_{i}M_{i})=1\).
(iii) Probabilities of independent outcomes add, i.e. \(p_{\phi}(M_{i}+M_{j})=p_{\phi}(M_{i})+p_{\phi}(M_{j})\) for \(i\neq j\).
In the quantum Hilbert space, any measured operator can be expressed in terms of a complete orthonormal set of projection operators, \(O=\sum_{i}\lambda_{i}P_{i}\) with \(\sum_{i}P_{i}=I\). Then the orthogonality of the projection operators \(\{P_{i}\}\), i.e. \(P_{i}P_{j}=0\) for \(i\neq j\), allows them to be identified as the measurement settings \(\{M_{i}\}\).
In these circumstances, Gleason's theorem guarantees that there exists a unique solution in terms of a Hermitian, positive and unit-trace density matrix such that \(p_{\phi}(P_{i})=Tr(\rho(\phi)P_{i})\), for any Hilbert space of dimension greater than two. The key assumption here is that probabilities add for independent measurement outcomes. With the freedom to choose the basis directions in a Hilbert space, the theorem can be interpreted as applying either to all states for a fixed \(\{P_{i}\}\), or all possible \(\{P_{i}\}\) for a fixed state. Note that the classical situation just corresponds to a diagonal density matrix.
An exception occurs for the two-dimensional Hilbert space of a qubit, because it does not have enough orthogonal directions and the assumption (iii) reduces to \(p_{\phi}(M_{1})+p_{\phi}(M_{2})=1\). The density matrix solution remains valid, but it is not unique and (infinitely) many other solutions can also be constructed.
These properties let us look upon the density matrix as a quantum generalisation of the classical probability distribution. The off-diagonal elements, which can be complex, bring in new features for the behaviour of expectation values that are absent in the classical version. (It is worth remembering that complex numbers were invented to solve quadratic equations that did not have real solutions, and turned out to be powerful enough to obtain roots of polynomials of any order.)
### Schmidt decomposition
This is a striking result from linear algebra, which predates quantum theory. It simplifies the description of correlations between two complementary parts of a quantum system, by making a clever choice of basis.
Any pure quantum state of a bipartite system can be expressed in the form:
\[|\psi_{AB}\rangle=\sum_{i,\mu}a_{i\mu}|i_{A}\rangle|\mu_{B}\rangle\equiv\sum_{i }|i_{A}\rangle|\vec{i}_{B}\rangle\, \tag{8}\]
where \(|i_{A}\rangle\in\mathcal{H}_{A}\) and \(|\mu_{B}\rangle\in\mathcal{H}_{B}\) form complete orthonormal bases, while the vectors \(|\vec{i}_{B}\rangle\equiv\sum_{\mu}a_{i\mu}|\mu_{B}\rangle\in\mathcal{H}_{B}\) may not be either normalised or mutually orthogonal. Now choose the orthonormal basis \(\{|i_{A}\rangle\}\) such that the reduced density matrix \(\rho_{A}\) is diagonal. \(\rho_{A}\) can be also expressed as the partial trace \(Tr_{B}(\rho_{AB})\). Comparison of the two forms gives:
\[\rho_{A} = \sum_{i}p_{i}|i_{A}\rangle\langle i_{A}| \tag{9}\] \[= Tr_{B}\Big{(}\big{(}\sum_{i}|i_{A}\rangle|\vec{i}_{B}\rangle \big{)}\big{(}\sum_{j}\langle j_{A}|\langle\vec{j}_{B}|\big{)}\Big{)}=\sum_{ ij}\langle\vec{j}_{B}|\vec{i}_{B}\rangle\ |i_{A}\rangle\langle j_{A}|\.\]
Consistency with the orthonormal basis \(\{|i_{A}\rangle\}\) requires that \(\langle\vec{j}_{B}|\vec{i}_{B}\rangle=p_{i}\delta_{ij}\). Thus the vectors \(\{|\vec{i}_{B}\rangle\}\) are mutually orthogonal, and the rescaled vectors \(|i^{\prime}_{B}\rangle=p_{i}^{-1/2}|\vec{i}_{B}\rangle\) are orthonormal. Moreover, we can also express \(|\psi_{AB}\rangle=\sum_{i}p_{i}^{1/2}|i_{A}\rangle|i^{\prime}_{B}\rangle\), and have the reduced density matrix \(\rho_{B}=\sum_{i}p_{i}|i^{\prime}_{B}\rangle\langle i^{\prime}_{B}|\).
This result, which converts a bipartite quantum state from a double sum over indices \(i\) and \(\mu\) to a single sum over index \(i\), by a clever choice of basis, has many physical implications (subject to the specific choice of partition):
\(\bullet\) There are no restrictions on the dimensionalities of \(\mathcal{H}_{A}\) and \(\mathcal{H}_{B}\). The number of non-zero values of \(p_{i}\) that appear in the preceding expansions of the reduced density matrices \(\rho_{A}\) and \(\rho_{B}\) is called the Schmidt rank \(r_{S}\). Obviously, \(r_{S}\leq\min(dim(\mathcal{H}_{A}),dim(\mathcal{H}_{B}))\). When \(r_{S}=1\), the quantum state factorises between parts \(A\) and \(B\), and there are no correlations. But when \(r_{S}>1\), the quantum state does not factorise between parts \(A\) and \(B\), and such states are called _entangled_.
\(\bullet\) When \(dim(\mathcal{H}_{A})\leq dim(\mathcal{H}_{B})\), only up to \(dim(\mathcal{H}_{A})\) degrees of freedom of \(\mathcal{H}_{B}\) can be correlated with those of \(\mathcal{H}_{A}\). This is true even if \(\mathcal{H}_{B}\) has many more degrees of freedom than \(\mathcal{H}_{A}\), as is often the case when \(A\) labels the system and \(B\) its environment. Diagonalisation of \(\rho_{B}\) is needed to explicitly find these degrees of freedom, but diagonalisation of \(\rho_{A}\) is enough to specify their number. The correlations are constrained by the one-to-one correspondence between \(|i_{A}\rangle\) and \(|i^{\prime}_{B}\rangle\), and that is known as _monogamy_.
\(\bullet\) The orthonormal basis sets \(\{|i_{A}\rangle\}\) and \(\{|i^{\prime}_{B}\rangle\}\) with non-zero values of \(p_{i}\) have the same size. So they can be related by a unitary transformation (including both rotations and reflections). Also, the Schmidt decomposition is unaffected
by independent local unitary transformations on the two parts. Any transformation of the form \(U_{A}\otimes U_{B}\) merely redefines the basis sets \(\{|i_{A}\rangle\}\) and \(\{|i^{\prime}_{B}\rangle\}\).
\(\bullet\) Since any mixed state density matrix can be diagonalised as \(\rho_{A}=\sum_{i}p_{i}|i_{A}\rangle\langle i_{A}|\), it can always be extended to a pure state by adding suitable \(|i^{\prime}_{B}\rangle\). Such an extension of a mixed state to a pure state is not unique, but the required number of \(|i^{\prime}_{B}\rangle\) does not exceed \(dim(\mathcal{H}_{A})\), and so the pure state dimension does not exceed \((dim(\mathcal{H}_{A}))^{2}\). This concept turns out to be very useful in construction of error-correction codes for bounded error quantum computation that eliminate undesired system-environment correlations.
\(\bullet\) We now have the framework to start with a mixed quantum state, extend it to a pure quantum state, evolve it through unitary transformations, and then perform projective measurements as well as sum over unobserved degrees of freedom to get back a mixed quantum state. These steps can be merged to construct a direct quantum evolution of the initial mixed state to the final one. Such a merged description is called a _superoperator_ or a _quantum channel_ or a _completely positive trace preserving_ (CPTP) map. In it, the quantum state is not a ray, its evolution is not unitary, and its measurements are not projective. Such a generalised framework is useful in the study of open quantum systems and decoherence. The infinitesimal time evolution form of the CPTP map, with some additional assumptions, gives _master equations_.
\(\bullet\) The correlations between the two parts of a pure quantum state can be quantified in terms of the entropy:
\[S(\{p_{i}\})=-\sum_{i}p_{i}\log(p_{i})=-Tr(\rho_{A}\log(\rho_{A}))=-Tr(\rho_{B }\log(\rho_{B})). \tag{10}\]
Noting that \(S(|\psi_{AB}\rangle)=-Tr(\rho_{AB}\log(\rho_{AB}))=0\) for the pure state \(|\psi_{AB}\rangle\), \(S(\{p_{i}\})\) is called the _entropy of formation_ of the mixed state. \(S(\{p_{i}\})\) is maximised when all \(p_{i}\) are equal, \(S_{\max}=\log(r_{S})\). That corresponds to equipartition or the microcanonical ensemble of statistical mechanics.
\(\bullet\) For a system of two qubits, the Schmidt decomposition is \(|\psi_{AB}\rangle=\sqrt{p}|i_{A}\rangle|i^{\prime}_{B}\rangle+\sqrt{1-p}|j_{A}\rangle|j^{\prime}_{B}\rangle\), with \(p\in[0,1]\) and \(i\neq j\). In this case, the entropy \(S(p)\) is a monotonically increasing function of \(p\) for \(p\in[0,\frac{1}{2}]\), and can be used to compare correlations between the two qubits, i.e. specify whether one two-qubit system is more or less correlated than another one. The choice \(p=\frac{1}{2}\) gives the maximally entangled Bell states, which form a complete orthonormal basis in the four-dimensional Hilbert space. With the one-to-one correspondence between \(|i_{A}\rangle\) and \(|i^{\prime}_{B}\rangle\), they are very useful in construction of quantum cryptographic protocols.
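As a numerical illustration of the Schmidt decomposition and the entropy of formation, the singular value decomposition of the coefficient matrix \(a_{i\mu}\) directly yields the Schmidt coefficients \(p_{i}=s_{i}^{2}\). A minimal numpy sketch (the random state and the dimensions are illustrative):

```python
import numpy as np

dA, dB = 2, 5
rng = np.random.default_rng(1)

# Random bipartite pure state written as a dA x dB coefficient matrix a_{i mu}
a = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
a /= np.linalg.norm(a)

# The Schmidt decomposition is the singular value decomposition of a_{i mu}:
# |psi> = sum_i s_i |i_A>|i'_B>, with p_i = s_i^2
u, s, vh = np.linalg.svd(a, full_matrices=False)
p = s**2
assert np.isclose(p.sum(), 1.0)
print("Schmidt rank:", np.sum(p > 1e-12))        # <= min(dA, dB)

# Entropy of formation, Eq. (10); the reduced density matrix gives the same value
S = -np.sum(p[p > 1e-12] * np.log(p[p > 1e-12]))
rho_A = a @ a.conj().T
evals = np.linalg.eigvalsh(rho_A)
S_from_rhoA = -np.sum(evals[evals > 1e-12] * np.log(evals[evals > 1e-12]))
assert np.isclose(S, S_from_rhoA)
```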
### Superoperator evolution
Consider the generic evolution of a fully specified quantum state \(\rho_{A}\), which is initially uncoupled from its environment. Since \(\rho_{A}\) is a linear combination of pure states in the ensemble interpretation, it suffices to consider action of a generic \(U_{AB}\) on the quantum state \(|\psi_{A}\rangle\otimes|0_{B}\rangle\). Then, in terms of a complete
set of basis states \(\{|\mu_{B}\rangle\}\),
\[U_{AB}(|\psi_{A}\rangle\!\otimes\!|0_{B}\rangle)=\sum_{\mu}|\mu_{B}\rangle\langle \mu_{B}|U_{AB}|\psi_{A}\rangle\!\otimes\!|0_{B}\rangle=\sum_{\mu}M_{\mu}|\psi_{ A}\rangle\!\otimes\!|\mu_{B}\rangle\, \tag{11}\]
with \(M_{\mu}=\langle\mu_{B}|U_{AB}|0_{B}\rangle\). Unitarity of \(U_{AB}\) for any \(|\psi_{A}\rangle\) implies that \(\sum_{\mu}M_{\mu}^{\dagger}M_{\mu}=I\). The corresponding density matrix evolution is:
\[\rho^{\prime}_{A}=Tr_{B}(U_{AB}\ \rho_{AB}\ U_{AB}^{\dagger})=\sum_{\mu}M_{ \mu}\ \rho_{A}\ M_{\mu}^{\dagger}\, \tag{12}\]
which maintains Hermiticity, trace and positivity.
This operator-sum representation (or Kraus representation) is extremely useful in analysis of open quantum systems. In particular:
\(\bullet\) Both unitary evolution and projective measurement are its special cases. The former has only one term in the sum, while the latter replaces \(M_{\mu}\) by the projection operators \(P_{\mu}\).
\(\bullet\) Generalised measurement is the reduction to \(\mathcal{H}_{A}\) of a projective measurement in \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\). It defines a positive operator valued measure (POVM) with probabilities \(p_{a}=Tr(\rho\Pi_{a})\), \(\sum_{a}\Pi_{a}=I\). The operators \(\Pi_{a}\) need not be normalised or orthogonal, but they can be expressed as \(\Pi_{a}=\lambda_{a}|a\rangle\langle a|\), \(\lambda_{a}\geq 0\). The resultant density matrix evolution is also a particular case of Eq.(12):
\[\rho\longrightarrow\rho^{\prime}=\sum_{a}\sqrt{\Pi_{a}}\ \rho\ \sqrt{\Pi_{a}}. \tag{13}\]
\(\bullet\) The operator-sum representation is not unique. The Kraus operators \(M_{\mu}\) can be traded for \(N_{\mu}=\sum_{\nu}U_{\mu\nu}M_{\nu}\) by a unitary change of basis. The number of independent Kraus operators, however, cannot exceed \(((dim(\mathcal{H}_{A}))^{2}-1)((dim(\mathcal{H}_{B}))^{2}-1)\), in terms of the number of elements of \(\rho_{A}\) and \(\rho_{B}\).
\(\bullet\) The superoperator evolution is reversible only when it is unitary. Otherwise, it defines a semigroup, with the decoherence providing an arrow of time. When \(\sum_{\mu}M_{\mu}M_{\mu}^{\dagger}=I\) as well, the quantum channel is called _unital_, and the entropy of the system increases monotonically.
\(\bullet\) The linearity of the superoperator evolution can be justified by the ensemble interpretation. But a stronger property than positivity, called complete positivity, is required to make it a legitimate description in the whole universe. This property states that the map must remain positive under any extension of \(\mathcal{H}_{A}\) to \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\) (not just for the specific \(\mathcal{H}_{B}\) represented by \(|0_{B}\rangle\) above). An example of a map that is positive but not completely positive is \(\rho\rightarrow\rho^{T}\).
\(\bullet\) The Kraus representation theorem shows that any superoperator evolution preserving linearity, Hermiticity, trace and complete positivity, has the form given by Eq.(12), and the only freedom is a unitary change of basis for \(M_{\mu}\). Importantly, for the purpose of describing the evolution of the density matrix, there is no loss of generality in assuming that the initial quantum state is uncoupled from its environment (even when it may not be true in reality).
\(\bullet\) The operator-sum representation can be converted to a differential local time evolution with the Markovian approximation. This approximation amounts to assuming that the information that leaks to the environment does not return to the system during the time scales considered, or equivalently the equilibration time scale of the environment is much shorter than the evolution time scale of the system being considered. Then considering evolution for time \(dt\), one can write
\[M_{0}=I+(-iH+K)dt\,\ \ M_{\mu\neq 0}=L_{\mu}\sqrt{dt}. \tag{14}\]
The Hamiltonian in \(M_{0}\) is chosen to agree with Eq.(3), and the Kraus operator completeness relation fixes \(K=-\frac{1}{2}\sum_{\mu>0}L_{\mu}^{\dagger}L_{\mu}\). The resultant evolution is the Gorini-Kossakowski-Sudarshan-Lindblad master equation:
\[i\frac{d}{dt}\rho(t)=[H(t),\rho(t)]+i\sum_{\mu>0}\left(L_{\mu}\rho L_{\mu}^{ \dagger}-\frac{1}{2}L_{\mu}^{\dagger}L_{\mu}\rho-\frac{1}{2}\rho L_{\mu}^{ \dagger}L_{\mu}\right)\,. \tag{15}\]
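A minimal numpy sketch of the operator-sum evolution of Eq. (12). The amplitude-damping Kraus operators used here are a standard textbook choice introduced purely for illustration, not a model discussed in the text:

```python
import numpy as np

def apply_channel(rho, kraus_ops):
    """Operator-sum evolution, Eq. (12): rho' = sum_mu M_mu rho M_mu^dagger."""
    return sum(M @ rho @ M.conj().T for M in kraus_ops)

# Illustrative choice: amplitude-damping Kraus operators with decay probability gamma
gamma = 0.3
M0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
M1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)
kraus = [M0, M1]

# Completeness relation sum_mu M_mu^dagger M_mu = I
assert np.allclose(sum(M.conj().T @ M for M in kraus), np.eye(2))

# Start from the pure state |+> and apply the channel repeatedly
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())
for _ in range(5):
    rho = apply_channel(rho, kraus)
    assert np.isclose(np.trace(rho).real, 1.0)            # trace preserving
    assert np.all(np.linalg.eigvalsh(rho) >= -1e-12)      # positive

print(np.round(rho, 3))   # coherences decay, population moves towards |0>
```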
## 2 Wigner Function
The Wigner function is just the density matrix in a representation where one relative index is Fourier transformed to its conjugate variable [4]. It therefore encodes the complete information about a quantum system. It is real by construction. Since it is defined in the symplectic phase space, its domain is quantised in units of the Planck constant.
### Infinite dimensional systems
The Wigner function for a continuous one-dimensional quantum state is:
\[W(x,p) = \frac{1}{2\pi\hbar}\int_{-\infty}^{\infty}dy\ \psi^{*}(x-\frac{y}{2})e^{ipy/ \hbar}\psi(x+\frac{y}{2}) \tag{16}\] \[= \frac{1}{2\pi\hbar}\int_{-\infty}^{\infty}dy\ \rho(x-\frac{y}{2},x+\frac{y}{2})e^{ipy/ \hbar}\,\]
\[\rho(x-\frac{y}{2},x+\frac{y}{2})=\int_{-\infty}^{\infty}dp\ W(x,p)e^{-ipy/ \hbar}. \tag{17}\]
It can be negative, but its marginals are non-negative.
\[\int_{-\infty}^{\infty}dp\ W(x,p)=|\psi(x)|^{2}=\rho(x,x)\,\ \ \ \int_{-\infty}^{\infty}dx\ W(x,p)=|\tilde{\psi}(p)|^{2}. \tag{18}\]
Its smeared values over a phase space volume element \(\Delta x\Delta p=2\pi\hbar\) (associated with counting of states in quantum statistics) are also non-negative. The
normalisation condition is:
\[Tr(\rho)=1\quad\longleftrightarrow\quad\int_{-\infty}^{\infty}dx\ dp\ W(x,p)=1. \tag{19}\]
The expectation value of a Hermitian operator \(O\) is obtained as:
\[\langle O\rangle\equiv Tr(\rho O) = \int dx\ dy\ \rho(x-\frac{y}{2},x+\frac{y}{2})\ O(x+\frac{y}{2},x- \frac{y}{2}) \tag{20}\] \[= \int dx\ dy\int dp\ W(x,p)e^{-ipy/\hbar}\int dq\ O(x,q)e^{iqy/\hbar}\] \[= 2\pi\hbar\int dx\int dp\ W(x,p)\int dq\ O(x,q)\ \delta(p-q)\] \[= 2\pi\hbar\int dx\ dp\ W(x,p)\ O(x,p)\.\]
It should be noted that \(O(x,p)\) implicitly defined here is Hermitian, and its normalisation is fixed by the convention \(\langle I\rangle=1\).
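The continuous Wigner function of Eq. (16) and its marginals, Eq. (18), can be checked numerically on a grid. The numpy sketch below uses the harmonic oscillator ground state purely as an illustrative example; in these units its Wigner function is the Gaussian \(e^{-x^{2}-p^{2}}/\pi\).

```python
import numpy as np

hbar = 1.0
x = np.linspace(-5, 5, 61)
p = np.linspace(-5, 5, 61)
y = np.linspace(-8, 8, 321)
dy, dp = y[1] - y[0], p[1] - p[0]

def psi(q):
    """Harmonic oscillator ground state (units hbar = m = omega = 1), used only as an example."""
    return np.pi**(-0.25) * np.exp(-q**2 / 2)

# Discretised Eq. (16): W(x,p) = (1/2 pi hbar) int dy psi*(x-y/2) e^{ipy/hbar} psi(x+y/2)
X, P, Y = np.meshgrid(x, p, y, indexing='ij')
integrand = psi(X - Y / 2) * np.exp(1j * P * Y / hbar) * psi(X + Y / 2)
W = (integrand.sum(axis=2) * dy / (2 * np.pi * hbar)).real

# For this state the exact answer is W(x,p) = exp(-x^2 - p^2)/pi
W_exact = np.exp(-x[:, None]**2 - p[None, :]**2) / np.pi
assert np.allclose(W, W_exact, atol=1e-6)

# Marginal of Eq. (18): integrating over p recovers |psi(x)|^2
assert np.allclose(W.sum(axis=1) * dp, np.abs(psi(x))**2, atol=1e-6)

# Normalisation of Eq. (19)
dx = x[1] - x[0]
assert np.isclose(W.sum() * dx * dp, 1.0, atol=1e-6)
```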
### Finite dimensional systems
For a finite dimensional quantum system with \(d\) degrees of freedom, the odd and even values of \(d\) need to be handled separately. When \(d\) is odd,
\[W(n,k)=\frac{1}{d}\sum_{m=0}^{d-1}\rho_{n-m,n+m}e^{4\pi ikm/d}\, \tag{21}\]
is a valid Wigner function [5; 6]. Here the indices are defined modulo \(d\), i.e. \(n,k,m\in Z_{d}=\{0,1,...,d-1\}\). The odd value of \(d\) allows all independent indices to be covered in two cycles of \(Z_{d}\).
This definition does not work for even \(d\). If the index shift is made one sided, the Wigner function does not remain real. So an alternative construction is needed, incorporating a "quantum square-root". Since any integer is an odd number times a power of two, figuring out the Wigner function for \(d=2\) (i.e. a qubit) is sufficient to reach any \(d\) using tensor products.
For \(d=2\), the Wigner function can be defined using eigenvalues of \(\sigma_{z}\) and \(\sigma_{x}\) as the two conjugate labels (replacing \(x\) and \(p\)). \(\sigma_{z}\) and \(\sigma_{x}\) are related by the Hadamard operator, \(\sigma_{z}=H\sigma_{x}H\), which gives the discrete Fourier transformation in \(d=2\). For instance, one can call \(W(+,+)\) the weight for the spin being up along both \(z\)-axis and \(x\)-axis. The Wigner function for a qubit can be constructed as a map from the Bloch sphere representation, \(\rho=(I+\vec{n}\cdot\vec{\sigma})/2\), with the replacements:
\[I\ \rightarrow\ \frac{1}{2}\begin{pmatrix}1&1\\ 1&1\end{pmatrix}\,\quad\sigma_{x}\ \rightarrow\ \frac{1}{2}\begin{pmatrix}1&-1\\ 1&-1\end{pmatrix}\,\]
\[\sigma_{y}\ \rightarrow\ \ \pm\frac{1}{2}\begin{pmatrix}1&-1\\ -1&1\end{pmatrix}\;,\quad\sigma_{z}\ \rightarrow\ \frac{1}{2}\begin{pmatrix}1&1\\ -1&-1\end{pmatrix}\;. \tag{22}\]
The ambiguity in the sign for \(\sigma_{y}\) is related to the charge conjugation symmetry of the \(SU(2)\) group algebra, \(\vec{\sigma}\leftrightarrow-\vec{\sigma^{*}}\), and both choices should be checked for consistency.
The normalisation condition \(Tr(\rho)=1\) becomes \(\sum_{ij}W(i,j)=1\), while \(\rho^{2}\preceq\rho\) gives \(\sum_{ij}W(i,j)^{2}\leq 1/2\). A simple set of qubit Wigner function weights, for a pure state allowing negative values, is \((0.6,0.3,0.2,-0.1)\)[7].
The expectation values can be expressed as \(\langle O\rangle=\sum_{ij}W(i,j)\ O(i,j)\), where the operator normalisation, fixed by imposing \(\langle I\rangle=1\), is different from that for the Wigner function. The qubit operators map as:
\[I\ \rightarrow\ \ \begin{pmatrix}1&1\\ 1&1\end{pmatrix}\;,\ \vec{\sigma}\cdot\vec{n}\ \rightarrow\ \ \begin{pmatrix}n_{x}\pm n_{y}+n_{z}&-n_{x}\mp n_{y}+n_{z}\\ n_{x}\mp n_{y}-n_{z}&-n_{x}\pm n_{y}-n_{z}\end{pmatrix}\;. \tag{23}\]
The marginals giving qubit observables \(\langle I\pm\sigma_{i}\rangle\) are all non-negative.
The Wigner function is non-negative within the octahedron bounded by the planes \(\pm x\pm y\pm z=1\) embedded in the Bloch sphere (taking into account both the signs of \(\sigma_{y}\)), as illustrated in Fig. 2. The directions \(\hat{n}_{j}\propto(\pm 1,\pm 1,\pm 1)\), orthogonal to the faces of the octahedron, give the maximum negativity to the Wigner function.
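A small numpy sketch of the qubit Wigner function construction of Eq. (22); the sign choice for \(\sigma_{y}\) is handled as an explicit parameter and the Bloch vectors are illustrative. A vertex of the octahedron gives non-negative weights, while a direction through the centre of a face produces a negative weight for one of the two sign choices.

```python
import numpy as np

def bloch_to_wigner(n, sign=+1):
    """Qubit Wigner weights W(i, j) for rho = (I + n.sigma)/2, using the replacements of Eq. (22).
    'sign' selects one of the two allowed choices for the sigma_y map."""
    W_I = 0.5 * np.array([[1.0,  1.0], [ 1.0,  1.0]])
    W_x = 0.5 * np.array([[1.0, -1.0], [ 1.0, -1.0]])
    W_y = sign * 0.5 * np.array([[1.0, -1.0], [-1.0,  1.0]])
    W_z = 0.5 * np.array([[1.0,  1.0], [-1.0, -1.0]])
    return 0.5 * (W_I + n[0] * W_x + n[1] * W_y + n[2] * W_z)

# An octahedron vertex (the state |0>): non-negative, normalised, purity bound saturated
W0 = bloch_to_wigner([0.0, 0.0, 1.0])
assert np.isclose(W0.sum(), 1.0) and np.all(W0 >= 0)
assert np.sum(W0**2) <= 0.5 + 1e-12            # sum_ij W(i,j)^2 <= 1/2

# A direction through the centre of an octahedron face: one of the two
# sigma_y sign choices produces a negative weight.
n_face = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
W_plus = bloch_to_wigner(n_face, +1)
W_minus = bloch_to_wigner(n_face, -1)
print(np.round(W_plus, 3), np.round(W_minus, 3), sep='\n')
assert min(W_plus.min(), W_minus.min()) < 0
```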
Wigner functions for multi-qubit states are easily constructed using tensor products. For example, the Wigner function for the two-qubit singlet state becomes:
\[W_{\text{singlet}}=\frac{1}{8}\begin{pmatrix}-1&1&1&-1\\ 1&1&1&1\\ 1&1&1&1\\ -1&1&1&-1\end{pmatrix}\;. \tag{24}\]
Figure 2: The region of the Bloch sphere corresponding to non-negative Wigner function for the qubit is the inscribed octahedron. Its vertices are a unit distance away from the origin along the coordinate axes.
Its negative components are enough to give \(\langle(\vec{\sigma}\cdot\vec{n}_{1})(\vec{\sigma}\cdot\vec{n}_{2})\rangle=-\vec{ n}_{1}\cdot\vec{n}_{2}\), and violate the Bell inequality.
### Quantum features
Bell inequalities for experimentally observable correlations are derived assuming statistical probability distributions for arbitrary local hidden variables [8]. Their experimental violation has led to many discussions about the interpretation of quantum theory. The standard formulation of quantum theory bypasses them, without introducing any new variables, when the statistical probability distributions are replaced by the density matrix. Quantum density matrices bring in complex weights in general, Wigner functions make the weights real by a specific choice of representation, but the possibility of the weights being non-probabilistic (e.g. negative) remains. That is the sense in which Wigner functions are different from classical phase space distributions.
For a quantum algorithm, Wigner functions can be associated with the initial product state, the logic gate operations, and the final local measurements. The outcome probabilities of any quantum evolution can then be expressed as a phase space probability distribution, which is a product of these Wigner function factors summed over all evolution time steps \(t\) and all quantum state components \(n\). When all the Wigner function factors are non-negative, the evolution describes a classical stochastic process, which can be efficiently sampled with an effort polynomial in \(n\) and \(t\)[9]. This result is robust with respect to sampling errors and bounded approximations. It generalises the Gottesman-Knill theorem, which states that all Clifford group quantum operations can be perfectly simulated in polynomial time on a probabilistic classical computer [10].
Clifford group operations are those that transform the Pauli group \(\{I,\sigma_{x},\sigma_{y},\sigma_{z}\}^{\otimes n}\) within itself, up to phase factors \(\{\pm 1,\pm i\}\). For a single qubit, we can identify them with the symmetry operations of the octahedron depicted in Fig. 2, which transform the non-negative Wigner function region to itself:
(i) Rotations by angle \(\pi\) about an axis through diametrically opposite vertices flip signs of the transverse components. These are the Pauli matrix transformations, \(\sigma_{j}\rightarrow\sigma_{i}\sigma_{j}\sigma_{i}=-\sigma_{j}\) for \(i\neq j\).
(ii) Rotations by angles \(\pm\frac{\pi}{2}\) about an axis through diametrically opposite vertices interchange the transverse components (up to a sign). These are the square-root of Pauli matrix transformations, \(\sqrt{\sigma_{i}}\sigma_{j}(\sqrt{\sigma_{i}})^{\dagger}\), and \((\sqrt{\sigma_{i}})^{\dagger}\sigma_{j}\sqrt{\sigma_{i}}\).
(iii) Rotations by angle \(\pi\) about an axis through centres of diametrically opposite edges interchange the edge end-points and flip sign of the third component, \(\frac{1}{\sqrt{2}}(\sigma_{i}+\sigma_{j})\sigma_{k}\frac{1}{\sqrt{2}}(\sigma_ {i}+\sigma_{j})\). The Hadamard transformation is of this type.
(iv) Rotations by angles \(\pm\frac{2\pi}{3}\) about an axis through centres of diametrically opposite faces cyclically permute the Pauli matrix labels of the coordinates.
(v) The inversion operation flips signs of all three coordinates, and corresponds to the charge conjugation symmetry of the \(SU(2)\) group algebra.
Quantum algorithms need something beyond these Clifford group operations to beat their classical counterparts, which is often achieved by including the non-Clifford \(\sqrt[4]{\sigma_{z}}\) logic gate.
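The defining property of Clifford operations, that they map the Pauli group to itself up to a phase, is easy to verify numerically. A minimal numpy sketch (the gate names are the standard ones, used here only for illustration):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, sx, sy, sz]

def in_pauli_group(M):
    """True if M equals a Pauli matrix up to a phase in {+-1, +-i}."""
    return any(np.allclose(M, ph * P) for P in paulis for ph in (1, -1, 1j, -1j))

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard, type (iii)
S = np.diag([1, 1j])                                          # square root of sigma_z, type (ii)
T = np.diag([1, np.exp(1j * np.pi / 4)])                      # fourth root of sigma_z

# Clifford gates map every Pauli matrix back into the Pauli group ...
assert all(in_pauli_group(U @ P @ U.conj().T) for U in (H, S) for P in paulis)
# ... while the non-Clifford fourth root of sigma_z does not (it takes sigma_x out of the group).
assert not in_pauli_group(T @ sx @ T.conj().T)
```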
## 3 Quantum Chaos
In classical dynamics, chaos is characterised as rapid divergence of evolution trajectories that are infinitesimally separated to begin with. Such a divergence makes long term predictions of a chaotic system unreliable, when the initial data has limited precision, and the rate of divergence is specified in terms of the Lyapunov exponents.
A similar description is desirable for quantum systems, to identify whether they are chaotic or not. Consider two nearby quantum states \(|\psi\rangle\) and \(|\psi^{\prime}\rangle\). Their geodesic separation is specified in terms of the overlap \(\langle\psi|\psi^{\prime}\rangle\), which is invariant under any unitary evolution \(|\psi\rangle\to U|\psi\rangle=e^{-iHt}|\psi\rangle\), and so cannot be used to identify divergent evolution trajectories. In this context, it has been pointed out that the quantum state should be treated not as a single point in the Hilbert space, but as an analog of the phase space distribution of classical systems [11; 12]. The density matrix, with a symplectic structure describable using pairs of canonically conjugate variables, is the natural setting for such a distribution. As per the Liouville theorem of canonical classical dynamics, the phase space density is invariant under evolution. But the quantum state distribution, spread over an elemental area measured in units of the Planck constant \(h\), can simultaneously stretch in one direction and contract in the conjugate direction; such a behaviour would characterise chaotic dynamics.
### Quantum evolution trajectories
Let the unitary operator \(V(t)\) denote the separation between the two evolution trajectories, i.e. \(|\psi^{\prime}(t)\rangle=V(t)|\psi(t)\rangle\). Then,
\[V(t)=e^{-iHt}\ V(0)\ e^{iHt}. \tag{25}\]
Parametrising it in terms of a generator direction, \(V(t)=e^{i\epsilon O(t)}\), in the linear regime the evolution obeys:
\[O(t)=e^{-iHt}\ O(0)\ e^{iHt}. \tag{26}\]
The corresponding differential evolution equation is:
\[\frac{d}{dt}O(t)=-i[H,O(t)]. \tag{27}\]
This equation has the same form as the one obeyed by the density matrix in the Schrodinger picture, Eq.(3), and that is not an accident. (In the Heisenberg picture, where the operators evolve while the states are held fixed, there is a sign flip in the evolution equations.) Given a Hamiltonian, the density matrix
distribution may evolve to expand along some directions and contract along some others, and \(O(t)\) does the same.
The differential volume element for the symplectic structure of the density matrix is \(dx\wedge dp\) for one-dimensional systems, and \(d(\cos\theta)\wedge d\phi\) for the Bloch sphere. It is quantised in units of \(2\pi\) in the convention where \(\hbar=1\). Quantum measurement limits the information that can be obtained from a symplectic structure, since only one variable in each conjugate pair of variables can be perfectly measured. So all the elements of the density matrix are not simultaneously measurable; only specific projections (e.g. the marginal distributions) are. The evolution of these projections can be analysed to detect chaos.
For a pure state in an \(n\)-dimensional Hilbert space, the density matrix is parametrised by \(2n-2\) real parameters. The maximum number of simultaneously measurable parameters is given by the commuting Cartan subalgebra of \(SU(n)\) with \(n-1\) generators (which is half of \(2n-2\)). So just like classical chaos is defined by the behaviour of the evolution trajectories in the coordinate space, after projecting out the momenta that form the other half of the phase space, quantum chaos can be defined by the trajectories of the Cartan generators. These Cartan generators are fixed by the measurement operators, and may not commute with the evolution Hamiltonian. (When the measurement operators commute with the Hamiltonian, there is no evolution of the observables and no chaos.) In a basis where the Cartan generators are diagonal, the off-diagonal terms of the Hamiltonian produce transitions and the density matrix distribution evolves. The transition directions correspond to raising/lowering operators, and are specified by variables conjugate to the diagonal ones. The linear response analysis of evolution along specific generator directions, within the overall evolution that is unitary, can therefore be used to identify chaos and the Lyapunov exponents.
In this setting, quantum chaos is fully described by the physical interplay between the measurement basis and the evolution Hamiltonian. The intrinsic metric of the density matrix space is sufficient for this purpose, and no other distance measure is needed. Note that entanglement is also described by the off-diagonal components of the density matrix in the basis defining the bipartition, and so the same analysis can be extended to the evolution of entanglement too.
### Well-known examples
The inverted harmonic oscillator provides a simple illustration of the preceding framework. The fundamental commutator is \([x,p]=i\), and we can choose units such that
\[H=-\frac{1}{2}x^{2}+\frac{1}{2}p^{2}\,\ \ [H,x]=-ip\,\ \ [H,p]=-ix. \tag{28}\]
Perturbations in the initial state evolve according to:
\[\frac{dx}{dt}=-p\,\ \ \frac{dp}{dt}=-x\, \tag{29}\]
whose solutions are hyperbolic functions with the Lyapunov exponents \(\pm 1\) (in contrast to the trigonometric function solutions of the normal harmonic oscillator with zero Lyapunov exponents).
In the phase space, the evolution matrix for the vector \(\binom{x}{p}\) is \(\left(\begin{matrix}0&-1\\ -1&0\end{matrix}\right)\). The absolute value of its determinant is 1, which keeps the phase space density constant as it must. The Lyapunov exponents are the real parts of the eigenvalues of this matrix, and the corresponding eigenvectors give the expanding/contracting/neutral evolution directions. The Hamiltonian is a squeezing operator (\(H\propto(a^{\dagger 2}+a^{2})\) in terms of the creation/annihilation operators), and the evolution exponentially distorts the phase space distribution. It should be noted in this case that the exponential separation of initially close trajectories results from the local maximum in the potential, and is not chaotic or random.
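A quick numerical check of these statements (numpy and scipy assumed): the eigenvalues of the linearised evolution matrix give the Lyapunov exponents \(\pm 1\), and the finite-time map \(e^{Mt}\) has unit determinant, consistent with a constant phase space density.

```python
import numpy as np
from scipy.linalg import expm

# Linearised phase-space evolution d/dt (x, p)^T = M (x, p)^T, from Eq. (29)
M = np.array([[0.0, -1.0],
              [-1.0,  0.0]])

print(np.sort(np.linalg.eigvals(M).real))   # Lyapunov exponents [-1, +1]

# The finite-time map exp(M t) has unit determinant, so phase-space volume is preserved.
t = 2.0
print(np.linalg.det(expm(M * t)))           # ~= 1
```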
A more relevant example of chaos is provided by the kicked top model, defined by the Hamiltonian [13]:
\[H=\frac{\kappa}{2J\tau}J_{z}^{2}+pJ_{y}\sum_{n=-\infty}^{\infty}\delta(t-n\tau ). \tag{30}\]
Here periodic kicks at time interval \(\tau\) rotate the state by angle \(p\) about the \(y\)-axis, and \(\kappa\) is the chaoticity parameter that twists the state distribution around the \(z\)-axis between the kicks. The Floquet map evolution from kick to kick is given by the unitary operator:
\[U(\tau)=\exp(-i\frac{\kappa}{2J}J_{z}^{2})\ \exp(-ipJ_{y})\, \tag{31}\]
and the evolution can be represented on the Bloch sphere.
For angular momentum \(J\), there are \(2J+1\) quantum eigenstates smeared over the solid angle \(4\pi\). The \(J\rightarrow\infty\) limit gives the classical evolution of a point \((\theta,\phi)\) on the Bloch sphere. The transition between classical and quantum dynamics can be studied by varying \(J\). For \(J=\frac{1}{2}\), the Pauli matrix algebra only allows Hamiltonians of the form \(H=b\ \hat{n}\cdot\sigma\), which produce only periodic evolution, i.e. precession of the quantum state around the direction \(\hat{n}\). (There are only two eigenstates, each smeared over a solid angle \(2\pi\) of the Bloch sphere. Such a large smearing eliminates any possibility of chaos.) For larger \(J=1,\frac{3}{2},2,\ldots\), the \(J_{z}^{2}\) term in the Hamiltonian contributes to the dynamics, and that is essential for generating chaos.
The connection between classical and quantum state evolution can be conveniently described using the coherent states \(\{|\Omega\rangle\}\), which are obtained by rotating the fully symmetric highest weight state, \(|J,J\rangle=|\frac{1}{2},\frac{1}{2}\rangle^{\otimes 2J}\):
\[|\Omega\rangle=R(\Omega)|J,J\rangle\,\ \ R(\Omega)=\exp(i \theta(J_{x}\sin\phi-J_{y}\cos\phi))\,\] \[\langle\Omega|\vec{J}|\Omega\rangle=J(\sin\theta\cos\phi\ \hat{x},\sin\theta\sin\phi\ \hat{y},\cos\theta\ \hat{z}). \tag{32}\]
In the qubit notation, \(\vec{J}=\frac{1}{2}\sum_{i=1}^{2J}\vec{\sigma}^{(i)}\),
\[|\Omega\rangle=(\cos\frac{\theta}{2}|0\rangle+e^{i\phi}\sin\frac{\theta}{2}|1 \rangle)^{\otimes 2J}\, \tag{33}\]
and the Floquet operator of Eq.(31) can be easily applied to \(|\Omega\rangle\) using one- and two-qubit logic gates.
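For illustration, the Floquet map of Eq. (31) can also be applied directly in the \((2J+1)\)-dimensional angular momentum representation, without the qubit-gate construction. A minimal numpy/scipy sketch (the parameter values are illustrative):

```python
import numpy as np
from scipy.linalg import expm

def spin_ops(J):
    """Angular momentum matrices J_x, J_y, J_z in the |J, m> basis (m = J, ..., -J)."""
    m = np.arange(J, -J - 1, -1)
    Jz = np.diag(m).astype(complex)
    # raising operator J_+ connects |J, m> to |J, m+1>
    cp = np.sqrt(J * (J + 1) - m[1:] * (m[1:] + 1))
    Jp = np.diag(cp, k=1).astype(complex)
    Jx = (Jp + Jp.conj().T) / 2
    Jy = (Jp - Jp.conj().T) / 2j
    return Jx, Jy, Jz

def floquet(J, kappa, p):
    """One kick-to-kick step of Eq. (31)."""
    Jx, Jy, Jz = spin_ops(J)
    return expm(-1j * kappa / (2 * J) * (Jz @ Jz)) @ expm(-1j * p * Jy)

def coherent_state(J, theta, phi):
    """Spin coherent state |Omega> = R(Omega)|J, J>, Eq. (32)."""
    Jx, Jy, Jz = spin_ops(J)
    ket_JJ = np.zeros(int(2 * J + 1), dtype=complex)
    ket_JJ[0] = 1.0
    R = expm(1j * theta * (Jx * np.sin(phi) - Jy * np.cos(phi)))
    return R @ ket_JJ

J, kappa, p = 3, 3.0, np.pi / 2
U = floquet(J, kappa, p)
psi = coherent_state(J, 1.0, 0.5)
Jx, Jy, Jz = spin_ops(J)
for step in range(5):
    psi = U @ psi
    # the expectation value of J_z / J traces the evolution on the Bloch sphere
    print(step + 1, np.round((psi.conj() @ Jz @ psi).real / J, 3))
```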
The stereographic projection of the Bloch sphere onto the complex plane helps in understanding the evolution dynamics. With \(z=\tan\frac{\theta}{2}e^{i\phi}\), the angular momentum eigenstates are a set of monomials built on the lowest weight state, \(\langle z|J,J_{z}\rangle\propto z^{J+J_{z}}\), and the group generators are:
\[J_{x}=\frac{1}{2}(z^{2}\partial_{z}-2Jz-\partial_{z})\,\ \ J_{y}=\frac{1}{2i}(z^{2} \partial_{z}-2Jz+\partial_{z})\,\ \ J_{z}=z\partial_{z}-J. \tag{34}\]
The quantum state evolution is specified by:
\[\frac{d}{dt}z=-i[H,z]=i\frac{\kappa}{2J\tau}(2J-1)z-\frac{p}{2}(z^{2}+1)\sum_{ n=-\infty}^{\infty}\delta(t-n\tau). \tag{35}\]
In the Floquet evolution, the term proportional to \(\kappa\) gives uniform rotation, \(z\sim\exp(i\frac{\kappa(2J-1)t}{2J\tau})\), as in case of the harmonic oscillator. The term proportional to \(p\) produces phase-dependent radial jumps, \(\Delta(\tanh^{-1}z)\sim-\frac{p}{2}\). Combined together, they produce a spiral evolution of \(z\), and can lead to diverging evolution trajectories. Chaos requires contribution from both the terms, and it is absent when \(\kappa p(2J-1)=0\).
### Tracking chaos using quantum machine learning
The kicked top model provides a useful strategy to tackle the classification problem in supervised quantum machine learning. The problem is to efficiently analyse huge amounts of data, collected say by various sensors and detectors, to make suitable decisions. Often there is no time or space to store the data, and the interesting features must be extracted quickly while discarding the rest. Examples cover wide-ranging situations: astronomy, imaging, weather analysis, collider physics, genetic information and so on.
The typical analysis method is to put together multiple binary classification steps in a binary tree structure. Each step separates the data with binary labels into disjoint classes. The classification parameters are first found using training datapoints with known properties, and then used to determine the class labels for new datapoints with unknown properties. The capability of a classifier is enhanced by embedding the datapoints with a nonlinear map in a larger feature space, \(\vec{x}_{i}\rightarrow\phi(\vec{x}_{i})\), and then performing a linear classification in the feature space; the ideal feature map would just map the datapoints as \(\vec{x}_{i}\rightarrow\pm 1\).
The quantum Hilbert space offers much more versatility in the construction of feature maps compared to a classical vector space. Using the discrete logarithm function as a feature map, it has been proved that a quantum classifier can achieve robust speed-up for a classification problem that is hard to tackle classically [14]. Furthermore, when the input classification data originate from a quantum process, they may be directly fed into a quantum classifier, without intervening measurements that would project them to classical data. By retaining coherent quantum correlations, such a procedure can provide an exponential quantum advantage [15]. In particular, for an \(n\)-qubit system, expectation values of all \(4^{n}\) elements of its density matrix can be estimated, in the Pauli basis (see Eq.(37)) up to a constant error, using \(O(n)\) copies of the density matrix. Any classical algorithm must use \(2^{\Omega(n)}\) copies of the density matrix for the same task.
A versatile classifier should be able to construct a variety of structures covering various patterns of the datapoints, as a function of variational parameters. The feature map provided by the time evolution of the aperiodic Heisenberg spin chain Hamiltonian has been found useful for this purpose (see for example, Ref.[16]):
\[H=\sum_{\langle i,j\rangle}\alpha_{ij}J_{iz}J_{jz}+\sum_{i}\beta_{i}J_{iy}. \tag{36}\]
In practice, this feature map is implemented as a set of discrete time evolution Trotter steps, alternating one-qubit rotations with a ring of two-qubit C-NOT gates as illustrated in Fig. 3. It is easily seen that the mean-field approximation (i.e. all qubits coupled to each other with equal strength) to this discrete time evolution is just the kicked top evolution of Eq.(31). The capability of the latter to go from regular to chaotic behaviour, as a function of parameter values, hence explains the success of the former as a versatile feature map.
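A minimal state-vector sketch of the circuit structure of Fig. 3, alternating single-qubit rotations with a ring of C-NOT gates. The rotation choice, qubit count and random variational parameters are illustrative, and the data-encoding part of the circuit is omitted:

```python
import numpy as np

def apply_1q(state, gate, k, n):
    """Apply a single-qubit gate to qubit k of an n-qubit state vector."""
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [k]))   # gate acts on axis k
    return np.moveaxis(psi, 0, k).reshape(-1)

def apply_cnot(state, ctrl, targ, n):
    """Apply a C-NOT with given control and target qubits."""
    psi = state.reshape([2] * n).copy()
    idx1 = [slice(None)] * n
    idx1[ctrl] = 1                                   # slice where the control qubit is |1>
    sub = psi[tuple(idx1)].copy()
    ax = targ if targ < ctrl else targ - 1
    psi[tuple(idx1)] = np.flip(sub, axis=ax)         # X on the target qubit
    return psi.reshape(-1)

def ry(theta):
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)

n, layers = 4, 3
rng = np.random.default_rng(2)
thetas = rng.uniform(0, 2 * np.pi, size=(layers, n))     # variational parameters

# start from the uniform superposition state
state = np.full(2**n, 2**(-n / 2), dtype=complex)
for l in range(layers):                                  # boxed part of Fig. 3, iterated
    for k in range(n):
        state = apply_1q(state, ry(thetas[l, k]), k, n)
    for k in range(n):                                   # ring of C-NOT gates
        state = apply_cnot(state, k, (k + 1) % n, n)

assert np.isclose(np.vdot(state, state).real, 1.0)       # unitarity preserved
```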
We can now demonstrate the power of quantum machine learning by converting the kicked top evolution to a binary classification problem, and showing
Figure 3: An illustrative digital quantum logic circuit for constructing the feature map \(x_{i}\rightarrow\phi(x_{i})\). The boxed part of the logic circuit is iterated several times to yield the Trotter decomposed evolution of the aperiodic Heisenberg spin chain Hamiltonian, with the variational parameters \(\theta_{j}\). The initial part of the logic circuit inputs the datapoint parameters in the uniform superposition state.
that it can be efficiently solved in both regular and chaotic regimes. Let the kicked top evolve for time \(n\tau\), starting from the initial coherent state \(|\Omega\rangle\). Given the final state, the binary classification task is to predict whether the initial state was in the northern or the southern hemisphere of the Bloch sphere. The solution requires finding an (approximate) inverse evolution map.
The capabilities of backtracking the kicked top evolution were compared between classical and quantum machine learning methods using numerical simulations [17]. For ease of simulation, the rotation parameter \(p\) was set to \(\frac{\pi}{2}\), while varying the chaoticity parameter \(\kappa\). The Bloch sphere was uniformly discretised as \(32\times 32\) datapoints on the \((\theta,\phi)\) grid, and the number of time evolution steps \(n\) ranged from 1 to 1000. 30% of the datapoints were randomly selected as the training datapoints, and the rest were used to check the success rate of the trained classification method. In the case of classical evolution, the initial \(|\Omega\rangle\) corresponds to a point on the Bloch sphere. The evolution changes from being regular at small \(\kappa\) to chaotic for large \(\kappa\), with the transition to chaos occurring at \(\kappa=4\). As illustrated in Fig. 4a, the success rate of various classical machine learning methods decreases with increasing \(\kappa\). The accuracy for correctly predicting the starting hemisphere of the evolution is non-trivial for small \(\kappa\), but becomes essentially the random guess value 0.5 once \(\kappa\) crosses over to the chaotic regime.
In the case of quantum evolution, the Floquet operator of Eq.(31) evolves the initial coherent state \(|\Omega\rangle\) of \(2J\) qubits for \(n\) time steps, and the resultant state is directly fed into the classification logic circuit (i.e. the boxed part of Fig. 3) without any measurement. After executing the classification logic circuit for \(l\) iterations, the class prediction is chosen as the sign of the expectation value of the first qubit. The probabilistic nature of quantum measurement means that the whole algorithm has to be run several times, say \(m\) times,
Figure 4: Comparison of classical and quantum classifier performances for the evolution of a kicked top: (a) The success rates (validation accuracy) of several common classical machine learning methods as a function of the chaoticity parameter \(\kappa\). (b) The success rates of the aperiodic Heisenberg spin chain form quantum classifier, as a function of \(\kappa\) for \(J\) varying from 1 to 3. The minimum and maximum values indicate the results before and after tuning the variational parameters \(\theta_{j}\) using the training datapoints.
to determine the expectation value to a reasonable accuracy. In the numerical simulations, both \(l\) and \(m\) were kept finite, around 10. The training datapoints were used to tune the variational parameters of the classifier, \(\theta_{j}\), so as to maximise the success rate of class prediction. The results are shown in Fig. 4b for \(J\) varying from 1 to 3; while training the variational parameters is essential for optimising the success rate, once that is done, the class prediction is highly successful for all values of \(\kappa\) and \(J\). The cross-over of \(\kappa\) from regular to chaotic regime, or a large difference between \(n\) and \(l\), has no discernible effect. This is a striking result. The successful backtracking of the kicked top evolution with a finite depth classifier is due to the fact that the initial coherent quantum state is smeared over a solid angle \(\frac{4\pi}{2J+1}\) on the Bloch sphere, instead of being a point. The lesson is that the smearing of quantum states in the phase space suppresses chaos as well as makes it possible to backtrack it.
## 4 Noisy Quantum Processor Simulation
Quantum systems are highly sensitive to disturbances from the environment; even necessary controls and observations perturb them. The available, and upcoming, quantum devices are noisy, and techniques to bring down the undesired errors are being intensively pursued. This era of noisy intermediate scale quantum systems has been labeled NISQ [18]. It is also necessary to come up with error-resilient system designs, as well as techniques that validate and verify the results. Such NISQ systems roughly span devices with 10-100 qubits, 10-1000 logic operations, limited interactions between qubits, and with no error correction since the fault-tolerance threshold is orders of magnitude away. They would likely be used as special purpose platforms, with limited capabilities.
Software simulators are being developed for help in investigations of noisy quantum processors [19]. They are programs running on classical parallel computer platforms, which are designed to mimic noisy quantum processors, and can model and benchmark 10-50 qubit systems. A quantum computation may suffer from many sources of error: imprecise initial state preparation, imperfect logic gate execution, disturbances to the data in memory, and error-prone measurements. (It is safe to assume that the program instructions, which are classical, are essentially error-free.) A realistic quantum simulator needs to include all of them with appropriate probability distributions. Additional features that can be included are restrictions on possible logic operations and connectivity between the components, which would imitate what may be the structure of a real quantum processor. With such improvisations, the simulation results would look close to what a noisy quantum processor would deliver, and one can test how well various algorithms work with imperfect quantum components. More importantly, one can vary the imperfections and the connectivity in the software simulator to figure out what design for the noisy quantum processor would produce the best results.
Quantum simulators serve an important educational purpose as well. They are portable, and can be easily distributed over existing computational facilities world wide. They provide a platform to students to acquire the skills of _programming_ as well as _designing_ quantum processors, which is of vital importance for developing future expertise in the field.
### Generic implementation
The standard formulation of quantum states as vectors in a Hilbert space evolving by unitary transformations is appropriate for describing the pure states of a closed quantum system, but is insufficient for describing the mixed states that result from interactions of an open system with its environment. The evolution of generic mixed states is described using the density matrix formulation, the CPTP map of Eq.(12), where various environmental disturbances are modeled by suitable choices of the Kraus operators \(\{M_{\mu}\}\). It is an ensemble description of the quantum system, and so is inherently probabilistic, in contrast to the deterministic state vector description that can describe individual experimental system evolution. Nonetheless, it allows determination of the expectation value of any physical observable, which is the average result over many experimental realisations.
In going from a description based on \(|\psi\rangle\) to the one based on \(\rho\), the number of degrees of freedom gets squared. This property is fully consistent with the Schmidt decomposition, which implies that any correlation between the system and the environment can be specified by modeling the environment using a set of degrees of freedom as large as that for the system. The squaring of the degrees of freedom is the price to be paid for the flexibility to include all possible environmental effects on the quantum system, and it slows down the classical simulation of an open quantum system.
Consider computational problems whose algorithms have already been converted to discrete quantum logic circuits acting on a set of qubits. Also assume that all logic gate instructions can be executed with a fixed clock step. In this framework, the computational complexity of the program is specified by the number of qubits and the total number of clock steps. Since the quantum state deteriorates with time due to environmental disturbances, the total execution time is reduced by identifying non-overlapping logic operations at every clock step and then implementing them in parallel.
The density matrix of an \(n\)-qubit quantum register can be expressed in the orthogonal Pauli basis, utilising the tensor product structure of the Hilbert space, as
\[\rho=\sum_{i_{1},i_{2},\ldots,i_{n}}a_{i_{1}i_{2}\ldots i_{n}}(\sigma_{i_{1}} \otimes\sigma_{i_{2}}\otimes\ldots\otimes\sigma_{i_{n}}). \tag{37}\]
Here \(i_{1},\ldots,i_{n}\in\{0,1,2,3\}\), \(\sigma_{0}\equiv I\), and \(a_{i_{1}\ldots i_{n}}\) are \(4^{n}\) real coefficients encoded as an array. The normalisation \(Tr(\rho)=1\) implies \(a_{0\ldots 0}=2^{-n}\). The constraint \(Tr(\rho^{2})\leq 1\), which follows from \(\rho^{2}\preceq\rho\), implies \(\sum_{i_{1},\ldots,i_{n}}a_{i_{1}\ldots i_{n}}^{2}\leq 2^{-n}\). The orthogonality of the Pauli basis makes it easy to describe various transformations of the density matrix as simple changes of the coefficients. (When the
density matrix is expressed as a \(2^{n}\times 2^{n}\) complex Hermitian matrix, the number of independent components remains the same, but the matrix elements do not belong to an orthogonal set, and their transformations do not have the same type of compact description.) Because all the Pauli basis elements mutually either commute or anticommute, the Pauli basis has also been observed to be highly efficient for actual quantum hardware measurements [15].
When the operator to be measured is expressed in the same Pauli basis,
\[O=2^{-n}\sum_{i_{1},i_{2},\ldots,i_{n}}o_{i_{1}i_{2}\ldots i_{n}}(\sigma_{i_{1} }\otimes\sigma_{i_{2}}\otimes\ldots\otimes\sigma_{i_{n}})\, \tag{38}\]
its expectation value is just the dot product,
\[Tr(\rho O)=2^{-n}\sum_{i_{1},i_{2},\ldots,i_{n}}a_{i_{1}i_{2}\ldots i_{n}}o_{i_ {1}i_{2}\ldots i_{n}}. \tag{39}\]
Also, treating the coefficients \(a_{i_{1}\ldots i_{n}}\) as vector space coordinates, the density matrices can be discriminated in terms of the Euclidean distance between them; the Hilbert-Schmidt distance is just the \(L_{2}\) distance in the \(4^{n}\)-dimensional space:
\[Tr((\rho_{1}-\rho_{2})^{2})=2^{-n}\sum_{i_{1},i_{2},\ldots,i_{n}}(a_{i_{1}i_{2 }\ldots i_{n}}-b_{i_{1}i_{2}\ldots i_{n}})^{2}=2^{-n}||\vec{a}-\vec{b}||_{2}^{ 2}. \tag{40}\]
The reduced density matrix with the degrees of freedom of the \(k^{\rm th}\) qubit summed over, \(Tr_{k}(\rho)\), is specified by the \(4^{n-1}\) coefficients \(2a_{i_{1}\ldots i_{k-1}0i_{k+1}\ldots i_{n}}\), since only the terms containing \(\sigma_{0}\) provide a non-zero partial trace. Upon a projective measurement, the quantum state components orthogonal to the direction of measurement vanish. So when the \(k^{\rm th}\) qubit is measured along direction \(\hat{n}\), the coefficients \(a_{\ldots i_{k}\ldots}\) are set to zero for \(i_{k}\perp\hat{n}\), while those for \(i_{k}||\hat{n}\) and \(i_{k}=0\) remain unchanged.
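The Pauli-basis bookkeeping described above is straightforward to reproduce with numpy. The sketch below (using the two-qubit singlet state purely as an example) extracts the coefficients \(a_{i_{1}\ldots i_{n}}\) of Eq. (37), checks the normalisation and purity bounds, reconstructs the density matrix, and verifies the factor of 2 in the reduced density matrix coefficients:

```python
import numpy as np
from itertools import product

paulis = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def pauli_string(idx):
    """Tensor product sigma_{i1} x ... x sigma_{in} for a tuple of indices."""
    P = paulis[idx[0]]
    for i in idx[1:]:
        P = np.kron(P, paulis[i])
    return P

def pauli_coeffs(rho, n):
    """Coefficients a_{i1...in} of Eq. (37), obtained as Tr(rho P_I) / 2^n."""
    a = np.zeros((4,) * n)
    for idx in product(range(4), repeat=n):
        a[idx] = np.trace(rho @ pauli_string(idx)).real / 2**n
    return a

n = 2
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)   # illustrative 2-qubit state
rho = np.outer(singlet, singlet.conj())
a = pauli_coeffs(rho, n)

assert np.isclose(a[(0,) * n], 2.0**-n)          # normalisation Tr(rho) = 1
assert np.sum(a**2) <= 2.0**-n + 1e-12           # purity bound from rho^2 <= rho

# reconstruct rho from the coefficients: rho = sum_I a_I P_I
rho_back = sum(a[idx] * pauli_string(idx) for idx in product(range(4), repeat=n))
assert np.allclose(rho_back, rho)

# tracing out qubit 2 keeps only the i2 = 0 coefficients, rescaled by 2
rho_1 = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
a_1 = pauli_coeffs(rho_1, 1)
assert np.allclose(a_1, 2 * a[:, 0])
```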
### The QSim simulator
In the framework of the preceding subsection, we constructed a simulator for noisy quantum logic circuits [20]. It is an open-source software library written in Python [21], which is added as a new backend to IBM's Qiskit platform [22]. It has been made freely available as a national educational resource [23].
We consider problems where all operations--logic gates, errors and measurements--are local, i.e. act on only a few qubits. Indeed, the Qiskit transpiler decomposes more complicated operations into a sequence of one-qubit and two-qubit operations. The tensor product structure of such operations is a non-trivial operator on the addressed qubits and the identity operator on the rest of the qubits. Since the expression for the quantum register has the same tensor product structure, the operations change the Pauli matrix factors corresponding to only the addressed qubits (e.g. \(\sigma_{i_{k}}\)), and the coefficients change
only for the associated subscripts (e.g. \(a_{\ldots i_{k}\ldots}\)). Such operations are efficiently implemented in the software using linear algebra vector instructions (there is no complex number algebra in our code), while explicitly evaluating Eq.(12).
The manipulations of logic circuit instructions and operations are carried out at the classical level; even when a quantum hardware is available, they would be executed by a classical compiler. So we assume that they are error-free. We incorporate possible errors in initialisation, logic gates, measurement and memory using simple models:
\(\bullet\) We allow a fully-factorised thermal state as one of the initial state options:
\[\rho_{\rm th}=\begin{pmatrix}p&0\\ 0&1-p\end{pmatrix}^{\otimes n}, \tag{41}\]
where the parameter \(p\) is provided by the user.
\(\bullet\) For single qubit rotations around fixed axes, we assume that errors arise from inaccuracies in their rotation angles. Let \(\alpha\) be the inaccuracy in the angle, with the mean \(\langle\langle\alpha\rangle\rangle=\overline{\alpha}\) and the fluctuations symmetric about \(\overline{\alpha}\). Then the replacement \(\theta\rightarrow\theta+\alpha\) in the rotation operator \(R_{n}(\theta)\) modifies the density matrix transformation according to the substitutions:
\[\cos\theta\to r\cos(\theta+\overline{\alpha})\,\ \ \sin\theta \to r\sin(\theta+\overline{\alpha})\, \tag{42}\]
where \(\overline{\alpha}\) and \(r=\langle\langle\cos(\alpha-\overline{\alpha})\rangle\rangle\) are the parameters provided by the user.
\(\bullet\) To model the error in the C-NOT gate, we assume that C-NOT is implemented as a transition selective pulse that exchanges amplitudes of the two target qubit levels when the control qubit state is \(|1\rangle\). Then the error is in the duration of the transition selective pulse, and alters only the second half of the unitary operator, \(U_{cx}=|0\rangle\langle 0|\otimes I+|1\rangle\langle 1|\otimes\sigma_{1}\). It is included in the same manner as the error in single qubit rotation angle (i.e. as a disturbance to the rotation operator \(\sigma_{1}\)). The corresponding two parameters, analogous to \(\overline{\alpha}\) and \(r\), are provided by the user.
\(\bullet\) We model a single qubit projective measurement error as depolarisation, which is equivalent to a bit-flip error in a binary measurement. Then when the \(k^{\rm th}\) qubit is measured along direction \(\hat{n}\), the coefficients \(a_{\ldots i_{k}\ldots}\) in the post-measurement state are set to zero for \(i_{k}\perp\hat{n}\), reduced by a multiplicative factor \(d_{1}\) (provided by the user) for \(i_{k}||\hat{n}\), and left unaffected for \(i_{k}=0\). Also, the probabilities of the two outcomes become \(\frac{1}{2}(1\pm 2^{n}d_{1}\hat{n}\cdot\vec{c})\), where \(\{c_{0},\vec{c}\}=a_{0\ldots i_{k}\ldots 0}\). In case of a measurement of a multi-qubit Pauli operator string, the above procedure is applied to every qubit whose measurement operator has \(i_{k}\neq 0\).
\(\bullet\) In case of a Bell-basis measurement of qubits \(k\) and \(l\), the post-measurement coefficients with \(i_{k}\neq i_{l}\) are set to zero, those with \(i_{k}=i_{l}\in\{1,2,3\}\) are reduced by a multiplicative factor \(d_{2}\) provided by the user, and those with \(i_{k}=i_{l}=0\) are left the same. Also, the probabilities of the four outcomes are obtained by reducing the \(i_{k}=i_{l}\in\{1,2,3\}\) contributions by the factor \(d_{2}\).
\(\bullet\) We assume that the memory errors are small during a clock step, and implement them by modifying the density matrix at the end of every clock step, in the spirit of the Trotter expansion. With the \(\sigma_{3}\) basis as the computational basis, the decoherence effect suppresses the off-diagonal coefficients with \(i_{k}\in\{1,2\}\) for every qubit by a multiplicative factor \(f\). It can be represented by the Kraus operators:
\[M_{0}=\sqrt{\frac{1+f}{2}}\ I\,\ \ M_{1}=\sqrt{\frac{1-f}{2}}\ \sigma_{3}. \tag{43}\]
In terms of the clock step size \(\Delta t\) and the decoherence time \(T_{2}\), the parameter \(f=\exp(-\Delta t/T_{2})\), and it is provided by the user.
\(\bullet\) We consider the decay of the quantum state towards the thermal state, Eq.(41). This evolution is represented by the Kraus operators:
\[M_{0} = \sqrt{p}\begin{pmatrix}1&0\\ 0&\sqrt{g}\end{pmatrix},\ M_{1}=\sqrt{p}\begin{pmatrix}0&\sqrt{1-g}\\ 0&0\end{pmatrix}, \tag{44}\] \[M_{2} = \sqrt{1-p}\begin{pmatrix}\sqrt{g}&0\\ 0&1\end{pmatrix},\ M_{3}=\sqrt{1-p}\begin{pmatrix}0&0\\ \sqrt{1-g}&0\end{pmatrix}.\]
Its effect on every qubit is to suppress the off-diagonal coefficients with \(i_{k}\in\{1,2\}\) by \(\sqrt{g}\), and change the diagonal coefficients according to:
\[a_{\ldots 3\ldots}\to g\ a_{\ldots 3\ldots}+(2p-1)(1-g)a_{\ldots 0\ldots}. \tag{45}\]
In terms of the clock step \(\Delta t\) and the relaxation time \(T_{1}\), the parameter \(g=\exp(-\Delta t/T_{1})\), and it is provided by the user. (Our Kraus representation automatically ensures the physical constraint \(T_{2}\leq 2T_{1}\)). We note that the decoherence and decay superoperators commute with each other, and we execute the combined operation at the end of every clock step.
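Independently of the QSim code itself, the decoherence and decay models of Eqs. (43)-(45) can be checked with a few lines of numpy (the parameter values below are illustrative):

```python
import numpy as np

p, g, f = 0.8, 0.7, 0.9     # illustrative thermal population, decay and decoherence factors

# decoherence Kraus operators, Eq. (43)
K_deco = [np.sqrt((1 + f) / 2) * np.eye(2),
          np.sqrt((1 - f) / 2) * np.diag([1.0, -1.0])]
# decay-towards-thermal-state Kraus operators, Eq. (44)
K_decay = [np.sqrt(p) * np.diag([1.0, np.sqrt(g)]),
           np.sqrt(p) * np.array([[0.0, np.sqrt(1 - g)], [0.0, 0.0]]),
           np.sqrt(1 - p) * np.diag([np.sqrt(g), 1.0]),
           np.sqrt(1 - p) * np.array([[0.0, 0.0], [np.sqrt(1 - g), 0.0]])]

# both sets satisfy the completeness relation sum_mu M_mu^dagger M_mu = I
for K in (K_deco, K_decay):
    assert np.allclose(sum(M.conj().T @ M for M in K), np.eye(2))

def channel(rho, K):
    """Operator-sum evolution of Eq. (12)."""
    return sum(M @ rho @ M.conj().T for M in K)

# single-qubit state written with Pauli coefficients (a0, a1, a2, a3)
a = np.array([0.5, 0.3, 0.2, 0.1])
sig = [np.eye(2), np.array([[0, 1], [1, 0]]),
       np.array([[0, -1j], [1j, 0]]), np.diag([1.0, -1.0])]
rho = sum(a[i] * sig[i] for i in range(4))

rho1 = channel(channel(rho, K_deco), K_decay)
b = np.array([np.trace(rho1 @ sig[i]).real / 2 for i in range(4)])

assert np.allclose(b[1:3], np.sqrt(g) * f * a[1:3])               # off-diagonal suppression
assert np.isclose(b[3], g * a[3] + (2 * p - 1) * (1 - g) * a[0])  # Eq. (45)
```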
We expect the memory errors to cause maximum damage to the quantum signal, because they act on all the qubits all the time, while other operational errors are confined to particular qubits at specific instances. Our tests for simple algorithms confirm this expectation, and so we consider it imperative to reduce the total execution time of a quantum program as much as possible. Towards this end, we rearrange the complete list of quantum logic circuit instructions produced by the Qiskit transpiler in a set of partitions, such that all operations in a single partition can be executed as parallel threads during a single clock step. In the process, successive single qubit rotations are merged wherever feasible, a stack of sequential operations is constructed for every qubit, and non-overlapping qubit operations are collected in a single partition wherever possible. This procedure puts logic gate operations and measurement operations in separate partitions, since single qubit measurement may affect the whole quantum register in case of entangled quantum states. Also, the clock step is assumed to be longer than the time required to execute each operation in the corresponding partition.
In a quantum computer, elementary logic operations would be directly executed on quantum hardware, and the computational complexity of the algorithm is specified in terms of the number of qubits and the number of elementary logic operations required. In a classical simulator, the elementary logic operations are executed using linear algebra, and so its computational complexity is specified in terms of the number of linear algebra operations (permutations, additions and multiplications) required to execute various elementary logic operations. For our simulator, these computational resource requirements are easily enumerated:
\(\bullet\) Memory: Storage of an \(n\)-qubit register needs \(4^{n}\) real variables.
\(\bullet\) Logic gates: One- and two-qubit operations are block-diagonal matrices with fixed block sizes. Their execution requires \(O(4^{n})\) linear algebra operations.
\(\bullet\) Measurement: The probability of a measurement outcome for an observable is just a value look up in the density matrix simulator. For an \(n\)-qubit register, it is an \(O(n)\) effort.
\(\bullet\) Environmental noise: The one-qubit Kraus operators parametrising the noise are block-diagonal matrices with fixed block sizes. They are implemented using \(O(4^{n})\) linear algebra operations.
Overall, in going from a quantum state vector simulator to a density matrix one, the computational resource requirements increase from \(O(2^{n})\) to \(O(4^{n})\). It implies that given certain computational resources, a density matrix simulator can simulate half the number of qubits compared to a state vector one. On the other hand, a density matrix simulator produces the complete output probability distribution in one run, while a state vector simulator requires multiple runs (labeled shots) of the program for the same purpose. Our fully portable Qsim simulates quantum logic circuits with 10 qubits and 100 operations in a few minutes on a laptop.
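For a concrete sense of these scalings (our own back-of-the-envelope figures, assuming 8-byte reals for the density matrix coefficients and 16-byte complex amplitudes for the state vector): a \(10\)-qubit density matrix needs \(4^{10}\approx 10^{6}\) real coefficients, about \(8\) MB, while \(20\) qubits would need \(4^{20}\approx 10^{12}\) coefficients, roughly \(9\) TB; a \(20\)-qubit state vector, by contrast, fits in about \(17\) MB.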
The main achievement of our simulator is the ability to simulate noisy quantum systems, using simple error models. By varying the noise parameter values, we can estimate how accurately various errors in quantum hardware need to be controlled in order to obtain meaningful results. Simulation of simple algorithms shows that the errors produced by decoherence and decay cause the largest deterioration of the results, with decay dominating over decoherence [20].
## 5 Outlook
Quantum theory was invented because classical theory could not explain certain observations at all. Quantum technology therefore can be advantageous when these phenomena are at the core of the problems to be tackled; they include superposition, entanglement, squeezing and tunnelling of quantum states. The quantum density matrix provides a complete description of quantum states, generalising the classical concept of probability distribution by adding extra degrees of freedom (as off-diagonal matrix elements). These extra degrees of freedom cover genuinely quantum phenomena that often appear to
be non-intuitive. The unitary transformations available in quantum logic are more powerful than their subset of permutation operations available in classical reversible logic. For computational problems with classical initial and final states, they can provide short-cuts through the Hilbert space, leading to more efficient algorithms. Such strategies are vigorously being pursued, even in the absence of a systematic procedure.
In addition, two noteworthy research directions at present involving the quantum density matrix are: (i) The technique of classical shadows [24], which quantifies how much can be learned about a quantum system using efficient classical methods (and hence what are the quantum features that would be hard to extract). (ii) Quantum machine learning [25], which uses efficient quantum feature maps that are hard to construct classically for speeding up classification and data analysis problems.
Acknowledgments. This work was supported in part by the project "Centre for Excellence in Quantum Technology", funded by the Ministry of Electronics and Information Technology, Government of India.
|
2302.07722 | The Half-Volume Spectrum of a Manifold | We define the half-volume spectrum $\{\tilde \omega_p\}_{p\in \mathbb N}$ of a closed manifold $(M^{n+1},g)$. This is analogous to the usual volume spectrum of $M$, except that we restrict to $p$-sweepouts whose slices each enclose half the volume of $M$. We prove that the Weyl law continues to hold for the half-volume spectrum. We define an analogous half-volume spectrum $\tilde c(p)$ in the phase transition setting. Moreover, for $3 \le n+1 \le 7$, we use the Allen-Cahn min-max theory to show that each $\tilde c(p)$ is achieved by a constant mean curvature surface enclosing half the volume of $M$ plus a (possibly empty) collection of minimal surfaces with even multiplicities. | Liam Mazurowski, Xin Zhou | 2023-02-15T15:27:09Z | http://arxiv.org/abs/2302.07722v1 |

# The half-volume spectrum of a manifold
###### Abstract.
We define the half-volume spectrum \(\{\tilde{\omega}_{p}\}_{p\in\mathbb{N}}\) of a closed manifold \((M^{n+1},g)\). This is analogous to the usual volume spectrum of \(M\), except that we restrict to \(p\)-sweepouts whose slices each enclose half the volume of \(M\). We prove that the Weyl law continues to hold for the half-volume spectrum. We define an analogous half-volume spectrum \(\tilde{c}(p)\) in the phase transition setting. Moreover, for \(3\leq n+1\leq 7\), we use the Allen-Cahn min-max theory to show that each \(\tilde{c}(p)\) is achieved by a constant mean curvature surface enclosing half the volume of \(M\) plus a (possibly empty) collection of minimal surfaces with even multiplicities.
## 1. Introduction
The spectrum of the Laplacian is an important invariant of a closed Riemannian manifold \((M^{n+1},g)\). A number \(\lambda\) is called an eigenvalue of the Laplacian provided there is a function \(u\colon M\to\mathbb{R}\) such that \(\Delta u+\lambda u=0\). It is well-known that the eigenvalues form a discrete sequence \(0=\lambda_{0}<\lambda_{1}\leq\lambda_{2}\leq\ldots\) and \(\lambda_{p}\to\infty\) as \(p\to\infty\). In fact, the eigenvalues of Laplacian are characterized by the min-max formula
\[\lambda_{p}=\inf_{(p+1)\text{-planes}\ P\subset W^{1,2}(M)}\left[\sup_{u\in P \setminus\{0\}}\frac{\int_{M}|\nabla u|^{2}}{\int_{M}u^{2}}\right],\]
and they satisfy the Weyl law
\[\lambda_{p}\sim 4\pi^{2}\operatorname{Vol}(B)^{-\frac{2}{n+1}}\operatorname{ Vol}(M)^{-\frac{2}{n+1}}p^{\frac{2}{n+1}}\]
as \(p\to\infty\). Here \(B\) is the unit ball in \(\mathbb{R}^{n+1}\).
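For orientation, this is the eigenvalue-counting form of Weyl's law read in terms of \(p\) (a standard restatement, recorded only for comparison with the area functional below): writing \(N(\lambda)=\#\{q\in\mathbb{N}:\lambda_{q}\leq\lambda\}\), one has
\[N(\lambda)\sim\frac{\operatorname{Vol}(B)\operatorname{Vol}(M)}{(2\pi)^{n+1}}\,\lambda^{\frac{n+1}{2}},\]
and solving \(N(\lambda_{p})\approx p\) for \(\lambda_{p}\) gives the asymptotic displayed above.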
In [11], Gromov proposed a non-linear analog of the spectrum of the Laplacian. Roughly speaking, he defines a \(p\)-sweepout of \(M\) to be a family \(X\) of hypersurfaces with the following property: given any \(p\) points in \(M\), there is a hypersurface \(\Sigma\) belonging to the family \(X\) which passes through all \(p\) of these points. Then he defines the \(p\)-widths
\[\omega_{p}=\inf_{p\text{-sweepouts}\ X}\left[\sup_{\Sigma\in X}\operatorname{ Area}(\Sigma)\right].\]
See Section 2 for precise definitions. The sequence \(\{\omega_{p}\}_{p\in\mathbb{N}}\) is called the volume spectrum of \(M\).
Gromov [12] and Guth [14] proved that the volume spectrum satisfies sublinear growth bounds. Namely, there are constants \(C_{1}\) and \(C_{2}\) depending on \(M\) such that
\[C_{1}p^{\frac{1}{n+1}}\leq\omega_{p}\leq C_{2}p^{\frac{1}{n+1}}.\]
Later, Liokumovich, Marques, and Neves [17] showed that the volume spectrum satisfies a Weyl law. That is, there is a universal constant \(a_{n}\) depending only on the dimension such that
\[\omega_{p}\sim a_{n}\operatorname{Vol}(M)^{\frac{n}{n+1}}p^{\frac{1}{n+1}}\]
as \(p\to\infty\); see Chodosh-Mantoulidis [6] for the calculation of \(a_{n}\) when \(n=2\). The Weyl law for the volume spectrum has been instrumental in the proof of many results on the existence of minimal surfaces in Riemannian manifolds.
In the early 1980s, Almgren [2], Pitts [20], and Schoen-Simon [21] developed a min-max theory for the area functional on closed Riemannian manifolds. Their combined work implies that every closed Riemannian manifold of dimension \(3\leq n+1\leq 7\) contains a closed, smooth, embedded minimal surface. Around the same time, Yau [24] conjectured that every closed manifold should contain infinitely many minimal surfaces. Marques and Neves devised a program to prove Yau's conjecture by using the Almgren-Pitts min-max theory to find a minimal surface with area \(\omega_{p}\) for each \(p\in\mathbb{N}\).
This program has now been successfully carried out. Fix a closed Riemannian manifold \((M^{n+1},g)\) with \(3\leq n+1\leq 7\). Irie, Marques, and Neves [16] showed that, for a generic metric \(g\), the union of all minimal surfaces in \(M\) is dense in \(M\). In particular, this proved Yau's conjecture for generic metrics. Later Marques, Neves, and Song [19] refined this result to show that, for a generic metric \(g\), there is a sequence of minimal surfaces in \(M\) which becomes equidistributed in \(M\). The Weyl law for the volume spectrum was a key ingredient in the proof of both of these results. Following the second named author's proof of the Multiplicity One Conjecture [25], Marques and Neves [18] showed that, for a generic metric \(g\), there is a sequence of minimal surfaces \(\Sigma_{p}\) with index \(p\) and \(\operatorname{Area}(\Sigma_{p})=\omega_{p}\). Song [22] proved Yau's conjecture for arbitrary metrics \(g\).
The Almgren-Pitts min-max theory relies heavily on tools from geometric measure theory. There is a parallel min-max theory for finding minimal surfaces based on the theory of phase transitions. This theory relies on the Allen-Cahn PDE and the varifold regularity theory of Wickramasekera. Gaspar and Guaraco [9] defined a phase transition spectrum \(\{c(p)\}_{p\in\mathbb{N}}\) associated to a Riemannian manifold via the Allen-Cahn PDE. They showed that each \(c(p)\) is achieved by a collection of minimal surfaces with multiplicities. Chodosh and Mantoulidis [5] proved the Multiplicity One Conjecture in the phase transition setting in ambient dimension three. Thus, for generic metrics on \(M^{3}\), they obtained the existence of a sequence of minimal surfaces \(\Sigma_{p}\) with index \(p\) and \(\operatorname{Area}(\Sigma_{p})=c(p)\).
Gaspar and Guaraco [10] showed that the phase transition spectrum also satisfies a Weyl law. That this, there is a constant \(\tau_{n}\) depending only on the dimension such that
\[c(p)\sim\tau_{n}\operatorname{Vol}(M)^{\frac{n}{n+1}}p^{\frac{1}{n+1}}.\]
Dey [7] proved that actually \(\omega_{p}=c(p)\) for all \(p\in\mathbb{N}\) and thus the Almgren-Pitts volume spectrum and the phase transition volume spectrum coincide. In particular, the constants \(a_{n}\) and \(\tau_{n}\) appearing in the two Weyl laws are equal.
In this paper, we define a "half-volume" spectrum associated to a Riemannian manifold. In the Almgren-Pitts setting, we restrict to \(p\)-sweepouts by families of hypersurfaces that
each enclose half the volume of \(M\). Then we define
\[\tilde{\omega}_{p}=\inf_{\text{half-volume $p$-sweepouts $X$}}\left[\sup_{\Sigma\in X}\text{Area}(\Sigma)\right].\]
The sequence \(\{\tilde{\omega}_{p}\}_{p\in\mathbb{N}}\) is called the half-volume spectrum of \(M\). In the phase transition setting, we define an analogous half-volume spectrum \(\tilde{c}(p)\) by looking at critical points of the Allen-Cahn energy subject to the volume constraint \(\int_{M}u=0\). In both cases, we show that the Weyl law continues to hold. This gives the following theorems.
**Theorem 1**.: _The Almgren-Pitts half-volume spectrum satisfies_
\[\tilde{\omega}_{p}\sim a_{n}\operatorname{Vol}(M)^{\frac{n}{n+1}}p^{\frac{1}{ n+1}},\quad\text{as $p\to\infty$}.\]
**Theorem 2**.: _The phase transition half-volume spectrum satisfies_
\[\tilde{c}(p)\sim\tau_{n}\operatorname{Vol}(M)^{\frac{n}{n+1}}p^{\frac{1}{n+1}},\quad\text{as $p\to\infty$}.\]
In the Allen-Cahn setting, we are able to use the results of Bellettini and Wickramasekera [3] to find varifolds achieving each \(\tilde{c}(p)\). In the following theorem, a hypersurface \(\Sigma\) is called almost-embedded if near each point in \(M\) either \(\Sigma\) is embedded or \(\Sigma\) decomposes into an ordered union of embedded sheets.
**Theorem 3**.: _Let \((M^{n+1},g)\) be a closed Riemannian manifold with \(3\leq n+1\leq 7\). Fix a number \(p\in\mathbb{N}\). There are_
* _a Caccioppoli set_ \(\Omega\subset M\) _with_ \(\operatorname{Vol}(\Omega)=\frac{1}{2}\operatorname{Vol}(M)\) _whose boundary is smooth and almost-embedded with constant mean curvature,_
* _a (possibly empty) collection of smooth, disjoint minimal surfaces_ \(\Sigma_{1},\dots,\Sigma_{k}\subset M\setminus\Omega\)_,_
* _and positive integers_ \(\theta_{0}\in\mathbb{Z}\) _and_ \(\theta_{1},\dots,\theta_{k}\in 2\mathbb{Z}\)__
_such that \(\tilde{c}(p)=\theta_{0}\operatorname{Area}(\partial\Omega)+\theta_{1} \operatorname{Area}(\Sigma_{1})+\dots+\theta_{k}\operatorname{Area}(\Sigma_{ k}).\) Moreover, \(\theta_{0}=1\) unless \(\partial\Omega\) is also a minimal surface._
Note that Theorem 3 produces a constant mean curvature surface that encloses half the volume of \(M\). Previously, the second author and Zhu [26] developed a min-max theory in the Almgren-Pitts setting capable of finding surfaces of constant mean curvature \(c\). However, there is no control over the volume enclosed by the surface. Likewise, Bellettini and Wickramasekera [3] developed a min-max theory in the Allen-Cahn setting capable of finding surfaces of constant mean curvature \(c\). Again there is no control over the volume enclosed by the surface. Thus there is a trade off. Theorem 3 produces constant mean curvature surfaces enclosing half the volume of \(M\), but at the expense of losing control over the exact value of the mean curvature.
We conclude the introduction with some open problems. First, we conjecture that \(\tilde{\omega}_{p}=\tilde{c}(p)\) for all \(p\in\mathbb{N}\). Second, we conjecture that, for a generic metric \(g\), the phase transition half-volume spectrum is achieved by multiplicity one constant mean curvature surfaces enclosing half the volume of \(M\). In other words, in Theorem 3 the collection \(\Sigma_{1},\dots,\Sigma_{k}\) is empty and \(\theta_{0}=1\) for every \(p\). In particular, we conjecture that generically there are infinitely many constant mean curvature surfaces enclosing half the volume of \(M\). Finally, it is interesting to know whether one can find surfaces achieving \(\tilde{\omega}_{p}\) by applying the
Almgren-Pitts min-max theory with a volume constraint. This seems to be a difficult task. Already it is not obvious how to define a suitable pull-tight on the space of half-volume cycles.
**Acknowledgement**.: X.Z. was supported by NSF grant DMS-2243149, and an Alfred P. Sloan Research Fellowship.
## 2. The Almgren-Pitts Half-Volume Spectrum
In this section, we investigate the topology of the space of half-volume cycles in a given manifold, and then define the Almgren-Pitts half-volume spectrum. Let \((M^{n+1},g)\) be a closed Riemannian manifold. We will use the following notation.
* Let \(\mathfrak{h}=\frac{1}{2}\operatorname{Vol}M\).
* Let \(\mathcal{C}(M)\) denote the collection of all Caccioppoli sets in \(M\).
* Let \(\mathcal{C}_{\mathfrak{h}}(M)\), \(\mathcal{C}_{\geq\mathfrak{h}}(M)\), and \(\mathcal{C}_{\leq\mathfrak{h}}(M)\) denote the space of Caccioppoli sets with volume equal to \(\mathfrak{h}\), greater than or equal to \(\mathfrak{h}\), and less than or equal to \(\mathfrak{h}\), respectively.
* Let \(\mathcal{Z}(M,\mathbb{Z}_{2})\) denote the set of all \(n\)-dimensional flat chains mod \(2\) in \(M\).
* Let \(\mathcal{B}(M,\mathbb{Z}_{2})\) denote the set of all \(T\in\mathcal{Z}(M,\mathbb{Z}_{2})\) such that \(T=\partial\Omega\) for some \(\Omega\in\mathcal{C}(M)\). This is the connected component of the empty set in \(\mathcal{Z}(M,\mathbb{Z}_{2})\) in the flat topology.
* Let \(\mathcal{H}(M,\mathbb{Z}_{2})\) be the set of all \(T\in\mathcal{Z}(M,\mathbb{Z}_{2})\) such that \(T=\partial\Omega\) for some \(\Omega\in\mathcal{C}_{\mathfrak{h}}(M)\). This is the space of "half-volume cycles."
* We use \(\mathcal{F}\) to denote the flat topology, \(\mathbf{F}\) to denote the \(\mathbf{F}\)-topology, and \(\mathbf{M}\) to denote the mass topology. All spaces are assumed to be equipped with the flat topology except where otherwise noted.
* We will abuse notation and write \(\operatorname{Vol}(\Omega)\) and \(\operatorname{Area}(T)\) instead of \(\mathbf{M}(\Omega)\) and \(\mathbf{M}(T)\) for \(\Omega\in\mathcal{C}(M)\) and \(T\in\mathcal{Z}(M,\mathbb{Z}_{2})\), respectively.
We will show that \(\mathcal{H}(M,\mathbb{Z}_{2})\) is weakly homotopy equivalent to \(\mathbb{R}\mathrm{P}^{\infty}\). The first step is to show that the double cover \(\mathcal{C}_{\mathfrak{h}}(M)\) is contractible.
**Proposition 4**.: _The space \(\mathcal{C}_{\leq\mathfrak{h}}(M)\) deformation retracts to \(\mathcal{C}_{\mathfrak{h}}(M)\)._
Proof.: The union of two Caccioppoli sets is a Caccioppoli set. Choose a Morse function \(f\colon M\to\mathbb{R}\). For \(s\in[0,\mathfrak{h}]\), let \(B_{s}\) be the sublevel set of \(f\) with volume equal to \(s\). Note that each \(B_{s}\) is a Caccioppoli set. For each \(\Omega\in\mathcal{C}_{\leq\mathfrak{h}}(M)\) and \(t\in[0,1]\), choose a number \(s(\Omega,t)\) so that
\[\operatorname{Vol}(\Omega\cup B_{s(\Omega,t)})=\operatorname{Vol}(\Omega)+t( \mathfrak{h}-\operatorname{Vol}(\Omega)).\]
Note that there is not necessarily a unique choice for \(s(\Omega,t)\), and the mapping \((\Omega,t)\mapsto s(\Omega,t)\) may not be continuous. Nevertheless, we claim that the map \(\phi\colon\mathcal{C}_{\leq\mathfrak{h}}(M)\times[0,1]\to\mathcal{C}_{\leq \mathfrak{h}}(M)\) defined by
\[\phi(\Omega,t)=\Omega\cup B_{s(\Omega,t)}\]
is continuous in the flat topology. Given this claim, \(\phi\) is the required deformation retraction. Indeed, \(\phi(\Omega,t)=\Omega\) for all \(\Omega\in\mathcal{C}_{\mathfrak{h}}(M)\) and all \(t\in[0,1]\), and moreover, \(\phi(\Omega,1)\in\mathcal{C}_{\mathfrak{h}}(M)\) for all \(\Omega\in\mathcal{C}_{\leq\mathfrak{h}}(M)\).
To see that \(\phi\) is continuous, let \(\varepsilon,\eta>0\) be small positive numbers. Assume that \(\Omega,\Theta\in\mathcal{C}_{\leq\mathfrak{h}}(M)\) satisfy \(\operatorname{Vol}(\Omega\,\Delta\,\Theta)<\varepsilon\) and that \(t,r\in[0,1]\) satisfy \(|t-r|<\eta\). We need to check that \(\operatorname{Vol}\big{(}\phi(\Omega,t)\,\Delta\,\phi(\Theta,r)\big{)}\) is small. Let \(U=B_{s(\Omega,t)}\setminus\Omega\) and let \(V=B_{s(\Theta,r)}\setminus\Theta\). Now observe that
\[\operatorname{Vol}\big{(}\phi(\Omega,t)\,\Delta\,\phi(\Theta,r) \big{)} =\operatorname{Vol}\big{(}[\Omega\cup U]\,\Delta[\Theta\cup V] \big{)}\] \[=\operatorname{Vol}(\Omega)+\operatorname{Vol}(U)+\operatorname{ Vol}(\Theta)+\operatorname{Vol}(V)\] \[\quad-2\operatorname{Vol}(\Omega\cap\Theta)-2\operatorname{ Vol}(\Omega\cap V)-2\operatorname{Vol}(U\cap\Theta)-2\operatorname{Vol}(U\cap V).\]
Note that \(\operatorname{Vol}(\Omega)+\operatorname{Vol}(\Theta)-2\operatorname{Vol}( \Omega\cap\Theta)=\operatorname{Vol}(\Omega\,\Delta\,\Theta)<\varepsilon\). We also have \(\operatorname{Vol}(\Omega\cap V)\leq\operatorname{Vol}(\Omega\setminus \Theta)<\varepsilon\) and \(\operatorname{Vol}(U\cap\Theta)\leq\operatorname{Vol}(\Theta\setminus \Omega)<\varepsilon\).
Therefore, it remains to show that \(\operatorname{Vol}(U)+\operatorname{Vol}(V)-2\operatorname{Vol}(U\cap V)\) is small. Observe that \(\operatorname{Vol}(U)-\operatorname{Vol}(U\cap V)=\operatorname{Vol}(U \setminus V)\) and that \(\operatorname{Vol}(V)-\operatorname{Vol}(U\cap V)=\operatorname{Vol}(V \setminus U)\). Without loss of generality, we can assume that \(s(\Theta,r)\geq s(\Omega,t)\). In this case, we have
\[\operatorname{Vol}(U\setminus V) =\operatorname{Vol}\big{(}[B_{s(\Omega,t)}\setminus\Omega] \setminus[B_{s(\Theta,r)}\setminus\Theta]\big{)}\] \[\leq\operatorname{Vol}\big{(}[B_{s(\Theta,r)}\setminus\Omega] \setminus[B_{s(\Theta,r)}\setminus\Theta]\big{)}\leq\operatorname{Vol}( \Theta\setminus\Omega)<\varepsilon.\]
We also have
\[\operatorname{Vol}(V\setminus U) =\operatorname{Vol}\big{(}[B_{s(\Theta,r)}\setminus\Theta] \setminus[B_{s(\Omega,t)}\setminus\Omega]\big{)}\] \[=\operatorname{Vol}\big{(}[B_{s(\Theta,r)}\setminus B_{s( \Omega,t)}]\setminus\Theta\big{)}+\operatorname{Vol}\big{(}[B_{s(\Omega,t)} \setminus\Theta]\setminus[B_{s(\Omega,t)}\setminus\Omega]\big{)}\] \[\leq\operatorname{Vol}\big{(}[B_{s(\Theta,r)}\setminus B_{s( \Omega,t)}]\setminus\Theta\big{)}+\operatorname{Vol}(\Omega\setminus\Theta)\] \[<\operatorname{Vol}\big{(}[B_{s(\Theta,r)}\setminus B_{s( \Omega,t)}]\setminus\Theta\big{)}+\varepsilon.\]
Thus to prove the claim, it remains to show that \(\operatorname{Vol}\big{(}[B_{s(\Theta,r)}\setminus B_{s(\Omega,t)}]\setminus \Theta\big{)}\) is small. Here we must use the assumption that \(t\) is close to \(r\). Notice that
\[r(\mathfrak{h}-\operatorname{Vol}(\Theta)) =\operatorname{Vol}(V)=\operatorname{Vol}\big{(}[B_{s(\Theta,r)} \setminus B_{s(\Omega,t)}]\setminus\Theta\big{)}+\operatorname{Vol}(B_{s( \Omega,t)}\setminus\Theta),\] \[t(\mathfrak{h}-\operatorname{Vol}(\Omega))=\operatorname{Vol}( U)=\operatorname{Vol}(B_{s(\Omega,t)}\setminus\Omega).\]
Now observe that
\[|r(\mathfrak{h}-\operatorname{Vol}(\Theta))-t(\mathfrak{h}- \operatorname{Vol}(\Omega))| \leq|t-r|\mathfrak{h}+t|\operatorname{Vol}(\Omega)- \operatorname{Vol}(\Theta)|+|t-r|\operatorname{Vol}(\Theta)\] \[\leq 2\eta\mathfrak{h}+\varepsilon.\]
Also we have
\[|\operatorname{Vol}(B_{s(\Omega,t)}\setminus\Theta)-\operatorname{ Vol}(B_{s(\Omega,t)}\setminus\Omega)| \leq\operatorname{Vol}\big{(}[B_{s(\Omega,t)}\setminus\Theta]\, \Delta[B_{s(\Omega,t)}\setminus\Omega]\big{)}\] \[\leq\operatorname{Vol}(\Theta\setminus\Omega)+\operatorname{ Vol}(\Omega\setminus\Theta)<\varepsilon.\]
This implies that
\[\operatorname{Vol}\big{(}[B_{s(\Theta,r)}\setminus B_{s(\Omega,t)}]\setminus \Theta\big{)}\leq 2\eta\mathfrak{h}+2\varepsilon,\]
which completes the proof of the claim.
**Proposition 5**.: _The space \(\mathcal{C}(M)\) deformation retracts to \(\mathcal{C}_{\mathfrak{h}}(M)\), and the space \(\mathcal{B}(M,\mathbb{Z}_{2})\) deformation retracts to \(\mathcal{H}(M,\mathbb{Z}_{2})\)._
Proof.: Consider the deformation retraction \(\phi\colon\mathcal{C}_{\leq\mathfrak{h}}(M)\times[0,1]\to\mathcal{C}_{\leq \mathfrak{h}}(M)\) from the previous proposition. We can extend \(\phi\) to an odd map \(\psi\colon\mathcal{C}(M)\times[0,1]\to\mathcal{C}(M)\) by the formula
\[\psi(\Omega,t)=\begin{cases}\phi(\Omega,t),&\text{if }\operatorname{Vol}(\Omega) \leq\mathfrak{h}\\ M\setminus\phi(M\setminus\Omega,t),&\text{if }\operatorname{Vol}(\Omega)\geq \mathfrak{h}.\end{cases}\]
Then \(\psi\) is a deformation retraction of \(\mathcal{C}(M)\) onto \(\mathcal{C}_{\mathfrak{h}}(M)\). Moreover, since \(\psi\) is odd, this descends to a map \(\theta\colon\mathcal{B}(M,\mathbb{Z}_{2})\times[0,1]\to\mathcal{B}(M,\mathbb{Z }_{2})\). This is the required deformation retraction of \(\mathcal{B}(M,\mathbb{Z}_{2})\) onto \(\mathcal{H}(M,\mathbb{Z}_{2})\).
**Proposition 6**.: _Let \(K\) be the maximal area of a level set of the Morse function \(f\) used in the proof of Proposition 4. Let \(\theta\) be the deformation retraction from Proposition 5. Then \(\operatorname{Area}(\theta(T,t))\leq\operatorname{Area}(T)+K\) for all \(T\in\mathcal{B}(M,\mathbb{Z}_{2})\) and all \(t\in[0,1]\)._
Proof.: Fix some \(T\in\mathcal{B}(M,\mathbb{Z}_{2})\) and some \(t\in[0,1]\). Choose a set \(\Omega\in\mathcal{C}_{\leq\mathfrak{h}}(M)\) such that \(\partial\Omega=T\). Let \(\phi\) be the deformation retraction from Proposition 4. Then
\[\theta(T,t)=\partial\phi(\Omega,t)=\partial(\Omega\cup B_{s(\Omega,t)}).\]
Note that \(\partial(\Omega\cup B_{s(\Omega,t)})\subset\partial\Omega\cup\partial B_{s(\Omega,t)}\) and therefore \(\operatorname{Area}(\theta(T,t))\leq\operatorname{Area}(\partial\Omega)+K=\operatorname{Area}(T)+K\), as needed.
The homotopy groups of the cycle spaces were originally computed by Almgren [1]. Later, Marques and Neves [18] gave a simplified proof in the case of codimension 1 cycles.
**Theorem 7** (Marques-Neves).: _The map \(\partial\colon\mathcal{C}(M)\to\mathcal{B}(M,\mathbb{Z}_{2})\) is a double cover. The space \(\mathcal{C}(M)\) is contractible, and \(\mathcal{B}(M,\mathbb{Z}_{2})\) is weakly homotopy equivalent to \(\mathbb{R}\mathrm{P}^{\infty}\)._
Combined with the previous propositions, this yields the following corollary.
**Corollary 8**.: _The map \(\partial\colon\mathcal{C}_{\mathfrak{h}}(M)\to\mathcal{H}(M,\mathbb{Z}_{2})\) is a double cover. The space \(\mathcal{C}_{\mathfrak{h}}(M)\) is contractible and \(\mathcal{H}(M,\mathbb{Z}_{2})\) is weakly homotopy equivalent to \(\mathbb{R}\mathrm{P}^{\infty}\). The inclusion map \(\mathcal{H}(M,\mathbb{Z}_{2})\to\mathcal{B}(M,\mathbb{Z}_{2})\) is a homotopy equivalence._
We now recall the notion of sweepouts. Since \(\mathcal{B}(M,\mathbb{Z}_{2})\) is weakly homotopy equivalent to \(\mathbb{R}\mathrm{P}^{\infty}\), it follows that the cohomology ring of \(\mathcal{B}(M,\mathbb{Z}_{2})\) with \(\mathbb{Z}_{2}\) coefficients is \(\mathbb{Z}_{2}[\lambda]\), where the generator \(\lambda\) is of degree 1. Let \(X\) be a cubical complex.
**Definition 9**.: A flat continuous map \(\Phi\colon X\to\mathcal{B}(M,\mathbb{Z}_{2})\) is called a \(p\)-sweepout if \(\Phi^{*}\lambda^{p}\neq 0\) in \(H^{p}(X,\mathbb{Z}_{2})\).
**Definition 10**.: A map \(\Phi\colon X\to\mathcal{Z}(M,\mathbb{Z}_{2})\) is said to have _no concentration of mass_ provided
\[\lim_{r\to 0}\left[\sup_{q\in M}\sup_{x\in X}\operatorname{Area}(\Phi(x)\llcorner B(q,r))\right]=0.\]
**Definition 11**.: Let \(\mathcal{P}_{p}(M)\) denote the collection of all \(p\)-sweepouts of \(M\) with no concentration of mass. Note that different \(p\)-sweepouts may have different domains.
**Definition 12**.: The \(p\)-width of \(M\) is
\[\omega_{p}=\inf_{\Phi\in\mathcal{P}_{p}(M)}\left[\sup_{x\in\operatorname{dom}( \Phi)}\operatorname{Area}(\Phi(x))\right].\]
**Remark 13**.: In [17], the authors state that the cohomology ring of \(\mathcal{Z}(M,\mathbb{Z}_{2})\) is isomorphic to \(\mathbb{Z}_{2}[\lambda]\). Then they define a \(p\)-sweepout as a map \(\Phi\colon X\to\mathcal{Z}(M,\mathbb{Z}_{2})\) such that \(\Phi^{*}\lambda^{p}\neq 0\). However, the cohomology ring of \(\mathcal{Z}(M,\mathbb{Z}_{2})\) is actually \(\oplus_{i}\mathbb{Z}_{2}[\lambda_{i}]\) where the direct sum is taken over the connected components of \(\mathcal{Z}(M,\mathbb{Z}_{2})\). These connected components are in bijection with homology classes in \(H_{n}(M,\mathbb{Z}_{2})\). Given this, there are several possible ways to define a \(p\)-sweepout. The simplest, which we shall adopt, is to replace the space \(\mathcal{Z}(M,\mathbb{Z}_{2})\) with \(\mathcal{B}(M,\mathbb{Z}_{2})\) as in Definition 9 so that the cohomology ring is indeed \(\mathbb{Z}_{2}[\lambda]\). Alternatively, one could define a \(p\)-sweepout as a map \(\Phi\colon X\to\mathcal{Z}(M,\mathbb{Z}_{2})\) such that \(\Phi^{*}\lambda^{p}_{i}\neq 0\) for some \(i\). In either case, it is straightforward to see that one still obtains a Weyl law for the resulting \(p\)-widths.
We can now introduce the central object of the paper. By Corollary 8, the cohomology ring of \(\mathcal{H}(M,\mathbb{Z}_{2})\) with \(\mathbb{Z}_{2}\) coefficients is also \(\mathbb{Z}_{2}[\lambda]\). Again let \(X\) be a cubical complex.
**Definition 14**.: A flat continuous map \(\Phi\colon X\to\mathcal{H}(M,\mathbb{Z}_{2})\) is called a half-volume \(p\)-sweepout if \(\Phi^{*}\lambda^{p}\neq 0\) in \(H^{p}(X,\mathbb{Z}_{2})\).
**Definition 15**.: Let \(\mathcal{Q}_{p}(M)\) denote the collection of all half-volume \(p\)-sweepouts of \(M\) with no concentration of mass.
**Definition 16**.: The half-volume \(p\)-width of \(M\) is
\[\tilde{\omega}_{p}=\inf_{\Phi\in\mathcal{Q}_{p}(M)}\left[\sup_{x\in\mathrm{dom }(\Phi)}\mathrm{Area}(\Phi(x))\right].\]
We will call the sequence \(\{\tilde{\omega}_{p}\}_{p\in\mathbb{N}}\) the half-volume spectrum of \(M\).
Liokumovich, Marques, and Neves [17] showed that the \(p\)-widths of \(M\) satisfy a Weyl law.
**Theorem 17** (Liokumovich, Marques, Neves).: _There is a universal constant \(a_{n}\) such that \(\omega_{p}\sim a_{n}\operatorname{Vol}(M)^{n/(n+1)}p^{1/(n+1)}\) as \(p\to\infty\)._
Next, we will show that the half-volume spectrum also satisfies a Weyl law. It is possible to prove this directly. However, this is not the approach we will take. Rather, we will show that the Weyl law for the half-volume spectrum follows from Theorem 17, together with the fact that every \(p\)-sweepout is homotopic to a \(p\)-sweepout by half-volume cycles.
**Proposition 18**.: _The half-volume spectrum satisfies \(\omega_{p}\leq\tilde{\omega}_{p}\) for all \(p\in\mathbb{N}\)._
Proof.: Notice that any half-volume \(p\)-sweepout with no concentration of mass automatically belongs to \(\mathcal{P}_{p}(M)\). Therefore, the proposition follows immediately from the definitions of \(\omega_{p}\) and \(\tilde{\omega}_{p}\).
**Proposition 19**.: _There is a constant \(K\) depending only on \(M\) such that \(\tilde{\omega}_{p}\leq\omega_{p}+K+1\) for all \(p\in\mathbb{N}\)._
Proof.: Choose a \(p\)-sweepout \(\Phi\colon X\to\mathcal{B}(M,\mathbb{Z}_{2})\) in \(\mathcal{P}_{p}(M)\) with
\[\sup_{x\in X}\operatorname{Area}(\Phi(x))\leq\omega_{p}+1.\]
Let \(\theta\colon\mathcal{B}(M,\mathbb{Z}_{2})\times[0,1]\to\mathcal{B}(M,\mathbb{Z} _{2})\) be the deformation retraction constructed in Proposition 5. By Proposition 6, there is a constant \(K\) such that
\[\operatorname{Area}(\theta(T,t))\leq\operatorname{Area}(T)+K\]
for all \(T\in\mathcal{B}(M,\mathbb{Z}_{2})\) and all \(t\in[0,1]\). Therefore, the map \(\Psi\colon X\to\mathcal{H}(M,\mathbb{Z}_{2})\) given by \(\Psi(x)=\theta(\Phi(x),1)\) is a half-volume \(p\)-sweepout with
\[\sup_{x\in X}\operatorname{Area}(\Psi(x))\leq\sup_{x\in X}\operatorname{Area }(\Phi(x))+K.\]
Moreover, it is straightforward to check that \(\Psi\) has no concentration of mass. This proves that \(\tilde{\omega}_{p}\leq\omega_{p}+K+1\).
We are now able to prove Theorem 1.
**Theorem 1**.: _The Weyl law holds for the half-volume spectrum. In other words, we have \(\tilde{\omega}_{p}\sim a_{n}\operatorname{Vol}(M)^{n/(n+1)}p^{1/(n+1)}\) as \(p\to\infty\)._
Proof.: This follows from Proposition 18 and Proposition 19. Indeed, we have
\[\omega_{p}\leq\tilde{\omega}_{p}\leq\omega_{p}+K+1.\]
Theorem 17 implies that
\[\lim_{p\to\infty}\frac{\omega_{p}}{a_{n}\operatorname{Vol}(M)^{n/(n+1)}p^{1/( n+1)}}=1\quad\text{and}\quad\lim_{p\to\infty}\frac{\omega_{p}+K+1}{a_{n} \operatorname{Vol}(M)^{n/(n+1)}p^{1/(n+1)}}=1,\]
and it follows that
\[\lim_{p\to\infty}\frac{\tilde{\omega}_{p}}{a_{n}\operatorname{Vol}(M)^{n/(n+ 1)}p^{1/(n+1)}}=1\]
as well.
## 3. The Phase Transition Half-Volume Spectrum
There is also an analogous half-volume spectrum in the Allen-Cahn setting. Let \(W\colon\mathbb{R}\to\mathbb{R}\) be an even double-well potential. This means that
1. \(W\) is smooth and non-negative,
2. \(W(x)=W(-x)\) for all \(x\in\mathbb{R}\),
3. \(W\) has non-degenerate minima \(W(\pm 1)=0\),
4. \(W\) has a non-degenerate maximum \(W(0)>0\),
5. \(W\) is increasing on \((-1,0)\) and \((1,\infty)\) and decreasing on \((0,1)\) and \((-\infty,-1)\),
6. there are constants \(\kappa>0\) and \(\alpha\in(0,1)\) such that \(W^{\prime\prime}(x)\geq\kappa\) for all \(|x|\geq\alpha\).
Define the constant
\[\sigma=\int_{-1}^{1}\sqrt{W(s)/2}\,ds.\]
Let \(u\colon M\to\mathbb{R}\) be a \(W^{1,2}\) function. For \(\varepsilon>0\), define the Allen-Cahn energy
\[E_{\varepsilon}(u)=\int_{M}\frac{\varepsilon}{2}|\nabla u|^{2}+\frac{W(u)}{ \varepsilon}.\]
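As an illustration (a standard one-dimensional computation, not used in the sequel), take the model potential \(W(s)=\frac{1}{4}(1-s^{2})^{2}\), which satisfies (i)-(vi). On \(\mathbb{R}\), the profile \(u(x)=\tanh(x/(\sqrt{2}\,\varepsilon))\) solves \(\varepsilon^{2}u^{\prime\prime}=W^{\prime}(u)\) and satisfies the equipartition identity \(\frac{\varepsilon}{2}|u^{\prime}|^{2}=\frac{W(u)}{\varepsilon}\), so its total energy is
\[\int_{\mathbb{R}}\frac{\varepsilon}{2}|u^{\prime}|^{2}+\frac{W(u)}{\varepsilon}\,dx=\int_{-1}^{1}\sqrt{2W(s)}\,ds=2\sigma=\frac{2\sqrt{2}}{3}.\]
This is why an interface of area \(A\) carries Allen-Cahn energy roughly \(2\sigma A\), and it explains the normalization \(\frac{1}{2\sigma}\) in the min-max values defined below.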
In [9], Gaspar and Guaraco define a phase-transition spectrum associated to \(M\) via the Allen-Cahn energy.
In order to state the definition of the spectrum, we shall need some further background. A paracompact topological space \(X\) is called a \(\mathbb{Z}_{2}\)-space if it admits a free \(\mathbb{Z}_{2}\)-action. Given such a space, there is always a quotient space \(T=X/\mathbb{Z}_{2}\) and the natural map \(X\to T\) is a principal \(\mathbb{Z}_{2}\)-bundle. Any such bundle arises as a pullback of the universal bundle \(S^{\infty}\to\mathbb{R}\mathrm{P}^{\infty}\). More precisely, there is a classifying map \(f\colon T\to\mathbb{R}\mathrm{P}^{\infty}\) such that \(X\to T\) is the pullback of \(S^{\infty}\to\mathbb{R}\mathrm{P}^{\infty}\) via \(f\). The Alexander-Spanier cohomology ring of \(\mathbb{R}\mathrm{P}^{\infty}\) with \(\mathbb{Z}_{2}\) coefficients is \(\mathbb{Z}_{2}[\mu]\) where the generator \(\mu\) is in degree one. The map \(f\) is unique up to homotopy, and therefore the cohomology classes \(f^{*}\mu^{p}\) are well-defined in the Alexander-Spanier cohomology ring \(H^{*}(T,\mathbb{Z}_{2})\). The \(\mathbb{Z}_{2}\)-index of \(X\) is defined to be the largest \(p\) such that \(f^{*}\mu^{p-1}\neq 0\) in \(H^{*}(T,\mathbb{Z}_{2})\). A subspace \(A\) of \(X\) is called invariant if it is closed under the \(\mathbb{Z}_{2}\)-action.
The \(\mathbb{Z}_{2}\)-index enjoys the following useful properties. See Fadell and Rabinowitz [8] for more details.
1. (Monotonicity) If \(X_{1}\) and \(X_{2}\) are \(\mathbb{Z}_{2}\)-spaces and there is a continuous equivariant map \(X_{1}\to X_{2}\) then \(\operatorname{ind}_{\mathbb{Z}_{2}}(X_{1})\leq\operatorname{ind}_{\mathbb{Z}_ {2}}(X_{2})\).
2. (Subadditivity) If \(X\) is a \(\mathbb{Z}_{2}\)-space and \(A_{1}\) and \(A_{2}\) are closed, invariant subsets with \(A_{1}\cup A_{2}=X\) then \(\operatorname{ind}_{\mathbb{Z}_{2}}(X)\leq\operatorname{ind}_{\mathbb{Z}_{2}} (A_{1})+\operatorname{ind}_{\mathbb{Z}_{2}}(A_{2})\).
3. (Continuity) If \(X\) is a \(\mathbb{Z}_{2}\)-space and \(A\) is a closed, invariant subset of \(X\) then there is an invariant neighborhood \(V\) of \(A\) in \(X\) such that \(\operatorname{ind}_{\mathbb{Z}_{2}}(A)=\operatorname{ind}_{\mathbb{Z}_{2}}( \overline{V})\).
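As a standard example (included only for orientation): the antipodal action on the sphere gives
\[\operatorname{ind}_{\mathbb{Z}_{2}}(S^{n})=n+1,\]
since the classifying map \(\mathbb{R}\mathrm{P}^{n}\to\mathbb{R}\mathrm{P}^{\infty}\) pulls \(\mu^{k}\) back nontrivially precisely for \(k\leq n\). In particular, \(\operatorname{ind}_{\mathbb{Z}_{2}}(S^{0})=1\), which is used in the proof of Proposition 24 below.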
The space \(W^{1,2}(M)\setminus\{0\}\) is paracompact since it is a metric space. Moreover, it admits a natural \(\mathbb{Z}_{2}\) action \(u\mapsto-u\). Note that \(E_{\varepsilon}\) respects this action since \(E_{\varepsilon}(u)=E_{\varepsilon}(-u)\). This uses the fact that \(W\) is even. A set \(A\subset W^{1,2}(M)\setminus\{0\}\) is called invariant provided \(u\in A\) if and only if \(-u\in A\). The \(\mathbb{Z}_{2}\)-action on \(W^{1,2}(M)\setminus\{0\}\) descends to any such \(A\). Define the families
\[\mathcal{F}_{p}=\{A\subset W^{1,2}(M)\setminus\{0\}:\text{ $A$ is compact and invariant with }\operatorname{ind}_{\mathbb{Z}_{2}}(A)\geq p+1\}.\]
Gaspar and Guaraco define the min-max values
\[c(\varepsilon,p)=\frac{1}{2\sigma}\inf_{A\in\mathcal{F}_{p}}\left[\sup_{u\in A }E_{\varepsilon}(u)\right].\]
Then they set \(c(p)=\liminf_{\varepsilon\to 0}c(\varepsilon,p)\). The sequence \(\{c(p)\}_{p\in\mathbb{N}}\) is the phase transition spectrum of \(M\).
Gaspar and Guaraco [10] showed that the Weyl law also holds for the phase transition spectrum.
**Theorem 20** (Gaspar and Guaraco).: _There is a universal constant \(\tau_{n}\) such that \(c(p)\sim\tau_{n}\operatorname{Vol}(M)^{n/(n+1)}p^{1/(n+1)}\) as \(p\to\infty\)._
Dey [7] proved that \(\omega_{p}=c(p)\) for all \(p\in\mathbb{N}\). In particular, this implies that the constant \(\tau_{n}\) is equal to the constant \(a_{n}\).
**Remark 21**.: Gaspar and Guaraco do not include the normalization constant \(\frac{1}{2\sigma}\) in the definition of \(c(\varepsilon,p)\) and \(c(p)\). We have chosen to include it so that one has \(\omega_{p}=c(p)\).
It is also possible to define a half-volume spectrum in the phase transition setting. Define
\[Y=\{u\in W^{1,2}(M):\int_{M}u=0\}.\]
Note that \(Y\) is a closed subspace of \(W^{1,2}(M)\) and so \(Y\) is also a Hilbert space. We can run essentially the same construction using \(Y\) in place of \(W^{1,2}(M)\). For each \(p\in\mathbb{N}\), define
\[\mathcal{G}_{p}=\{A\subset Y\setminus\{0\}:\,A\text{ is compact and invariant with }\operatorname{ ind}_{\mathbb{Z}_{2}}(A)\geq p+1\},\]
and then set
\[\tilde{c}(\varepsilon,p)=\frac{1}{2\sigma}\inf_{A\in\mathcal{G}_{p}}\left[ \sup_{u\in A}E_{\varepsilon}(u)\right].\]
Taking the limit as \(\varepsilon\to 0\) gives the phase-transition half volume spectrum.
**Definition 22**.: For each \(p\in\mathbb{N}\), let \(\tilde{c}(p)=\liminf_{\varepsilon\to 0}\tilde{c}(\varepsilon,p)\). The phase transition half volume spectrum of \(M\) is the sequence \(\{\tilde{c}(p)\}_{p\in\mathbb{N}}\).
**Proposition 23**.: _The phase transition half-volume spectrum satisfies \(c(p)\leq\tilde{c}(p)\) for all \(p\in\mathbb{N}\)._
Proof.: Note that \(\mathcal{G}_{p}\subset\mathcal{F}_{p}\) for every \(p\in\mathbb{N}\). Therefore, for every \(\varepsilon>0\), it holds that \(c(\varepsilon,p)\leq\tilde{c}(\varepsilon,p)\). The result then follows by sending \(\varepsilon\to 0\).
**Proposition 24**.: _The phase transition half-volume spectrum satisfies \(\tilde{c}(p)\leq c(p+1)\) for all \(p\in\mathbb{N}\)._
Proof.: Fix an \(\varepsilon>0\). Select a set \(A\in\mathcal{F}_{p+1}\) with
\[\sup_{u\in A}E_{\varepsilon}(u)\leq 2\sigma\left[c(\varepsilon,p+1)+ \varepsilon\right].\]
Define the set \(B=\{u\in A:\int_{M}u=0\}\) and note that \(B\) is closed and invariant. We claim that \(\operatorname{ind}_{\mathbb{Z}_{2}}(B)\geq p+1\) so that \(B\in\mathcal{G}_{p}\). Given this, we obtain that \(\tilde{c}(\varepsilon,p)\leq c(\varepsilon,p+1)+\varepsilon\), and the result follows upon sending \(\varepsilon\to 0\).
It remains to prove the claim. By the continuity of the index, there is a neighborhood \(V\) of \(B\) in \(A\) such that \(\operatorname{ind}_{\mathbb{Z}_{2}}(B)=\operatorname{ind}_{\mathbb{Z}_{2}}( \overline{V})\). There is an \(\eta>0\) such that
\[\{u\in A:-\eta<\int_{M}u<\eta\}\subset V.\]
Indeed, if not, then there is a sequence \(u_{k}\) in \(A\setminus V\) with \(\int_{M}u_{k}\to 0\). Since \(A\setminus V\) is compact, we can find a subsequence \(u_{k_{j}}\) that converges to a limit \(u\) in \(A\setminus V\). But \(u\) satisfies \(\int_{M}u=0\) and therefore \(u\in B\subset V\) and this is a contradiction. Therefore, such an \(\eta\) exists.
Let \(K=\{u\in A:\,\left|\int_{M}u\right|\geq\frac{\eta}{2}\}\). Then \(K\) is a closed invariant subset of \(A\) and \(K\cup\overline{V}=A\). Define a map \(K\to S^{0}\) by sending \(u\) to \(1\) if \(\int_{M}u>0\) and sending \(u\) to \(-1\) if \(\int_{M}u<0\). This map is continuous and equivariant and so by the monotonicity of the index we have
\[\operatorname{ind}_{\mathbb{Z}_{2}}(K)\leq\operatorname{ind}_{\mathbb{Z}_{2}}( S^{0})=1.\]
Hence by the subadditivity of the index, we get
\[p+2\leq\operatorname{ind}_{\mathbb{Z}_{2}}(A)\leq\operatorname{ind}_{\mathbb{Z }_{2}}(K)+\operatorname{ind}_{\mathbb{Z}_{2}}(\overline{V})\leq\operatorname{ ind}_{\mathbb{Z}_{2}}(\overline{V})+1.\]
This implies that \(\operatorname{ind}_{\mathbb{Z}_{2}}(B)=\operatorname{ind}_{\mathbb{Z}_{2}}( \overline{V})\geq p+1\), and so \(B\in\mathcal{G}_{p}\) as needed.
We can now prove Theorem 2.
**Theorem 2**.: _The phase transition half-volume spectrum satisfies the Weyl law. In other words, we have \(\tilde{c}(p)\sim\tau_{n}\operatorname{Vol}(M)^{n/(n+1)}p^{1/(n+1)}\) as \(p\to\infty\)._
Proof.: By Propositions 23 and 24 we have
\[c(p)\leq\tilde{c}(p)\leq c(p+1).\]
By the Weyl law for the phase-transition spectrum, we have
\[\lim_{p\to\infty}\frac{c(p)}{\tau_{n}\operatorname{Vol}(M)^{n/(n+1)}p^{1/(n+1 )}}=1\quad\text{ and }\quad\lim_{p\to\infty}\frac{c(p+1)}{\tau_{n}\operatorname{Vol}(M)^{n/(n+1)}p ^{1/(n+1)}}=1\]
and therefore
\[\lim_{p\to\infty}\frac{\tilde{c}(p)}{\tau_{n}\operatorname{Vol}(M)^{n/(n+1)}p ^{1/(n+1)}}=1\]
as well.
## 4. Surfaces Associated to the Half-Volume Spectrum
In this section, we use the Allen-Cahn min-max theory to construct surfaces associated to the phase transition half-volume spectrum. The goal is to prove Theorem 3. Fix a closed Riemannian manifold \((M^{n+1},g)\) with \(3\leq n+1\leq 7\). Fix a number \(p\in\mathbb{N}\). In this section, we require the following additional hypothesis on the double-well potential \(W\).
* There are constants \(0<C_{1}<C_{2}\) and \(\beta>1\) and \(2<q<\frac{11}{5}\) such that \[C_{1}|x|^{q}\leq W(x)\leq C_{2}|x|^{q}\quad\text{and}\quad C_{1}|x|^{q-1}\leq |W^{\prime}(x)|\leq C_{2}|x|^{q-1}\] for all \(|x|\geq\beta\).
The first step of the proof is to construct, for each small enough \(\varepsilon>0\), a critical point \(u_{\varepsilon}\) of \(E_{\varepsilon}\) subject to the volume constraint
\[\int_{M}u_{\varepsilon}=0.\]
Given such a \(u_{\varepsilon}\), there is a Lagrange multiplier \(\lambda_{\varepsilon}\in\mathbb{R}\) such that \(u_{\varepsilon}\) is a critical point of
\[F_{\varepsilon,\lambda_{\varepsilon}}(v)=E_{\varepsilon}(v)+\lambda_{ \varepsilon}\int_{M}v\]
on all of \(W^{1,2}\). The construction of \(u_{\varepsilon}\) is similar to that of Gaspar and Guaraco [9] in the unconstrained case.
**Remark 25**.: There are two purposes for imposing the growth condition (vii). The first is that it allows us to verify the Palais-Smale condition with the volume constraint. In the unconstrained case, one has
\[E_{\varepsilon}(\max(\min(u,1),-1))\leq E_{\varepsilon}(u)\]
and so by a truncation argument it is enough to verify the Palais-Smale condition along Palais-Smale sequences which are bounded in \(L^{\infty}\). See [9] for more details. However, truncation may not preserve the volume constraint. In the volume constrained case, we instead rely on (vii) to show that \(W^{\prime}(u)\in L^{2}\) whenever \(u\in W^{1,2}\). The second purpose is to get uniform \(L^{\infty}\) bounds on critical points of \(F_{\varepsilon,\lambda}\). Given a sequence \(\varepsilon_{k}\to 0\) and critical points \(u_{\varepsilon_{k}}\) of \(F_{\varepsilon_{k},\lambda_{\varepsilon_{k}}}\) with \(E_{\varepsilon_{k}}(u_{\varepsilon_{k}})\leq C\), the growth condition (vii) implies that \(\|u_{\varepsilon_{k}}\|_{L^{\infty}}\leq\beta\) provided \(k\) is large enough.
We recall (see Proposition 4.4 in [13]) that the first variation of \(E_{\varepsilon}\) is given by
\[DE_{\varepsilon}(u)(v)=\int_{M}\frac{\varepsilon}{2}\nabla u\cdot\nabla v+ \frac{W^{\prime}(u)}{\varepsilon}v.\]
Fix a number \(\varepsilon>0\). A sequence \(A_{k}\) in \(\mathcal{G}_{p}\) is called a critical sequence if
\[\lim_{k\to\infty}\left[\sup_{u\in A_{k}}E_{\varepsilon}(u)\right]=2\sigma \tilde{c}(\varepsilon,p).\]
A sequence \(u_{k}\in A_{k}\) is called a min-max sequence provided \(\lim_{k\to\infty}E_{\varepsilon}(u_{k})=2\sigma\tilde{c}(\varepsilon,p)\).
In the unconstrained case, it is not necessarily true that every min-max sequence is bounded in \(W^{1,2}\). However, one can obtain the existence of a bounded min-max sequence via a truncation argument. See, for example, the remarks before Proposition 4.5 in [13]. We cannot employ truncation because it doesn't preserve the volume constraint. Fortunately, in the volume constrained case, every min-max sequence is automatically bounded in \(W^{1,2}\).
**Proposition 26**.: _Any min-max sequence \(u_{k}\) is uniformly bounded in \(W^{1,2}(M)\)._
Proof.: Assume that \(u\in Y\) satisfies \(E_{\varepsilon}(u)\leq K\). Since \(W\geq 0\), it follows immediately that
\[\int_{M}|\nabla u|^{2}\leq\frac{2K}{\varepsilon}.\]
Since \(u\) has average \(0\), the Poincaré inequality implies that \(\|u\|_{W^{1,2}}\leq CK/\varepsilon\). This proves the result.
**Proposition 27**.: _Assume that \(u_{k}\) is a sequence uniformly bounded in \(W^{1,2}\). Then \(W^{\prime}(u_{k})\) is uniformly bounded in \(L^{2}\)._
Proof.: By assumption the sequence \(u_{k}\) is uniformly bounded in \(W^{1,2}\). As \(3\leq n+1\leq 7\), the Sobolev embedding theorem implies that \(u_{k}\) is uniformly bounded in \(L^{12/5}\). Now \(|W^{\prime}(u_{k})|\leq C|u_{k}|^{q-1}\leq C|u_{k}|^{6/5}\) whenever \(|u_{k}|\geq\beta\). Therefore
\[\int_{M}W^{\prime}(u_{k})^{2} =\int_{|u_{k}|\leq\beta}W^{\prime}(u_{k})^{2}+\int_{|u_{k}|> \beta}W^{\prime}(u_{k})^{2}\] \[\leq C\operatorname{Vol}(M)+C\int_{M}|u_{k}|^{12/5},\]
and it follows that \(W^{\prime}(u_{k})\) is uniformly bounded in \(L^{2}\).
The functional \(E_{\varepsilon}|_{Y}\) satisfies the Palais-Smale condition. See [13] Proposition 4.4 for the proof without a volume constraint.
**Proposition 28**.: _The functional \(E_{\varepsilon}|_{Y}\) satisfies the Palais-Smale condition. More precisely, assume that \(u_{k}\) is a bounded sequence in \(Y\) and that \(\|DE_{\varepsilon}|_{Y}(u_{k})\|\to 0\). Then a subsequence of \(u_{k}\) converges strongly to a limit \(u\in Y\)._
Proof.: Assume that \(u_{k}\) is a bounded sequence in \(Y\) such that \(\|DE_{\varepsilon}|_{Y}(u_{k})\|\to 0\). We need to show that some subsequence of \(u_{k}\) converges strongly to a limit \(u\in Y\). Note that \(Y\) is closed and convex in \(W^{1,2}(M)\) and so \(Y\) is weakly closed. Thus, passing to a subsequence, we can assume that \(u_{k}\) converges weakly in \(W^{1,2}\) and strongly in \(L^{12/5}\) to a point \(u\in Y\).
Observe that
\[DE_{\varepsilon}|_{Y}(u)(u_{k}-u)=\int_{M}\varepsilon\nabla u\cdot\nabla(u_{k} -u)+\int_{M}\frac{W^{\prime}(u)}{\varepsilon}(u_{k}-u).\]
The first term on the right hand side goes to \(0\) by the weak convergence \(u_{k}\rightharpoonup u\). Note that \(W^{\prime}(u)\in L^{2}\) since \(u\in L^{12/5}\). Therefore the second term on the right hand side also goes to \(0\) since \(u_{k}\to u\) in \(L^{2}\). Thus we obtain
\[DE_{\varepsilon}|_{Y}(u)(u_{k}-u)\to 0,\quad\text{as $k\to\infty$}.\]
Also note that \(DE_{\varepsilon}|_{Y}(u_{k})(u_{k}-u)\to 0\) since \(\|DE_{\varepsilon}|_{Y}(u_{k})\|\to 0\) and \(u_{k}-u\) is uniformly bounded in \(W^{1,2}\). On the other hand,
\[DE_{\varepsilon}|_{Y}(u_{k})(u_{k}-u)=\int_{M}\varepsilon\nabla u_{k}\cdot \nabla(u_{k}-u)+\int_{M}\frac{W^{\prime}(u_{k})}{\varepsilon}(u_{k}-u).\]
The second term on the right hand side goes to \(0\) as \(W^{\prime}(u_{k})\) is uniformly bounded in \(L^{2}\) and \(u_{k}-u\to 0\) in \(L^{2}\).
Now observe that
\[DE_{\varepsilon}|_{Y}(u_{k})(u_{k}-u)-DE_{\varepsilon}|_{Y}(u)(u _{k}-u)\] \[\qquad=\int_{M}\varepsilon|\nabla u_{k}-\nabla u|^{2}+\int_{M} \frac{W^{\prime}(u_{k})}{\varepsilon}(u_{k}-u)-\int_{M}\frac{W^{\prime}(u)}{ \varepsilon}(u_{k}-u).\]
We have already seen that every term in this formula goes to \(0\) except \(\int_{M}\varepsilon|\nabla u_{k}-\nabla u|^{2}\), and therefore \(\int_{M}\varepsilon|\nabla u_{k}-\nabla u|^{2}\) goes to \(0\) as well. This proves that \(u_{k}\to u\) strongly in \(W^{1,2}\), as needed.
According to Gaspar and Guaraco [9], for each given \(p\), we have \(2\sigma c(\varepsilon,p+1)<E_{\varepsilon}(0)\) provided \(\varepsilon\) is small enough. Therefore, we also have \(2\sigma\tilde{c}(\varepsilon,p)<E_{\varepsilon}(0)\) provided \(\varepsilon\) is small enough. Hence, for \(\varepsilon\) small enough, any min-max sequence remains bounded away from \(0\). By the classical theory for functionals satisfying the Palais-Smale condition (see [23]), we get the following existence result for critical points of \(E_{\varepsilon}|_{Y}\). See Theorem 3.3 in [9] for the corresponding result in the unconstrained case.
**Proposition 29**.: _Fix \(p\in\mathbb{N}\). For all small enough \(\varepsilon\), there is a critical point \(u_{\varepsilon}\in Y\) of \(E_{\varepsilon}|_{Y}\) with \(E_{\varepsilon}(u_{\varepsilon})=\tilde{c}(\varepsilon,p)\). There is a number \(\lambda_{\varepsilon}\in\mathbb{R}\) such that \(u_{\varepsilon}\) is a critical point of_
\[v\mapsto F_{\varepsilon,\lambda_{\varepsilon}}(v)=E_{\varepsilon}(v)+\lambda_{ \varepsilon}\int_{M}v\]
on all of \(W^{1,2}\), and \(u_{\varepsilon}\) satisfies the PDE_
\[-\varepsilon\Delta u_{\varepsilon}+\frac{W^{\prime}(u_{\varepsilon})}{ \varepsilon}=\lambda_{\varepsilon}\]
_in the weak sense. Moreover, we have \(\int_{M}u_{\varepsilon}=0\). The index of \(u_{\varepsilon}\) as a critical point of \(E_{\varepsilon}|_{Y}\) is at most \(p\)._
Given the existence of \(u_{\varepsilon}\), the second step in the proof is to study the convergence of \(u_{\varepsilon}\) as \(\varepsilon\to 0\). Fortunately for us, Bellettini and Wickramasekera [3] have already studied the regularity of such limits. Let us recall the setup in [3]. For each \(\varepsilon>0\), suppose \(u_{\varepsilon}\) is a critical point of \(F_{\varepsilon,\lambda_{\varepsilon}}\) and that \(E_{\varepsilon}(u_{\varepsilon})\), \(\operatorname{ind}_{F_{\varepsilon,\lambda_{\varepsilon}}}(u_{\varepsilon})\), and \(\|u_{\varepsilon}\|_{L^{\infty}}\) are uniformly bounded. Choose a sequence \(\varepsilon_{j}\to 0\) and assume that \(\lambda_{\varepsilon_{j}}\to\lambda\). Passing to a subsequence if necessary, there exist a Radon measure \(\mu\) on \(M\) and a function \(u_{\infty}\in BV(M)\) such that
\[\frac{1}{2\sigma}\left(\frac{\varepsilon_{j}}{2}|\nabla u_{\varepsilon_{j}}|^ {2}+\frac{W(u_{\varepsilon_{j}})}{\varepsilon_{j}}\right)\rightharpoonup\mu\]
and \(u_{\varepsilon_{j}}\to u_{\infty}\) in \(L^{1}\). Moreover, \(u_{\infty}\) takes only the values \(\pm 1\). Hutchinson and Tonegawa [15] proved that there is an integral varifold \(V\) on \(M\) such that \(\|V\|=\mu\). The following is the special case of Theorem 4.1 in Bellettini and Wickramasekera [3] where the prescription functions are assumed to be constants and the ambient dimension is assumed to be between \(3\) and \(7\).
**Theorem 30** (See Theorem 4.1 in [3]).: _Let \((M^{n+1},g)\) be a closed Riemannian manifold with \(3\leq n+1\leq 7\). Assume \((u_{\varepsilon_{j}})\) is a sequence as above, and assume that \(\lambda>0\). Let \(\Omega=\operatorname{int}(\{x\in M:\,u_{\infty}(x)=1\})\). Then \(V=V_{0}+V_{\lambda}\) where_
1. \(V_{0}\) _is induced by a collection of smooth, disjoint minimal surfaces equipped with even multiplicities. Moreover,_ \(\operatorname{spt}(\|V_{0}\|)\subset M\setminus\Omega\)_._
2. _If_ \(\Omega=\emptyset\) _then_ \(V_{\lambda}=0\)_. If_ \(\Omega\neq\emptyset\) _then_ \(V_{\lambda}=|\partial^{\star}\Omega|\neq 0\)_, and moreover,_ \(V_{\lambda}\) _is induced by a smooth surface with constant mean curvature_ \(\lambda\) _whose mean curvature vector points into_ \(\Omega\)_._
_The minimal surfaces may have tangential intersection with the CMC surface. Likewise, the CMC surface may have tangential intersections with itself but it never crosses itself._
We can now complete the proof of Theorem 3.
**Theorem 3**.: _Let \((M^{n+1},g)\) be a closed Riemannian manifold with \(3\leq n+1\leq 7\). Fix a number \(p\in\mathbb{N}\). There are_
1. _a Caccioppoli set_ \(\Omega\subset M\) _with_ \(\operatorname{Vol}(\Omega)=\frac{1}{2}\operatorname{Vol}(M)\) _whose boundary is smooth and almost-embedded with constant mean curvature,_
2. _a (possibly empty) collection of smooth, disjoint minimal surfaces_ \(\Sigma_{1},\ldots,\Sigma_{k}\subset M\setminus\Omega\)_,_
3. _and positive integers_ \(\theta_{0}\in\mathbb{Z}\) _and_ \(\theta_{1},\ldots,\theta_{k}\in 2\mathbb{Z}\)__
_such that \(\tilde{c}(p)=\theta_{0}\operatorname{Area}(\partial\Omega)+\theta_{1} \operatorname{Area}(\Sigma_{1})+\ldots+\theta_{k}\operatorname{Area}(\Sigma_ {k}).\) Moreover, \(\theta_{0}=1\) unless \(\partial\Omega\) is also a minimal surface._
Proof.: Choose a sequence \(\varepsilon_{k}\to 0\) so that \(\tilde{c}(\varepsilon_{k},p)\to\tilde{c}(p)\). Let \(u_{\varepsilon_{k}}\) be the critical points of \(F_{\varepsilon_{k},\lambda_{\varepsilon_{k}}}\) constructed in Proposition 29. Then \(E_{\varepsilon_{k}}(u_{\varepsilon_{k}})\) and \(\operatorname{ind}_{F_{\varepsilon_{k},\lambda_{\varepsilon_{k}}}}(u_{ \varepsilon_{k}})\) are uniformly bounded. According to Hutchinson and Tonegawa [15] Section 6.1 and Lemma 3.4 in [4], the Lagrange multipliers \(\lambda_{\varepsilon_{k}}\) are also uniformly bounded and there is a constant \(K>0\) such that \(\|u_{\varepsilon_{k}}\|_{L^{\infty}}\leq K\) for all \(k\). For the interested reader, we include the details of the argument in the appendix. Applying Theorem 30 to the sequence \(u_{\varepsilon_{k}}\) now yields the result.
## 5. Appendix
The goal of the appendix is to prove the following proposition. We largely follow the sketch in [15] section 6.1, giving details as appropriate. The proof that the Lagrange multipliers are bounded depends on work of Chen [4].
**Proposition 31**.: _Assume that the potential \(W\) satisfies the growth condition (vii). Choose a sequence \(\varepsilon_{k}\to 0\) and let \(u_{\varepsilon_{k}}\) be a critical point of \(F_{\varepsilon_{k},\lambda_{\varepsilon_{k}}}\). Assume that \(E_{\varepsilon_{k}}(u_{\varepsilon_{k}})\) is uniformly bounded. Then the Lagrange multipliers \(\lambda_{\varepsilon_{k}}\) are uniformly bounded and \(\|u_{\varepsilon_{k}}\|_{L^{\infty}}\) is also uniformly bounded._
Proof.: The first step is to check that each \(u_{\varepsilon_{k}}\) is smooth. Recall that \(|W^{\prime}(u_{\varepsilon_{k}})|\leq C|u_{\varepsilon_{k}}|^{q-1}\) for \(|u_{\varepsilon_{k}}|\geq\beta\). For simplicity, we will give the argument assuming \(3\leq n+1\leq 5\). In this case, by the Sobolev embedding theorem, \(u_{\varepsilon_{k}}\) belongs to \(L^{10/3}\). It follows that \(W^{\prime}(u_{\varepsilon_{k}})\in L^{10/(3q-3)}\). Hence by elliptic regularity we have \(u_{\varepsilon_{k}}\in W^{2,q_{1}}\) for \(q_{1}=10/(3q-3)\). Note that
\[\frac{n+1}{q_{1}}\leq\frac{3}{2}(q-1)\leq\frac{18}{10}<2.\]
Thus by the Sobolev inequalities we obtain that \(u_{\varepsilon_{k}}\) is Hölder continuous. Standard elliptic regularity then implies that \(u_{\varepsilon_{k}}\) is smooth. The cases \(6\leq n+1\leq 7\) are handled similarly. One applies the Sobolev inequalities together with elliptic regularity several times. Each application improves the regularity of \(u_{\varepsilon_{k}}\) until eventually one obtains \(u_{\varepsilon_{k}}\in W^{2,q_{1}}\) with \(2>(n+1)/q_{1}\). This gives Hölder continuity of \(u_{\varepsilon_{k}}\), and standard elliptic regularity then implies that \(u_{\varepsilon_{k}}\) is smooth. Note that at this point we do not have uniform \(L^{\infty}\) estimates on \(u_{\varepsilon_{k}}\).
To prove that the Lagrange multipliers are bounded, we follow [4] Lemma 3.4. Note that [4] Lemma 3.4 is proved for domains in Euclidean space. Some additional difficulties arise in adapting the mollifier arguments used in [4] to the case of a closed manifold. We need to prove a sequence of lemmas. In what follows, \(C\) denotes a positive constant that is allowed to change from line to line.
**Lemma 32**.: _We have_
\[\int_{M}(|u_{\varepsilon_{k}}|-1)^{2}\to 0,\quad\text{as }k\to\infty.\]
Proof.: Define the sets
\[A_{1}=\{||u_{\varepsilon_{k}}|-1|\leq\varepsilon_{k}^{1/4}\},\]
\[A_{2}=\{\varepsilon_{k}^{1/4}\leq||u_{\varepsilon_{k}}|-1|\leq\beta-1\},\]
\[A_{3}=\{|u_{\varepsilon_{k}}|\geq\beta\}.\]
We will estimate the integral over the sets \(A_{1}\), \(A_{2}\), and \(A_{3}\) separately. For \(A_{1}\), we have
\[\int_{A_{1}}(|u_{\varepsilon_{k}}|-1)^{2}\leq\varepsilon_{k}^{1/2}\operatorname{ Vol}(M).\]
Regarding \(A_{2}\), property (iii) of \(W\) implies that there is a constant \(c>0\) independent of \(k\) such that \(|W(x)|\geq c\varepsilon_{k}^{1/2}\) whenever \(||x|-1|\geq\varepsilon_{k}^{1/4}\). Therefore the set \(A_{2}\) has measure at most \(\varepsilon_{k}^{1/2}c^{-1}E_{\varepsilon_{k}}(u_{\varepsilon_{k}})\) and so
\[\int_{A_{2}}(|u_{\varepsilon_{k}}|-1)^{2}\leq(\beta-1)^{2}c^{-1}\varepsilon_{ k}^{1/2}E_{\varepsilon_{k}}(u_{\varepsilon_{k}}).\]
It remains to estimate the integral over \(A_{3}\). Without loss of generality we can assume that \(\beta\geq 5\). Remember that \(C|W(x)|\geq|x|^{q}\) for \(|x|\geq\beta\) and so
\[(|u_{\varepsilon_{k}}|-1)^{2}\leq|u_{\varepsilon_{k}}|^{2}\leq|u_{\varepsilon _{k}}|^{q}\leq C|W(u_{\varepsilon_{k}})|\]
whenever \(|u_{\varepsilon_{k}}|\geq\beta\). Thus we have the estimate
\[\int_{A_{3}}(|u_{\varepsilon_{k}}|-1)^{2}\leq\int_{A_{3}}C|W(u_{\varepsilon_{k }})|\leq C\varepsilon_{k}E_{\varepsilon_{k}}(u_{\varepsilon_{k}}).\]
Combining these three estimates shows that
\[\int_{M}(|u_{\varepsilon_{k}}|-1)^{2}\leq C\varepsilon_{k}^{1/2}(1+E_{ \varepsilon_{k}}(u_{\varepsilon_{k}})).\]
The right hand side goes to \(0\) as \(k\to\infty\).
Define
\[\Phi(s)=\int_{0}^{s}\sqrt{W(t)/2}\,dt.\]
Let \(w_{\varepsilon_{k}}=\Phi\circ u_{\varepsilon_{k}}\). It is easy to check that \(\nabla w_{\varepsilon_{k}}\) is uniformly bounded in \(L^{1}\) (see [15]).
**Lemma 33**.: _There is a constant \(C>0\) such that \(|x-y|^{2}\leq C|\Phi(x)-\Phi(y)|\) for all \(x,y\in\mathbb{R}\)._
Proof.: We check a number of cases. First suppose that \(x,y\geq\beta\) and assume without loss of generality that \(x\geq y\). Then
\[|\Phi(x)-\Phi(y)| =\int_{y}^{x}\sqrt{W(s)/2}\,ds\geq\frac{1}{\sqrt{2}}\int_{y}^{x}s ^{q/2}\,ds\] \[\geq\frac{1}{\sqrt{2}}\int_{y}^{x}s\,ds=\frac{1}{2\sqrt{2}}(x^{2} -y^{2})\geq\frac{1}{2\sqrt{2}}|x-y|^{2},\]
where the last inequality uses the fact that \(x\geq y\geq 0\) and so \(x+y\geq x-y\). The same argument works in the case where \(x,y\leq-\beta\).
Now suppose that \(x\geq\beta\) and \(y\leq-\beta\). Let \(a=\int_{-\beta}^{\beta}\sqrt{W(s)/2}\,ds>0\). Then
\[|\Phi(x)-\Phi(y)| =\int_{\beta}^{x}\sqrt{W(s)/2}\,ds+a+\int_{y}^{-\beta}\sqrt{W(s)/ 2}\,ds\] \[\geq\frac{1}{\sqrt{2}}\int_{\beta}^{x}s\,ds+a+\frac{1}{\sqrt{2}} \int_{y}^{-\beta}(-s)\,ds\] \[\geq\frac{1}{2\sqrt{2}}(x^{2}-\beta^{2})+a+\frac{1}{2\sqrt{2}}( y^{2}-\beta^{2}).\]
We have
\[|x-y|^{2}\leq 2(x^{2}+y^{2})\leq 4\beta^{2}+2(x^{2}-\beta^{2})+2(y^{2}-\beta^{2} )\leq C|\Phi(x)-\Phi(y)|.\]
The same argument works if \(x\leq-\beta\) and \(y\geq\beta\).
It remains to handle the case when \(-\beta\leq x,y\leq\beta\). Assume for contradiction there are two sequences \(x_{j},y_{j}\in[-\beta,\beta]\) such that
\[|x_{j}-y_{j}|^{2}>j|\Phi(x_{j})-\Phi(y_{j})|. \tag{1}\]
Passing to a subsequence if necessary, we can assume that \(x_{j}\to x\in[-\beta,\beta]\) and \(y_{j}\to y\in[-\beta,\beta]\). It is clear from (1) that \(x=y\). Passing to the limit in
\[|x_{j}-y_{j}|\geq j\frac{|\Phi(x_{j})-\Phi(y_{j})|}{|x_{j}-y_{j}|}\]
we obtain that \(\Phi^{\prime}(x)=0\). Thus \(W(x)=0\) and so \(x=\pm 1\). Without loss of generality assume that \(x=1\). Note that there is a constant \(C>0\) such that \(W(s)\geq C|s-1|^{2}\) for \(s\) close to \(1\). Thus
\[|\Phi(x_{j})-\Phi(y_{j})|=\left|\int_{y_{j}}^{x_{j}}\sqrt{W(s)/2}\,ds\right| \geq C\left|\int_{y_{j}}^{x_{j}}|s-1|\,ds\right|.\]
If \(x_{j}\geq y_{j}\geq 1\) then
\[\left|\int_{y_{j}}^{x_{j}}|s-1|\,ds\right| =\int_{y_{j}}^{x_{j}}s-1\,ds=\frac{1}{2}\left[(x_{j}-1)^{2}-(y_{j} -1)^{2}\right]\] \[=\frac{1}{2}\left[(x_{j}+y_{j}-2)(x_{j}-y_{j})\right]\geq\frac{1} {2}(x_{j}-y_{j})^{2},\]
which combined with the previous equation yields a contradiction. If \(x_{j}\geq 1\geq y_{j}\) then
\[\left|\int_{y_{j}}^{x_{j}}|s-1|\,ds\right| =\int_{y_{j}}^{1}1-s\,ds+\int_{1}^{x_{j}}s-1\,ds\] \[=\frac{1}{2}\left[(1-y_{j})^{2}+(x_{j}-1)^{2}\right]\geq\frac{1} {4}(x_{j}-y_{j})^{2}\]
since either \(x_{j}-1\geq\frac{1}{2}(x_{j}-y_{j})\) or \(1-y_{j}\geq\frac{1}{2}(x_{j}-y_{j})\). Again this gives a contradiction. The remaining possibilities likewise lead to contradiction and the lemma is proved.
For each \(\eta\in(0,1)\), let \(u_{\varepsilon_{k},\eta}\) be a mollified version of \(u_{\varepsilon_{k}}\). More precisely, choose an isometric embedding of \(M\) into \(\mathbb{R}^{m}\). Let \(N\) be a small tubular neighborhood of \(M\) where the nearest point projection \(\Pi\colon N\to M\) is well-defined and a submersion. Let \(B_{r}(x)\) denote the open ball of radius \(r\) centered at \(x\in\mathbb{R}^{m}\). Also let \(B_{1}\) denote the open unit ball centered at the origin in \(\mathbb{R}^{m}\). Choose a non-negative smooth function \(\rho\colon B_{1}\to\mathbb{R}\) which is compactly supported in \(B_{1}\) and satisfies
\[\int_{B_{1}}\rho(y)\,d\mathcal{L}^{m}(y)=1.\]
Extend \(u_{\varepsilon_{k}}\) to a function \(v_{\varepsilon_{k}}\) on \(N\) by setting \(v_{\varepsilon_{k}}=u_{\varepsilon_{k}}\circ\Pi\). Then let
\[u_{\varepsilon_{k},\eta}(x)=\int_{B_{1}}\rho(y)v_{\varepsilon_{k}}(x-\eta y) \,d\mathcal{L}^{m}(y),\quad x\in M\]
be the mollified version of \(u_{\varepsilon_{k}}\).
**Lemma 34**.: _For each fixed \(\eta\), there is a uniform bound \(\|u_{\varepsilon_{k},\eta}\|_{L^{\infty}(M)}\leq C(\eta)\)._
Proof.: Observe that
\[|u_{\varepsilon_{k},\eta}(x)| \leq\int_{B_{1}}\rho(y)|v_{\varepsilon_{k}}(x-\eta y)|\,d\mathcal{ L}^{m}(y)\] \[\leq 1+\int_{B_{1}}\rho(y)||v_{\varepsilon_{k}}(x-\eta y)|-1 \big{|}\,d\mathcal{L}^{m}(y)\] \[\leq 1+C\eta^{-(n+1)}\int_{B_{\eta}(x)}\big{|}|v_{\varepsilon_{k} }(z)|-1\big{|}\,d\mathcal{L}^{m}(z).\]
By the co-area formula we have
\[\int_{B_{\eta}(x)}\big{|}|v_{\varepsilon_{k}}(z)|-1\big{|}\,d \mathcal{L}^{m}(z)\] \[\qquad\leq C\int_{B_{\eta}(x)}\big{|}|v_{\varepsilon_{k}}(z)|-1 \big{|}\,J\Pi(z)\,d\mathcal{L}^{m}(z)\] \[\qquad\leq C\int_{\Pi(B_{\eta}(x))}\int_{p\in\Pi^{-1}(q)}\big{|} |v_{\varepsilon_{k}}(p)|-1\big{|}\,d\mathcal{H}^{m-n-1}(p)\,d\mathcal{H}^{n+ 1}(q)\] \[\qquad\leq C\int_{\Pi(B_{\eta}(x))}\big{|}|u_{\varepsilon_{k}}(q )|-1\big{|}\,d\mathcal{H}^{n+1}(q).\]
Inserting this into the previous equation and using Lemma 32 gives the result.
**Lemma 35**.: _For each fixed \(\eta\), there is a uniform bound \(\|\nabla u_{\varepsilon_{k},\eta}\|_{L^{\infty}(M)}\leq C(\eta)\)._
Proof.: Fix an index \(i\in\{1,\dots,m\}\). Note the formula for \(u_{\varepsilon_{k},\eta}(x)\) also makes sense for \(x\in N\) so we can regard \(u_{\varepsilon_{k},\eta}\) as a function defined on \(N\). For \(x\in M\) we have
\[\partial_{i}u_{\varepsilon_{k},\eta}(x)=\int_{B_{1}}\partial_{i}\rho(y)v_{ \varepsilon_{k}}(x-\eta y)\,d\mathcal{L}^{m}(y).\]
Thus we have
\[|\partial_{i}u_{\varepsilon_{k},\eta}(x)| \leq\int_{B_{1}}|\partial_{i}\rho(y)||v_{\varepsilon_{k}}(x-\eta y)|\,d\mathcal{L}^{m}(y)\] \[\leq C+\int_{B_{1}}|\partial_{i}\rho(y)|(|v_{\varepsilon_{k}}(x-\eta y)|-1)\,d\mathcal{L}^{m}(y)\] \[\leq C+C\eta^{-(n+1)}\int_{B_{\eta}(x)}\big{|}|v_{\varepsilon_{k}}(z)|-1\big{|}\,d\mathcal{L}^{m}(z).\]
Using the coarea formula as in the proof of the previous lemma now gives the result.
**Lemma 36**.: _There is a uniform bound \(\|u_{\varepsilon_{k},\eta}-u_{\varepsilon_{k}}\|_{L^{2}(M)}^{2}\leq C\eta\) for \(k\geq K(\eta)\)._
Proof.: Observe that
\[\int_{M}|u_{\varepsilon_{k},\eta}-u_{\varepsilon_{k}}|^{2} =\int_{M}\left|\int_{B_{1}}\rho(y)v_{\varepsilon_{k}}(x-\eta y)\,d \mathcal{L}^{m}(y)-u_{\varepsilon_{k}}(x)\right|^{2}\,d\mathcal{H}^{n+1}(x)\] \[\leq\int_{M}\int_{B_{1}}\rho(y)|v_{\varepsilon_{k}}(x-\eta y)-u_{ \varepsilon_{k}}(x)|^{2}\,d\mathcal{L}^{m}(y)\,d\mathcal{H}^{n+1}(x)\] \[\leq C\int_{M}\int_{B_{1}}\rho(y)|f(x-\eta y)-f(x)|\,d\mathcal{L}^ {m}(y)\,d\mathcal{H}^{n+1}(x),\]
where \(f=\Phi\circ u_{\varepsilon_{k}}\circ\Pi=w_{\varepsilon_{k}}\circ\Pi\) and we used Lemma 33 to get the last inequality. By Fubini's theorem we get
\[\int_{M}\int_{B_{1}}\rho(y)|f(x-\eta y)-f(x)|\,d\mathcal{L}^{m}(y)\,d\mathcal{H}^{n+1}(x)\] \[\leq\eta\int_{M}\int_{B_{1}}\int_{0}^{1}\rho(y)|\nabla f(x-t\eta y)|\,dt\,d\mathcal{L}^{m}(y)\,d\mathcal{H}^{n+1}(x)\] \[=\eta\int_{B_{1}}\rho(y)\int_{0}^{1}\int_{M}|\nabla f(x-t\eta y)|\,d\mathcal{H}^{n+1}(x)\,dt\,d\mathcal{L}^{m}(y).\]
Now for fixed \(y\) and \(t\), we have
\[\int_{M}|\nabla f(x-t\eta y)|\,d\mathcal{H}^{n+1}(x)=\int_{M-t\eta y}|\nabla f (z)|\,d\mathcal{H}^{n+1}(z).\]
Provided \(\eta\) is small enough, the map \(\Pi\colon M-t\eta y\to M\) is a diffeomorphism and so by the change of variables formula
\[\int_{M-t\eta y}|\nabla f(z)|\,d\mathcal{H}^{n+1}(z) \leq\int_{M-t\eta y}|\nabla w_{\varepsilon_{k}}(\Pi(z))|\,d \mathcal{H}^{n+1}(z)\] \[\leq C\int_{M-t\eta y}|\nabla w_{\varepsilon_{k}}(\Pi(z))|J\Pi(z )\,d\mathcal{H}^{n+1}(z)\] \[=C\int_{M}|\nabla w_{\varepsilon_{k}}(q)|\,d\mathcal{H}^{n+1}(q).\]
Thus we obtain
\[\eta\int_{B_{1}}\rho(y)\int_{0}^{1}\int_{M}|\nabla f(x-t\eta y)|\,d\mathcal{H}^{n+1}(x)\,dt\,d\mathcal{L}^{m}(y)\] \[\qquad\leq C\eta\int_{B_{1}}\rho(y)\|\nabla w_{\varepsilon_{k}}\|_{L^{1}(M)}\,d\mathcal{L}^{m}(y)\leq C\eta.\]
Putting everything together we get
\[\int_{M}|u_{\varepsilon_{k},\eta}-u_{\varepsilon_{k}}|^{2}\leq C\eta,\]
and the result follows.
**Lemma 37**.: _Let \(\overline{u}_{\varepsilon_{k},\eta}\) be the average of \(u_{\varepsilon_{k},\eta}\). There is a uniform bound \(|\overline{u}_{\varepsilon_{k},\eta}|\leq C\eta^{\frac{1}{2}}\) for \(k\geq K(\eta)\)._
Proof.: Observe that
\[\overline{u}_{\varepsilon_{k},\eta}=\frac{1}{\operatorname{Vol}(M)}\int_{M}u_{ \varepsilon_{k},\eta}\,=\frac{1}{\operatorname{Vol}(M)}\int_{M}(u_{\varepsilon _{k},\eta}-u_{\varepsilon_{k}}).\]
Now the result follows from Lemma 36 and Holder's inequality.
Now fix an \(\eta\) to be specified later. Given a large integer \(k\), let \(\psi\) be the solution to
\[-\Delta\psi=u_{\varepsilon_{k},\eta}-\overline{u}_{\varepsilon_{k},\eta},\quad \int_{M}\psi=0.\]
By Lemmas 34, 35, and 37 the right hand side of the above PDE is uniformly bounded in \(C^{1}\). Therefore by elliptic regularity, \(\psi\) is bounded in \(C^{2}\) by a constant that depends on \(\eta\) but not on \(k\). Note that \(u_{\varepsilon_{k}}\) satisfies the PDE
\[-\varepsilon_{k}\Delta u_{\varepsilon_{k}}+\frac{W^{\prime}(u_{\varepsilon_{ k}})}{\varepsilon_{k}}=\lambda_{\varepsilon_{k}}.\]
Multiplying by \(\nabla\psi\cdot\nabla u_{\varepsilon_{k}}\) and integrating yields
\[\lambda_{\varepsilon_{k}}\int_{M}\nabla\psi\cdot\nabla u_{\varepsilon_{k}}= \int_{M}\nabla\psi\cdot\nabla u_{\varepsilon_{k}}\left(-\varepsilon_{k} \Delta u_{\varepsilon_{k}}+\frac{W^{\prime}(u_{\varepsilon_{k}})}{\varepsilon _{k}}\right). \tag{2}\]
Observe that
\[\int_{M}\nabla\psi\cdot\nabla u_{\varepsilon_{k}}\frac{W^{\prime}(u_{ \varepsilon_{k}})}{\varepsilon_{k}}=\int_{M}\nabla\psi\cdot\nabla\left(\frac{W (u_{\varepsilon_{k}})}{\varepsilon_{k}}\right)=-\int_{M}\frac{W(u_{ \varepsilon_{k}})}{\varepsilon_{k}}\Delta\psi.\]
Also, by the integration by parts formula for the Hessian, we have
\[\varepsilon_{k}\int_{M}D^{2}\psi(\nabla u_{\varepsilon_{k}},\nabla u_{ \varepsilon_{k}})=-\varepsilon_{k}\int_{M}\nabla\psi\cdot\nabla u_{ \varepsilon_{k}}\Delta u_{\varepsilon_{k}}+\frac{\varepsilon_{k}}{2}\int_{M} |\nabla u_{\varepsilon_{k}}|^{2}\Delta\psi.\]
Thus we have the following formula for the right hand side of (2):
\[\int_{M}\nabla\psi\cdot\nabla u_{\varepsilon_{k}}\left(-\varepsilon_{k} \Delta u_{\varepsilon_{k}}+\frac{W^{\prime}(u_{\varepsilon_{k}})}{\varepsilon _{k}}\right)\]
\[=\varepsilon_{k}\int_{M}D^{2}\psi(\nabla u_{\varepsilon_{k}},\nabla u_{ \varepsilon_{k}})-\int_{M}\left(\frac{\varepsilon_{k}}{2}|\nabla u_{ \varepsilon_{k}}|^{2}+\frac{W(u_{\varepsilon_{k}})}{\varepsilon_{k}}\right) \Delta\psi.\]
Since \(\|\psi\|_{C^{2}}\leq C(\eta)\), this gives a bound
\[\left|\int_{M}\nabla\psi\cdot\nabla u_{\varepsilon_{k}}\left(-\varepsilon_{k} \Delta u_{\varepsilon_{k}}+\frac{W^{\prime}(u_{\varepsilon_{k}})}{\varepsilon _{k}}\right)\right|\leq C(\eta)E_{\varepsilon_{k}}(u_{\varepsilon_{k}}).\]
We now turn attention to the left hand side of (2). Integrating by parts, we have
\[\lambda_{\varepsilon_{k}}\int_{M}\nabla\psi\cdot\nabla u_{\varepsilon_{k}}=- \lambda_{\varepsilon_{k}}\int_{M}u_{\varepsilon_{k}}\Delta\psi=\lambda_{ \varepsilon_{k}}\int_{M}u_{\varepsilon_{k}}(u_{\varepsilon_{k},\eta}-\overline {u}_{\varepsilon_{k},\eta}).\]
Now observe that
\[\int_{M}u_{\varepsilon_{k}}(u_{\varepsilon_{k},\eta}-\overline{u}_{\varepsilon _{k},\eta})=\int_{M}u_{\varepsilon_{k}}(u_{\varepsilon_{k},\eta}-u_{\varepsilon _{k}})+\int_{M}(u_{\varepsilon_{k}}^{2}-1)-\overline{u}_{\varepsilon_{k},\eta} \int_{M}u_{\varepsilon_{k}}+\operatorname{Vol}(M).\]
By Lemmas 32 and 37 and Holder's inequality, we can select \(\eta\) small enough that
\[\left|\int_{M}u_{\varepsilon_{k}}(u_{\varepsilon_{k},\eta}-u_{\varepsilon_{k}} )\right|\leq\frac{1}{4}\operatorname{Vol}(M),\quad\left|\overline{u}_{ \varepsilon_{k},\eta}\int_{M}u_{\varepsilon_{k}}\right|\leq\frac{1}{4} \operatorname{Vol}(M).\]
By Lemma 32 and Holder's inequality, we have
\[\left|\int_{M}(u_{\varepsilon_{k}}^{2}-1)\right|\leq\int_{M}||u_{\varepsilon_{k} }|-1|(|u_{\varepsilon_{k}}|+1)\leq\frac{1}{4}\operatorname{Vol}(M)\]
for \(k\) large enough. It follows that
\[\int_{M}\nabla\psi\cdot\nabla u_{\varepsilon_{k}}\geq\frac{1}{4}\operatorname {Vol}(M)\]
for large enough \(k\). Using equation (2) then gives a bound on \(|\lambda_{\varepsilon_{k}}|\) which is independent of \(k\).
To complete the proof of the proposition, it remains to show that \(\|u_{\varepsilon_{k}}\|_{L^{\infty}}\) is uniformly bounded. Let \(M_{\varepsilon_{k}}=M/\varepsilon_{k}\). Define the rescaled functions \(f_{\varepsilon_{k}}\colon M_{\varepsilon_{k}}\to\mathbb{R}\) by \(f_{\varepsilon_{k}}(x)=u_{\varepsilon_{k}}(\varepsilon_{k}x)\). Then \(f_{\varepsilon_{k}}\) solves
\[-\Delta f_{\varepsilon_{k}}+W^{\prime}(f_{\varepsilon_{k}})=\varepsilon_{k} \lambda_{\varepsilon_{k}}.\]
Fix some \(p\geq 1\). Multiplying the equation by \(|f_{\varepsilon_{k}}|^{p-1}f_{\varepsilon_{k}}\) and integrating gives
\[\int_{M_{\varepsilon_{k}}}p|f_{\varepsilon_{k}}|^{p-1}|\nabla f_{\varepsilon_ {k}}|^{2}+\int_{M_{\varepsilon_{k}}}W^{\prime}(f_{\varepsilon_{k}})|f_{ \varepsilon_{k}}|^{p-1}f_{\varepsilon_{k}}=\varepsilon_{k}\lambda_{\varepsilon _{k}}\int_{M_{\varepsilon_{k}}}|f_{\varepsilon_{k}}|^{p-1}f_{\varepsilon_{k}}. \tag{3}\]
Now for \(|x|\geq\beta\) we have \(W^{\prime}(x)|x|^{p-1}x\geq C|x|^{p+q-1}\). Since \(\varepsilon_{k}\lambda_{\varepsilon_{k}}\to 0\), it follows from (3) that
\[\int_{\{|f_{\varepsilon_{k}}|\geq\beta\}}|f_{\varepsilon_{k}}|^{p+q-1}\leq\frac{1}{2}\int_{M_{\varepsilon_{k}}}|f_{\varepsilon_{k}}|^{p}\]
assuming \(k\) is large enough. This implies that
\[\int_{M_{\varepsilon_{k}}}|f_{\varepsilon_{k}}|^{p+q-1}\leq C\beta^{p+q-1}+ \frac{1}{2}\int_{M_{\varepsilon_{k}}}|f_{\varepsilon_{k}}|^{p}.\]
By induction, for any positive integer \(r\), this gives
\[\int_{M_{\varepsilon_{k}}}|f_{\varepsilon_{k}}|^{2+r(q-1)} \leq\frac{C\beta^{2}}{2^{r}}\sum_{j=1}^{r}(2\beta^{q-1})^{j}+2^{- r}\int_{M_{\varepsilon_{k}}}|f_{\varepsilon_{k}}|^{2}\] \[=\frac{C\beta^{2}}{2^{r}}\left(\frac{2\beta^{q-1}(2^{r}\beta^{r(q -1)}-1)}{2\beta^{q-1}-1}\right)+2^{-r}\int_{M_{\varepsilon_{k}}}|f_{ \varepsilon_{k}}|^{2}\] \[\leq C\beta^{2+r(q-1)}+2^{-r}\int_{M_{\varepsilon_{k}}}|f_{ \varepsilon_{k}}|^{2}.\]
Raising both sides to the power \((2+r(q-1))^{-1}\) and then sending \(r\to\infty\) gives the bound \(\|f_{\varepsilon_{k}}\|_{L^{\infty}(M_{\varepsilon_{k}})}\leq\beta\) provided \(k\) is large enough. This implies that \(\|u_{\varepsilon_{k}}\|_{L^{\infty}(M)}\leq\beta\) for all large \(k\), as needed.
|
2309.02528 | Adaptive Adversarial Training Does Not Increase Recourse Costs | Recent work has connected adversarial attack methods and algorithmic recourse
methods: both seek minimal changes to an input instance which alter a model's
classification decision. It has been shown that traditional adversarial
training, which seeks to minimize a classifier's susceptibility to malicious
perturbations, increases the cost of generated recourse; with larger
adversarial training radii correlating with higher recourse costs. From the
perspective of algorithmic recourse, however, the appropriate adversarial
training radius has always been unknown. Another recent line of work has
motivated adversarial training with adaptive training radii to address the
issue of instance-wise variable adversarial vulnerability, showing success in
domains with unknown attack radii. This work studies the effects of adaptive
adversarial training on algorithmic recourse costs. We establish that the
improvements in model robustness induced by adaptive adversarial training show
little effect on algorithmic recourse costs, providing a potential avenue for
affordable robustness in domains where recoursability is critical. | Ian Hardy, Jayanth Yetukuri, Yang Liu | 2023-09-05T18:40:22Z | http://arxiv.org/abs/2309.02528v1 | # Adaptive Adversarial Training Does Not Increase Recourse Costs
###### Abstract.
Recent work has connected adversarial attack methods and algorithmic recourse methods: both seek minimal changes to an input instance which alter a model's classification decision. It has been shown that traditional adversarial training, which seeks to minimize a classifier's susceptibility to malicious perturbations, increases the cost of generated recourse; with larger adversarial training radii correlating with higher recourse costs. From the perspective of algorithmic recourse, however, the appropriate adversarial training radius has always been unknown. Another recent line of work has motivated adversarial training with adaptive training radii to address the issue of instance-wise variable adversarial vulnerability, showing success in domains with unknown attack radii. This work studies the effects of adaptive adversarial training on algorithmic recourse costs. We establish that the improvements in model robustness induced by adaptive adversarial training show little effect on algorithmic recourse costs, providing a potential avenue for affordable robustness in domains where recoursability is critical.
Adversarial Robustness, Algorithmic Recourse, Counterfactual Explanations
moves an individual towards a desired class manifold. With this in mind, it is worth considering not only the change in overall cost of recourse, but also the change in proximity of recourse to the desired data manifold, when selecting an adversarial training radius.
Even more fundamentally, it is important to question _whether a fixed adversarial training radius is appropriate, particularly in the context of algorithmic recourse_. It has been shown (Bradbury et al., 2015) that different data instances have different _inherent adversarial vulnerabilities_ due to their varying proximities to other classes. As such, some researchers have argued that an identical adversarial training radius should not be applied to all instances during training. Several methods (Bradbury et al., 2015; Krizhevsky et al., 2014; Krizhevsky et al., 2014) have been proposed for automatically learning _instance-wise_ adversarial radii to address this variability. These are broadly referred to as "Adaptive Adversarial Training" (AAT) regimes (Bradbury et al., 2015).
This work explores the effects of AAT on both model robustness and ultimate recourse costs in an attempt to address the trade-off between the two and find a _justifiable_ middle ground. Our contributions include:
* An observation on the effects of robustness on recourse costs, and when AAT yields more affordable recourse.
* Experiments demonstrating AAT's superior robustness/recourse trade-offs over traditional AT.
## 2. Background and Related Works
_Algorithmic Recourse._ The continued adoption of ML in high-impact decision making such as banking, healthcare, and resource allocation has inspired much work in the field of Algorithmic Recourse (Grover et al., 2016; Goyal et al., 2017; Goyal et al., 2018), and Counterfactual Explanations (Grover et al., 2016; Goyal et al., 2018; Goyal et al., 2018). The performance of different recourse methods depends highly on properties of the datasets they are applied to, the model they operate on, the application of that model's score, and factual point specificities (Bradbury et al., 2015). However, broadly speaking, recourse methods are classified based on: i) the _model family_ they apply to, ii) the degree of _access_ they have to the underlying model (i.e. white vs. black box methods), iii) the consideration of _manifold proximity_ in the generation of recourse, iv) the underlying _causal relationships_ in the data, and v) the use of _model approximations_ in the generation process (Goyal et al., 2018). Recently, (Goyal et al., 2018) introduced CARLA, a framework for benchmarking different recourse methods which acts as an aggregator for popular recourse methods and standard datasets.
_Adversarial Attacks and Adversarial Training._ Adversarial vulnerability refers to the susceptibility of a model to be _fooled_ by perturbations to the input data which cannot be detected by humans (so-called _Adversarial Examples_) (Goyal et al., 2018). Adversarial Training (Grover et al., 2016; Goyal et al., 2018) has been introduced to create models which are not susceptible to such attacks. The most popular method of Adversarial Training generates adversarial examples during the training process and includes them in the training dataset with corrected labels alongside the uncorrupted dataset. Often, adversarial training comes at some cost to standard classification accuracy. There have been many attack methods proposed to generate adversarial examples (Bradbury et al., 2015) with varying degrees of access to the model under attack, but most focus on defending against adversarial examples within a given \(\epsilon\)-radius (which are often defined by \(\ell_{1}\), \(\ell_{2}\), or \(\ell_{\infty}\) norms of size \(\epsilon\)). This work follows the popular attack and training formulation from (Goyal et al., 2018), which minimizes the worst-case loss within a defined \(\epsilon\)-radius.
_On the Intersection of Robustness and Recourse._ Both Adversarial Examples and Counterfactual Explanations are formally described as constrained optimization problems where the objective is to alter a model's output by minimally perturbing input features (Bradbury et al., 2015; Krizhevsky et al., 2014). Recent work (Goyal et al., 2018) proved equivalence between certain adversarial attack methods and counterfactual explanation methods, and further work has demonstrated both theoretically and empirically that increasing the radius of attack during adversarial training increases the cost of the resulting recourse (Goyal et al., 2018). This inherent connection pits security at odds with expressivity and raises an important question as to how an adversarial radius ought to be selected for adversarial training. If the radius is too small, the model may be overly sensitive to an attack, while if it is too large, end users suffer from potentially overly-burdensome recourse costs. In the context of many recourse problems where data is tabular, it is difficult to determine what may constitute an adversarial attack, furthering the difficulty of radius selection. (Bradbury et al., 2015) discussed a formulation for adversarial attacks on tabular data that accounts for both the radius of attack and the importance of a feature, but this is difficult to know a priori and often changes depending on the choice of explanation method selected (Goyal et al., 2018).
_Adaptive Adversarial Training._ It has been observed that different data instances have different inherent adversarial vulnerability due to their varying proximity to other classes' data manifolds, calling into question the conventional wisdom that models should be adversarially trained at a single consistent adversarial radius. (Bradbury et al., 2015) first observed this issue in the image classification domain, where certain instances can be _meaningfully transformed_ into other classes even at small adversarial radii. The authors of (Bradbury et al., 2015) proposed a means of discovering instance-wise adversarial radii by iteratively increasing or decreasing each instance's attack radius based on whether attacks are successful. (Krizhevsky et al., 2014) built on this work by further motivating the effects of overly-large adversarial radii on classification accuracy and proposed a variation of (Bradbury et al., 2015)'s method which included adaptive label-smoothing to account for the uncertainty added by larger attack radii, and (Krizhevsky et al., 2014) proposed a means for adaptive adversarial training by increasing the classification margin around correctly-classified datapoints. Adaptive Adversarial Training (AAT) presents a means of "automatically" selecting attack radii during training, and in all works thus far, has shown positive results in terms of the accuracy/robustness trade-off inherent in adversarial training, as well as smoother robustness curves across ranges of attack radii compared with traditional Adversarial Training.
## 3. Preliminaries & Notation
_Standard Training._ We begin with a model \(f\) parameterized by weights \(\theta\) that maps \(\mathcal{X}\rightarrow\mathcal{Y}\), where \(x\in\mathcal{X}\) are features and \(y\in\mathcal{Y}\) are their corresponding labels. Given a dataset \(\mathcal{D}=\{(x_{i},y_{i})\}_{i=1}^{N}\), and a loss function \(\ell(\cdot)\), a standard learning objective is to minimize the average loss on the data:
\[\min_{\theta}\quad\frac{1}{N}\sum_{(x_{i},y_{i})\in D}\ell(f_{\theta}(x_{i}),y_ {i}) \tag{1}\]
Let \(f_{nat}\) represent the naturally trained model obtained with this standard loss-minimization objective.
_Adversarial Attacks._ The goal of an adversarial attack is to strategically generate perturbations \(\delta\) which can significantly enlarge the loss \(\ell(\cdot)\) when added to an instance \(x\). (Krizhevsky et al., 2014) introduced the _Fast Gradient Sign Method (FGSM)_ for generating adversarial examples using the following mechanism:
\[x_{i}^{\prime}=x_{i}+\alpha\cdot\text{sign}\left(\nabla_{x_{i}}\ell\left(f_{ \theta}(x_{i}),y_{i}\right)\right) \tag{2}\]
where \(\alpha\) denotes the size of the perturbation, \(x_{i}^{\prime}\) denotes the adversarially perturbed sample, and \(x_{i}\) is the original clean sample. The _sign_ function operates on the gradient of \(\ell\left(f_{\theta}(x_{i}),y_{i}\right)\) w.r.t. \(x_{i}\), which is used to set the gradient to \(1\) if it is greater than \(0\) and \(-1\) if it is less than \(0\). (Krizhevsky et al., 2014) proposed a stronger iterative version of FGSM, performing Projected Gradient Descent (PGD) on the negative loss function:
\[x_{i}(t+1)=\Pi_{x+S}\left(x_{i}(t)+\alpha\cdot\text{sign}\left(\nabla_{x_{i}( t)}\ell(f_{\theta}(x_{i}(t)),y_{i})\right)\right) \tag{3}\]
where \(\alpha\) denotes the perturbation step size at each iteration and \(x_{i}(t+1)\) represents the perturbed example at step \(t+1\) for the clean instance \(x_{i}\). In this work, we use PGD due to its performance, popularity, and relative speed.
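As a concrete illustration, the following is a minimal PyTorch sketch of the \(\ell_{\infty}\) PGD update above for a binary classifier. The function and argument names (`model` returning logits, step size `alpha`, radius `eps`) are our own illustrative choices rather than part of any cited implementation.

```python
import torch

def pgd_attack(model, x, y, eps, alpha, steps):
    """l_inf PGD: repeat the signed-gradient ascent step and project back onto the eps-ball."""
    loss_fn = torch.nn.BCEWithLogitsLoss()
    x = x.detach()
    x_adv = x.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = loss_fn(model(x_adv).squeeze(-1), y.float())
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()            # ascend the loss
        x_adv = x + torch.clamp(x_adv - x, min=-eps, max=eps)   # project onto the eps-ball around x
    return x_adv.detach()
```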
_Adversarial Training._ Adversarial training is usually formulated as a min-max learning objective, wherein we seek to minimize the worst-case loss within a fixed training radius \(\epsilon\).
\[\min_{\theta}\max_{||\delta_{i}||\leq\epsilon}\ \ \frac{1}{N}\sum_{(x_{i},y_{i}) \in D}\ell(f_{\theta}(x_{i}+\delta_{i}),y_{i}) \tag{4}\]
We solve this min-max objective via an alternating stochastic method that takes minimization steps for \(\theta\), followed by maximization steps that approximately solve the inner optimization using \(k\) steps of an adversarial attack. PGD with a fixed \(\epsilon\) is used to perturb each original instance; let \(f_{\epsilon\text{-adv}}\) represent the model trained with a PGD radius of \(\epsilon\).
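A sketch of the resulting alternating min-max loop, reusing the `pgd_attack` helper from the sketch above (hyperparameter values are illustrative):

```python
def adversarial_epoch(model, loader, optimizer, eps, alpha=0.01, steps=10):
    """One epoch of min-max adversarial training: attack each batch at a fixed radius eps
    with PGD, then take a descent step on the perturbed batch."""
    loss_fn = torch.nn.BCEWithLogitsLoss()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, eps=eps, alpha=alpha, steps=steps)  # inner maximization
        optimizer.zero_grad()
        loss = loss_fn(model(x_adv).squeeze(-1), y.float())                 # outer minimization
        loss.backward()
        optimizer.step()
```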
### Adaptive Adversarial Training
(Krizhevsky et al., 2014) first argued that different data instances have different intrinsic adversarial vulnerabilities due to their varying proximity to other class manifolds, and introduced Instance-Adaptive Adversarial Training (AAT) to automatically learn instance-wise adversarial radii. The authors proposed the following objective function:
\[\min_{\theta}\max_{||\delta_{i}||\leq\epsilon_{i}}\ \ \ \frac{1}{N}\sum_{(x_{i},y_{i}) \in D}\ell(f_{\theta}(x_{i}+\delta_{i}),y_{i}) \tag{5}\]
where \(\epsilon_{i}\) denotes each training instance's attack radius. \(\epsilon_{i}\) is iteratively updated at each training epoch, increasing by a constant factor if the attack at the existing radius is unsuccessful and decreasing by a constant factor if it is successful.
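A minimal sketch of this instance-adaptive scheme is given below, reusing `pgd_attack` and the imports from the sketches above. The multiplicative grow/shrink factors and radius bounds are illustrative assumptions; the exact schedule in the cited work may use different constants.

```python
import torch

def aat_epoch(model, x_train, y_train, radii, optimizer, alpha=0.01, steps=10,
              grow=1.05, shrink=0.9, eps_min=1e-3, eps_max=0.5):
    """One epoch of instance-adaptive adversarial training: each instance i keeps its own
    radius radii[i] (a plain Python float), grown when the PGD attack at that radius fails
    and shrunk when it succeeds."""
    loss_fn = torch.nn.BCEWithLogitsLoss()
    for i in range(len(x_train)):
        xi, yi = x_train[i:i + 1], y_train[i:i + 1]
        x_adv = pgd_attack(model, xi, yi, eps=radii[i], alpha=alpha, steps=steps)
        with torch.no_grad():
            flipped = bool(((model(x_adv).squeeze(-1) > 0) != (yi > 0.5)).item())
        radii[i] = float(min(max(radii[i] * (shrink if flipped else grow), eps_min), eps_max))
        optimizer.zero_grad()
        loss = loss_fn(model(x_adv).squeeze(-1), yi.float())
        loss.backward()
        optimizer.step()
    return radii
```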
(Krizhevsky et al., 2014) presented an alternate form of AAT called Max-Margin Adversarial (MMA) Training that seeks to impart adversarial robustness by maximizing the margin between correctly classified datapoints and the model's decision boundary. Formally, they proposed the following objective:
\[\min_{\theta}\left\{\sum_{i\in\mathcal{S}_{\theta}^{+}}\max\{0,d_{max}-d_{\theta}(x_{i},y_{i})\}+\beta\sum_{j\in\mathcal{S}_{\theta}^{-}}\ell(f_{\theta}(x_{j}),y_{j})\right\}\]
where \(S_{\theta}^{+}\) is the set of correctly classified examples, \(S_{\theta}^{-}\) is the set of incorrectly classified examples, \(d_{\theta}(x_{i},y_{i})\) is the margin between correctly classified examples and the model's decision boundary, \(d_{max}\) is a hyper-parameter controlling which points to maximize the boundary around (forcing the learning to focus on points with \(d_{\theta}\) less than \(d_{max}\),) and \(\beta\) is a term controlling the trade-off between standard loss and _margin maximization_. The authors use a line search based on PGD to efficiently approximate \(d_{\theta}(x_{i},y_{i})\). For the rest of this study, let \(f_{aat}\) be a model trained using a mechanism from this category of training techniques.
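The margin \(d_{\theta}(x_{i},y_{i})\) is not available in closed form; a coarse stand-in for the PGD-based line search, reusing `pgd_attack` from the earlier sketch and an illustrative radius grid, might look as follows (MMA then feeds these margin estimates into the hinge term of its objective). Here `x`, `y` are assumed to be a single instance with a batch dimension of one.

```python
import torch

def estimate_margin(model, x, y, eps_grid, alpha=0.01, steps=10):
    """Approximate the margin d_theta(x, y) as the smallest radius in eps_grid at which a
    PGD attack flips the prediction; a coarse substitute for a full line search."""
    for eps in sorted(eps_grid):
        x_adv = pgd_attack(model, x, y, eps=eps, alpha=alpha, steps=steps)
        with torch.no_grad():
            if bool(((model(x_adv).squeeze(-1) > 0) != (y > 0.5)).item()):
                return eps
    return float(max(eps_grid))  # no attack succeeded: the margin is at least the largest radius tried
```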
Figure 1. An example scenario demonstrating the effectiveness of AAT in terms of recourse costs.
### Recourse Methods
For the scope of this study, we explore three different classes (Krizhevsky et al., 2017) of recourse methods: i) one random search, ii) one gradient-based search, and iii) one manifold-based approach. We will now briefly discuss each method, and we refer the readers to the original works for further implementation details.
_Growing Spheres (GS)._ (Krizhevsky et al., 2017) proposed a random search method for calculating counterfactuals by sampling points within \(\ell_{2}\)-hyper-spheres around \(x\) of iteratively increasing radii until one or more counterfactuals are identified which flip \(f(x)\). Formally, they present a minimization problem in selecting which counterfactual \(x^{\prime}\) to return:
\[\operatorname*{arg\,min}_{x^{\prime}\in\mathcal{X}}\{c(x,x^{\prime})|f(x)\neq f (x^{\prime})\} \tag{6}\]
where \(\mathcal{X}\) is the family of sampled points around \(x\) and \(c\) is a cost function in \(\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}_{+}\): \(||x^{\prime}-x||_{2}+\gamma||x^{\prime}-x||_{0}\), where \(\gamma\) is a hyperparameter controlling the desired sparsity of the resulting counterfactual.
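A simplified sketch of this random search is given below; it omits the sparsity post-processing of the original method, and `predict_fn` is an assumed black-box function returning hard labels for a batch of instances.

```python
import numpy as np

def growing_spheres(predict_fn, x, r0=0.1, step=0.1, n_samples=500, max_iter=50, seed=0):
    """Sample candidates inside l2 spheres of growing radius around x and return the closest
    candidate whose predicted label differs from that of x."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    y0 = predict_fn(x[None, :])[0]
    r = r0
    for _ in range(max_iter):
        directions = rng.normal(size=(n_samples, x.shape[0]))
        directions /= np.linalg.norm(directions, axis=1, keepdims=True)
        candidates = x + directions * rng.uniform(0.0, r, size=(n_samples, 1))
        flipped = np.asarray(predict_fn(candidates)) != y0
        if flipped.any():
            dists = np.linalg.norm(candidates[flipped] - x, axis=1)
            return candidates[flipped][np.argmin(dists)]
        r += step  # no counterfactual found yet: grow the sphere
    return None
```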
_Score Counterfactual Explanations (SCFE)._ (Krizhevsky et al., 2017) proposed a gradient-based method for identifying counterfactuals \(x^{\prime}\).
\[\operatorname*{arg\,min}_{x^{\prime}}\operatorname*{max}_{\lambda}\lambda(f(x ^{\prime})-y^{\prime})^{2}+d(x,x^{\prime}) \tag{7}\]
where \(d(\cdot,\cdot)\) is some distance function and \(y^{\prime}\) is the desired score from the model. In practice, this is solved by iteratively finding \(x^{\prime}\) and increasing \(\lambda\) until a satisfactory solution is identified.
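A minimal gradient-based sketch of this procedure, using an \(\ell_{1}\) distance term and a multiplicative schedule for \(\lambda\) (both illustrative choices; `model` is assumed to map a feature vector to a logit):

```python
import torch

def scfe(model, x, target=1.0, lam=0.1, lam_growth=2.0, lr=0.01, inner_steps=200, max_rounds=10):
    """Minimize lam * (f(x') - target)^2 + ||x' - x||_1 over x', growing lam until the
    prediction flips."""
    x = x.detach()
    x_cf = x.clone()
    for _ in range(max_rounds):
        x_cf = x.clone().requires_grad_(True)
        opt = torch.optim.Adam([x_cf], lr=lr)
        for _ in range(inner_steps):
            score = torch.sigmoid(model(x_cf)).squeeze()
            loss = lam * (score - target) ** 2 + torch.norm(x_cf - x, p=1)
            opt.zero_grad()
            loss.backward()
            opt.step()
        if bool((torch.sigmoid(model(x_cf)).squeeze() > 0.5).item()) == bool(target >= 0.5):
            return x_cf.detach()   # the desired prediction has been reached
        lam *= lam_growth          # otherwise put more weight on the prediction term
    return x_cf.detach()           # best effort if the prediction never flipped
```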
_CCHVAE._ (Krizhevsky et al., 2017) proposed a manifold-based solution to finding counterfactuals using a Variational Auto Encoder (VAE) to search for counterfactuals in a latent representation \(\mathcal{Z}\). The goal of CCHVAE and other manifold methods is to find counterfactuals that are semantically "similar" to other data points. Formally, given an encoder \(\mathcal{E}\), a decoder \(\mathcal{H}\), and a latent representation \(\mathcal{Z}\) where \(\mathcal{E}:\mathcal{X}\rightarrow\mathcal{Z}\), CCHVAE optimizes the following:
\[\operatorname*{arg\,min}_{z^{\prime}\in\mathcal{Z}}\{||z^{\prime}||\;s.t.\;f(\mathcal{H}(\mathcal{E}(x)+z^{\prime}))\neq f(x)\} \tag{8}\]
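A latent-space sketch in the same spirit, assuming trained `encode`/`decode` maps operating on numpy batches (the names and the simple growing-radius search are illustrative, not the exact procedure of the original work):

```python
import numpy as np

def cchvae_search(predict_fn, encode, decode, x, r0=0.1, step=0.1, n_samples=300, max_iter=50, seed=0):
    """Perturb the latent code z = encode(x) inside l2 balls of growing radius, decode the
    candidates, and return the decoded point closest to x whose predicted label flips."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    y0 = predict_fn(x[None, :])[0]
    z = np.asarray(encode(x[None, :]))
    r = r0
    for _ in range(max_iter):
        noise = rng.normal(size=(n_samples, z.shape[1]))
        noise /= np.linalg.norm(noise, axis=1, keepdims=True)
        x_cand = np.asarray(decode(z + noise * rng.uniform(0.0, r, size=(n_samples, 1))))
        flipped = np.asarray(predict_fn(x_cand)) != y0
        if flipped.any():
            dists = np.linalg.norm(x_cand[flipped] - x, axis=1)
            return x_cand[flipped][np.argmin(dists)]
        r += step
    return None
```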
## 4. Recourse Trade-Offs with Adaptive Adversarial Training
_Recourse cost._ The cost of recourse is usually approximated using a distance-based metric. A common practice among recourse methodologies is to minimize the cost in some form or another, because in general a low cost recourse is assumed to be easier to act upon. The cost of a recourse for a classification model is traditionally interpreted as the minimum distance between a factual and the decision boundary. In contrast, the inherent goal of adversarial training is to maximize the distance between factuals and the decision boundary. Hence, traditional adversarial training exacerbates the recourse costs of a classifier. In this section, we make preliminary observations on the effects of adaptive adversarial training on recourse costs.
An increase in \(\epsilon\) for \(\epsilon\)-adversarial training increases the overall recourse costs, and the corresponding relation between \(\epsilon\) and \(C\) is discussed in (Krizhevsky et al., 2017). In comparison with \(\epsilon\)-adversarial training, we observe the following benefits from instance-adaptive adversarial training:
### Recourse Costs
Let \(\delta^{(nat)}_{x}=d(x,x^{\prime})\) be the distance to the closest adversarial example \(x^{\prime}\) for the instance \(x\) under a model obtained with standard training, and, analogously, let \(c^{(nat)}_{x}=cost(x,x^{\prime\prime})\) be the cost of a recourse \(x^{\prime\prime}\) for an individual represented by \(x\). For simplicity, we assume that both \(d(\cdot,\cdot)\) and \(cost(\cdot,\cdot)\) use the same \(\ell_{p}\) norm based distance metric. Let \(H^{-}=\{x\in\mathcal{X}:f(x)=-1\}\) represent the sub-population which was adversely affected by the classifier \(f(\cdot)\), and analogously we have \(H^{+}=\{x\in\mathcal{X}:f(x)=+1\}\). The average cost of recourses for \(H^{-}\) is defined for a naturally trained model as:
\[c^{(nat)}_{*}=\frac{1}{|H^{-}|}\sum_{x\in H^{-}}c^{(nat)}_{x} \tag{9}\]
Let \(\underline{H^{-}}=\{x\in\mathcal{X}:f(x)=-1,c^{(nat)}_{x}\leq\underline{\epsilon}\}\), where \(\underline{\epsilon}\) is a cost threshold to identify low cost recourses. As observed in Figures 4 and 5, a low cost counterfactual is sufficient in practice for a large section of the population. However, an optimal \(\epsilon_{a}\)-adv classifier provides at least \(\epsilon_{a}\) robustness to all samples in the training dataset. This can be visualized by the sharp peak in the distribution of the observed \(\epsilon\) in the test dataset for all the \(\epsilon\)-adv models (Figure 8). However, AAT models provide natural robustness to the data samples, meaning that a data instance closer to the natural decision boundary has \(\epsilon^{H^{-}}_{aat}\) that depends on the data's natural proximity to the decision boundary. For instances with \(\epsilon^{H^{-}}_{aat}<\epsilon_{a}\), the resulting recourse will be more affordable. For \(\epsilon^{H^{-}}_{aat}<c^{(nat)}_{x}\), low cost recourse within \(\underline{H^{-}}\) will be preserved.
### Proximity to the Desired Manifold
_Manifold Proximity_ measures the distance, by some metric, between recourse and the target sub-population. For an \(f^{*}_{\epsilon_{a}\text{-adv}}\) model, the suggested recourse is at least \(\epsilon_{a}\) away from the target approved sub-population \(H^{+}\) due to the fact that the target sub-population is also \(\epsilon_{a}\) away from the decision boundary. In contrast, \(f_{aat}\) is naturally robust for the target sub-population as well. Hence, the recourse provided has the potential to be closer in terms of proximity to \(H^{+}\), so long as \(\epsilon^{H^{+}}_{aat}<\epsilon_{a}\). We report the average proximity \(\rho_{f_{\epsilon\text{-adv}}}\) of the model \(f_{\epsilon\text{-adv}}\) using:
\[\rho_{f_{\epsilon\text{-adv}}}=\frac{1}{|N_{test}|}\sum_{x\in N_{test}}\min_{x^{+}\in H^{+}}\;d(x,x^{+}) \tag{10}\]
where \(d(x,x^{+})\) is a distance measure between a counterfactual \(x\) and a target population sample \(x^{+}\). We report both \(\rho_{f_{aat}}\) and \(\rho_{f_{\epsilon\text{-adv}}}\) for the corresponding models. In Figure 7, we find that \(\rho_{f_{aat}}\) is significantly better than \(\rho_{f_{\epsilon\text{-adv}}}\).
### Preservation of Low Cost Recourse
The recourse costs provided to the adversely affected individuals by a model should follow the natural distribution of the difficulty of acting upon the suggested recourse at the population level. With a fixed \(\epsilon\) during training, the recourse suggested by an optimal adversarially trained \(f^{*}_{\epsilon\text{-adv}}\) model must necessarily be \(\epsilon\) away from the decision boundary and a further \(\epsilon\) away from the nearest target population sample. Such counterfactuals contradict the recourse literature (Kang et al., 2017), which describes a distribution in recourse costs wherein a proportion of individuals only require minimal low cost actionable steps to obtain the desired outcome from a model, whereas other individuals can have much larger recourse costs. Essentially, \(\epsilon\)-robustness necessarily denies recourse with lower costs than \(\epsilon\).
\(f_{aat}\) does not enforce a strict \(\epsilon\) during training, allowing instances to have a wider range of recourse costs. To this end, we compare the rate of extreme low cost recourse \(C_{\Delta}\) across the discussed training methods with real-world datasets to measure the rate at which it degrades in practice. For simplicity, we measure:
\[C_{\Delta}=\frac{1}{|N_{test}|}\sum_{x_{i}\in N_{test}}\mathbf{1}(C_{x_{i}}<\epsilon) \tag{11}\]
where \(C_{x_{i}}\) is the cost of recourse for an instance \(x_{i}\) and \(\epsilon\) is a minimum adversarial training radius. We observe in Figure 4 that Adaptive Adversarial Training preserves low cost recourse rates despite providing overall robustness benefits.
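Computing \(C_{\Delta}\) is straightforward; a small sketch using the \(\ell_{\infty}\) cost of Figure 4 (the threshold value is illustrative):

```python
import numpy as np

def low_cost_recourse_rate(x_factual, x_counterfactual, eps=0.05):
    """C_Delta: fraction of instances whose recourse cost (here the l_inf distance,
    as in Figure 4) falls below eps."""
    costs = np.abs(np.asarray(x_counterfactual) - np.asarray(x_factual)).max(axis=1)
    return float((costs < eps).mean())
```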
## 5. Experimental Design & Metrics
In this section, we detail our experimentation procedure to empirically evaluate these various training methods and explain our metric choices.
Figure 3. Attack Success Rate. Traditional Adversarial Training shows higher robustness within its predefined training threshold, but sharper robustness degradation as the attack radius increases.
Figure 2. Standard performance across datasets. MMA shows particularly competitive standard performance compared with all other Adversarial Training regimens.
The CARLA package (Cordes and Riedl, 2017) was used to source the datasets and recourse methods we employed.
### Experimental Setup
_Datasets._ We performed our experiments on three datasets:
* _Adult Income_: A dataset originating from the 1994 Census of 48,842 individuals for whom the task is to predict whether someone makes more than $50,000/yr. It comprises 20 features which are a combination of demographic features (age, sex, racial group), as well as employment features (hours of work per week and salary), and financial features (capital gains/losses). In keeping with (Kang et al., 2019) and (Bianchi et al., 2019), we removed categorical features for efficient training and approximation of tabular adversarial examples. The target distribution is somewhat skewed, with a 76% positive label proportion.
* _Home Equity Line of Credit (Heloc)_: pulled from the 2019 FICO Explainable Machine Learning (xML) challenge, the Heloc dataset consists of anonymized credit bureau data from 9,871 individuals where the task is to predict whether an individual will repay their HELOC account within two years. The dataset consists of 21 financial features and no demographic data. The target distribution is evenly split, with a 48% positive label proportion.
* _Give Me Some Credit (GSC)_: a credit-scoring dataset pulled from a 2011 Kaggle Competition consisting of 150,000 individuals for whom the task is to predict default. It consists of 11 features, one of which is a demographic feature (age), and the rest are financial variables. The target distribution is heavily skewed, with a 93% positive label proportion.
_Models._ We trained a total of 7 Neural Network models for each of our datasets: one naturally trained model, one model trained with AAT, one model trained with MMA, and four adversarially trained models. All models are trained using Binary Cross Entropy with the default model architecture from CARLA, which uses three hidden layers at the CARLA default widths. The Adversarially Trained models were all trained with PGD at a variety of \(\epsilon\in[0.05,0.1,0.15,0.2]\). No additional hyperparameter tuning was performed for the AAT model, and the MMA model was trained using the original work's package (Bianchi et al., 2019) with the default hyperparameter choices.
_Recourse Methods._ We constructed Counterfactual Explanations for all models on a sample of 1000 negatively-classified test data points using three methods: Growing Spheres (GS), C-CHVAE, and SCFE. All hyperparameter choices for these methods were left as their CARLA defaults.
### Metrics
To study the effects of the different training methods on accuracy, robustness, and recourse, we calculate the following metrics:
_Standard Classification Performance._ A primary consideration in adversarial training is the trade-off in classification accuracy when compared with natural training. We record the standard classification accuracy of all models to measure the drop in accuracy that may accompany the different adversarial training methods. Formally, we measure: \(\frac{1}{|D_{test}|}\sum_{x_{i}\in D_{test}}\mathbf{1}(f(x_{i})=y_{i})\). Given that we are experimenting with datasets with skewed target distributions, we also record the F1 score of each model on the minority target population.
Figure 4. Low cost recourse (\(\ell_{\infty}<0.05\)) proportion for methods that optimize directly in the input space. We observe that AAT models have much higher proportions of low cost recourse, supporting the hypothesis that it allows for robustness while preserving low recourse costs for individuals near natural decision boundaries.
Figure 5. AAT "Discovered" Radii Resulting from Adaptive Adversarial Training
Figure 6. Recourse costs (defined as the \(\ell_{2}\) distance between a factual and counterfactual data point) for all methods and datasets. We observe that adaptive adversarial training shows significantly more competitive recourse costs than traditional adversarial training, and MMA training in particular shows almost no increase over natural training despite its robustness benefits.
_Adversarial Success Rate._ Given that we are primarily concerned with the trade-off between robustness and recourse, and following the concept of "boundary error" introduced in (Zhou et al., 2017) to disentangle standard performance and adversarial vulnerability, we also measure the success rate of adversarial attacks at various radii on our models. Formally, given an attack \(\mathcal{H}_{\epsilon}\) such that \(\mathcal{H}_{\epsilon}(x)\) identifies the most adversarial example on \(x\) within a radius \(\epsilon\), we measure \(\frac{1}{|\mathcal{D}_{test}|}\sum_{x_{i}\in\mathcal{D}_{test}}\mathbf{1}\left(f(\mathcal{H}_{\epsilon}(x_{i}))\neq f(x_{i})\right)\). We observe the adversarial success rate across the radii on which we train our traditional adversarial models. Note that this is an imperfect metric for measuring the success of AAT, as AAT assumes that some "attacks" at given radii represent real movements toward different classes; however, it is still useful to capture this information in considering the trade-off between traditional adversarial training and AAT.
_Counterfactual Proximity._ The primary metric regarding recourse we are interested in observing is the ultimate recourse cost across our resulting models. As each specific domain's cost function is not concretely defined, we follow the convention of opting for \(\ell_{2}\) distance as a standard approximation. Formally, for each model we calculate: \(\frac{1}{|\mathcal{D}_{test}|}\sum_{x_{i}\in\mathcal{D}_{test}}||x_{i}^{*}-x_{i}||_{2}\), where \(x_{i}^{*}\) is the recourse calculated for \(x_{i}\).
_Manifold Proximity._ Motivated by the question of how faithful our resulting counterfactuals are to true movements towards the desired class, we estimate the distance between the counterfactuals each model produces and the desired class manifold these counterfactuals approximate. We use two methods for this: a KNN distance measure and a sphere distance measure. For KNN, we record the average \(\ell_{2}\) distance between the resulting counterfactuals and the five nearest neighbors of the desired class. For the sphere measure, we record the average \(\ell_{2}\) distance between the resulting counterfactuals and all neighbors of the desired class within an \(\ell_{2}\) ball of size \(\epsilon\), where \(\epsilon\) is calculated as \(20\%\) of the average \(\ell_{2}\) distance between any two points in the dataset.
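A sketch of how the two manifold-proximity estimates might be computed with numpy and scikit-learn; the array names and the subsampling used to estimate the sphere radius are our own illustrative choices:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def manifold_proximity(counterfactuals, desired_class_points, dataset, k=5, sphere_frac=0.2):
    """KNN and sphere estimates of the distance between counterfactuals and the desired
    class manifold."""
    counterfactuals = np.asarray(counterfactuals)
    desired = np.asarray(desired_class_points)

    # KNN measure: mean l2 distance to the k nearest desired-class neighbours
    nn = NearestNeighbors(n_neighbors=k).fit(desired)
    knn_dist = float(nn.kneighbors(counterfactuals)[0].mean())

    # Sphere radius: 20% of the average pairwise l2 distance, estimated on a subsample
    data = np.asarray(dataset)
    sample = data[np.random.choice(len(data), size=min(500, len(data)), replace=False)]
    radius = sphere_frac * np.linalg.norm(sample[:, None, :] - sample[None, :, :], axis=-1).mean()

    # Sphere measure: mean distance to desired-class points inside the ball, per counterfactual
    sphere_dists = []
    for c in counterfactuals:
        d = np.linalg.norm(desired - c, axis=1)
        if (d <= radius).any():
            sphere_dists.append(d[d <= radius].mean())
    sphere_dist = float(np.mean(sphere_dists)) if sphere_dists else float("nan")
    return knn_dist, sphere_dist
```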
## 6. Results & Discussion
_Standard Performance._ Figure 2 displays the classification accuracy and F1 scores of the various models. We observe that for the Adult and Heloc datasets, adversarial training tends to decrease standard performance, with higher training radii correlating with worse performance. We observe that MMA training tends to keep performance consistent, and that AAT worsens performance to a degree similar to adversarial training with an \(\epsilon\) value between \(0.05\) and \(0.1\).
_Robustness._ Figure 3 shows the vulnerability of the models under PGD attack at a variety of radii (\(\epsilon\in[0.05,0.1,0.15,0.2,0.25]\)). We observe that while traditional adversarial training creates substantially more robust models within a defined radius of attack, the degredation in robustness tends to be more severe among traditionally trained models than AAT methods when the radius increases beyond their predefined training threshold. MMA in particular
Figure 7. KNN and Sphere Manifold Proximity for Growing Spheres. We find that not only does adaptive adversarial training produce less expensive recourse than traditional adversarial training, but also recourse that is more faithful to the desired class these counterfactuals approximate.
Figure 8. Decision boundary proximity, estimated by the minimum successful PGD attack radius on a sample of 1000 instances. The height represents a proportion of the data; the average distance is shown in red.
MMA in particular shows surprisingly consistent robustness benefits, although they are more moderate than those of its adversarially trained counterparts.
_Counterfactual Proximity._ Figure 6 displays the cost of recourse across all datasets for the three recourse methods studied. We observe consistently that adaptive adversarial training yields recourse with lower costs than traditional adversarial training, and, in the case of MMA, costs that are consistently competitive with natural training. This result seems unintuitive given the robustness benefits that MMA provides, and we believe this presents an interesting avenue for further research.
_KNN & Sphere Manifold Distance._ Figure 7 shows the Manifold Proximity estimates for Growing Spheres across all datasets. We observe that adaptive adversarial training produces recourse that is consistently closer to the desired class manifold than traditional adversarial training. This result, paired with the reduction in recourse costs, may suggest that adaptive adversarial training encourages more natural decision boundaries than traditional adversarial training, allowing for more meaningful recourse at lower costs.
_Prevalence of Low-Cost Recourse._ For recourse methods that optimize costs directly in the input space, we record the percentage of counterfactuals that have an \(\ell_{\infty}\) cost less than 0.05 to measure the proportion of low-cost recourse among our models. The results are recorded in Figure 4. We observe that adaptive adversarial training shows higher proportions of low-cost recourse than traditional adversarially trained models; surprisingly, MMA training in particular finds proportions of low-cost recourse that are consistently competitive with natural training, despite its benefits in overall robustness.
_Discovered Radii & Decision Boundary Distances._ Figure 5 displays the instance-wise discovered radii after AAT for all three datasets. We observe that for all datasets, a variety of radii are found with unique distributions. This alludes to the fact that different underlying data distributions have different levels of inherent adversarial vulnerability, underscoring the challenge of estimating a proper singular radius at which to adversarially train. Figure 8 shows an estimation of the distribution of decision boundary proximities across all models, calculated by finding the minimum successful radius for PGD attack across a sample of 1000 instances. We observe that traditional \(\epsilon\)-adversarial training often limits proximity to the decision boundary to \(d>\epsilon\), while adaptive adversarial training shows a greater distribution in ultimate decision boundary proximities. In the case of MMA in particular, we find that the decision boundary proximities closely match those of the natural model, despite its improved robustness.
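The per-instance boundary proximity used for Figure 8 can be approximated by a bisection over attack radii, as sketched below (again assuming the illustrative `pgd_attack` helper; the search bounds and iteration count are arbitrary).

```python
import numpy as np

def min_successful_radius(model, x, pgd_attack, lo=0.0, hi=0.5, iters=10):
    """Bisection for the smallest radius at which a PGD attack flips the
    prediction of a single instance x (shape: (n_features,))."""
    y = model.predict(x[None])[0]
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        x_adv = pgd_attack(model, x[None], np.array([y]), mid)
        if model.predict(x_adv)[0] != y:
            hi = mid      # attack succeeded: the boundary is closer than mid
        else:
            lo = mid      # attack failed: the boundary is farther than mid
    return hi
```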
## 7. Conclusion
This work explores the effects of adaptive adversarial training on robustness and recourse, finding that it shows promising trade-offs between the two. We motivate our work with an observation of the effect of traditional adversarial training on recourse costs, and introduce scenarios under which adaptive adversarial training provides more affordable recourse. We conduct experiments on three datasets demonstrating that adaptive adversarial training yields significant robustness benefits over natural training with little cost incurred on recourse and standard performance, and provide evidence that adaptive adversarial training produces recourse that more faithfully represents movements towards the desired class manifold. Finally, we analyze the resulting models' decision boundary margins, providing evidence that supports our observations on recourse costs under traditional adversarial training. We believe that adaptive adversarial training, and Max-Margin adversarial training in particular, presents a promising means of achieving the ultimate goals of robustness while preserving affordable recourse costs for end users.
###### Acknowledgements.
This work is partially supported by the National Science Foundation (NSF) under grants IIS-2143895 and IIS-2040800, and CCF-2023495.
|
2308.02601 | Two Candidate Pulsar TeV Halos Identified from Property-Similarity
Studies | TeV halos have been suggested as a common phenomenon associated with
middle-aged pulsars. Based on our recent work on PSR~J0631+1036, which is the
only known source positionally coincident with a hard TeV gamma-ray source and
likely powers the latter as a TeV halo, we select 3 candidate TeV halos from
the first Large High Altitude Air Shower Observatory (LHAASO) catalog of
gamma-ray sources. The corresponding pulsars, given by the positional
coincidences and property similarities, are PSR J1958+2846, PSR J2028+3332, and
PSR J1849$-$0001. We analyze the GeV $\gamma$-ray data obtained with the Large
Area Telescope (LAT) onboard {\it the Fermi Gamma-ray Space Telescope} for the
first two pulsars, as the last is gamma-ray quiet. We remove the pulsed
emissions of the pulsars from the source regions from timing analysis, and
determine that there are no residual GeV emissions in the regions as any
possible counterparts to the TeV sources. Considering the previous
observational results for the source regions and comparing the two pulsars to
Geminga (and Monogem), the LHAASO-detected TeV sources are likely the pulsars'
respective TeV halos. We find that the candidate and identified TeV halos,
including that of PSR~J1849$-$0001, have luminosities at 50 TeV (estimated from
the differential fluxes) approximately proportional to the spin-down energy
$\dot{E}$ of the pulsars, and the ratios of the former to the latter are $\sim
6\times 10^{-4}$. | Dong Zheng, Zhongxiang Wang | 2023-08-04T08:18:21Z | http://arxiv.org/abs/2308.02601v2 | # Two Candidate Pulsar TeV Halos Identified from Property-Similarity Studies
###### Abstract
TeV halos have been suggested as a common phenomenon associated with middle-aged pulsars. Based on our recent work on the middle-aged pulsar J0631+1036, which is the only known source positionally coincident with a hard TeV \(\gamma\)-ray source and likely powers the latter as a TeV halo, we select 3 candidate TeV halos from the first Large High Altitude Air Shower Observatory (LHAASO) catalog of \(\gamma\)-ray sources. The corresponding pulsars, given by the positional coincidences and property similarities, are PSR J1958+2846, PSR J2028+3332, and PSR J1849\(-\)0001. We analyze the GeV \(\gamma\)-ray data obtained with the Large Area Telescope (LAT) onboard _the Fermi Gamma-ray Space Telescope_ for the first two pulsars, as the last is \(\gamma\)-ray quiet. We remove the pulsed emissions of the pulsars from the source regions through timing analysis, and determine that there are no residual GeV emissions in the regions as any possible counterparts to the TeV sources. Considering the previous observational results for the source regions and comparing the two pulsars to Geminga (and Monogem), the LHAASO-detected TeV sources are likely the pulsars' respective TeV halos. We find that the candidate and identified TeV halos, including that of PSR J1849\(-\)0001, have luminosities at 50 TeV (estimated from the differential fluxes) approximately proportional to the spin-down energy \(\dot{E}\) of the pulsars, and the ratios of the former to the latter are \(\sim 6\times 10^{-4}\).
Gamma-rays (637); Pulsars (1306)
## 1 Introduction
Invoked by the detections of extended TeV emissions around the Geminga and Monogem pulsars (Abeysekara et al., 2017), it has been suggested that very-high-energy (VHE; \(\geq 100\) GeV) TeV halos could be a common phenomenon associated with middle-age (\(\sim\)100 kyr) pulsars (Linden et al., 2017). How such TeV halos are formed is under intense theoretical studies (e.g., Evoli et al., 2018; Lopez-Coto and Giacintti, 2018; Tang and Piran, 2019; Fang et al., 2019; Liu et al., 2019, and also see Mukhopadhyay and Linden, 2022 and references therein); generally electrons/positrons emanated from a pulsar have somehow slowly diffused, forming a region larger than a pulsar wind nebula (PWN; see, e.g., Sudoh et al., 2019; Giacinti et al., 2020) and emitting the observed TeV photons. On the basis of current speculations, \(\sim\)100 TeV halos may be detectable in our Galaxy (e.g., Linden et al., 2017; Sudoh et al., 2019), and studies of them would help clarify the pulsars' contribution to cosmic electrons/positrons (e.g., Manconi et al., 2020).
In our recent studies of the region of a middle-aged pulsar J0631+1036 (Zheng et al., 2023), which has a high positional coincidence with a TeV source likely detected with the High-Altitude Water Cherenkov (HAWC) Observatory as 3HWC J0631+107 (Albert et al., 2020) and with the Large High Altitude Air Shower Observatory (LHAASO; Cao et al., 2019) as 1LHAASO J0631+1040 (Cao et al., 2023), no GeV \(\gamma\)-ray emission was found at the region in the data obtained with the Large Area Telescope (LAT) onboard _the Fermi Gamma-ray Space Telescope (Fermi)_. The non-detection, which sets a constraint on the existence of a PWN associated with the pulsar, strongly suggests the TeV source as a TeV halo powered by the pulsar, as PWNe detected at TeV energies appear to have soft emissions (H. E. S. S. Collaboration et al., 2018) and most of them can be detectable at GeV energies with _Fermi_ LAT (see, e.g., Smith et al., 2023; Zheng et al., in preparation). This possibility is supported by the great similarities in the X-ray properties of the pulsar with Geminga
and the properties of the TeV source with the TeV halo of Geminga.
Following the studies, we have checked for potential TeV halos among the sources in the current VHE source catalogs, which include the one reported by the High Energy Stereoscopic System (HESS) Galactic plane survey (H. E. S. S. Collaboration et al., 2018). As argued in Zheng et al. (2023), sources were selected if they have hard TeV emission (i.e., possibly having an energy spectrum peaking around \(\sim\)25 TeV) and do not have obvious supernova remnant (SNR) or PWN counterparts, but have a high positional coincidence with a pulsar. We found three such sources, HESS J1849\(-\)000 (or 1LHAASO J1848\(-\)0001u), 1LHAASO J1959\(+\)2846u, and 1LHAASO J2028\(+\)3352, that are in possible association with PSR J1849\(-\)0001, PSR J1958\(+\)2846, and PSR J2028\(+\)3332 (hereafter J1849, J1958, and J2028, respectively). However, the first one, J1849, is an X-ray pulsar that is both radio and \(\gamma\)-ray quiet (Gotthelf et al., 2011; Bogdanov et al., 2019). We thus conducted analysis of the _Fermi_ LAT data similar to that in Zheng et al. (2023) for J1958 and J2028. These two pulsars are \(\gamma\)-ray bright, first discovered from the _Fermi_ LAT observations (Abdo et al., 2009; Pletsch et al., 2012), while J2028 is radio quiet (Griessmeier et al., 2021 and references therein).
In this paper, we report on the results from our analysis for the two pulsars. The analysis and results are presented below in Section 2. Based on the results and properties of the TeV sources (1LHAASO J1959\(+\)2846u and 1LHAASO J2028\(+\)3352) and pulsars, we suggest that the TeV sources are likely TeV halos of the pulsars. This discussion, including a summary for the possible properties of the TeV halos, is presented in Section 3.
## 2 Data Analysis
### LAT data and source model
We used the \(\gamma\)-ray data collected with _Fermi_ LAT (Atwood et al., 2009). The two regions of interest (RoIs) were set to have a size of \(15^{\circ}\times 15^{\circ}\), each centered at the positions of J1958 and J2028, respectively. The events in each RoI in the energy range of 0.1-500 GeV over the time period from 2008 August 04 15:43:36 (UTC) to 2023 February 16 00:00:00 (UTC; approximately 14.5 yr) were selected from the latest _Fermi_ Pass 8 database. Following the recommendations of the LAT team1, we excluded the events with quality flags of 'bad' and zenith angles \(\geq\) 90\({}^{\circ}\).
Footnote 1: [http://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/](http://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/)
We used the _Fermi_ LAT 12-year source catalog (4FGL-DR3; Abdollahi et al., 2022) to construct the source models. The sources within 15\({}^{\circ}\) of the pulsar targets in the catalog were included in the source models, and their spectral forms provided by the catalog were used. [Since the _Fermi_ LAT Fourth source catalog was just updated (4FGL-DR4; Ballet et al., 2023) while we were finishing this paper, we checked for possible differences between the two releases. For the RoIs, there are no significant differences, and for the regions of approximately 5\({}^{\circ}\) around the pulsars, there were no differences.] The spectral model files gll_iem_v07.fits and iso_P8R3_SOURCE_V3_v1.txt were used for the background Galactic and isotropic (extragalactic) diffuse emissions, respectively.
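For illustration, the event cuts described above can be expressed as simple column selections on an FT1-style event file; the quality-flag filtering is normally applied with the standard Fermitools and is omitted here, so this sketch is not the exact pipeline used in this work.

```python
import numpy as np
from astropy.io import fits

def select_events(ft1_path, emin_mev=100.0, emax_mev=5.0e5, zmax_deg=90.0):
    """Keep 0.1-500 GeV events with zenith angle < 90 deg from an FT1 file."""
    with fits.open(ft1_path) as hdul:
        ev = hdul['EVENTS'].data
        keep = ((ev['ENERGY'] >= emin_mev) & (ev['ENERGY'] <= emax_mev) &
                (ev['ZENITH_ANGLE'] < zmax_deg))
        return ev[keep]
```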
### Timing analysis
The two pulsars are bright in the LAT \(\gamma\)-ray band. We selected photons within a 1\({}^{\circ}\)-radius circular region centered at each of them for the purpose of constructing their pulse profiles and thus determining the offpulse phase ranges.
#### 2.2.1 PSR J1958\(+\)2846
We first attempted to fold the photons with the ephemeris given in the LAT Gamma-ray Pulsar Timing Models (GPTM) database2 (Ray et al., 2011). No clear pulse profile over the 14.5-yr time period could be obtained.
Footnote 2: [https://confluence.slac.stanford.edu/display/GLAMCOG/LAT+Gamma-ray+Pulsar+Timing+Models](https://confluence.slac.stanford.edu/display/GLAMCOG/LAT+Gamma-ray+Pulsar+Timing+Models)
We then used the method described in Xing et al. (2022), in which the photons during MJD 54682-56540 (a time period similar to that in the LAT GPTM database) were folded according to the known ephemeris by using the _Fermi_ TEMPO2 plug-in (Edwards et al., 2006; Hobbs et al., 2006). An empirical template profile was constructed, based on which the times of arrival (TOAs) were generated for as many \(\sim\)200-day sets of LAT data as possible. The template and TOAs were obtained using the maximum likelihood method described in Ray et al. (2011). We fitted the TOAs with TEMPO2 by adding high-order frequency derivatives and updated the ephemeris. However, this ephemeris, whose main parameters \(f\), \(f_{1}\), and \(f_{2}\) are given in Table 1, could only cover the data before MJD 56540.
For the data after MJD 56540, we were able to find two template profiles during MJD 56540-56740 and MJD 56740-59540, while the same method as the above was used. The updated ephemerides are given in Table 1.
We folded the photons in the three time periods according to the ephemerides respectively to construct the
pulse profiles. There were phase shifts between them, \(\simeq 0.4375\) between the first and second and \(\simeq 0.125\) between the first and the third. After making the corrections for the phase shifts, a pulse profile spanning nearly 13.3 yr was obtained (left panel of Figure 1). Based on the profile, we defined phase 0.0-0.625 as the onpulse phase range and phase 0.625-1.0 as the offpulse phase range.
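The folding itself amounts to evaluating the Taylor expansion of the spin phase with the parameters of Table 1; a simplified sketch (ignoring the barycentric and timing corrections handled by TEMPO2) is given below.

```python
import numpy as np

SEC_PER_DAY = 86400.0

def pulse_phase(t_mjd, pepoch_mjd, f, f1=0.0, f2=0.0):
    """phi = f*dt + f1*dt^2/2 + f2*dt^3/6 (dt in seconds), modulo 1."""
    dt = (np.asarray(t_mjd) - pepoch_mjd) * SEC_PER_DAY
    return np.mod(f * dt + 0.5 * f1 * dt**2 + f2 * dt**3 / 6.0, 1.0)

# e.g. for J1958, photons with phase < 0.625 form the onpulse sample and the
# rest the offpulse sample (frequency epoch MJD 55555, Table 1).
```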
#### 2.2.2 PSR J2028\(+\)3332
For this pulsar, the selected photons could be easily folded according to the ephemeris given in LAT GPTM database. A pulse profile over the whole LAT data time period was obtained, while the ephemeris used is given in Table 1. Based on the pulse profile, which is shown in the right panel of Figure 1, we defined phase 0.0-0.5625 and 0.9375-1.0 as the onpulse phase ranges and phase 0.5625-0.9375 as the offpulse phase range.
### Likelihood and spectral analysis
#### 2.3.1 Onpulse data
We performed standard binned likelihood analysis of the 0.1-500 GeV LAT data during the onpulse phase ranges of the two pulsars determined above. The source models described in Section 2.1 were used. The sources within \(5^{\circ}\) from each of the pulsars were set to have free spectral parameters, while for the other sources in the
\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
Source & Time range & \(f\) & \(f_{1}/10^{-12}\) & \(f_{2}/10^{-22}\) \\
 & (MJD) & (Hz) & (Hz s\({}^{-1}\)) & (Hz s\({}^{-2}\)) \\
\hline
J1958 & 54682–56540 & 3.443489744(9) & \(-2.5133(1)\) & \\
 & 56540–56740 & 3.44382(7) & \(-13(2)\) & 2204(470) \\
 & 56740–59540 & 3.44349004(5) & \(-2.521(1)\) & 1.9(3) \\
\hline
J2028 & 54682–59991 & 5.65906764938(2) & \(-0.1555548(9)\) & \(0.000353(9)\) \\
\hline
\end{tabular}

The frequency epoch is MJD 55555 for both pulsars.
\end{table}
Table 1: Timing parameters derived for PSR J1958+2846 and PSR J2028+3332
Figure 1: Phase-connected pulse profile (_top_) and two-dimensional phaseogram (_bottom_) constructed for J1958 (_left_) and J2028 (_right_). Two spin cycles are shown for clarity. The onpulse and offpulse phase ranges are marked by dashed lines.
source models, their spectral parameters were fixed at the values given in the catalog. The background normalizations were set as the free parameters.
For fitting the emissions of the pulsars, we used a PLSuperExpCutoff4 (PLSEC) model (Abdollahi et al., 2022), \(\frac{dN}{dE}=N_{0}\left(\frac{E}{E_{0}}\right)^{-\Gamma-\frac{d}{2}\ln\left(\frac{E}{E_{0}}\right)-\frac{db}{6}\ln^{2}\left(\frac{E}{E_{0}}\right)-\frac{db^{2}}{24}\ln^{3}\left(\frac{E}{E_{0}}\right)}\), where \(\Gamma\) is the photon index, \(d\) the local curvature at \(E_{0}\), and \(b\) a measure of the shape of the exponential cutoff, which was fixed at 2/3 (a characteristic value used for the \(\gamma\)-ray pulsars in 4FGL-DR3). From the likelihood analysis, we obtained \(\Gamma=2.12\pm 0.01\) and \(d=0.63\pm 0.03\) (\(\Gamma=2.140\pm 0.004\) and \(d=1.109\pm 0.007\)) for J1958 (J2028). These values are close to those given in 4FGL-DR3. These results, as well as the corresponding TS values, are provided in Table 2.
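For reference, the PLSEC spectral form as written above can be evaluated as follows (a sketch only; \(N_{0}\) and \(E_{0}\) are the catalog normalization and reference energy, and the full binned likelihood fit is performed with the standard _Fermi_ tools rather than this function):

```python
import numpy as np

def plsec4(E, N0, E0, Gamma, d, b=2.0 / 3.0):
    """PLSuperExpCutoff4 dN/dE, with E and E0 in the same units (e.g. MeV)."""
    lnx = np.log(E / E0)
    exponent = -Gamma - 0.5 * d * lnx - (d * b / 6.0) * lnx**2 \
               - (d * b**2 / 24.0) * lnx**3
    return N0 * (E / E0) ** exponent

# Best-fit onpulse values quoted above: Gamma=2.12, d=0.63 for J1958 and
# Gamma=2.140, d=1.109 for J2028 (b fixed at 2/3).
```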
We also obtained the flux measurements of the \(\gamma\)-ray emissions of J1958 and J2028 from their onpulse data. The number of energy bins was set to 10 by evenly dividing the energy range from 0.1 to 500 GeV in logarithmic space. Fluxes were obtained from the maximum likelihood analysis of the data in each energy bin. In this analysis, the spectral normalizations of the sources within 5\({}^{\circ}\) from each pulsar were set as free parameters, and all other parameters of the sources in the source models were fixed at the values obtained from the above binned likelihood analysis. When the TS value of a spectral data point was less than 4, we used the 95% flux upper limit instead. The obtained spectra are shown in Figure 2.
#### 2.3.2 Offpulse data
We performed standard binned likelihood analysis of the 0.1-500 GeV LAT data during the offpulse phase ranges of the two pulsars. The parameter setups were the same as those in the analysis of the onpulse data (Section 2.3.1). We assumed a power law for any emission at the position of each pulsar, \(dN/dE=N_{0}(E/1\ \mathrm{GeV})^{-\Gamma}\). No emissions were detected. When we fixed \(\Gamma=2\), the resulting TS values were \(\sim\)0 (Table 2).
To verify the non-detection results in the offpulse data, we calculated 0.1-500 GeV TS maps for the two pulsar regions. PWNe or TeV halos may show extended weak emission at GeV energies (e.g., Di Mauro et al., 2021). However, as shown by the TS maps (Figure 3), no residual emissions are seen in either of the pulsar regions. We also tested a uniform-disk model (with a radius varied up to 1\({}^{\circ}\)) centered at each of the pulsars and performed binned likelihood analysis. No extended emissions were detected, which confirms the visual inspection results of the TS maps.
## 3 Discussion
By selecting TeV sources sharing similarities with the TeV halo of Geminga, i.e., having hard emission and being positionally coincident with only a middle-aged pulsar, we have found 1LHAASO J1959+2846u and 1LHAASO J2028+3352 as possible TeV halos. The corresponding pulsars J1958 and J2028 both have bright \(\gamma\)-ray emissions in the _Fermi_ LAT GeV band. We
Figure 2: \(\gamma\)-ray spectra (black dots) of J1958 (_left_) and J2028 (_right_) in 0.1–500 GeV obtained from their onpulse data and the corresponding best-fit PLSEC models (dashed curves). From the offpulse data, the upper limits derived by assuming a power law with \(\Gamma=2\) are shown as the red lines. The fluxes or flux upper limits from different VHE observations of the source regions are also shown, which include in the _left_ panel, 1LHAASO J1959+2846u measured with the Kilometer Squared Array (KM2A) and Water Cherenkov Detector Array (WCDA; Cao et al., 2023), MGRO J1958+2848 (Abdo et al., 2009), VERITAS upper limits on J1958 (Archer et al., 2019), and MAGIC upper limits on the PWN of J1958 (Fernandez-Barral et al., 2017), and in the _right_ panel 1LHAASO J2028+3352 and MGRO upper limit on J2028.
conducted timing analysis of the LAT data to remove the pulsed emissions of the pulsars from the source regions, and found that no GeV \(\gamma\)-ray emissions were detected in each of the regions. The non-detections largely constrain the existence of PWNe associated with the pulsars, as we note that the spectral upper limits of \(\sim\)10\({}^{-13}\) erg s\({}^{-1}\) cm\({}^{-2}\) (Figure 2) set a luminosity limit on a PWN at a distance of 2 kpc down to \(\sim\)10\({}^{32}\) erg s\({}^{-1}\), lower than those of the known PWNe or candidate PWNe (e.g., Ackermann et al., 2011). Combining the non-detections of any GeV emissions with the hard emission property of the two TeV sources, it is likely that they are TeV halos of the two pulsars.
Below we discuss the different detection results for the source regions of the two pulsars in Sections 3.1 & 3.2, which help strengthen their likely associations with the two TeV sources. In Section 3.3, we show the detailed similarities of the two pulsars and their presumed TeV halos with the other pulsar TeV halos, in which we include the pulsar J1849.
### Psr J1958+2846
As shown in the left panel of Figure 3, in addition to 1LHAASO J1959+2846u, which is detected with an extension size of 0\(\fdg\)29 in the energy range of \(\geq\)25 TeV and has not been detected in 1-25 TeV, the Milagro gamma-ray observatory (MGRO) had a 4\(\sigma\) detection at the position of J1958 in 1-100 TeV (Abdo et al., 2009), and HAWC and LHAASO respectively reported two nearby sources, 3HWC J1957+291 (Albert et al., 2020) and LHAASO J1956+2845 (Cao et al., 2021). The MGRO detection is consistent with the LHAASO results (Figure 2), but the latter two have separation distances of 0\(\fdg\)54 and 0\(\fdg\)74, respectively. We suspect that the latter two are individual sources, not likely in association, which can be verified by near-future follow-up observations. In addition, the MAGIC telescopes searched for the PWN of J1958 (Fernandez-Barral et al., 2017) and the VERITAS telescopes searched for pulsed \(\gamma\)-ray emission of J1958 (Archer et al., 2019), both at energies of \(\gtrsim\)100 GeV, but no emission was detected (see Figure 2 for their upper limits). Finally, we note that there are two candidate SNRs identified from an optical H\(\alpha\) survey (Sabin et al., 2013), which positionally overlap with the pulsar or 1LHAASO J1959+2846u. However, very limited information is available for these two candidate SNRs (e.g., Green, 2019). We suggest they are
Figure 3: TS maps of the J1958 (_left_) and J2028 (_right_) regions in 0.1–500 GeV calculated for the offpulse phase ranges of the two pulsars. In the _left_ panel, the positional error circle and extension of 1LHAASO J1959+2846u (Cao et al., 2023) are marked by white solid and dashed circles respectively, which are coincident with J1958’s position (red diamond). Also shown are the positional error circles of 3HWC J1957+291 (pink dashed; Albert et al., 2020) and LHAASO J1956+2845 (red dashed; Cao et al., 2021). In addition, two candidate SNRs, G65.8\(-\)0.5 and G66.0\(-\)0.0 (marked by yellow dashed and cyan dashed ellipses respectively), reported in Sabin et al. (2013) are located in the region as well. In the _right_ panel, the positional error circle and extension of 1LHAASO J2028+3352 are marked as white solid and dashed circles respectively. The pulsar J2028, along with two _Fermi_ LAT sources, is located in the error circle.
not likely the counterpart to the TeV source given the non-detection of any GeV \(\gamma\)-ray emissions in the offpulse data.
### Psr J2028+3332
The field of J2028 is rather clean, as 1LHAASO J2028+3352, having a positional error circle of 0\(\fdg\)86 and an extension size of 1\(\fdg\)7, is the only known VHE source reported (Figure 3). The source has similar hard emission, detected in 25-100 TeV but not in 1-25 TeV. There was a reported upper limit on the pulsar from MGRO observations (Abdo et al., 2009), and the upper limit is lower than that of the LHAASO measurements (Figure 2). This inconsistency is not understood, probably due to the simple assumption of a point source with a fixed power-law spectral form in the MGRO data analysis. We note that there are two _Fermi_ LAT sources also detected in the error circle, 4FGL J2025.3+3341 and 4FGL J2027.0+3343. Given in the LAT catalog, the first is identified as a blazar, thus not likely the counterpart to 1LHAASO J2028+3352, and the second is an unknown-type source with its emission described with a power law and is faint (TS\(\sim\)40). The nature of this second GeV source remains to be investigated, helping clarify whether its presence is purely because of a coincidence due to the large error circle of the TeV source.
In addition, since J2028 is radio quiet, there is no distance estimation for the source. We obtained its phase-averaged \(\gamma\)-ray flux, \(\simeq\)5.1\(\times\)10\({}^{-11}\) erg cm\({}^{-2}\) s\({}^{-1}\) in 0.1-500 GeV. Given its spin-down energy \(\dot{E}=3.5\times 10^{34}\) erg s\({}^{-1}\), the distance should be \(\leq\)2.4 kpc. If we further consider an efficiency of 10% for converting \(\dot{E}\) to the \(\gamma\)-ray emission (Smith et al., 2023), the distance would be \(\simeq\)0.76 kpc. Therefore the pulsar is likely close, which can explain the large extension size of 1LHAASO J2028+3352 if we consider the latter as the putative TeV halo.
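The distance limits quoted above follow from requiring the isotropic \(\gamma\)-ray luminosity \(4\pi d^{2}F\) not to exceed (a fraction of) \(\dot{E}\); a short numerical check is:

```python
import numpy as np

CM_PER_KPC = 3.086e21

def max_distance_kpc(flux, edot, efficiency=1.0):
    """Distance at which 4*pi*d^2*flux equals efficiency*edot
    (flux in erg cm^-2 s^-1, edot in erg s^-1)."""
    return np.sqrt(efficiency * edot / (4.0 * np.pi * flux)) / CM_PER_KPC

print(max_distance_kpc(5.1e-11, 3.5e34))       # ~2.4 kpc
print(max_distance_kpc(5.1e-11, 3.5e34, 0.1))  # ~0.76 kpc
```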
### Property comparison
We collected the properties of these two pulsars as well as Geminga, Monogem, and PSR J0631+1036, which are given in Table 3. In addition, although J1849 is \(\gamma\)-ray quiet and we did not conduct analysis for it, we also list it in the table because the corresponding TeV source 1LHAASO J1848-0001u shows similar hard emission. These pulsars have spin periods of 0.2-0.4 s, except J1849, whose value is only \(\simeq\)0.04 s. The characteristic ages spread from 20 to 600 kyr. They are mostly X-ray faint and have faint (or non-detectable) X-ray PWNe, again except J1849 (considering its assumed distance of \(\sim\)7 kpc; Gotthelf et al., 2011). Since they are all detected by LHAASO in 25-100 TeV, we also list their fluxes at 50 TeV (Cao et al., 2023) in Table 3.
As noted in Zheng et al. (2023), the putative TeV halo of PSR J0631+1036 has a flux proportional to \(\dot{E}\) of the pulsar when compared to the corresponding values of Geminga (where the HAWC measurements were used). We thus plot the luminosity values at 50 TeV of the (candidate) TeV halos given by the LHAASO detections versus \(\dot{E}\) of the pulsars in the left panel of Figure 4, where for the source distances (PSR J0631+1036 and J1958) given by the radio timing we assign 30% uncertainties, for J2028 the upper limit is 2.4 kpc and we also assign a 30% uncertainty on the possible distance of 0.76 kpc, and for PSR J1849 its suggested distance of 7 kpc is assumed to have a 30% uncertainty as well. As can be seen, there is a possible correlation between the 50 TeV luminosities (\(L_{\rm 50TeV}\)) and \(\dot{E}\), although the distances suffer from large uncertainties. Using the four pulsars with relatively certain properties (Geminga, Monogem, PSR J0631+1036, and J1958), we find a correlation of \(L_{\rm 50TeV}\simeq(2.5\pm 1.7)\dot{E}^{0.90\pm 0.01}\), where the Markov Chain Monte Carlo (MCMC) code emcee (Foreman-Mackey et al., 2013) was used. The index value 0.90 is close to 1, which does suggest that the fluxes of the TeV halos are proportional to \(\dot{E}\). It is also interesting to note that J1849 lies along this correlation line.
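A sketch of such a power-law fit with emcee is shown below; the luminosities would be obtained from the 50 TeV differential fluxes and the adopted distances, and the simple Gaussian likelihood in log space is an assumption of this illustration rather than the exact treatment of the distance uncertainties used here.

```python
import numpy as np
import emcee

def log_prob(theta, log_edot, log_l, sigma):
    log_a, b = theta
    model = log_a + b * log_edot          # log L_50TeV = log a + b log Edot
    return -0.5 * np.sum(((log_l - model) / sigma) ** 2)

def fit_powerlaw(log_edot, log_l, sigma, nwalkers=32, nsteps=2000):
    p0 = np.array([0.0, 1.0]) + 1e-3 * np.random.randn(nwalkers, 2)
    sampler = emcee.EnsembleSampler(nwalkers, 2, log_prob,
                                    args=(log_edot, log_l, sigma))
    sampler.run_mcmc(p0, nsteps)
    chain = sampler.get_chain(discard=nsteps // 2, flat=True)
    return chain.mean(axis=0), chain.std(axis=0)   # (log a, b) and errors
```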
Another way to show this possible correlation is to plot the ratios of \(L_{\rm 50TeV}\) to \(\dot{E}\). In the right panel of Figure 4, we plot the ratios versus ages (\(\tau\)) of the pulsars. Despite the large ranges mainly caused by the distance uncertainties, we fit the data points of the four pulsars (given above) with a constant \(c\) and a function of \(a\tau_{\rm kyr}^{b}\) (where \(\tau_{\rm kyr}\) is \(\tau\) in units of kyr), and obtain \(c=(5.9\pm 1.1)\times 10^{-4}\) and \(a=(1.2^{+2.3}_{-0.8})\times 10^{-3}\), \(b=-0.18\pm 0.26\), where for the latter the MCMC method was used. For the best fits, the corresponding values of \(\chi^{2}\) over degrees of freedom (DoF) are 0.4/3 and 1.5/2, respectively; the latter does not provide a better fit, and thus the brightnesses of TeV halos are likely not closely related to the ages of the pulsars. We conclude that the ratios are approximately in the range of \(4.8\)–\(7.0\times 10^{-4}\).
From the comparison analysis, we may summarize the general properties of pulsar TeV halos and thus the method to identify them. The TeV sources should have hard emission with spectrum peaks \(\gtrsim 25\) TeV, which can be used to differentiate them from PWNe (Zheng et al., 2023). The source regions are rather clean, without any potential SNR or PWN counterparts known at energies of X-rays or \(\gamma\)-rays, while middle-aged pulsars can be found positionally coincident with the source regions. The pulsars are generally \(\gamma\)-ray bright and X-ray faint, and may have faint X-ray PWNe. Then, considering the relationship built on the limited cases (i.e., Figure 4), the TeV halos would have fluxes proportional
to \(\dot{E}\) of the pulsars, and a typical value for the ratios of TeV luminosities (at a given energy) over \(\dot{E}\) would probably be in the range of \(10^{-4}\)–\(10^{-3}\).

This research is supported by the Basic Research Program of Yunnan Province (No. 202201AS070005), the National Natural Science Foundation of China (12273033), and the Original Innovation Program of the Chinese Academy of Sciences (E085021002).
|
2307.12297 | Simultaneous temperature estimation and nonuniformity correction from
multiple frames | IR cameras are widely used for temperature measurements in various
applications, including agriculture, medicine, and security. Low-cost IR
cameras have the immense potential to replace expensive radiometric cameras in
these applications; however, low-cost microbolometer-based IR cameras are prone
to spatially variant nonuniformity and to drift in temperature measurements,
which limit their usability in practical scenarios.
To address these limitations, we propose a novel approach for simultaneous
temperature estimation and nonuniformity correction (NUC) from multiple frames
captured by low-cost microbolometer-based IR cameras. We leverage the camera's
physical image-acquisition model and incorporate it into a deep-learning
architecture termed kernel prediction network (KPN), which enables us to
combine multiple frames despite imperfect registration between them. We also
propose a novel offset block that incorporates the ambient temperature into the
model and enables us to estimate the offset of the camera, which is a key
factor in temperature estimation.
Our findings demonstrate that the number of frames has a significant impact
on the accuracy of the temperature estimation and NUC. Moreover, introduction
of the offset block results in significantly improved performance compared to
vanilla KPN. The method was tested on real data collected by a low-cost IR
camera mounted on an unmanned aerial vehicle, showing only a small average
error of $0.27-0.54^\circ C$ relative to costly scientific-grade radiometric
cameras.
Our method provides an accurate and efficient solution for simultaneous
temperature estimation and NUC, which has important implications for a wide
range of practical applications. | Navot Oz, Omri Berman, Nir Sochen, David Mendelovich, Iftach Klapp | 2023-07-23T11:28:25Z | http://arxiv.org/abs/2307.12297v2 | # Simultaneous temperature estimation and nonuniformity correction from multiple frames
###### Abstract
_IR_ cameras are widely used for temperature measurements in various applications, including agriculture, medicine, and security. Low-cost _IR_ cameras have the immense potential to replace expensive radiometric cameras in these applications; however, low-cost microbolometer-based _IR_ cameras are prone to spatially variant nonuniformity and to drift in temperature measurements, which limit their usability in practical scenarios.
To address these limitations, we propose a novel approach for simultaneous temperature estimation and nonuniformity correction (NUC) from multiple frames captured by low-cost microbolometer-based _IR_ cameras. We leverage the camera's physical image-acquisition model and incorporate it into a deep-learning architecture termed kernel prediction network (KPN), which enables us to combine multiple frames despite imperfect registration between them. We also propose a novel offset block that incorporates the ambient temperature into the model and enables us to estimate the offset of the camera, which is a key factor in temperature estimation.
Our findings demonstrate that the number of frames has a significant impact on the accuracy of the temperature estimation and NUC. Moreover, introduction of the offset block results in significantly improved performance compared to vanilla KPN. The method was tested on real data collected by a low-cost _IR_ camera mounted on an unmanned aerial vehicle, showing only a small average error of \(0.27-0.54^{\circ}C\) relative to costly scientific-grade radiometric cameras.
Our method provides an accurate and efficient solution for simultaneous temperature estimation and NUC, which has important implications for a wide range of practical applications.
Deep learning (DL), fixed-pattern Noise (FPN), _IR_ camera, microbolometer, multiframe, nonuniformity correction (NUC), space-variant nonuniformity, temperature estimation.
## I Introduction
Temperature is an important indicator of an object's state. For example, the temperature of a plant is important in deducing information on its well-being [1, 2]. Long-wave _IR_ (LWIR) imaging, commonly termed _IR_ imaging, measures the thermal radiation emitted from an object. To avoid noise and improve accuracy, radiometric _IR_ cameras employ either a cooling mechanism or a sophisticated shuttering apparatus. Both are expensive and energy consuming, which result in a highly expensive camera. Although _IR_ imaging is a well-established technique, the high cost of _IR_ cameras prohibits its widespread use. There exists an alternative approach to radiometric thermal imaging involving the use of low-cost uncooled microbolometer arrays, which can facilitate the creation of inexpensive _IR_ cameras with low energy requirements, but with a significant loss in accuracy. Unlike photon-counting detector arrays, microbolometer arrays gauge alterations in electrical resistance resulting from the radiation emitted from an object [3]. Each microbolometer in the array is heated by the thermal radiation to a temperature that is dependent on the scene, resulting in each microbolometer having a marginally different temperature based on the observed scene and the incident angle of the radiation. The incident radiation causes a miniscule change in the resistance of the microbolometer. The temperature of the scene is reflected by the variation in resistance of each microbolometer. The infinitesimal changes in resistance detected by each microbolometer are used to create an image that corresponds to the temperature of the observed scene.
Although microbolometer arrays are a useful tool for thermal imaging, they have significant limitations. Space-variant nonuniformity and noise from various sources affect the accuracy of these arrays. The nonuniformity drifts due to the change in ambient temperature, which causes unpredictable errors in the sensor readings. The lack of a cold shield in the uncooled camera is a prominent cause of nonuniformity [4]. This self-radiation effect is attributed to the camera's housing and lens, which emit thermal radiation onto the sensor. This self-radiation varies according to the ambient temperature of the camera.
Fixed-pattern noise (FPN) is another factor that contributes to nonuniformity in microbolometer arrays. The readout circuitry of these arrays is typically line-based, like charge-coupled devices. Even minor differences between line-readers on the same array can result in significant variation between lines in the resulting image [5]. Noises in the camera increase
Fig. 1: Estimating the scene temperature from a burst of gray-level frames.
the noise equivalent differential temperature (NEDT), which refers to the minimum detectable change in scene temperature [5]. The NEDT is a measure of the sensitivity of the camera; the higher the NEDT, the less sensitive the camera is to changes in temperature. An image of a uniform heat source (blackbody) is shown in Fig. 2. The spatially-variant nonuniformity is demonstrated by the radial patterns in the gray levels of the left subfigure. The subfigure on the right plots the gray levels along the blue dashed line, showing the impact of nonuniformity and noise on the gray levels.
A widely used application of _IR_ imaging is remote sensing - the process of acquiring information about an object without making physical contact with it. The information is acquired by measuring the radiation that is reflected by, or emitted from the object. The information is then used to deduce its physical properties. Remote sensing is used in a variety of fields (e.g., agriculture, geology, and meteorology).
One common use for _IR_ cameras is to mount them on drones. This setup results in high overlap between frames (Section V-A). The redundant information can be used to simultaneously improve the accuracy of the temperature estimation and correct nonuniformity in the frames. Fig. 3 illustrates how redundant information between frames can be beneficial. The object is affected differently by the nonuniformity in each frame, which means that the true underlying temperature of the object can be extracted.
The aims of this study are to: exploit the redundancy in the data and the physical model of the camera to develop a method of estimating scene temperatures using a low-cost microbolometer-based _IR_ camera, and correct for nonuniformity in the frames.
## II Related work
The estimation of temperature can be broadly divided into two parts: transforming the output of the camera to temperatures, and correcting nonuniformity in the sensor. Determining the transformation from camera output to temperature is called _thermal calibration_. Correcting the nonuniformity in the sensor is called _nonuniformity correction_ (NUC).
### _Thermal calibration_
The raw output of the _IR_ camera is dependent on the object temperature, and the output values themselves are given in gray levels. For example, the dynamic range of the gray level in the low-cost _IR_ Tau2 is 14 bits. The classical approach is to calibrate the camera for different ambient temperatures [6].
A large dataset of object-ambient temperature pairs must be collected for calibration. The gain and offset are calculated from the per-pixel data to determine the spatially variant nonuniformity. Thus, the calibration process usually requires considerable time and resources.
Schulz and Caldwell [6] used a single-point correction, i.e., a single ambient temperature was used, a constant gain was assumed, and only the offset was found. Riou et al. [5] suggested a two-point correction that requires two ambient temperatures, but solved for both gain and offset; this correction is widely used across industrial _IR_ cameras today. Both methods use a linear regression to extract the gain and offset coefficients. Nugent et al. [7] modeled the gain and offset as a polynomial in the temperature of the object and used least-squares to extract the coefficients. Contemporary work adds prior knowledge to the calibration process. Liang et al. [8] found the gain and offset for a given temperature and interpolated the results for other ambient temperatures. Chang and Li [9] incorporated the integration time of each frame as prior knowledge to the calibration.
The calibration data must be collected for each camera separately, because each camera is slightly different due to the manufacturing process. This requires scientific-grade equipment, making the calibration process infeasible for most users.
### _Nonuniformity correction_
As stated in Section I, the frames of the _IR_ camera suffer from spatially variant nonuniformity. The nonuniformity can be corrected for a single frame, or by combining information from multiple frames (known as scene-based).
#### Ii-B1 Single Frame
A given image contains information that can be exploited for different tasks, such as low frequencies [10], recurring patches in the image [11] or the statistical distribution of patches in the image [12]. Some works used a single image to correct the nonuniformity.
Scribner et al. [13] used a neural network (NN) to find the offset and gain by alternating optimization and gradient descent. Tendero and Gilles [14] used histogram equalization across the columns in a frame, and then applied a discrete
Fig. 3: Simulation of consecutive frames taken during a drone flight from an _IR_ camera. The frames sampled on the left image are marked by colored rectangles. The effects of the spatially variant nonuniformity are seen in the frames on the right. The cross on the road appears in different locations in the different frames and is affected differently by the spatially variant nonuniformity.
Fig. 2: Example of the nonuniformity in low-cost _IR_ cameras. On the left is an image of a \(30^{\circ}C\) blackbody with ambient temperature of \(44.7^{\circ}C\), and on the right are the intensities along the blue dashed line.
cosine transform to denoise the frame. Cao and Tisse [15] relied on spatial dependence between adjunct pixels to estimate both the ambient temperature and the correction. Zhao et al. [16] solved an optimization problem, with a constraint on the directional gradients of each frame.
Recent work has applied deep learning (DL) methods for single-image NUC. Jian et al. [17] learned the nonuniformity pattern from the filtered high frequencies of the frames. He et al. [18] trained a convolutional neural network (CNN) that outputs a corrected image end to end (E2E). Chang et al. [19] constructed a multiscale network to reconstruct a corrected frame. Saragadam et al. [20] solved an optimization problem with a NN as the prior, and a physical model as the constraint. Oz et al. [21] modeled the nonuniformity and trained a network based on the physics of the acquisition model.
Single-image methods require only a single frame so they are easier to apply, but their performance is degraded compared to scene-based methods.
#### Ii-B2 Scene-Based
Scene-based studies rely on the assumption that the change in ambient temperature is slower than the frame rate, and therefore the gain and offset are constant between consecutive frames.
Harris and Chiang [22] calculated shift and normalization terms per pixel and updated these terms recursively when new frames arrived. Hardie et al. [23] registered the frames and then averaged the results per pixel. Vera and Torres [24] improved the NN suggested by Scribner et al. [13] with an adaptive learning rate and a different loss function that accounts for multiframe information. Averbuch et al. [25] reformulated the NUC problem to a Kalman filter. Zuo et al. [26] estimated per-pixel _irradiance_ between two frames. Papini et al. [27] approximated the gain and offset from multiple pairs of blurred and sharp images. The common characteristic of these studies is that an update step must be performed when new frames arrive, before the correction step. The combined update and correction steps are computationally intensive and pose a constraint on the run time of the system.
A NN-based method to simultaneously estimate the scene temperature and correct the nonuniformity using multiframe information has not yet been achieved.
The present study builds on the image-acquisition model, which describes the relationship between the observed scene and the output of the camera (Section III-A). By leveraging redundant information across multiple frames and ambient temperature data, the study develops a kernel prediction network (KPN) that uses DL techniques to estimate the temperature of each pixel (Section IV-A).
The efficacy of the method is demonstrated through comparisons of real measurements obtained with an uncooled _IR_ camera and those from a scientific radiometric camera. These tests illustrate the method's ability to correct for nonuniformity and estimate temperatures accurately across different cameras (Section V-A).
Our main contributions consist of: (1) exploiting the redundant information between frames to simultaneously estimate the scene temperature and correct the nonuniformity using a NN; (2) imposing the physical model of the camera as a constraint on the network to enhance the temperature-estimation accuracy; (3) incorporating the ambient temperature data as an additional input to the network to further improve the accuracy of the temperature estimation; and (4) demonstrating the advantages of using multiple frames over single-frame methods through extensive experiments on synthetic and real data.
## III Background
We develop the physical image-acquisition model of the _IR_ camera in Section III-A, and then expand it to multiple frames in Section III-B.
### _Image acquisition_
A _blackbody_ is an ideal Lambertian surface that emits the maximal radiation at any given wavelength. The spectral density of radiation emitted from a blackbody is described by Planck's law [4]:
\[M_{\lambda}(T)=\frac{2\pi hc^{2}}{\lambda^{5}}\frac{1}{\exp(\frac{hc}{\lambda kT })-1}\quad[W\cdot sr^{-1}\cdot m^{-3}] \tag{1}\]
where \(T\) is the temperature of the blackbody in Kelvin, \(\lambda\) is the radiation wavelength, \(h\) is Planck's constant, \(k\) is the Boltzmann constant and \(c\) is the speed of light.
The power emitted over the entire bandwidth is found using the Stefan-Boltzmann law [4]:
\[M(T)=\int_{0}^{\infty}M_{\lambda}(T)d\lambda=\sigma\cdot T^{4}\quad[W\cdot sr ^{-1}\cdot m^{-2}] \tag{2}\]
where \(\sigma\) is the Stefan-Boltzmann constant.
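A quick numerical check of (2), integrating (1) over wavelength for a 300 K blackbody, is sketched below (constants rounded):

```python
import numpy as np

h, c, k, sigma = 6.626e-34, 2.998e8, 1.381e-23, 5.670e-8

def planck(lam, T):
    """Spectral exitance of (1) as a function of wavelength lam [m]."""
    return (2 * np.pi * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

T = 300.0
lam = np.logspace(-6, -3, 4000)        # 1 um to 1 mm
print(np.trapz(planck(lam, T), lam))   # ~459
print(sigma * T**4)                    # ~459, matching the Stefan-Boltzmann law
```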
Equations (1) and (2) hold for an ideal blackbody. Real objects can never emit the maximal radiation for a given wavelength due to physical constraints (e.g., material, viewing angle). The ratio between the ideal emission and the practical emission of an object is called _emissivity_. Thus, the Stefan-Boltzmann law for radiance power of practical objects is:
\[M(T)=\sigma\cdot\epsilon\cdot T^{4}\quad[W\cdot sr^{-1}\cdot m^{-2}] \tag{3}\]
where \(\epsilon\) is the emissivity.
The incident power by an object on a microbolometer is estimated by integrating over the physical dimensions of the system in (3). The incident power on the microbolometer can be written as [4]:
\[\phi(T)=\gamma\cdot\sigma\cdot\epsilon\cdot T^{4}\quad[W] \tag{4}\]
where \(\gamma\) is a coefficient that accounts for the dimensions of the object and the camera's field of view.
The intensities of the pixels in radiometric _IR_ cameras (i.e., gray levels) are linearly proportional to the incident power on the microbolometer. To model the intensities, we consider a small environment near a reference temperature \(T_{0}\) and expand the Stefan-Boltzmann law in (4) by a Taylor series. In Kelvin, the temperature of the object can be considered a small perturbation around a reference temperature, because the reference temperature is usually hundreds of Kelvins, whereas
\(\Delta T\) is usually tens of Kelvins. The Taylor expansion of (4) is:
\[I(t_{obj})=\gamma\epsilon\sigma T^{4}=\gamma\epsilon\sigma(\Delta T+T_{0})^{4}\approx 4\gamma\epsilon\sigma T_{0}^{3}\Delta T+\gamma\epsilon\sigma T_{0}^{4}\approx g\cdot t_{obj}+d \tag{5}\]
where \(I(t_{obj})\) is the gray-level output of the _IR_ camera, and \(g=4\gamma\epsilon\sigma T_{0}^{3},d=\gamma\epsilon\sigma T_{0}^{4}\) are the gain and offset coefficients, respectively. Using the relationship between Kelvin and Celsius, we denote \(t_{obj}\equiv T-273.15\) in \({}^{\circ}C\).
Equation (5) shows that the radiation is linear in the scene temperature within a small neighborhood of \(T_{0}\), with the terms \(g\) and \(d\) determined by the reference temperature \(T_{0}\).
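The quality of this linearization can be checked numerically; for a reference temperature of 300 K and scene temperatures between 0 and 60 °C, the relative error of (5) (taking \(\gamma=\epsilon=1\)) stays below roughly 10%:

```python
import numpy as np

sigma, T0 = 5.670e-8, 300.0
t_obj = np.linspace(0.0, 60.0, 61)      # scene temperature [deg C]
T = t_obj + 273.15
dT = T - T0

exact = sigma * T**4
linear = 4 * sigma * T0**3 * dT + sigma * T0**4
print(np.max(np.abs(exact - linear) / exact))   # ~0.07
```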
The incident power \(\phi(t_{obj})\) in (4) changes the temperature of the microbolometer by a small fraction. The change in temperature also changes the electrical resistance of the microbolometer [28]. By applying a constant electrical current on the microbolometer and using an Ohm-like law, a map between the incident power and the voltage of the microbolometer can be derived [4]. In a low-cost uncooled _IR_ camera, the resistance of the microbolometer changes with the ambient temperature.
To account for this, the gain and offset of the _IR_ camera are modeled as a function of the ambient temperature [7]:
\[I(t_{obj},t_{amb})=g(t_{amb})\cdot t_{obj}+d(t_{amb}) \tag{6}\]
For a given ambient temperature, the gray levels of pixel \([u,v]\) can be written as:
\[I(t_{obj})[u,v]=g[u,v]\cdot t_{obj}[u,v]+d[u,v] \tag{7}\]
The gain and offset are two-dimensional (2D), and together they model the space-variant nonuniformity.
The SNR of uncooled _IR_ cameras is often low due to noise from several sources, the most dominant being \(\frac{1}{f}\) and electronic (Johnson) noise [28, Chapter 5]. The \(\frac{1}{f}\) noise dominates because the camera operates at a low frequency. \(\frac{1}{f}\) noise can be modeled as zero-mean Gaussian [29].
### _Multiframes_
Consecutive frames over a brief period of time have overlap between them [an example of a real unmanned aerial vehicle (UAV) pattern with overlapping frames can be seen in Section V-A]. These consecutive frames are called a burst. The overlap between frames implies that the same object appears in multiple frames. As seen in (5), the gain and offset are dependent on the pixel location on the sensor, thus different views of the same object can be exploited as redundant information. The redundant information between frames is demonstrated in Fig. 3. To exploit it, first an object must have the same coordinates across all frames. To achieve coordinate alignment, registration is performed.
Image registration is the process of aligning two or more images of the same scene taken from different viewpoints or at different times [30]. Transforming a source frame toward the coordinate system of another destination frame is called a projective transformation, or a homography [31, Ch. 0]. Homography transformation preserves co-linearity between the frames. Moreover, a homography is invertible and linear by definition [31, Def. 2.9]. In layman's terms, the homography preserves the shapes and relations between objects.
After applying the homography on the source frame, an object should have the same coordinates in both source and destination frames. The transformed source frame is called a warped frame. Expanding to \(N\) frames, there exists a set of projective transformations \(m_{1},\dots,m_{N}\) toward a common plane such that the overlap between the frames is maximal [31, Ch. 4]. Objects that appear in the overlapping area will have the same coordinates in every warped frame. For our practical use, we choose a pivot frame for each burst of frames and annotate the pivot frame as \(\mathcal{I}\).
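In practice, the homographies \(m_{1},\dots,m_{N}\) can be estimated with any off-the-shelf registration method; one common choice, sketched below with illustrative parameter values, is feature-based estimation with OpenCV (the 14-bit frames would first be scaled to 8 bits for feature detection).

```python
import cv2
import numpy as np

def warp_to_pivot(src, pivot):
    """Estimate the homography src -> pivot with ORB features and RANSAC,
    then warp src into the pivot frame's coordinate system."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(src, None)
    k2, d2 = orb.detectAndCompute(pivot, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
    H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
    warped = cv2.warpPerspective(src, H, (pivot.shape[1], pivot.shape[0]))
    return warped, H
```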
An underlying assumption throughout this work is that the gain and offset in (7) are constant for a series of frames taken over a short duration of time (a second). This assumption holds because the ambient temperature of the camera changes at a much slower rate (several minutes).
Let \(X\) be the fourth power of an accurate 2D temperature map, and \(I_{1},\dots,I_{N}\) be a set of \(N\) frames of \(X\) captured by the _IR_ camera. \(I_{i}\) is in gray levels. \(I_{i}^{x_{i},y_{i}}\) is the value of the pixel in the \([x_{i},y_{i}]\) location of \(I_{i}\). \([u,v]\) are the coordinates of the pivot frame \(\mathcal{I}\). The frames in the burst can be formulated as:
\[I_{1}^{x_{1},y_{1}}=g^{x_{1},y_{1}}(t_{amb})\cdot m_{1}^{-1}\left(X^{u,v} \right)+d^{x_{1},y_{1}}(t_{amb}) \tag{8}\] \[\vdots\] \[I_{N}^{x_{N},y_{N}}=g^{x_{N},y_{N}}(t_{amb})\cdot m_{N}^{-1}\left( X^{u,v}\right)+d^{x_{N},y_{N}}(t_{amb})\]
where \(m_{1},\dots,m_{N}\) are the set of homographies that transforms each frame into \(\mathcal{I}\). The zero-mean noise \(\mathcal{N}\) was omitted for brevity.
Equation 8 formulates the acquisition process of a frame as projecting the temperature map \(X\) using the inverse of the homography \(m_{i}\), and then sampling the projected \(X\) by applying the gain \(g\), offset \(d\) and noise \(\mathcal{N}\). Notice that an object will be sampled at different coordinates for each frame, and since the gain and offset are spatially variant, the object will have a different gain and offset for each frame. The result in (8) means that an object appearing in pixel \([u,v]\) of the temperature map \(X\) will have multiple representation with different values of \(g,d\) and \(\mathcal{N}\), enabling the use of redundant information between the frames. Redundant information between frames has been used for many image-restoration tasks, such as super-resolution [32, 33], denoising [34], and deblurring [35]. Many recent studies in the area have used DL for either alignment [36] or fusion of frames [37], or both [38, 39, 40].
## IV Temperature estimation
The proposed method simultaneously estimates the scene temperature and corrects nonuniformity from a burst of consecutive frames. An overview of the method is presented in Fig. 1. The _IR_ camera captures overlapping gray-level frames. A burst of consecutive gray-level frames is registered toward \(\mathcal{I}\) - the pivot frame that has the maximal overlap with all other frames. The registered gray-level frames are the input to the network, along with the ambient temperature. The output of the network is a 2D map of the estimated scene temperatures.
### _Network_
In (8) we show that different views of the same object have usable redundant information. To exploit the redundancy, these different perspectives require accurate mapping of the frames to the pivot frame \(\mathcal{I}\). Naively, a temperature map \(\hat{X}_{naive}\) can be estimated from (8) by:
\[\hat{X}_{naive}^{u,v}=\frac{1}{N}\sum_{i=1}^{N}\left[\frac{1}{g^{u-x_{i},v-y_{i}}}\tilde{I}_{i}^{u,v}-\frac{d^{u-x_{i},v-y_{i}}}{g^{u-x_{i},v-y_{i}}}\right]\;\longrightarrow\;\hat{X}_{naive}^{u,v}=\frac{1}{N}\sum_{i=1}^{N}\left[G^{u-x_{i},v-y_{i}}\tilde{I}_{i}^{u,v}+D^{u-x_{i},v-y_{i}}\right] \tag{9}\]
where \(G=1/g\) and \(D=-d/g\) are 2D coefficient maps. The naive approach requires exact registration between frames. The information must be located at the exact same coordinates across all frames. Inaccurate registration leads to artifacts or ghosting, as well as inexact temperature estimation. Even with a robust registration framework there is always some degree of misalignment between frames, so the naive approach is unsuitable for practical use.
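For completeness, the naive estimator of (9) reduces to a per-pixel average once the frames, gains and offsets have all been warped to the pivot coordinates; the sketch below assumes such pre-warped arrays.

```python
import numpy as np

def naive_temperature(warped_frames, G, D):
    """Eq. (9): warped_frames, G and D are (N, H, W) arrays already aligned
    to the pivot frame, with G = 1/g and D = -d/g per pixel."""
    return np.mean(G * warped_frames + D, axis=0)
```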
The method for temperature estimation proposed in this work is robust to misalignment between frames. The frames are registered toward \(\mathcal{I}\) using any off-the-shelf registration method, then fed into a NN that predicts a kernel for each pixel in every frame of the burst. The kernels are then applied on overlapping patches around each pixel by an inner product between the patch and kernel. Our method is based on the KPN proposed by De Brabandere et al. [41]. Fig. 4 shows kernels predicted by the network. The kernels compensate for misalignment between frames by spatially shifting their center to compensate for the shifts.
The architecture of the temperature estimation network is based on UNET [42], with the kernel prediction block attached to the rear end of the decoder. The encoder and decoder are composed of three \(3\times 3\) convolution layers with activations and normalizations, and are described in Table I in the supplementary material. The kernel prediction block is composed of three \(1\times 1\) convolution layers with activations, and is described in Table II in the supplementary material. The entire network architecture is detailed in the supplementary material.
Although the KPN corrects nonuniformity, its temperature estimation is inaccurate. To improve the latter to match radiometric cameras, we used the ambient temperature as prior information to calibrate the output of the network. The offset between the gray-level frames and the temperatures was modeled as a polynomial of the mean of the gray-level frames and the ambient temperature:
\[\tilde{d}\left(\tilde{I},t_{amb}\right)=\frac{1}{N}\sum_{n=1}^{N}\underbrace{ \left[\sum_{i,j=0}^{\nu}\delta_{i,j}\cdot\text{Mean}\left(\tilde{I}_{n} \right)^{i}\cdot t_{amb}^{j}\right]}_{\tilde{d}_{n}} \tag{10}\]
where \(\tilde{d}_{n}\) is the offset for frame \(n\), \(\text{Mean}\left(\tilde{I}_{n}\right)\) is the spatial mean of the \(n\)th gray-level frame, \(t_{amb}\) is the ambient temperature, \(\delta_{i,j}\) are the coefficients of the polynomial, and \(\nu\) is the degree of the polynomial.
The offset block was jointly trained with the network, allowing the entire network to train end-to-end. Namely, the coefficients \(\delta_{i,j}\) of \(\tilde{d}_{n}\) were realized by a set of \(N\times(\nu+1)\) weights organized in a matrix, such that a single matrix multiplication and summation is required to calculate the offset for all frames. We found that a polynomial of degree \(\nu=4\) offers sufficient improvement in the accuracy of the temperature estimation, and that training the offset block separately from the network does not offer a significant improvement. Fig. 5 shows the error between the temperature estimation of the offset block and the ground truth (GT) temperature. The error is sub-degree Celsius, and the offset is accurate enough to calibrate the output of the network.
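To make the offset block concrete, the following PyTorch sketch implements a learnable polynomial in the per-frame mean gray level and the ambient temperature, in the spirit of (10). The class name, tensor shapes, and the choice of per-frame coefficients are our own assumptions rather than the authors' released code.

```python
import torch
import torch.nn as nn

class OffsetBlock(nn.Module):
    """Hypothetical offset block: a learnable polynomial of degree `nu` in the
    per-frame mean gray level and the ambient temperature, cf. (10)."""

    def __init__(self, num_frames: int = 7, nu: int = 4):
        super().__init__()
        self.nu = nu
        # One coefficient per (frame, i, j) monomial Mean(I_n)^i * t_amb^j;
        # whether coefficients are shared across frames is an assumption.
        self.delta = nn.Parameter(torch.zeros(num_frames, nu + 1, nu + 1))

    def forward(self, frames: torch.Tensor, t_amb: torch.Tensor) -> torch.Tensor:
        # frames: (B, N, H, W) gray-level burst; t_amb: (B,) ambient temperature.
        means = frames.mean(dim=(-2, -1))                         # (B, N)
        powers = torch.arange(self.nu + 1, dtype=frames.dtype,
                              device=frames.device)
        m_pow = means.unsqueeze(-1) ** powers                     # (B, N, nu+1)
        t_pow = t_amb.view(-1, 1) ** powers                       # (B, nu+1)
        # d_n = sum_{i,j} delta[n,i,j] * Mean(I_n)^i * t_amb^j, averaged over n.
        d_n = torch.einsum('bni,nij,bj->bn', m_pow, self.delta, t_pow)
        return d_n.mean(dim=1)                                    # (B,)
```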
Fig. 4: Example of kernels \(\mathcal{K}\) for a single pixel estimated by the network in Fig. 6, each kernel for a different frame in the burst. The dimensions of the kernels in the figure are \(9\times 9\) and they are predicted for the center pixel of \(7\) consecutive frames. The middle kernel is from the reference frame. Red has a higher magnitude, and blue has a lower magnitude.
Fig. 5: The difference between the true temperatures and the offset block’s estimates from (10), for different average input gray levels.
Fig. 6: Schematics of the model. The gray-level multiframes are fed into the kernel prediction network (KPN), and the KPN outputs the per-pixel kernels \(\mathcal{K}\) for each frame. Each frame is divided into overlapping patches with the same support as the kernels. The patches and the kernels are multiplied element-wise and each product is summed, resulting in a 2D gain map for each frame. All of the 2D gain maps are summed depth-wise, resulting in a single 2D map. The offset, a single scalar value, is added to the single 2D map to get the estimated temperature map. A zoom-in example of the kernels \(\mathcal{K}\) for a single pixel is shown in Fig. 4. A detailed description of the network architecture and an enlarged figure of the network, Fig. 13, can be found in the supplementary material.
The following equation describes the temperature estimation by applying a KPN to the image-acquisition model. To combine the information from multiple frames, the gain term in (8) is generalized as a KPN, and the information from all frames is used. The kernels applied to each pixel handle the nonuniformity and noise, and the offset term in (10) handles the thermal calibration, resulting in the temperature estimation \(\hat{X}^{p}\):
\[\hat{X}^{p}=\sum_{n=1}^{N}\left\langle\mathcal{K}_{n}^{p},S^{p}\left(\tilde{I} _{n}^{p}\right)\right\rangle+\tilde{d}\left(\tilde{I},t_{amb}\right) \tag{11}\]
where \(N\) is the number of frames in a burst, \(\mathcal{K}\) is the kernel of size \(K\times K\) produced by the kernel prediction block, and \(S(\cdot)\) is a function that crops a \(K\times K\) patch around a pixel \(p\) in the support of the frames.
The scheme of the model is shown in Fig. 6. The registered burst of frames is fed into the network, which outputs a kernel for each pixel in each frame. These kernels \(\mathcal{K}\) serve as the _gain_ in (11). The registered frames are also fed to the offset block along with the ambient temperature, which outputs the _offset_ term in (11). The gain is applied to the frames and the results are summed depth-wise. The scene temperature estimation \(\hat{X}\) is obtained by adding the offset term to the result of the depth-wise summation.
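The per-pixel kernel application of (11) can be illustrated with the following PyTorch sketch, which extracts a \(K\times K\) patch around every pixel with `torch.nn.functional.unfold` and takes the inner product with the predicted kernel. The function name and the tensor-shape conventions are our own assumptions.

```python
import torch
import torch.nn.functional as F

def apply_per_pixel_kernels(frames: torch.Tensor,
                            kernels: torch.Tensor,
                            offset: torch.Tensor) -> torch.Tensor:
    """Illustrative implementation of (11): inner product between a K x K patch
    around every pixel and its predicted kernel, summed over frames, plus the
    scalar offset. Shapes are our own convention:
      frames:  (B, N, H, W)       registered gray-level burst
      kernels: (B, N, K*K, H, W)  per-pixel kernels from the KPN
      offset:  (B,)               output of the offset block
    """
    B, N, H, W = frames.shape
    K2 = kernels.shape[2]
    k = int(K2 ** 0.5)
    # Extract K x K patches around every pixel of every frame.
    patches = F.unfold(frames.reshape(B * N, 1, H, W),
                       kernel_size=k, padding=k // 2)        # (B*N, K*K, H*W)
    patches = patches.reshape(B, N, K2, H, W)
    # Inner product per pixel, then depth-wise sum over the N frames.
    gain = (patches * kernels).sum(dim=2).sum(dim=1)          # (B, H, W)
    return gain + offset.view(-1, 1, 1)
```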
### _Loss functions_
The loss comprises a fidelity term, a gradient smoothness term, and a structural term.
The structural term \(\mathcal{L}_{SSIM}\) maximizes the commonly-used structural similarity metric (SSIM); an SSIM loss has been shown to improve results in image-restoration tasks [43]. The fidelity and gradient terms are similar to those of Mildenhall et al. [44], except that the \(L_{1}\) loss is used instead of \(L_{2}\), because \(L_{1}\) is more robust to outliers [45]. The loss function is formulated as:
\[\begin{split}\mathcal{L}=&\left|\left|M(\hat{X})-M (X)\right|\right|_{1}+\\ &\lambda_{1}\left|\left|M(\nabla\hat{X})-M(\nabla X)\right| \right|_{1}+\\ &\lambda_{2}\cdot\mathcal{L}_{SSIM}(M(\hat{X}),M(X)) \end{split} \tag{12}\]
where \(\hat{X}\) is the temperature estimated by the network, \(X\) is the GT temperature, \(M\) is a mask of valid pixels in the registration process, \(\lambda_{1},\lambda_{2}\) are hyperparameters that balance the loss terms, and \(\nabla\) denotes the gradient magnitude computed with the Sobel operators. The mask is produced by the registration algorithm. The final values of the hyperparameters were set to \(\lambda_{1}=0.1\) and \(\lambda_{2}=0.01\).
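A possible implementation of the loss in (12) is sketched below. It reuses Kornia (already used in this work for warping) for the SSIM term and computes the Sobel gradient magnitude explicitly; the helper names and the use of mean-reduced \(L_{1}\) terms are our own choices, not the authors' code.

```python
import torch
import torch.nn.functional as F
import kornia

def sobel_magnitude(x: torch.Tensor) -> torch.Tensor:
    """Gradient magnitude of a (B, 1, H, W) image using Sobel kernels."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=x.device).view(1, 1, 3, 3)
    ky = kx.transpose(-1, -2)
    gx = F.conv2d(x, kx, padding=1)
    gy = F.conv2d(x, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-12)

def temperature_loss(x_hat, x_gt, mask, lambda1=0.1, lambda2=0.01):
    """Masked L1 + gradient L1 + SSIM term, cf. (12). Inputs are (B, 1, H, W);
    the L1 norms are mean-reduced here, which only rescales the terms."""
    fid = torch.abs(mask * (x_hat - x_gt)).mean()
    grad = torch.abs(mask * (sobel_magnitude(x_hat) - sobel_magnitude(x_gt))).mean()
    ssim = kornia.losses.ssim_loss(mask * x_hat, mask * x_gt, window_size=11)
    return fid + lambda1 * grad + lambda2 * ssim
```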
### _Synthetic data_
The network was trained with synthetic data in a supervised manner. The inputs to the network were created from accurate 2D temperature maps collected using a scientific-grade _IR_ camera (A655sc). A degradation model of a low-cost _IR_ camera (Tau2) was applied to the temperature maps, transforming them to gray-level frames. As a result, the network was trained on transforming gray-level frames to accurate temperature maps.
The goal of the degradation model was to faithfully transform temperature maps into gray-level maps, allowing the supervised training process of the network. The modeling process had three stages: 1) collecting data with the _IR_ camera in a controlled environment; 2) finding per-pixel coefficients using the image-acquisition model in Section III-A; 3) using adjacent-pixel dependencies as a constraint on the degradation model.
The degradation model required frames of objects with known temperature taken by the Tau2 at different ambient temperatures. To collect these data, the Tau2 was placed inside an environmental chamber in front of a scientific-grade blackbody (SR-800N). The blackbody and environmental chamber were cycled to different pairs of \((t_{amb},t_{obj})\), and frames were acquired for the different permutations. Fig. 2 provides an example of the collected data.
The Tau2 was modeled by the image-acquisition model in (6). The calibration was done according to Nugent et al. [7], using a third-degree polynomial to approximate the coefficients \(g,d\). For each pixel in the sensor, (6) can be formulated as:
\[I_{p}(t_{obj},t_{amb})=\sum_{i=0}^{3}\left(g_{i,p}\cdot t_{amb}^{i}\cdot t_{obj,p}^{4}+d_{i,p}\cdot t_{amb}^{i}\right) \tag{13}\]
where \(g_{i,p},d_{i,p}\) are the i-th gain and offset coefficients at pixel \(p\), respectively. Equation 13 can be rewritten as a matrix multiplication:
\[\begin{array}{l}T_{p}^{n}=\left[t_{obj,n}^{4}\quad\ldots\quad t_{obj,n}^{4}\,t_{amb,n}^{3}\quad 1\quad\ldots\quad t_{amb,n}^{3}\right]\\ C_{p}=\left[g_{0,p}\quad\ldots\quad g_{3,p}\quad\quad d_{0,p}\quad\ldots\quad d_{3,p}\right]^{T}\\ I_{N,p}\equiv T_{N,p}\cdot C_{p}\end{array} \tag{14}\]
where \(T_{p}^{n}\) contains the appropriate temperatures of the \(n\)-th sample of a permutation, \(T_{N,p}\) is a matrix with all of the temperatures corresponding to all of the samples of the permutation as rows, and \(I_{N,p}\) is a matrix with all of the acquired samples as rows. The coefficients \(C_{p}\) in (14) are found by solving the least-squares problem:
\[C_{p}=T_{N,p}^{\dagger}\cdot I_{N,p} \tag{15}\]
where \(T_{N,p}^{\dagger}\) is the Moore-Penrose pseudo inverse.
Finally, we stack all of the 2D coefficient maps \(C_{p}\) into a 3D tensor \(\underline{C}\), with \(\underline{C}[0]\) being the 2D map of coefficient \(g_{0}\) etc.
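The per-pixel calibration of (13)-(15) can be written compactly in NumPy. The sketch below assumes the \(M\) blackbody/ambient temperature pairs and the corresponding frames are already stacked; the function name and argument layout are ours.

```python
import numpy as np

def fit_pixel_coefficients(t_obj: np.ndarray, t_amb: np.ndarray,
                           I: np.ndarray) -> np.ndarray:
    """Fit the 8 per-pixel coefficients (g_0..g_3, d_0..d_3) of (13).

    t_obj, t_amb: (M,) blackbody and ambient temperatures of the M samples.
    I:            (M, H, W) gray-level frames recorded for those samples.
    Returns C:    (8, H, W) stack of coefficient maps.
    """
    M, H, W = I.shape
    powers = np.stack([t_amb ** i for i in range(4)], axis=1)           # (M, 4)
    T = np.concatenate([t_obj[:, None] ** 4 * powers, powers], axis=1)  # (M, 8)
    # Least-squares solution via the Moore-Penrose pseudo inverse, cf. (15).
    C = np.linalg.pinv(T) @ I.reshape(M, H * W)                         # (8, H*W)
    return C.reshape(8, H, W)
```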
The degradation model described in (15) is per pixel, and therefore unique to each camera. This means that nonuniformity will also be modeled by the coefficients (e.g., dead pixels, FPN), limiting the usability of the degradation model to only the specific camera that collected the data. To enable generalization of the degradation model for different cameras, the final stage in the model exploits the circular symmetry of the nonuniformity and uses the dependency between neighboring pixels.
Nonuniformity has a circular symmetry around the middle of the frame [4]. This is due to the ambient temperature of the camera, generating radiation from the chassis and lens, which is also reflected onto the sensor. Rays of thermal radiation from the body of the camera travel to the sensor and affect
each pixel differently. The superposition of these rays on each pixel creates the circular symmetry of the nonuniformity. An example of the circular symmetry can be seen in Fig. 2.
The spatial dependency was modeled as a radial map around the middle of the frame. The radial map was constructed from two mesh grids \(\underline{H},\underline{W}\) with the same dimensions as the frames. Each row in \(\underline{H}\) and each column in \(\underline{W}\) runs from \(-0.5\) to \(0.5\), such that \(\underline{H}=\underline{W}^{T}\). The radial map is defined as:
\[\underline{R} =\sqrt{\underline{H}^{2}+\underline{W}^{2}},\qquad\underline{H},\underline{W},\underline{R}\in \mathcal{R}^{h,w} \tag{16}\] \[\underline{R} =\sqrt{\begin{bmatrix}-0.5&\ldots&-0.5\\ \vdots&\ddots&\vdots\\ 0.5&\ldots&0.5\end{bmatrix}^{2}+\begin{bmatrix}-0.5&\ldots&0.5\\ \vdots&\ddots&\vdots\\ -0.5&\ldots&0.5\end{bmatrix}^{2}}\]
where \(h,w\) are the dimensions of the frames. The power of the matrix is performed element-wise.
The coefficient maps are modeled as:
\[\underline{C}[i]=\sum_{j=0}^{M}m_{j}\cdot\underline{R}^{j},\qquad m_{j}\in\mathcal{R},\ \underline{C},\underline{R}\in \mathcal{R}^{h,w} \tag{17}\]
where \(m_{j}\) is the spatial coefficient, and \(M\) is the number of spatial coefficients. Least-squares is solved to find the spatial coefficients.
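A small NumPy sketch of (16)-(17): build the radial map and fit the spatial coefficients of a single coefficient map by least squares. The default polynomial degree `M=3` is a placeholder, since the text does not fix a value, and the function names are ours.

```python
import numpy as np

def radial_map(h: int, w: int) -> np.ndarray:
    """Radial distance map of (16): 0 at the frame centre, ~0.7 in the corners."""
    H, W = np.meshgrid(np.linspace(-0.5, 0.5, h),
                       np.linspace(-0.5, 0.5, w), indexing='ij')
    return np.sqrt(H ** 2 + W ** 2)

def fit_spatial_coefficients(C_map: np.ndarray, M: int = 3) -> np.ndarray:
    """Fit the spatial coefficients m_j of (17) for one coefficient map
    C_map of shape (h, w) by least squares on powers of the radial map."""
    R = radial_map(*C_map.shape).ravel()
    A = np.stack([R ** j for j in range(M + 1)], axis=1)    # (h*w, M+1)
    m, *_ = np.linalg.lstsq(A, C_map.ravel(), rcond=None)
    return m
```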
A frame from a given temperature map is estimated by:
\[\hat{I}(t_{obj},t_{amb})=T(t_{obj},t_{amb})\cdot\hat{C} \tag{18}\]
where \(T(t_{obj},t_{amb})\) is the temperature vector in (14).
The final degradation model was noiseless and only contained low frequencies. Random FPN and Gaussian noise were added to the model during training. This enabled the network to converge to a general solution that is applicable on different cameras with various degradation profiles.
### _Training procedure_
The network was written in Python 3.10 [46] using Pytorch 1.13 [47], and was trained on a single Nvidia Titan A100. The seed was set to \(42\), and the CUDNN backend was set to deterministic mode. The network was trained using the ADAM optimizer [48] with a learning rate of \(10^{-4}\). The learning rate was halved on a validation loss plateau of more than 3 epochs. The network was run for 60 epochs with batches of 16, meaning that each epoch was roughly 800 iterations. Early stopping was applied for a validation loss plateau of 8 epochs. The weights were initialized using the orthogonal scheme [49] with a scaling of \(10^{-1}\). The hyperparameter search was run twice with different seeds (42 and 24), and the best results were chosen.
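The optimizer, scheduler, and initialization settings described above could be configured as in the sketch below; the function name and the exact plateau criterion passed to `ReduceLROnPlateau` are our own packaging of the stated hyperparameters.

```python
import torch
import torch.nn as nn

def configure_training(model: nn.Module):
    """Optimizer, scheduler and weight init as described in the text; the
    function itself is an illustrative wrapper, not the authors' code."""
    for p in model.parameters():
        if p.dim() >= 2:
            nn.init.orthogonal_(p, gain=0.1)       # orthogonal init, scale 1e-1
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Halve the learning rate when the validation loss plateaus for >3 epochs.
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode='min', factor=0.5, patience=3)
    return optimizer, scheduler
```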
Multiframes were simulated by randomly sampling homographies for each frame in the dataset, creating different views of the same frame. The inverse homographies were used to register all of the views toward the original frame, which was set as the pivot frame \(\mathcal{I}\). The sampled homographies either created a random walk from one side of the temperature map to the other, or a hover above a random point in the frame. The overlaps between the different views were randomly set between \(60\%\) and \(80\%\), similar to a UAV flight scenario, as seen in Section V-A. Homography and frame warping were implemented with the package Kornia v0.6.7.
Imperfect registration was simulated by randomly adding perturbations to the inverse homographies: random translation of up to \(\pm 2\) pixels and noise from the distribution \(\mathcal{N}(0,5\cdot 10^{-5})\) to the perspective elements of the homography (commonly known as \(h_{31},h_{32}\)). Random horizontal and vertical flips, and \(90^{\circ}\) rotations were applied to the frames before the homography sampling.
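The registration perturbations can be simulated as in the following NumPy sketch; whether \(5\cdot 10^{-5}\) is a variance (as assumed here) or a standard deviation is not stated explicitly, and the function name is ours.

```python
import numpy as np

def perturb_homography(H: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Simulate imperfect registration: jitter the translation terms by up to
    +/- 2 pixels and add Gaussian noise (assumed variance 5e-5) to the
    perspective terms h31, h32 of a 3x3 homography."""
    H = H.copy()
    H[0, 2] += rng.uniform(-2.0, 2.0)                    # horizontal translation
    H[1, 2] += rng.uniform(-2.0, 2.0)                    # vertical translation
    H[2, :2] += rng.normal(0.0, np.sqrt(5e-5), size=2)   # perspective terms
    return H
```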
The gray-level frames were cropped to \(128\times 128\) patches before entering the network. For validation, a constant cropping was applied around the middle of the frame, and no other augmentations were applied.
Random Gaussian noise with \(\sigma^{2}=5\) gray levels and FPN were generated for each frame (Section IV-E). FPN was generated as:
\[\begin{bmatrix}1\\ \vdots\\ 1\end{bmatrix}_{h\times 1}\cdot\begin{bmatrix}U[u_{\min},u_{\max}]\\ \vdots\\ U[u_{\min},u_{\max}]\end{bmatrix}_{1\times w}^{T} \tag{19}\]
where \(U\) denotes the uniform distribution; the bounds were chosen as \(u_{\min}=0.9\) and \(u_{\max}=1.01\). The Gaussian noise and FPN were only generated once for each frame and used throughout the entire validation process for reproducibility of results between experiments.
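A sketch of the noise injection: the column-wise FPN of (19) and additive Gaussian noise with \(\sigma^{2}=5\) gray levels. Applying the FPN multiplicatively is our assumption, suggested by its values lying around 1; the function name is ours.

```python
import numpy as np

def add_fpn_and_noise(frame: np.ndarray, rng: np.random.Generator,
                      u_min: float = 0.9, u_max: float = 1.01,
                      sigma2: float = 5.0) -> np.ndarray:
    """Apply column-wise fixed-pattern noise (cf. (19)) and additive Gaussian
    noise with variance sigma2 to a single (h, w) gray-level frame."""
    h, w = frame.shape
    fpn = np.ones((h, 1)) @ rng.uniform(u_min, u_max, size=(1, w))   # (h, w)
    noise = rng.normal(0.0, np.sqrt(sigma2), size=(h, w))
    return frame * fpn + noise
```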
Normalization to range [0,1] was applied to both the temperature map and the gray-level frame. Throughout the training and validation sets, the maximal and minimal values of the temperature maps and the maximal and minimal values of the gray-level frames were obtained. The normalization was applied on the temperature maps as:
\[\bar{X}=\frac{X-X_{\text{min}}}{X_{\text{max}}-X_{\text{min}}} \tag{20}\]
where \(\bar{X}\) is the normalized input and \(X_{\text{min}},X_{\text{max}}\) are the minimal and maximal temperatures, respectively, over all datasets. Normalization for the gray-level frames was applied as:
\[\bar{I}(t_{amb})=\frac{I(t_{amb})-I_{\text{min}}}{I_{\text{max}}-I_{\text{min}}} \tag{21}\]
where \(\bar{I}\) is the normalized gray-level frame and \(I_{\text{min}},I_{\text{max}}\) are the minimal and maximal gray levels, respectively, over all datasets.
The following pipeline summarizes the creation of samples for the network. First, an accurate temperature map is sampled from the dataset. \(N\) homographies are randomly sampled and applied to the temperature map to create an overlapping burst of frames. The model described in Section IV-C is applied to each frame in the burst to turn it into a gray-level frame (18). The same FPN is applied to all frames in the burst (19), and random noise is applied to each frame in the burst separately. Finally, normalization is applied to the ambient temperature (20) and overlapping gray-level frames (21), and both are passed to the network.
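Putting the steps together, a hedged sketch of the sample-creation pipeline might look as follows; `degrade` stands for the per-pixel model of (18), and the shapes, function names, and the inlined FPN/noise step are our own simplifications of the procedure described above.

```python
import torch
import kornia

def make_training_sample(temp_map, homographies, degrade, t_amb,
                         x_min, x_max, i_min, i_max):
    """Illustrative synthetic-sample pipeline. temp_map: (h, w) GT temperature
    map; homographies: (N, 3, 3); degrade: callable mapping a (1, h, w)
    temperature view and t_amb to gray levels, cf. (18)."""
    N = homographies.shape[0]
    h, w = temp_map.shape[-2:]
    src = temp_map.view(1, 1, h, w).expand(N, 1, h, w)
    burst = kornia.geometry.transform.warp_perspective(src, homographies, (h, w))
    gray = torch.stack([degrade(frame, t_amb) for frame in burst])    # (N,1,h,w)
    fpn = torch.ones(h, 1) @ torch.empty(1, w).uniform_(0.9, 1.01)    # cf. (19)
    gray = gray * fpn + torch.randn_like(gray) * 5.0 ** 0.5           # noise
    gray_norm = (gray - i_min) / (i_max - i_min)                      # cf. (21)
    temp_norm = (temp_map - x_min) / (x_max - x_min)                  # cf. (20)
    return gray_norm, temp_norm
```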
### _Data_
The dataset used for training consisted of \(12,897\) frames, and the validation set was composed of \(4,723\) frames, all of which were captured by a UAV flying at a height of \(70-100_{m}\) above various agricultural fields in Israel. Only clear and in-focus frames were selected for the dataset manually by a human user.
The noise variance was established by analyzing the measurements taken in the environmental chamber. All frames were stacked depth-wise, and the variance of each pixel was calculated, resulting in a 2D variance map. The mean of the variance map was \(\sim 5\) gray levels, which was used as the noise variance in the network training. The influence of \(t_{amb}\) and \(t_{obj}\) on \(\sigma^{2}\) was determined to be insignificant.
To prevent data leakage between the training and validation sets and evaluate the network's ability to generalize to new data, the validation sets were captured at the same locations as the training sets but on different days. This validation approach was maintained across all training schemes to ensure a fair comparison between different experiments. The same split between the training and validation datasets was maintained throughout the study.
## V Results
To demonstrate the efficacy of our method, we compared the mean absolute error (MAE) of the temperature estimation for different configurations of the network, namely with and without the offset estimation block, and using an end-to-end (E2E) network instead of the KPN architecture. The results are displayed as a function of the number of frames \(N\) in Fig. 7. Whereas our method can handle misalignments between frames, other methods require a perfect alignment, which is impossible in real-world scenarios. As a result, comparing temperature estimation with other methods is impossible, and we could only compare the NUC between our method and other methods on the pivot frame \(\mathcal{I}\). Moreover, the other methods are not radiometric, and thus cannot be used for temperature estimation.
Fig. 7 demonstrates the superiority of our method, evidenced by the low MAE for almost every number of frames. The E2E was a UNET [42] architecture similar to our KPN network. The main difference was that the last layer estimated the per-pixel result instead of outputting the kernel. A hyperparameter search was also performed for the E2E solution for a fair comparison (number of channels and normalization). The MAE of the E2E network in Fig. 7 was unaffected by the number of frames, whereas the KPN results improved with the number of frames, indicating that the E2E network only uses the reference frame. Moreover, the MAE results for E2E were worse than for the KPN network.
The offset block greatly improved the results with an increasing number of frames, as seen in Fig. 7, with an improvement of over \(0.1^{\circ}C\) for the KPN and more than \(0.2^{\circ}C\) relative to E2E for \(N=11\), indicating that the offset block is beneficial.
Fig. 8: Difference between our temperature estimation and the GT. The left-most figure is the GT. The next figures are zoom-ins of the areas inside the red rectangles. The number of frames used for the estimation is below each map, from left to right 7, 9 and 11 frames. The mean absolute error between the GT and our estimation is written in the top-left corner of each map.
Fig. 7: Mean absolute error (MAE) in degrees Celsius as a function of the number of frames \(N\) for different network configurations.
Figure 10: Convergence of the validation mean absolute error (MAE) loss for different architectures: (a) kernel prediction network (KPN) with offset. (b) KPN without offset. (c) End to end (E2E). Each color represents a different number of frames \(N\).
Figure 9: Zoomed-in results of the different methods. The left-most figure is the reference frame with a red rectangle. The following figures in each row are the results of the areas inside the red rectangles. Number of frames \(N=11\) for all results.
Results suggested that increasing the number of frames without the offset block reaches a plateau at around \(N=5\) and does not improve the results further, in contrast to the offset block, which continues to improve the results with more frames. Since the offset block is lightweight, it offers significant improvement with little computational cost.
The effect of the number of frames \(N\) is shown in Fig. 8. The figure shows per-pixel error in the temperature estimation for different numbers of frames. The left-most figure (column a) shows the GT temperature map. The absolute per-pixel difference between the GT and our method's estimation for the area inside the red rectangle is shown in the next columns (b, c, and d). Each figure shows the estimated temperature for a different number of frames: (b) \(N=7\) frames, (c) \(N=9\) frames and (d) \(N=11\) frames. The color bar on the right of each row shows the error range for the row in degrees Celsius. The MAE in \({}^{\circ}C\) between the GT and the estimation is written in the top-left corner of each difference map. Each row is a different frame. The improvement caused by the number of frames is clearly reflected by the homogeneity in the difference map and the decreasing MAE as a function of \(N\). More examples are available in the supplementary material, Figs. 15 to 19.
Because other methods are not radiometric and essentially only improve the appearance of a frame, we could only compare NUC results with other methods. Fig. 9 displays the NUC results of different methods. Column (a) shows the reference sample frame. Column (b) shows the GT temperature map. Column (c) shows the results of our method. Column (d) shows the results of ADMIRE [14] performed on each frame separately and then registered and averaged. Column (e) shows the estimation of DeepIR [20] and column (f) shows the estimation of He et al. [18]. All results were obtained with \(N=11\).
As evidenced by Fig. 9, our NUC method was better than the others. ADMIRE [14] failed to rectify the FPN, DeepIR [20] hallucinated details (e.g., the deformation in the junction in the fourth row, or the abrupt black-to-white edge in the fifth row), and He et al.'s [18] method failed to handle the FPN. Both the DeepIR [20] and He et al. [18] methods oversmoothed the results. These methods have low fidelity, and thus are unable to serve for temperature estimation. More results are available in the supplementary material, Figs. 20 to 25.
Fig. 10 depicts the convergence of the validation MAE loss of the E2E and of the KPN with and without the offset block, as a function of the number of frames. Notice that the loss for the E2E networks converges to roughly the same value for all numbers of frames, whereas the KPN-based networks give different results as a function of the number of frames \(N\). When comparing the convergence with and without the offset block, it seems that the offset block has a smoothing effect on the validation loss. This effect might occur because the KPN can concentrate on correcting the NUC, while the offset block handles the temperature estimation.
### _Real data_
We validated the effectiveness of the proposed method on real data. Two cameras, A655sc and Tau2, were attached to a DJI Matrice 600 UAV and both captured the same scenes in nadir view at a height of \(50_{m}\) above the ground at a vertical speed of \(10_{m/s}\). The A655sc is a scientific-grade radiometric camera which outputs a temperature map of the scene, whereas the Tau2 outputs a gray-level map corresponding to the radiation flux. An image of the setup can be found in Fig. 32 of the supplementary material. Notice that the Tau2 used for the experiment was not the one used to collect the calibration data in Section IV, which further strengthens the generality and robustness of the proposed method.
The frame rate for the Tau2 was set to \(30_{Hz}\); the resolution of the Tau2 was \(336\times 256\) pixels, the focal length was \(9.8_{mm}\), and the sensor size was \(4.4_{mm}\) per 256 pixels (in the direction of the flight). The ground sampling distance was therefore \(\frac{50\cdot 4.4}{9.8\cdot 256}=0.087_{m/pix}\). The drone advanced \(\frac{10}{30}=0.33_{m}\) between frames, which means that an object moved \(\frac{0.33}{0.087}=3.80_{pix}\) between consecutive frames. Thus, an object could appear in \(\frac{256}{3.80}\cong 67\) frames. The A655sc field of view was much larger than that of the Tau2, so a frame of the A655sc contained multiple frames of the Tau2.
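The flight-geometry arithmetic above can be reproduced with a few lines of Python, using exactly the values stated in the text:

```python
# Flight-geometry arithmetic (values as stated in the text).
height_m, sensor_m, focal_m, pixels = 50.0, 4.4e-3, 9.8e-3, 256
frame_rate_hz, speed_mps = 30.0, 10.0

gsd = height_m * sensor_m / (focal_m * pixels)   # ~0.087 m/pixel
shift_m = speed_mps / frame_rate_hz              # ~0.33 m between frames
shift_pix = shift_m / gsd                        # ~3.8 pixels between frames
frames_per_object = pixels / shift_pix           # ~67 frames
```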
The A655sc requires accurate ambient parameters to produce a valid temperature map. The ambient temperature and humidity were gathered from a nearby weather station (\(28.4^{\circ}C\) and \(32\%\), respectively). The emissivity was tuned using an accurate temperature sensor placed in the scene.
Both cameras were focused to infinity. The flight height of \(50_{m}\) above ground ensured that all objects were within the depth of field of both cameras. The A655sc captured \(1,192\) frames at \(5_{Hz}\) and the Tau2 captured \(7,152\) frames at \(30_{Hz}\).
The frames of the Tau2 were divided into overlapping groups of 7 frames each. We used 7 frames due to hardware limitations. The frames of each group were registered toward the middle frame of the group. The registration was performed by SIFT feature-matching using the Python package Kornia v0.6.10. The registered frame groups were the input to the network.
The output of the network was the estimated temperature map of the scene. These estimated temperature maps were registered to the A655sc temperature maps by hand-picking correspondence points. The final registration was performed using the Python package OpenCV V4.5.1. The GT and estimated temperature maps are presented in Figs. 26 to 31 in the supplementary material.
Six results are presented in Fig. 11 and four more are presented in Fig. 14 in the supplementary material. We present the difference maps between the estimated and GT temperature maps, produced by the proposed method, in each subfigure. The GT maps are in gray, and the color scale from blue to red is the magnitude of the difference, with blue denoting low and red denoting high errors. The upper-left corner of the upper image displays the MAE of the difference map as a white number.
The MAE values span \(0.27-0.54^{\circ}C\), indicating a high accuracy of temperature estimation, comparable to the A655sc precision (\(\sim 0.5^{\circ}C\)). This was obtained without applying
any thermographic corrections or NUC to the Tau2 data, relying solely on the raw measurements of the radiation flux as gray levels. The supplementary material provides the detailed configuration of the Tau2.
The cumulative distribution function of the MAE between the GT temperature map and the estimated temperature map is shown in Fig. 12. The dashed red lines indicate the \(0.5^{\circ}C\) threshold. All three examples show that more than \(80\%\) of the pixels have a MAE of less than \(0.5^{\circ}C\). This further solidifies the effectiveness of the proposed method.
## VI Conclusion
We presented a novel method for simultaneous temperature estimation and NUC in _IR_ imaging, based on a DL method that incorporates the physical model of the sensor. The method uses redundant information between multiple overlapping frames to infer the scene temperature and correct nonuniformity, without requiring any calibration or external reference. The method also exploits prior knowledge of the camera's ambient temperature, which is measured by a built-in sensor, to improve the accuracy and robustness of the estimation.
We evaluated the performance of the method on synthetic and real data and compared it with existing methods. The results showed that the method can achieve high accuracy and low error, and can handle various scenarios, such as changing ambient temperature, moving objects, and complex backgrounds.
We showed that performance improves with the number of frames, highlighting the benefits of exploiting the redundant information between frames. The training process introduced misalignments between frames, which were handled by the method and did not affect its performance. The method can also generalize well to different camera models and settings, and can be easily adapted to different applications. This was demonstrated by real data collected with a different camera mounted on a UAV. The MAE with the real UAV data was \(0.27-0.54^{\circ}C\), which is comparable to the accuracy of scientific-grade cameras.
The method offers a simple and effective solution for improving the quality and reliability of low-cost uncooled _IR_ imaging and can potentially enable new applications that require accurate and consistent temperature measurements.
## Acknowledgments
The authors are deeply grateful for the help of Ohaliav Keisar with the UAV data collection.
The authors thank Dr. Yaffit Cohen and Eitan Goldstein for the UAV data used in this work; and Moti Barak, Lavi Rosenfeld and Liad Reshef for the design and construction of the environmental chamber.
## Funding
The research was funded by the Israeli Ministry of Agriculture's Kandel Program under grant no. 20-12-0030.
## Disclosures
The authors declare no conflicts of interest.
## References
* [1] H. D. Adams, M. Guardiola-Claramonte, G. A. Barron-Gafford, J. C. Villegas, D. D. Breshears, C. B. Zou, P. A. Troch, and T. E. Huxman, "Temperature sensitivity of drought-induced tree mortality portends increased regional die-off under global-change-type drought," _Proceedings of the National Academy of Sciences_, vol. 106, no. 17, pp. 7063-7066, 2009. [Online]. Available: [https://www.pnas.org/doi/abs/10.1073/pnas.0901438106](https://www.pnas.org/doi/abs/10.1073/pnas.0901438106)
* [2] H. G. Jones, "Monitoring plant and soil water status: established and novel methods revisited and their relevance to studies of drought tolerance," _Journal of Experimental Botany_, vol. 58, no. 2, pp. 119-130, 09 2006. [Online]. Available: [https://doi.org/10.1093/jxb/erfl/118](https://doi.org/10.1093/jxb/erfl/118)
* [3] R. Bhan, R. Saxena, C. Jawania, and S. Lomasu, "Uncooled infrared microbolometer arrays and their characterisation techniques," _Defence Science Journal_, vol. 59, p. 580, 11 2009.
* [4] M. Vollmer and K.-P. Möllmann, _Infrared Thermal Imaging: Fundamentals, Research and Applications_, 2nd ed. Wiley-VCH, 2018.
* [5] O. Riou, S. Berrebi, and P. Bremond, "Non uniformity correction and thermal drift compensation of thermal infrared camera," _Thermosense XXVI_, vol. 5405, p. 294, 2004.
* [6] M. Schulz and L. Caldwell, "Nonuniformity correction and correctability of infrared focal plane arrays," _Infrared Phys. Technol_, vol. 36, pp. 763-777, 1995.
* [7] P. W. Nugent, J. A. Shaw, and N. J. Pust, "Correcting for focal-plane-array temperature dependence in microbolometer infrared cameras lacking thermal stabilization," _Optical Engineering_, vol. 52, p. 061304, 2013.
* [8] K. Liang, C. Yang, L. Peng, and B. Zhou, "Nonuniformity correction based on focal plane array temperature in uncooled long-wave infrared cameras without a shutter," _Applied Optics_, vol. 56, p. 884, 2 2017.
* [9] S. Chang and Z. Li, "Calibration algorithm for cooled mid-infrared systems considering the influences of ambient temperature and integration time," _Applied Optics_, vol. S8, p. 8118, 10 2019.
* [10] N. Oz, N. Sochen, O. Markovich, Z. Halamish, L. Shpialter-Karol, and I. Klapp, "Rapid super resolution for infrared imagery," _Optics Express_, vol. 28, p. 27196, 2020.
* [11] A. Shocher, N. Cohen, and M. Irani, "Zero-shot super-resolution using deep internal learning," in _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_. IEEE, 12 2017, pp. 3118-3126. [Online]. Available: [https://ieeexplore.ieee.org/document/8578427](https://ieeexplore.ieee.org/document/8578427)
* [12] T. R. Shaham, T. Dekel, and T. Michaeli, "SinGAN: Learning a generative model from a single natural image," _Proceedings of the IEEE International Conference on Computer Vision_, vol. 2019-Octob, pp. 4569-4579, 2019.
Fig. 12: Mean absolute error (MAE) of pixels as a cumulative function for real data. The y-axis is the percentage of pixels with MAE less than the value on the x-axis. Panel (a) is the MAE of Fig. 11a. Panel (b) is the MAE of Fig. 11b. Panel (c) is the MAE of Fig. 11c.
- 109. [Online]. Available: [https://doi.org/10.1117/12.49324](https://doi.org/10.1117/12.49324)
* [14] Y. Tendero and J. Gilles, "ADMIRE: a locally adaptive single-image, non-uniformity correction and denoising algorithm: application to uncooled ir camera," _Infrared Technology and Applications_, vol. 8353, pp. 580-595, 5 2012. [Online]. Available: [https://doi.org/10.1117/12.912966](https://doi.org/10.1117/12.912966)
* [15] Y. Cao and C.-L. Tisse, "Single-image-based solution for optics imperature-dependent nonuniformity correction in an uncooled long-wave infrared camera," _Optics Letters_, vol. 39, no. 3, p. 646, Jan. 2014. [Online]. Available: [https://doi.org/10.1364/1.3900646](https://doi.org/10.1364/1.3900646)
* [16] J. Zhao, Q. Zhou, Y. Chen, T. Liu, H. Feng, Z. Xu, and Q. Li, "Single image stripe nonuniformity correction with gradient-constrained optimization model for infrared focal plane arrays," _Optics Communications_, vol. 296, pp. 47-52, 2013.
* [17] X. Jian, C. Lv, and R. Wang, "Nonuniformity correction of single infrared images based on deep filter neural network," _Symmetry_, 2018.
* [18] Z. He, Y. Y. Cao, Y. Dong, J. Yang, and C.-L. Tisse, "Single-image-based nonuniformity correction of uncooled long-wave infrared detectors: a deep-learning approach," _Applied Optics_, vol. 57, p. D155, 2018.
* [19] Y. Chang, L. Yan, L. Liu, H. Fang, and S. Zhong, "Infrared aerothermal nonuniform correction via deep multiscale residual network," _IEEE Geoscience and Remote Sensing Letters_, vol. 16, pp. 1120-1124, 2019, [https://www.uchangyao.github.io/files/DMRN.rar.jp/,Code](https://www.uchangyao.github.io/files/DMRN.rar.jp/,Code).
* [20] V. Saragadam, A. Dave, A. Veeraraghavan, and R. G. Baraniuk, "Thermal image processing via physics-inspired deep networks," in _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, 2021. [Online]. Available: [https://github.com/vishwa91/DeepIR](https://github.com/vishwa91/DeepIR)
* [21] N. Oz, N. Sochen, D. Mendelovich, and I. Klapp, "Improving temperature estimation in low-cost infrared cameras using deep neural networks," _Arxiv_, 2022.
* [22] J. Harris and Y.-M. Chiang, "Nonuniformity correction of infrared image sequences using the constant-statistics constraint," _IEEE Transactions on Image Processing_, vol. 8, no. 8, pp. 1148-1151, 1999.
* [23] R. C. Hardie, M. M. Hayat, E. Armstrong, and B. Yasuda, "Scene-based nonuniformity correction with video sequences and registration," _Appl. Opt._, vol. 39, no. 8, pp. 1241-1250, Mar 2000. [Online]. Available: [https://opg.optica.org/jo/abstract.cfm?URI=ao-39-8-1241](https://opg.optica.org/jo/abstract.cfm?URI=ao-39-8-1241)
* [24] E. Vera and S. Torres, "Fast adaptive nonuniformity correction for infrared focal-plane array detectors," _EURASIP Journal on Applied Signal Processing_, 2005.
* [25] A. Awerbuch, G. Liron, and B. Z. Bobrovsky, "Scene based non-uniformity correction in thermal images using kalman filter," _Image and Vision Computing_, vol. 25, pp. 833-851, 6 2007.
* [26] C. Zuo, Q. Chen, G. Gu, and X. Sui, "Scene-based nonuniformity correction algorithm based on interframe registration," _J. Opt. Soc. Am. A_, vol. 28, no. 6, pp. 1164-1176, Jun 2011. [Online]. Available: [https://opg.optica.org/joasa/abstract.cfm?URI=josaa-28-6-1164](https://opg.optica.org/joasa/abstract.cfm?URI=josaa-28-6-1164)
* [27] S. Panjin, P. Yafin, I. Klapp, and N. Sochen, "Joint estimation of unknown radiometric data, gain, and offset from thermal images," _Applied Optics_, vol. 57, p. 10390, 2018.
* [28] P. W. Kruse, _Uncooled thermal imaging arrays, systems, and applications_. SPIE, 2001.
* [29] R. F. Voss, "Linearity of \(\frac{1}{7}\) noise mechanisms," _Phys. Rev. Lett._, vol. 40, pp. 913-916, Apr 1978. [Online]. Available: [https://link.aps.org/doi/10.1103/PhysRevLett.40.913](https://link.aps.org/doi/10.1103/PhysRevLett.40.913)
* [30] M. d. Berg, O. Cheong, M. V. Kreveld, and M. Overmars, _Computational Geometry: Algorithms and Applications_, 3rd ed. Santa Clara, CA, USA: Springer-Verlag TELOS, 2008.
* [31] R. Hartley and A. Zisserman, _Multiple View Geometry in Computer Vision_. Cambridge: Cambridge University Press, 2004.
* [32] S. Farsiu, M. Robinson, M. Elad, and P. Milanfar, "Fast and robust multiframe super resolution," _IEEE Transactions on Image Processing_, vol. 13, no. 10, pp. 1327-1344, 2004.
* [33] S. Kim and W.-Y. Su, "Recursive high-resolution reconstruction of blurred multiframe images," in _[Proceedings] ICASSP 91: 1991 International Conference on Acoustics, Speech, and Signal Processing_, 1991, pp. 2977-2980 vol.4.
* [34] M. Zhang and B. K. Gunturk, "Multiresolution bilateral filtering for image denoising," _IEEE Transactions on Image Processing_, vol. 17, no. 12, pp. 2324-2333, 2008.
* [35] O. Whyte, J. Sivic, A. Zisserman, and J. Ponce, "Non-uniform deblurring for shaken images," _International Journal of Computer Vision_, vol. 98, pp. 168-186, 6 2012.
* [36] D. DeTone, T. Malisiewicz, and A. Rabinovich, "Deep image homography estimation," 2016. [Online]. Available: [https://arxiv.org/abs/1606.03798](https://arxiv.org/abs/1606.03798)
* [37] C. Godard, K. Matzen, and M. Uyttendaele, "Deep burst denoising," in _15th European Conference of Computer Vision ECCV_, 2018.
* [38] G. Bhat, M. Danelljan, L. V. Gool, and R. Timofte, "Deep burst super-resolution," in _IEEE/CVPR Conference on Computer Vision and Pattern Recognition (CVPR)_. Los Alamitos, CA, USA: IEEE Computer Society, jun 2021, pp. 9205-9214. [Online]. Available: [https://doi.ieecompurtesociety.org/10.1109/CVPR46437.2021.00909](https://doi.ieecompurtesociety.org/10.1109/CVPR46437.2021.00909)
* [39] B. Wronski, I. Garcia-Dorado, M. Ernst, D. Kelly, M. Krainin, C.-K. Liang, M. Levoy, and P. Milanfar, "Handheld multi-frame super-resolution," _ACM Trans. Graph._, vol. 38, no. 4, jul 2019. [Online]. Available: [https://doi.org/10.1145/3306346.3323024](https://doi.org/10.1145/3306346.3323024)
* [40] M. Deudon, A. Kalaiatzis, I. Goytourn, M. R. Herrin, Z. Lin, K. Sankaran, V. Michalski, S. E. Kahou, J. Combeise, and Y. Bengio, "Highres-net: Recursive fusion for multi-frame super-resolution of satellite imagery," _ArXiv_, vol. abs/2002.06460, 2020.
* [41] B. De Brabandere, X. Jia, T. Tuytelaars, and L. Van Gool, "Dynamic filter networks," in _30th International Conference on Neural Information Processing Systems (NIPS)_, 2016.
* MICCAI 2015_, N. Navab, J. Homegger, W. M. Wells, and A. F. Frangi, Eds. Cham: Springer International Publishing, 2015, pp. 234-241.
* [43] H. Zhao, O. Gallo, I. Frosio, and J. Kautz, "Loss functions for image restoration with neural networks," _IEEE Transactions on Computational Imaging_, vol. 3, no. 1, pp. 47-57, 2017.
* [44] B. Mildenhall, J. T. Barron, J. Chen, D. Sharlet, R. Ng, and R. Carroll, "Burst denoising with kernel prediction networks," in _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, 2018.
* [45] S. Anwar, S. Khan, and N. Barnes, "A deep journey into super-resolution: A survey," _ACM Comput. Surv._, vol. 53, no. 3, may 2020. [Online]. Available: [https://doi.org/10.1145/3390462](https://doi.org/10.1145/3390462)
* [46] G. Van Rossum and F. L. Drake, _Python 3 Reference Manual_. Scotts Valley, CA: CreateSpace, 2009.
* [47] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer, "Automatic differentiation in pytorch," _NIPS_, 2017.
* [48] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," in _3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings_, Y. Bengio and Y. LeCun, Eds., 2015. [Online]. Available: [http://arxiv.org/abs/1412.6980](http://arxiv.org/abs/1412.6980)
* [49] A. M. Saxe, J. L. McClelland, and S. Ganguli, "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks," in _2nd International Conference on Learning Representations, ICLR 2014, Banif, AB, Canada, April 14-16, 2014, Conference Track Proceedings_, Y. Bengio and Y. LeCun, Eds., 2014. [Online]. Available: [http://arxiv.org/abs/1312.6120](http://arxiv.org/abs/1312.6120)
* [50] D. Hendrycks and K. Gimpel, "Gaussian error linear units (gelus)," _arXiv preprint arXiv:1606.08415_, 2016.
* [51] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," in _Proceedings of the 32nd International Conference on Machine Learning_, ser. Proceedings of Machine Learning Research, F. Bach and D. Blei, Eds., vol. 37. Lille, France: PMLR, 07-09 Jul 2015, pp. 448-456. [Online]. Available: [https://proceedings.mlr.press/v37/ioffe15.html](https://proceedings.mlr.press/v37/ioffe15.html)
* [52] W. Shi, J. Caballero, F. Huszar, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang, "Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network
## Appendix
### Network Architecture
A schematic diagram of the whole network is given in Fig. 6 in the main text and enlarged in Fig. 13. Below is a detailed description of the network architecture, from the UNET encoder-decoder to the kernel estimation block and the offset block.
First, we describe the UNET encoder-decoder. We use a tensor with \(N\) channels of gray-level frames as the input to the network. The input tensor undergoes a \(3\times 3\) convolution (conv) layer that encodes it from \(N\) channels to \(\mu\) channels without any activation function. The encoded features then pass through the encoder and decoder parts of the network, where the number of channels is multiplied by a factor of \(\xi\) at each level. The encoder and decoder blocks consist of three \(3\times 3\) conv-layers each. The first two layers in each block have GELU [50] and batch normalization (norm) [51], while the last layer has neither activation nor norm. The last layer in each block produces \(\mu\times\xi^{i}\) channels, where \(i\) is the level index. The encoder block also applies an average pooling layer with a \(\xi\times\xi\) window and stride \(\xi\) to reduce the spatial resolution, while the decoder block uses a pixel shuffle layer [52] with an upsample factor of \(\xi\) to increase it. We concatenate the encoder block output before pooling with the pixel shuffle output at each level to feed it into the decoder block. The encoder and decoder block structures are shown in Table I in the main text.
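One encoder level, as described above, might be sketched in PyTorch as follows; the ordering of batch normalization and GELU within a layer and the uniform channel count inside a block are our own assumptions, and a decoder level would mirror this with `nn.PixelShuffle(xi)` in place of the pooling.

```python
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Three 3x3 convolutions; the first two with batch norm and GELU,
    the last with neither, as described in the text."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch), nn.GELU(),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch), nn.GELU(),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
    )

class EncoderLevel(nn.Module):
    """One encoder level: conv block followed by xi x xi average pooling."""
    def __init__(self, in_ch: int, out_ch: int, xi: int = 2):
        super().__init__()
        self.block = conv_block(in_ch, out_ch)
        self.pool = nn.AvgPool2d(kernel_size=xi, stride=xi)

    def forward(self, x):
        skip = self.block(x)   # kept for concatenation in the decoder
        return self.pool(skip), skip
```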
After the last decoder block in the UNET, we add a kernel-estimation block composed of three \(1\times 1\) conv layers. The first two layers have GELU, and the last layer has no activation. The kernel estimation block structure is shown in Table II in the main text. The block outputs \(N\times K\times K\) channels. We reshape the output of this block as kernels of size \(K\times K\) for each frame. We then sample a patch of size \(K\times K\) around each pixel in each frame and compute the inner product of the corresponding kernel and patch, as in the first term of (11) in the main text. We sum the inner products from all frames for each pixel.
To map the temperature estimation to the camera range, we use an offset block that takes the means of all input gray-level frames as input and outputs a single scalar. The offset block is a fully connected layer that acts as a polynomial function of the input. The offset block is explained in more detail in Section IV-A in the main text.
The final temperature estimation is obtained by adding the offset scalar to the pixel-wise summation of the gain from the kernel estimation block.
The scale factor for decoder and encoder blocks is \(\xi\equiv 2\) and the number of channels is \(\mu\equiv 64\) throughout the work. The number of levels was empirically set to \(4\).
## Figure list
1. Fig. 14 shows more results of the proposed method on real data.
2. Figs. 15 to 19 display the mean absolute error per pixel as a function of number of frames, both quantitatively and qualitatively.
3. Figs. 20 to 25 compare the results of the proposed method to ADMIRE [14], DeepIR [20] and He et al.'s [18] methods. Figs. 24 and 25 specifically show the hallucination effect of the DeepIR [20] method.
4. Figs. 26 to 31 are the original images used for the real data results in Fig. 11 in the main text. On the left of each figure is the ground truth (GT) temperature map acquired by the A655sc, and on the right is the temperature map estimated by the proposed method. The raw data cannot be displayed because they consist of 7 frames. Both the GT and the estimated temperature maps undergo histogram equalization to improve the visualization.
5. Fig. 32 shows the unmanned aerial vehicle (UAV) used for the real data experiments.
6. Table III specifies the Tau2 parameters used throughout all of the experiments.
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Type** & **GELU [50]** & **Kernel** & **Output** \\ \hline
Conv2D & \(\surd\) & 1 & \(\mu\times h\times w\) \\
Conv2D & \(\surd\) & 1 & \(\mu\times h\times w\) \\
Conv2D & \(\times\) & 1 & \((N\times K^{2})\times h\times w\) \\ \hline \hline
\end{tabular}
\end{table} TABLE II: Architecture of the kernel predictor block.
Fig. 13: Schematics of the model. The gray-level multiframes are fed into the kernel prediction network (KPN), and the KPN outputs the per-pixel kernels for each frame. Each frame is divided into overlapping patches with the same support as the kernels. The patches and the kernels are multiplied element-wise and each product is summed, resulting in a 2D gain map for each frame. All the 2D gain maps are summed depth-wise, resulting in a single 2D map. The offset, a single scalar value, is added to the single 2D map to get the estimated temperature map.
## References
* [1] A. A.
Figure 16: Difference between the temperature estimation with our method and the ground truth. The left-most figure is the ground truth. The following figures are zoom-ins of the area inside the red rectangle. The number below each difference map is the number of frames used for the temperature estimation, from left to right 7, 9 and 11 frames.
Figure 15: Difference between the temperature estimation with our method and the ground truth. The left-most figure is the ground truth. The following figures are zoom-ins of the area inside the red rectangle. The number below each difference map is the number of frames used for the temperature estimation, from left to right 7, 9 and 11 frames.
Fig. 17: Difference between the temperature estimation with our method and the ground truth. The left-most figure is the ground truth. The following figures are zoom-ins of the area inside the red rectangle. The number below each difference map is the number of frames used for the temperature estimation, from left to right 7, 9 and 11 frames.
Fig. 18: Difference between the temperature estimation with our method and the ground truth. The left-most figure is the ground truth. The following figures are zoom-ins of the area inside the red rectangle. The number below each difference map is the number of frames used for the temperature estimation, from left to right 7, 9 and 11 frames.
[MISSING_PAGE_POST]
Fig. 24: Zoomed-in results of different methods. The left-most figure is the reference frame with a red rectangle. The following figures are the results of the area inside the red rectangle. \(N=11\) for all results. These results display the hallucination effect of the DeepIR [20] method.
Fig. 25: Zoomed-in results of different methods. The left-most figure is the reference frame with a red rectangle. The following figures are the results of the area inside the red rectangle. \(N=11\) for all results. These results display the hallucination effect of the DeepIR [20] method.
Fig. 28: Ground truth (left) and estimated (right) temperature maps for the result in Fig. 11 (c).
Fig. 29: Ground truth (left) and estimated (right) temperature maps for the result in Fig. 11 (d). |
2308.14077 | An Analysis of On-the-fly Determinization of Finite-state Automata | In this paper we establish an abstraction of on-the-fly determinization of
finite-state automata using transition monoids and demonstrate how it can be
applied to bound the asymptotics. We present algebraic and combinatorial
properties that are sufficient for a polynomial state complexity of the
deterministic automaton constructed on-the-fly. A special case of our findings
is that automata with many non-deterministic transitions almost always admit a
determinization of polynomial complexity. Furthermore, we extend our ideas to
weighted finite-state automata. | Ivan Baburin, Ryan Cotterell | 2023-08-27T11:51:27Z | http://arxiv.org/abs/2308.14077v1 | # An Analysis of On-the-fly Determinization of Finite-state Automata
###### Abstract
In this paper we establish an abstraction of on-the-fly determinization of finite-state automata using transition monoids and demonstrate how it can be applied to bound the asymptotics. We present algebraic and combinatorial properties that are sufficient for a polynomial state complexity of the deterministic automaton constructed on-the-fly. A special case of our findings is that automata with many non-deterministic transitions almost always admit a determinization of polynomial complexity. Furthermore, we extend our ideas to weighted finite-state automata.
_Keywords: State complexity, On-the-fly algorithm, Determinization, Automata_
## 1 Introduction
One of the fundamental results in the theory of finite-state automata is the fact that non-deterministic automata are equivalent in terms of computational power to deterministic ones. The classical conversion utilizes a power set construction, originally presented by Rabin and Scott (1959), which results in an exponential blow-up in the number of states. Their upper bound was proven to be tight by Moore (1971), who exhibited a simple automaton1 (given in Fig. 1) with \(n\) states that requires at least \(2^{n}\) states for its determinization.
Footnote 1: This automaton is known as Moore’s automaton.
Much research has been published on bounds for the state complexity of minimal deterministic automata for different types of subregular languages. Bordihn et al. (2009) presents a summary of many known results demonstrating that almost all classes of subregular languages will in the worst case require exponential state complexity. Other methods measure the amount of non-determinism in general automata and connect it with the state complexity, e.g., Hromkovic (2002) motivates the use of tree width as a measure of automaton complexity.
We take a different approach, motivated by the practical application of finite-state techniques: we analyse the on-the-fly variant of the classic power set construction, originally presented by Mohri (1997). Surprisingly, despite its numerous practical applications, e.g., in Allauzen et al. (2007), no concise analysis of its complexity has been presented in the literature. We attempt to bridge this gap with a simple abstraction for estimating its performance, and present different criteria that ensure its efficient (i.e., polynomial-time) execution on various classes of automata. Although the criteria are restrictive, they can still serve as an intuition for estimating the state complexity in certain practical applications.
Our main idea lies in connecting the execution paradigm of the on-the-fly construction to the size of the transition monoid and formulating sufficient criteria which translate into polynomial state complexity of the output. Moreover, since the on-the-fly algorithm produces a valid deterministic automaton, our bounds additionally translate into upper bounds for minimal deterministic automata. In summary:
* We formalize the connection between the on-the-fly algorithm and transition monoids, and show how it can be used to rederive known state complexity bounds for one-letter and commutative automata;
* We define strongly connected automata and establish state complexity bounds for them;
* We prove that almost all automata with a large number of transitions will only require a polynomial number of states;
* We extend our findings to weighted finite-state automata.

Figure 1: Moore’s non-deterministic automaton with \(n\) states which for \(n\geq 2\) requires at least \(2^{n}\) states in its deterministic analogue
## 2 Finite-state automata
We start by briefly recalling the core definitions for finite-state automata.

**Definition 2.1**.: _A **finite-state automaton (FSA)**\(\mathcal{A}\) is a quintuple \((\Sigma,Q,I,F,\delta)\) where \(\Sigma\) is an alphabet, \(Q\) is a finite set of states, \(I\subseteq Q\) is a set of starting states, \(F\subseteq Q\) is a set of accepting states and \(\delta\subseteq Q\times(\Sigma\cup\{\varepsilon\})\times Q\) is a finite multi-set of transitions._
The symbol \(\varepsilon\) represents an empty symbol, and thus a transition with label \(\varepsilon\) (or in short \(\varepsilon\)-transition) can always be performed without the need for any additional input.
**Definition 2.2**.: _A **path**\(\boldsymbol{\pi}\in\delta^{*}\) is a sequence of consecutive transitions of the form \(q_{0}\xrightarrow{\bullet}q_{1}\xrightarrow{\bullet}\ldots\xrightarrow{ \bullet}q_{m}\) where \(\bullet\) is a placeholder for the transition label. We will refer to the concatenation of symbols along the path as its **yield**._
We say that a word \(\boldsymbol{y}\in\Sigma^{*}\) is **recognized** by the FSA \(\mathcal{A}\) if there exists a path from some starting state \(q\in I\) to some final state \(q^{\prime}\in F\) with yield \(\boldsymbol{y}\). Moreover, we denote with \(\mathcal{L}(\mathcal{A})\) the **language** (set of all words) recognized by \(\mathcal{A}\). Two FSAs \(\mathcal{A}\) and \(\mathcal{A}^{\prime}\) are called **equivalent** if \(\mathcal{L}(\mathcal{A})=\mathcal{L}(\mathcal{A}^{\prime})\).
**Definition 2.3**.: _FSA \(\mathcal{A}_{\textsc{det}}\) is called **deterministic** if and only if it does not contain any \(\varepsilon\) transitions, the starting state is unique, i.e., \(|I|=1\), and for every \((q,a)\in Q\times\Sigma\) there is at most one \(q^{\prime}\in Q\) such that \((q,a,q^{\prime})\in\delta\), thus we have at most one unique labeled transition from every state._
Transforming a non-deterministic automaton into an equivalent deterministic one is a procedure we refer to as **determinization**. Although the complexity of the asymptotically optimal algorithm lies in \(\mathrm{EXPSPACE}\), one often uses the on-the-fly version of the classical power set construction, which for many automata is substantially more efficient.
We present the on-the-fly construction for completeness (Algorithm 1) and define the notion of a **power state**\(\mathcal{Q}\subseteq Q\). Given a non-deterministic automaton \(\mathcal{A}\), its deterministic counterpart will have a state space that is a subset of the power set, \(Q_{\textsc{det}}\subseteq\mathcal{P}(Q)\). The on-the-fly algorithm starts in the power state \(\mathcal{Q}_{I}\coloneqq\{q\mid q\in I\}\) and explores all possible power states that can be reached following the labeled transitions \(\delta\).
Correctness of Algorithm 1 follows from the power set construction, and we have at most \(2^{|Q|}\) iterations of the while loop because every power state will appear on the stack at most once; thus the worst case execution time is in \(\mathcal{O}\left(|\Sigma||Q|2^{|Q|}\right)\). Moreover, we can assume without loss of generality that our FSA \(\mathcal{A}\) is \(\varepsilon\)-free, since there exist several asymptotically efficient procedures for computing an equivalent FSA containing no \(\varepsilon\)-transitions (even for weighted automata), for example as presented by Allauzen et al. (2007).
```
Require: \(\mathcal{A}\) a FSA with no \(\varepsilon\)-transitions
  \(\mathcal{A}_{\textsc{det}}\leftarrow(\Sigma,Q_{\textsc{det}},\mathcal{Q}_{I},F_{\textsc{det}},\delta_{\textsc{det}})\)
  stack \(\leftarrow\mathcal{Q}_{I}\)
  \(Q_{\textsc{det}}\leftarrow\{\mathcal{Q}_{I}\}\)
  while \(|\textsc{stack}|>0\) do
    pop \(\mathcal{Q}\) from the stack
    for all \(a\in\Sigma\) do
      \(\mathcal{Q}^{\prime}\leftarrow\{q^{\prime}\mid(q,a,q^{\prime})\in\delta,\;q\in\mathcal{Q}\}\)
      \(\delta_{\textsc{det}}\leftarrow\delta_{\textsc{det}}\cup\{\mathcal{Q}\xrightarrow{a}\mathcal{Q}^{\prime}\}\)
      if \(\mathcal{Q}^{\prime}\notin Q_{\textsc{det}}\) then
        \(Q_{\textsc{det}}\leftarrow Q_{\textsc{det}}\cup\{\mathcal{Q}^{\prime}\}\)
        push \(\mathcal{Q}^{\prime}\) onto the stack
  \(F_{\textsc{det}}\leftarrow\{\mathcal{Q}\in Q_{\textsc{det}}\mid\mathcal{Q}\cap F\neq\varnothing\}\)
  return \(\mathcal{A}_{\textsc{det}}\)
```
**Algorithm 1** On-the-fly determinization
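For readers who prefer running code, the following is a minimal Python sketch of Algorithm 1; the function and variable names are our own choices rather than anything prescribed by the construction. Power states are modelled as frozensets, and the automaton is passed in explicitly as its alphabet, initial states, final states and transition relation.

```python
# A minimal sketch of Algorithm 1 for an epsilon-free FSA. The transition
# relation delta is a set of (q, a, q') triples; power states are frozensets.
from collections import defaultdict

def determinize_on_the_fly(sigma, initial, final, delta):
    succ = defaultdict(set)                    # (state, label) -> successor states
    for q, a, q2 in delta:
        succ[(q, a)].add(q2)

    q_init = frozenset(initial)
    q_det = {q_init}                           # discovered power states
    delta_det = {}                             # (power state, label) -> power state
    stack = [q_init]
    while stack:
        power = stack.pop()
        for a in sigma:
            power2 = frozenset(q2 for q in power for q2 in succ[(q, a)])
            delta_det[(power, a)] = power2
            if power2 not in q_det:
                q_det.add(power2)
                stack.append(power2)

    f_det = {p for p in q_det if p & set(final)}
    return q_det, q_init, f_det, delta_det
```

Applied to Moore's automaton from Fig. 1 this sketch explores essentially all \(2^{n}\) power states, while on many practical automata the while loop terminates far earlier; the following sections aim to characterise exactly when that happens.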
## 3 Characterization using transition monoids
We start with a swift introduction to algebraic automata theory, as presented by Pin (1997).
**Definition 3.1**.: _Consider a FSA \(\mathcal{A}=(\Sigma,Q,I,F,\delta)\) and for each \(a\in\Sigma\) define a binary relation \(T^{(a)}\) over \(Q\) with \(T^{(a)}(q):=\delta(q,a)\) for all \(q\in Q\). A **transition monoid**\(\mathbb{S}(\mathcal{A})\) is a binary relation monoid over \(Q\) generated by \(\{T^{(a)}\mid a\in\Sigma\}\) and closed under relation composition operator \(\circ\) with the identity relation \(id\)._
It is often more convenient to think of \(\mathbb{S}(\mathcal{A})\) as a monoid of \(|Q|\times|Q|\) matrices over Boolean semifield \(\mathbb{B}\) closed under Boolean matrix multiplication and the identity element \(\mathcal{I}\), such that:
\[T^{(a)}_{i,j}=1\Leftrightarrow(q_{i},q_{j})\in T^{(a)} \tag{1}\]
Notice that the monoid \(\mathbb{S}(\mathcal{A})\) is by construction closely related to the regular language \(\mathcal{L}(\mathcal{A})\). This property can be further formalized algebraically.
**Definition 3.2**.: _A language \(\mathcal{L}\subseteq\Sigma^{*}\) is **recognized by a monoid \(\mathbb{M}\)** of binary relations over \(Q\) if there exists a surjective morphism \(\mu:\Sigma^{*}\rightarrow\mathbb{M}\) and an **accepting subset \(\mathbb{M}_{F}\subseteq\mathbb{M}\)** such that \(\mathcal{L}=\mu^{-1}(\mathbb{M}_{F})\)._
From the definition above it is immediately visible that \(\mathbb{S}(\mathcal{A})\) recognizes the language \(\mathcal{L}(\mathcal{A})\). Indeed, if we view \(I\) and \(F\) as Boolean vectors we can define the accepting subset as
\[\mathbb{S}_{F}:=\Big{\{}M\in\mathbb{S}(\mathcal{A})\mid I^{\top}MF\neq 0 \Big{\}} \tag{2}\]
and a morphism \(\mu\) for an arbitrary word \(\boldsymbol{y}=a_{1}a_{2}a_{3}\ldots a_{m}\in\Sigma^{*}\)
\[\mu(\boldsymbol{y})=\mathcal{I}\cdot\prod_{i=1}^{m}T^{(a_{i})}\in\mathbb{S}( \mathcal{A}) \tag{3}\]
with \(\mu(\varepsilon)=\mathcal{I}\). As a consequence, the inverse \(\mu^{-1}(\mathbb{S}_{F})\) only contains words \(\boldsymbol{y}\in\Sigma^{*}\) that are a yield of at least one path from \(I\) to \(F\) in \(\mathcal{A}\).
Due to equivalence between FSAs and their transition monoids there are many interconnected results. For instance, the smallest monoid, known as **syntactic monoid**, recognizing some regular language \(\mathcal{L}\) is isomorphic to the transition monoid of the minimal deterministic FSA accepting \(\mathcal{L}\).
We consider the execution paradigm of Algorithm 1 through the lens of transition monoids. Notice that in every step the algorithm effectively multiplies the top element on the stack with transition matrices, and terminates as soon as no new power states can be produced.
**Observation 3.1**.: _The set of states \(Q_{\textsc{det}}\) in the automaton \(\mathcal{A}_{\textsc{det}}\) is the image space of \(I\) under the transition monoid \(\mathbb{S}(\mathcal{A})\) i. e._
\[Q_{\textsc{det}}\!=\!\Big{\{}\mathcal{Q}\subseteq Q\mid\mathcal{Q}=I^{\top}M,\;M\in\mathbb{S}(\mathcal{A})\Big{\}} \tag{4}\]
This allows us to formulate a simple, but nevertheless interesting, connection between the state complexity of Algorithm 1 and the size of the transition monoid \(\mathbb{S}(\mathcal{A})\).
**Theorem 3.2**.: _Given a FSA \(\mathcal{A}\) the equivalent deterministic automaton \(\mathcal{A}_{\textsc{det}}\) computed by means of Algorithm 1 will satisfy_
\[|Q_{\textsc{det}}|\leq|\mathbb{S}(\mathcal{A})| \tag{5}\]
_and moreover its construction will require at most \(\mathcal{O}\left(|\Sigma||Q||\mathbb{S}(\mathcal{A})|\right)\) steps._
Proof.: Following Observation 3.1 we conclude that, since every element of \(Q_{\textsc{det}}\) is the image of at least one relation in \(\mathbb{S}(\mathcal{A})\), we have \(|Q_{\textsc{det}}|\leq|\mathbb{S}(\mathcal{A})|\). Since the number of while-loop iterations in Algorithm 1 corresponds to the number of states, and in each iteration we consider at most \(|\Sigma||Q|\) transitions, the second claim follows.
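For small automata, the transition monoid itself can be enumerated mechanically, which makes the bound above easy to check numerically. The sketch below encodes Boolean matrices as tuples of row bitmasks and closes the generators under Boolean matrix multiplication; the helper names are our own, and the enumeration is only feasible for toy examples since \(|\mathbb{S}(\mathcal{A})|\) can be exponential.

```python
# A minimal sketch enumerating S(A): Boolean matrices as tuples of row bitmasks,
# closed under Boolean matrix multiplication starting from the identity.
def boolean_matmul(x, y, n):
    out = []
    for i in range(n):
        row = 0
        for j in range(n):
            if x[i] >> j & 1:          # entry x[i][j] is set, so OR in row j of y
                row |= y[j]
        out.append(row)
    return tuple(out)

def transition_monoid(sigma, states, delta):
    n = len(states)
    idx = {q: i for i, q in enumerate(states)}
    generators = []
    for a in sigma:
        rows = [0] * n
        for q, b, q2 in delta:
            if b == a:
                rows[idx[q]] |= 1 << idx[q2]
        generators.append(tuple(rows))

    identity = tuple(1 << i for i in range(n))
    monoid, frontier = {identity}, [identity]
    while frontier:
        m = frontier.pop()
        for g in generators:
            prod = boolean_matmul(m, g, n)
            if prod not in monoid:
                monoid.add(prod)
                frontier.append(prod)
    return monoid
```

Comparing the size of the returned monoid with the number of power states produced by determinize_on_the_fly above gives a direct empirical illustration of Theorem 3.2.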
We remark that in general transition monoids tend to be very large, sometimes much larger than the actual number of states in the FSA. For \(n\)-state deterministic FSAs over a binary alphabet the syntactic monoid recognizing the same language can have the size of up to \(n^{n}\left(1-\frac{2}{\sqrt{n}}\right)\). A concise discussion of different exponential bounds can be found in Holzer and Konig (2004).
As a result, to achieve polynomial bounds for Theorem 3.2 we will require additional constraints that we can impose on the automaton \(\mathcal{A}\). In the latter sections we will present different approaches for different classes of automata.
## 4 One-letter automata
In this section we consider a relatively simple class of automata whose alphabet consists only of a single symbol. Such automata are often called **one-letter** automata. Let \(\mathcal{A}_{1}=(\{a\},Q_{1},\delta_{1},I_{1},F_{1})\) be an arbitrary one-letter FSA. It is immediately clear that the transition monoid \(\mathbb{S}(\mathcal{A}_{1})\) of a one-letter FSA is generated entirely by a Boolean matrix \(A\) defined as:
\[A_{i,j}=\left\{\begin{array}{ll}1&\text{if}\quad(q_{i},a,q_{j})\in\delta_{ 1}\\ 0&\text{else}\end{array}\right. \tag{6}\]
We follow the approach originally presented by Markowsky (1976) and analyze matrix semigroups generated by a single Boolean matrix.
**Definition 4.1**.: _The **index** of a Boolean matrix \(B\), denoted as \(\operatorname{index}(B)\), is the least positive integer \(k\) such that \(B\) satisfies:_
\[B^{k+d}=B^{k}\quad\text{for some }d\in\mathbb{N}_{>0} \tag{7}\]
_Similarly, the least possible choice of \(d\) in the equation above given \(k=\operatorname{index}(B)\) is the **period** of \(B\), denoted as \(\operatorname{period}(B)\)._
Since there are exactly \(2^{n^{2}}\) possible \(n\times n\) Boolean matrices it immediately follows that every Boolean matrix has a finite index and period. Note that the size of the transition monoid satisfies
\[|\mathbb{S}(\mathcal{A}_{1})|\leq\operatorname{index}(A)+\operatorname{ period}(A) \tag{8}\]
because it contains all elements from the semigroup generated by \(A\) and the identity element \(\mathcal{I}\). Thus, to ensure a polynomial bound on the size of the transition monoid we require a polynomial bound both for the period and the index of \(A\).
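Both quantities are straightforward to compute for a concrete matrix by iterating its powers until the first repetition; the sketch below reuses boolean_matmul and the bitmask encoding from the earlier sketch, and the helper name is our own.

```python
# A minimal sketch computing index(B) and period(B) from Definition 4.1 for a
# Boolean matrix B given as a tuple of n row bitmasks. The sequence of powers
# B, B^2, B^3, ... is eventually periodic, so the first repetition fixes both.
def index_and_period(matrix, n):
    seen = {}                                  # power -> exponent of first occurrence
    power, exponent = matrix, 1
    while power not in seen:
        seen[power] = exponent
        power = boolean_matmul(power, matrix, n)
        exponent += 1
    index = seen[power]                        # exponent at which the cycle is entered
    return index, exponent - index             # (index, period)
```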
**Proposition 4.1** (Markowsky (1976)).: _For an arbitrary \(n\times n\) Boolean matrix \(B\) we have:_
\[\operatorname{index}(B)\leq n^{2}-2n+2 \tag{9}\]
Proposition 4.1 implies that the index of an arbitrary matrix will be relatively small, thus the main complexity always comes from the period.
Denes et al. (1983) investigated lower bounds connected to periods of Boolean matrices and showed that, in general, they can be exponentially large. Moreover, they demonstrated how their bounds translate directly into the number of states in a minimal equivalent deterministic one-letter automata, thus confirming that certain one-letter automata will always have exponential state complexity. To achieve a polynomial bound on the size of \(\mathbb{S}(\mathcal{A}_{1})\) we restrict the discussion to a constrained class of transition matrices.
**Definition 4.2**.: _A Boolean matrix \(B\) is called **irreducible** if and only if there does not exist a permutation matrix \(P\) such that \(PBP^{\top}\) is a block lower triangular matrix._
This condition is equivalent to the directed graph defined by adjacency matrix \(A\), known as the **precedence graph**\(\mathcal{G}(A)\), being strongly connected. For more details on this observation we refer to Brualdi et al. (1991).
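Checking irreducibility of a concrete matrix therefore amounts to a strong connectivity test on its precedence graph. The sketch below (again with bitmask rows; the helper is our own) verifies that every vertex is reachable from vertex 0 and can reach vertex 0 back.

```python
# A minimal sketch testing irreducibility (Definition 4.2) via strong
# connectivity of the precedence graph: row i of the matrix has bit j set
# iff there is an edge i -> j.
def is_irreducible(matrix, n):
    def all_reached_from_zero(rows):
        seen, stack = {0}, [0]
        while stack:
            i = stack.pop()
            for j in range(n):
                if rows[i] >> j & 1 and j not in seen:
                    seen.add(j)
                    stack.append(j)
        return len(seen) == n

    # The transpose reverses all edges, so full reachability from vertex 0 in
    # the transpose means every vertex reaches vertex 0 in the original graph.
    transpose = tuple(
        sum(1 << i for i in range(n) if matrix[i] >> j & 1) for j in range(n)
    )
    return all_reached_from_zero(matrix) and all_reached_from_zero(transpose)
```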
**Proposition 4.2**.: _Given an arbitrary irreducible \(n\times n\) Boolean matrix \(R\) it holds:_
\[\operatorname{period}(R)\leq n \tag{10}\]
Proof.: The period of an arbitrary Boolean matrix \(R\) was shown by De Schutter and De Moor (1999) to be equal to the least common multiple of cyclicities of all maximal strongly connected components of \(\mathcal{G}(R)\). Since \(R\) is irreducible, \(\mathcal{G}(R)\) is strongly connected, thus its cyclicity can be trivially bounded by \(n\) (length of the largest cycle) from above.
This allows us to establish a simple polynomial bound for FSAs with irreducible transition matrices.
**Lemma 4.3**.: _The equivalent one-letter deterministic automaton computed by applying Algorithm 1 to an automaton \(\mathcal{A}_{1}\) with an irreducible transition matrix \(A\) and \(|Q_{1}|=n\) will have at most \(n^{2}-n+2\) states._
Proof.: By inserting Proposition 4.2 and Proposition 4.1 into Eq. (8) we get the desired bound. The claim follows by Theorem 3.2.
We emphasize that while irreducibility of \(A\) is sufficient for polynomial execution time of Algorithm 1, it is by no means necessary. A more involved bound on the period of \(A\) and its relation to cyclicity of \(\mathcal{G}(A)\) can be found in De Schutter and De Moor (1999).
## 5 Commutative automata
We investigate how many properties of one-letter automata carry over to the general case. We remark that merely requiring irreducibility of all transition matrices of \(\mathcal{A}\) no longer suffices. Indeed, the product of irreducible matrices is in general no longer irreducible, thus we can easily create interim matrices with a large period and blow up the size of the transition monoid.

However, in some cases \(|\mathbb{S}(\mathcal{A})|\) can still be bounded. Consider \(\mathcal{T}:=\{T^{(a)}\mid a\in\Sigma\}\), the set of all transition matrices of \(\mathcal{A}\). Assume that each matrix satisfies:
\[T^{(a_{i})}=\left(T^{(a_{1})}\right)^{k_{i}}\quad\text{for some $k_{i}\in \mathbb{N}$} \tag{11}\]
In this case we can apply the bound from Eq. (8) to \(T^{(a_{1})}\) directly, since the additional letters do not affect the size of \(\mathbb{S}(\mathcal{A})\).
One can extend the argument to a more general class of automata. We start with an assumption that all transition matrices in \(\mathcal{T}\) commute; this is a direct consequence of Eq. (11). Then the order of multiplication between the elements of \(\mathcal{T}\) is not relevant, thus one can show a tighter bound on the size of \(\mathbb{S}(\mathcal{A})\).
**Theorem 5.1**.: _Given an automaton \(\mathcal{A}\) let for all \(i\) the size of the transition monoid generated by \(T_{i}\in\mathcal{T}\) be bounded by \(f_{i}(n)\). Further assume that all matrices in \(\mathcal{T}\) commute with respect to Boolean matrix multiplication. Then:_
\[|\mathbb{S}(\mathcal{A})|\leq\prod_{i=1}^{|\Sigma|}f_{i}(n) \tag{12}\]
_which is also the bound on the state complexity of Algorithm 1 applied on \(\mathcal{A}\)._
Proof.: Consider an arbitrary element \(M\in\mathbb{S}(\mathcal{A})\). Using commutativity of \(\mathcal{T}\) we can write \(M\) as a product of transition matrices as following:
\[M=\prod_{i=1}^{|\Sigma|}\left(T^{(a_{i})}\right)^{k_{i}}\quad\text{for some $k_{i}\in\mathbb{N}$} \tag{13}\]
Since all matrices \(T^{(a_{i})}\) generate a monoids of bounded size we can have at most \(f_{i}(n)\) distinct values of \(\left(T^{(a_{i})}\right)^{k}\), thus every element from \(\mathbb{S}(\mathcal{A})\) can be encoded by at least one element from \(\bigtimes_{i=1}^{|\Sigma|}\big{\{}0,\dots,f_{i}(n)-1\big{\}}\).
**Corollary 5.2**.: _Assuming the preconditions from Theorem 5.1 and irreducibility of all transition matrices in \(\mathcal{T}\) the Algorithm 1 applied to \(\mathcal{A}\) would have a state complexity of at most \(n^{2|\Sigma|}\)._
Proof.: We notice by Lemma 4.3 that for an arbitrary \(i\) we have \(f_{i}(n)\leq n^{2}-n+2\leq n^{2}\) (given \(n\geq 2\)). The claim follows.
Notice that the conditions from Theorem 5.1 essentially enforce that the transition monoid \(\mathbb{S}(\mathcal{A})\) is commutative. Automata with commutative transition monoids correspond to commutative languages, which have other interesting algebraic properties. Hoffmann (2019) presents a survey of different properties of commutative languages and demonstrates that a "shuffle" of two regular commutative languages can be performed in polynomial time, something that has exponential complexity for general languages. They also derive a bound similar to Theorem 5.1 for the state complexity of the minimal deterministic automaton via the underlying one-letter languages. However, they utilize a different, non-algebraic approach without establishing the link to the on-the-fly algorithm.
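The preconditions of Theorem 5.1 are also easy to check mechanically for a concrete automaton. The sketch below (reusing boolean_matmul and index_and_period from the earlier sketches; the helper name is our own) verifies pairwise commutativity of the generators and, when it holds, multiplies the per-letter monoid bounds of Eq. (8) into the bound of Theorem 5.1.

```python
# A minimal sketch: returns the bound from Theorem 5.1 if all transition
# matrices commute under Boolean multiplication, and None otherwise.
from math import prod

def commutative_bound(generators, n):
    commute = all(
        boolean_matmul(x, y, n) == boolean_matmul(y, x, n)
        for x in generators for y in generators
    )
    if not commute:
        return None                            # Theorem 5.1 is not applicable
    per_letter = []
    for g in generators:
        index, period = index_and_period(g, n)
        per_letter.append(index + period)      # bound on the monoid of g, cf. Eq. (8)
    return prod(per_letter)
```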
## 6 Strongly connected automata
In this section we take a different approach and extend the notion of irreducibility to bound the state complexity of automata by introducing additional constraints on their transitions.
**Definition 6.1**.: _An \(n\times n\) Boolean matrix \(B\) is called \(r\)**-indecomposable** if and only if there do not exist permutation matrices \(P\) and \(Q\) such that \(PBQ\) has a top right \(\mathbf{0}\)-block of size \(s\times t\) with \(s+t+r>n\):_
\[PBQ\neq\begin{bmatrix}*&\mathbf{0}\\ *&*\end{bmatrix}\quad\forall P,Q\in S_{n} \tag{14}\]
_Moreover we call \(B\)\(r\)**-irreducible** if the precedence graph \(\mathcal{G}(B)\) is \(r\)**-connected**, i.e., one can remove arbitrary \(r-1\) vertices from \(\mathcal{G}(B)\) and it will remain strongly connected._
Indecomposable matrices are related to standard irreducible matrices. Specifically, every irreducible matrix is \(0\)-indecomposable and every \(1\)-indecomposable matrix is irreducible. In general one can show that both of these notions are very similar, i.e., by adding some minor conditions (positivity of the main diagonal) one can derive an equivalence.
**Lemma 6.1** (You et al. (2005)).: _Suppose a Boolean matrix \(B\) has only ones along the main diagonal. Then, for \(r>0\), it is \(r\)-indecomposable if and only if its precedence graph \(\mathcal{G}(B)\) is \(r\)-connected (i.e., \(B\) is \(r\)-irreducible)._
Lemma 6.1 allows us to verify whether a transition matrix is \(r\)-indecomposable in polynomial time. Indeed, notice that indecomposability of a matrix is invariant under permutations. Thus, assuming we permuted the matrix \(A\) such that the main diagonal consists entirely of ones we only have to check whether the underlying graph \(\mathcal{G}(A)\) is \(r\)-connected.
**Observation 6.2**.: _Verifying graph connectivity is a well-known instance of maximum flow problem. A directed graph \(\mathcal{G}=(V,E)\) is \(r\)-connected if and only if for every pair of vertices \(u,v\in V\) with \((u,v)\notin E\) there exist at least \(r\) vertex disjoint directed paths from \(u\) to \(v\)._
Due to the almost linear time algorithm for calculating maximum flow in a graph by Chen et al. (2022) the \(r\)-connectivity of \(\mathcal{G}(A)\) can be verified in \(\mathcal{O}\left(n^{2}\cdot|A|^{1+o(1)}\right)\) steps where \(|A|\) refers to the number of non-zero entries in \(A\).
**Proposition 6.3** (Kim and Roush (1978)).: _An \(n\times n\) Boolean matrix \(B\) is \(r\)-indecomposable if and only if for any non-zero Boolean (row) vector \(v\) it holds:_
\[|vB|\geq\min\left\{n,\;|v|+r\right\} \tag{15}\]
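Proposition 6.3 also yields a simple, if exponential, way of measuring indecomposability directly for small matrices. The brute-force sketch below (our own helper) enumerates all non-zero row vectors as bitmasks and records the smallest growth \(|vB|-|v|\) among vectors whose image is not yet the full set; this equals the largest \(r\) for which the matrix is \(r\)-indecomposable.

```python
# A minimal brute-force sketch of Proposition 6.3, feasible only for small n.
# Returns the largest r with |vB| >= min(n, |v| + r) for every non-zero v;
# a negative result means the matrix is not even 0-indecomposable.
def largest_indecomposability(matrix, n):
    best = n
    for v in range(1, 1 << n):
        image = 0
        for i in range(n):
            if v >> i & 1:
                image |= matrix[i]             # vB is the OR of the selected rows
        if bin(image).count("1") < n:
            best = min(best, bin(image).count("1") - bin(v).count("1"))
    return best
```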
Next we use the notion of \(r\)-indecomposability to bound the size of the transition monoid.
**Lemma 6.4**.: _Consider an automaton \(\mathcal{A}\) with the set of transition matrices \(\mathcal{T}\) as defined previously, such that all transition matrices in \(\mathcal{T}\) are \(r\)-indecomposable with \(r>0\). Then \(\mathbb{S}(\mathcal{A})\) will contain at most \(\frac{|\Sigma|^{\lceil(n-1)/r\rceil}}{|\Sigma|-1}+1\) elements._
Proof.: Consider a product of the identity matrix \(\mathcal{I}\) with arbitrary \(\left\lceil\frac{n-1}{r}\right\rceil\) matrices from \(\mathcal{T}\). Due to Proposition 6.3 each row of the resulting product will have at least \(\min\{n,\;1+\left\lceil\frac{n-1}{r}\right\rceil\cdot r\}=n\) entries, thus we obtain the complete matrix \(\mathbf{1}\). Hence all non-\(\mathbf{1}\) elements of \(\mathbb{S}(\mathcal{A})\) are products of at most \(\left\lceil\frac{n-1}{r}\right\rceil-1\) transition matrices. Summing over all possible lengths \(i\) gives the desired bound:
\[|\mathbb{S}(\mathcal{A})|-1\leq\sum_{i=0}^{\left\lceil(n-1)/r\right\rceil-1} \left\lvert\Sigma\right\rvert^{i}\leq\frac{|\Sigma|^{\left\lceil(n-1)/r\right\rceil }}{|\Sigma|-1} \tag{16}\]
Counting the 1-matrix concludes the proof.
**Corollary 6.5**.: _Following the definitions from Lemma 6.4 if all transition matrices are at least \(\left\lceil\frac{n}{k\log n}\right\rceil\)-indecomposable the state complexity of Algorithm 1 will be at most \(2n^{k\log|\Sigma|}\)._
We notice that, contrary to Corollary 5.2, the state complexity only has \(\log|\Sigma|\) in the exponent, which is a big improvement over \(|\Sigma|\). You et al. (2005) list some examples of \(r\)-connected graphs for large values of \(r\).
## 7 Automata with many transitions
Another interesting class of automata one would expect to have bounded state complexity are automata with many non-deterministic transitions. For example, in the extreme case of an automaton with a complete transition relation, already \(2\) states are sufficient for its deterministic counterpart (the starting state and the complete power state). We formulate the bounds for such automata by relating them to the strongly connected automata from §6. To achieve a polynomial state complexity for Algorithm 1 we want to apply Corollary 6.5; however, it requires a very strong indecomposability for all transition matrices.
We demonstrate that almost all Boolean matrices with sufficiently many non-zero entries will be strongly indecomposable, thus almost all automata with sufficiently many transitions will have a polynomial state complexity.
**Theorem 7.1**.: _Consider an automaton \(\mathcal{A}\) whose transition matrices in \(\mathcal{T}\) are picked uniformly at random (though not necessarily independently of each other) from the set of \(n\times n\) Boolean matrices with \(n^{2}/d\) non-zero entries, for some constant \(d\). Then with high probability it holds:_
\[|\mathbb{S}(\mathcal{A})|\leq 2|\Sigma|^{d\log n+o(1)} \tag{17}\]
_Thus with high probability Algorithm 1 will have a polynomial state complexity of at most \(\mathcal{O}\left(n^{d\log|\Sigma|}\right)\)._
Proof.: Kim and Roush (1978) showed that an \(n\times n\) Boolean matrix picked uniformly at random from the set of matrices with exactly \((1+\epsilon+r)n\log n\) non-zero entries will be \(r\)-indecomposable with probability \(1-o(1)\) for an arbitrary \(\epsilon>0\). Notice, if we sample from the matrices with at least as many non-zero entries the bound will also hold. Using the union bound the probability that all \(|\Sigma|\) transition matrices will be \(r\)-indecomposable is thus also \(1-o(1)\). By Lemma 6.4 we can substitute \(r=\frac{n}{d\log n}-(1+\epsilon)\) and bound:
\[|\mathbb{S}(\mathcal{A})|\leq 2|\Sigma|^{(n-1)/r}\leq 2|\Sigma|^{d\log n+o(1)} \tag{18}\]
with high probability. The rest follows from Theorem 3.2.
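Theorem 7.1 is easy to probe empirically by sampling dense random transition matrices and counting the power states produced by the determinization sketch given after Algorithm 1. In the sketch below the matrices are sampled independently, and all constants (number of states, density, alphabet size, choice of initial and final states) are purely illustrative assumptions of ours.

```python
# A minimal sketch: build an automaton with n^2 / d transitions per label and
# count the power states explored on-the-fly.
import random

def random_dense_automaton(n, d, num_labels, seed=0):
    rng = random.Random(seed)
    sigma = [chr(ord("a") + i) for i in range(num_labels)]
    states = list(range(n))
    delta = set()
    for a in sigma:
        pairs = rng.sample([(q, q2) for q in states for q2 in states], n * n // d)
        delta |= {(q, a, q2) for q, q2 in pairs}
    return sigma, {0}, {n - 1}, delta

sigma, initial, final, delta = random_dense_automaton(n=12, d=3, num_labels=2)
q_det, _, _, _ = determinize_on_the_fly(sigma, initial, final, delta)
print(len(q_det))                              # typically far below 2**12
```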
Notice that the condition used in Theorem 7.1, which requires at least \((1+\epsilon+r)n\log n\) non-zero entries for \(r\)-indecomposability, is logarithmically tight, since every \(r\)-indecomposable matrix has at least \((1+r)n\) non-zero entries. We can also characterize the language properties of such automata, and they are relatively simple -- if one selects a sufficiently long word \(\mathbf{y}\in\Sigma^{*}\), it will almost always be accepted.
It is important to remember that not all such automata will have low state complexity, for example one can easily add many "useless" states with a lot of redundant transitions to Moore's automaton such that the underlying language is not affected.
**Remark**.: _If we allow the transition matrices to be picked independently, already \(\Omega\left(\sqrt{n^{3}\log n}\right)\) transitions are sufficient for constant state complexity, as originally derived by Kim and Roush (1978) (for binary semigroups). Looking only at uncorrelated transition matrices is a big (and impractical) simplification, thus the extended bounds serve mostly mathematical curiosity._
## 8 Alternative approaches
In this section we consider some criteria which are not connected to transition monoids. The motivation behind this is the simple fact that our most general bound, as presented in Theorem 3.2, is not even asymptotically tight when applied to deterministic automata. Indeed, as mentioned earlier, for deterministic automata with \(|Q|=n\) states and a binary alphabet the size of the transition monoid can be close to \(n^{n}\left(1-\frac{2}{\sqrt{n}}\right)\). However it trivially holds that the on-the-fly algorithm will require at most \(n\) states on an arbitrary deterministic automaton.
The observed discrepancy comes from the absence of information about the starting states of \(\mathcal{A}\) in its transition monoid \(\mathbb{S}(\mathcal{A})\). Thus, even if \(|\mathbb{S}(\mathcal{A})|\) is exponentially large (see Observation 3.1), the number of states \(Q_{\textsc{det}}\) can be much smaller, most notably for deterministic automata it will be \(\leq n\). Essentially \(|\mathbb{S}(\mathcal{A})|\) acts as a bound for the worst possible choice of the initial set \(I\), i.e., resulting in the biggest blow-up. To mitigate this issue we consider some alternative approaches that were proposed to measure non-determinism and analyse how they relate to on-the-fly algorithm.
**Definition 8.1**.: _Let \(\mathcal{A}=(\Sigma,Q,I,F,\delta)\) be a FSA and \(\mathbf{y}\in\Sigma^{*}\). The **tree width** of \(\mathcal{A}\) on \(\mathbf{y}\), denoted as \(\tau_{\mathcal{A}}(\mathbf{y})\), is the number of different paths with yield \(\mathbf{y}\) starting in some initial state \(q\in I\). The tree width of \(\mathcal{A}\) is defined as:_
\[\textsc{tw}(\mathcal{A})=\max\left\{\tau_{\mathcal{A}}(\mathbf{y})\mid\mathbf{y}\in \Sigma^{*}\right\} \tag{19}\]
_We say that \(\mathcal{A}\) has finite tree width if \(\textsc{tw}(\mathcal{A})\) is finite._
It is easy to see that the tree width of an arbitrary deterministic automaton is exactly \(1\) since we have a unique starting state and a unique transition for each symbol. Moreover, tree width gives a natural upper bound for the state complexity of Algorithm 1.
**Lemma 8.1**.: _Given a FSA \(\mathcal{A}\) with \(\textsc{tw}(\mathcal{A})=k\) and \(k\leq n-1\) the on-the-fly algorithm will require at most \(\frac{n^{k}}{(k-1)!}+1\) states._
Proof.: Since for an arbitrary word \(\mathbf{y}\) the width of the computation tree \(\tau_{\mathcal{A}}(\mathbf{y})\) is bounded by \(k\), at no point in the execution will there be a power state \(\mathcal{Q}\in Q_{\textsc{det}}\) with more than \(k\) entries. Thus the total number of reachable power states can be bounded by summing up all possible combinations:
\[\sum_{i=0}^{k}\binom{n}{i}\leq\sum_{i=0}^{k}\frac{n^{i}}{i!}\leq\sum_{i=0}^{k} \frac{n^{k}}{k!}=\frac{n^{k}}{(k-1)!}+1 \tag{20}\]
Palioudakis et al. (2012) have shown the optimality of the state complexity derived in Lemma 8.1 by constructing a family of automata \(\mathcal{A}_{n,k}\) with \(n\) states and \(\textsc{tw}(\mathcal{A}_{n,k})=k\) such that the minimal equivalent deterministic automaton has a state complexity of \(1+\sum_{i=1}^{k}\binom{n-1}{i}\). Moreover, they give a complete characterization of all automata of bounded tree width.
**Lemma 8.2** (Palioudakis et al. (2012)).: _An FSA \(\mathcal{A}=(\Sigma,Q,I,F,\delta)\) has finite tree width if and only if no directed cycle in \(\delta\) contains a non-deterministic transition._
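Lemma 8.2 gives an easily checkable criterion, sketched below with our own helper: a transition is non-deterministic when its source state and label admit more than one successor, and the tree width is infinite as soon as the target of such a transition can reach back to its source.

```python
# A minimal sketch of the finite-tree-width criterion of Lemma 8.2.
from collections import defaultdict

def has_finite_tree_width(delta):
    succ = defaultdict(set)                    # state -> one-step successors
    by_label = defaultdict(set)                # (state, label) -> successors
    for q, a, q2 in delta:
        succ[q].add(q2)
        by_label[(q, a)].add(q2)

    def reaches(src, dst):
        seen, stack = {src}, [src]
        while stack:
            q = stack.pop()
            if q == dst:
                return True
            for q2 in succ[q]:
                if q2 not in seen:
                    seen.add(q2)
                    stack.append(q2)
        return False

    for (q, _), targets in by_label.items():
        if len(targets) > 1 and any(reaches(q2, q) for q2 in targets):
            return False                       # a non-deterministic transition on a cycle
    return True
```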
From Lemma 8.2 we can directly infer that the automata with irreducible (and indecomposable) transition matrices we considered in §4, §5 and §6 will in general not have a bounded tree width. This indirectly confirms our previous observation that there is no direct connection between the tree width \(\textsc{tw}(\mathcal{A})\) and the size of the transition monoid \(\mathbb{S}(\mathcal{A})\).
**Theorem 8.3** (Hromkovic et al. (2002)).: _For an arbitrary FSA \(\mathcal{A}\) with \(n\) states the tree width \(\textsc{tw}(\mathcal{A})\) is either bounded by a constant or between linear and superlinear, or otherwise \(2^{\Theta(n)}\)._
Theorem 8.3 demonstrates that no quasipolynomial state complexity bounds can be established using Lemma 8.1. Indeed, by applying Lemma 8.1 we get a state complexity bound of the form \(\mathcal{O}\left(n^{\textsc{tw}(\mathcal{A})}\right)\), thus for a constant tree width we achieve polynomial complexity, and for anything linear or above an exponential state complexity (effectively producing a "blow-up" comparable to the naive power set construction).
## 9 Weighted finite-state automata
Another useful property of the algebraic approach is the possibility of extending the state complexity bounds to general weighted finite-state automata. Despite the fact that, contrary to classical automata, they cannot always be determinized on-the-fly, as shown by Mohri (1997), a slight modification of our core result from Theorem 3.2 still holds.
**Definition 9.1**.: _An algebra \(\langle\mathbb{K},\oplus,\otimes,\mathbf{0},\mathbf{1}\rangle\) is called a **(commutative) semifield** if and only if it satisfies the following properties:_
1. \(\langle\mathbb{K},\otimes,\mathbf{1}\rangle\) _is a commutative group_
2. \(\langle\mathbb{K},\oplus,\mathbf{0}\rangle\) _is a commutative monoid_
3. \(\otimes\) _with_ \(\mathbf{0}\) _annihilates_ \(\mathbb{K}\)__
4. \(\otimes\) _distributes over_ \(\oplus\)__
_Moreover we call the semifield **zero-sum-free** if and only if for all \(x,y\in\mathbb{K}\) it holds:_
\[x\oplus y=\mathbf{0}\Rightarrow x=y=\mathbf{0} \tag{21}\]
A couple of prominent examples of zero-sum-free semifields are the Boolean semifield
\[\mathbb{B}\coloneqq\langle\{0,1\},\vee,\wedge,0,1\rangle \tag{22}\]
which contains only two elements and the tropical semifield
\[\mathbb{T}\coloneqq\langle\mathbb{R}\cup\{\infty\},\min,+,\infty,0\rangle \tag{23}\]
which contains infinitely many elements. With the algebra out of the way we can present automata over general semifields.
**Definition 9.2**.: _A **weighted finite-state automaton (WFSA)**\(\mathcal{A}\) over a (commutative) semifield \(\langle\mathbb{K},\oplus,\otimes,\mathbf{0},\mathbf{1}\rangle\) is a quintuple \((\Sigma,Q,\delta,\lambda,\rho)\) where each transition in \(\delta\subseteq Q\times\Sigma\times\mathbb{K}\times Q\) has a weight over \(\mathbb{K}\) and \(\lambda,\rho\) are initial and final weight functions of the form \(I\to\mathbb{K}\) and \(F\to\mathbb{K}\) respectively. A transition with weight \(\mathbf{0}\) is an absent transition, and we enforce that between every two states there is at most one transition per label._
The definition of deterministic weighted finite-state automaton is fully equivalent to Definition 2.3, except that now the transitions have weight over \(\mathbb{K}\) instead of over Boolean semifield \(\mathbb{B}\). Similarly we can extend the notion of the transition monoid to a **weighted transition monoid**\(\mathbb{W}(\mathcal{A})\), which is generated by weighted transition matrices of \(\mathcal{A}\) from \(\mathbb{K}^{Q\times Q}\).
Mohri (1997) presents a weighted on-the-fly algorithm (see Appendix A) for zero-sum-free semifields which follows a very similar paradigm to Algorithm 1, with the exception that every power state \(\mathcal{Q}\) is a vector in \(\mathbb{K}^{Q}\). The weights associated with individual states are referred to as **residual weights**. We note that in the weighted case termination is not always guaranteed.
**Lemma 9.1**.: _The set of states \(Q_{\textsc{det}}\) of automaton \(\mathcal{A}_{\textsc{det}}\) computed by weighted on-the-fly algorithm is a normalized product of \(\lambda(I)\) under the transition monoid \(\mathbb{W}(\mathcal{A})\) i. e. \(Q_{\textsc{det}}=\)_
\[\left\{\mathcal{Q}\in\mathbb{K}^{Q}\ \left|\ \mathcal{Q}=\frac{\left(\lambda(I)^{ \top}M\right)}{\bigoplus_{i=1}^{|Q|}\left(\lambda(I)^{\top}M\right)_{i}} \right.\right\}\cup\{\mathcal{Q}_{I}\} \tag{24}\]
_where \(M\in\mathbb{W}(\mathcal{A})\setminus\{\mathcal{I}\}\) and \(\mathcal{Q}_{I}=\lambda(I)\)._
Proof.: We consider an arbitrary word \(\boldsymbol{y}\in\Sigma^{*}\) and show inductively that Mohri's algorithm (see Appendix A) satisfies the property for the power state \(\mathcal{Q}\) reached by the (unique) path with yield \(\boldsymbol{y}\). Notice, the residual weight of initial state \(\mathcal{Q}_{I}\) is unchanged and set to \(\lambda(I)\in\mathbb{K}^{Q}\).
From Lemma 9.1 we can directly derive the extension of Theorem 3.2 for weighted automata.
**Theorem 9.2**.: _Given a WFSA \(\mathcal{A}\) the equivalent deterministic automaton \(\mathcal{A}_{\textsc{det}}\) computed by means of weighted on-the-fly will satisfy_
\[|Q_{\textsc{det}}|\leq|\mathbb{W}(\mathcal{A})|+1 \tag{25}\]
_and its construction will require at most \(\mathcal{O}\left(|\Sigma||Q||\mathbb{W}(\mathcal{A})|\right)\) steps._
Proof.: From Lemma 9.1 we can conclude that every power state \(\mathcal{Q}\in Q_{\textsc{det}}\) corresponds to at least one matrix from \(\mathbb{W}(\mathcal{A})\). The bound on the execution time follows from the observation that each power state is processed exactly once.
As a consequence, if \(\mathbb{W}(\mathcal{A})\) is finite then weighted on-the-fly is guaranteed to terminate. Moreover, Theorem 9.2 allows us to formulate state complexity bounds for weighted automata by only considering monoids of matrices, which forms an interesting open problem.
## 10 Conclusion
In this work we have presented a new abstraction for analyzing the performance of the on-the-fly determinization algorithm by establishing a connection with algebraic automata theory. This allowed us to derive state complexity bounds and estimate the asymptotic runtime for various classes of finite-state automata. The summary of our results can be found in Tab. 1.
| Constraints | # states | Runtime |
| --- | --- | --- |
| one-letter | \(n^{2}-n+2\) | \(\mathcal{O}\left(n^{3}\right)\) |
| commutative | \(n^{2|\Sigma|}\) | \(\mathcal{O}\left(n^{2|\Sigma|+1}\right)\) |
| \(r\)-indecomp. | \(2n^{k\log|\Sigma|}\) | \(\mathcal{O}\left(n^{k\log|\Sigma|+1}\right)\) |
| \(\frac{n^{2}}{k}\)-dense\({}^{\dagger}\) | \(\mathcal{O}\left(n^{k\log|\Sigma|}\right)\) | \(\mathcal{O}\left(n^{k\log|\Sigma|+1}\right)\) |
| \(k\)-tree width | \(\frac{n^{k}}{(k-1)!}+1\) | \(\mathcal{O}\left(n^{k+1}\right)\) |

Table 1: Summary of the state complexity and runtime bounds derived for Algorithm 1 with \(|Q|=n\) over a fixed alphabet \(\Sigma\). We assume irreducibility for one-letter and commutative automata, and \(r\)-indecomposability with \(r\geq\lceil n/(k\log n)\rceil\). \(\dagger\) indicates that the bound holds with high probability.

The bounds demonstrate that there are many different properties of automata which can limit the underlying non-determinism, and that these properties are intrinsically non-uniform and seem to have little in common. This leads us to the idea that algebraic properties of automata might be the right way to quantify and measure non-determinism and unify many results present in the field. Although we did not discuss it explicitly in the paper, some of the criteria -- such as the strongly connected property -- can hold just partially and still translate into limited state complexity (for example, if an automaton is a concatenation of two strongly connected automata, a similar argument can be made). Thus a straightforward extension of our approach might lead to more complexity bounds.
On the other hand, the algebraic method has some limitations that have yet to be resolved, most importantly the missing connection to the starting and final states. Closing (or at least measuring) this "complexity gap" remains an important open question.

Our methodology can be extended and applied to weighted automata over general semirings. Analyzing matrix monoids over different semirings might lead to new insights and lay the foundation for a complexity theory of weighted finite-state automata, something that to our knowledge has not been formalized to this day.
## Appendix A Mohri's algorithm for weighted determinization
Here we present a version of the weighted on-the-fly algorithm by Mohri (1997) over general semifields \((\mathbb{K},\oplus,\otimes,\mathbf{0},\mathbf{1})\). In contrast to the original version we additionally assume commutativity under \(\otimes\) for Lemma 9.1 to be applicable. The pseudocode for the full algorithm can be found in Algorithm 2, and was adapted slightly to resemble the non-weighted determinization, as presented in Algorithm 1. However there are some important differences that we will clarify.
We consider the output automaton \(\mathcal{A}_{\textsc{det}}\) of Algorithm 2. Its state space \(Q_{\textsc{det}}\) is a subset of \(\mathbb{K}^{Q}\), meaning that every state \(\mathcal{Q}\in Q_{\textsc{det}}\) is a power state and, additionally, each state within \(\mathcal{Q}\) has a weight from \(\mathbb{K}\) associated with it. However, this weight is in no way connected to the initial weight function \(\lambda_{\textsc{det}}\) or the final weight function \(\rho_{\textsc{det}}\). To avoid ambiguity we refer to it as the **residual weight** and use the function \(r_{\mathcal{Q}}\) to refer to the residual weights of a power state \(\mathcal{Q}\).
Further we observe that the initial weight function \(\lambda_{\textsc{det}}\) is essentially Boolean, since it only has weight \(\mathbf{1}\) for the initial power state \(\mathcal{Q}_{I}:=\{(q,\lambda(q))\mid q\in Q,\;\lambda(q)\neq\mathbf{0}\}\) and \(\mathbf{0}\) everywhere else. The final weight function \(\rho_{\textsc{det}}(\mathcal{Q})\) for a power state \(\mathcal{Q}\in Q_{\textsc{det}}\), on the other hand, is simply the dot product of all residual weights of \(\mathcal{Q}\) with their non-deterministic final weights \(\rho\). Thus the main component carrying all the information is the residual weights \(r_{\mathcal{Q}}\).
The execution paradigm is very similar to Algorithm 1 where we start in the power state \(\mathcal{Q}\) (initially \(\mathcal{Q}_{I}\)) and then "explore" all possible states and push them on the stack. Moreover, now we add the weights to individual transitions by summing up the weights from all transitions leaving the power state (see the calculation of \(w_{a}\)). The newly created power state will have the residual weights calculated very similarly, but the summation goes over individual weights (see the calculation of \(\mathcal{W}_{a}(q^{\prime})\)). Most importantly before assigning the residual weights to the new power state \(\mathcal{Q}^{\prime}\) we normalize them by the transition weight \(w_{a}\) to keep the weight of the path unchanged.
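To make the description above concrete, the following is a minimal sketch of the weighted construction instantiated for the tropical semifield \(\mathbb{T}\), where \(\oplus=\min\), \(\otimes=+\) and \(\otimes\)-division is subtraction. All names are our own, and a cap on the number of states is added only because, as noted above, termination is not guaranteed in general.

```python
# A minimal sketch of weighted on-the-fly determinization over the tropical
# semifield. delta contains (q, a, w, q') quadruples; lam and rho map initial
# and final states to their weights. A power state is a frozenset of
# (state, residual weight) pairs.
from collections import defaultdict

def weighted_determinize_tropical(sigma, delta, lam, rho, max_states=10_000):
    succ = defaultdict(list)                   # (state, label) -> [(weight, q')]
    for q, a, w, q2 in delta:
        succ[(q, a)].append((w, q2))

    q_init = frozenset(lam.items())
    q_det, delta_det, stack = {q_init}, {}, [q_init]
    while stack and len(q_det) <= max_states:
        power = stack.pop()
        for a in sigma:
            residual = defaultdict(lambda: float("inf"))
            for q, r in power:
                for w, q2 in succ[(q, a)]:
                    residual[q2] = min(residual[q2], r + w)   # oplus-sum of r otimes w
            if not residual:
                continue
            w_a = min(residual.values())                      # weight of the new transition
            power2 = frozenset((q2, r - w_a) for q2, r in residual.items())
            delta_det[(power, a)] = (w_a, power2)
            if power2 not in q_det:
                q_det.add(power2)
                stack.append(power2)

    # Final weight of a power state: oplus-sum of residual weights otimes rho.
    rho_det = {p: min((r + rho[q] for q, r in p if q in rho), default=float("inf"))
               for p in q_det}
    return q_det, q_init, rho_det, delta_det
```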
|
2302.11154 | Open-domain Visual Entity Recognition: Towards Recognizing Millions of
Wikipedia Entities | Large-scale multi-modal pre-training models such as CLIP and PaLI exhibit
strong generalization on various visual domains and tasks. However, existing
image classification benchmarks often evaluate recognition on a specific domain
(e.g., outdoor images) or a specific task (e.g., classifying plant species),
which falls short of evaluating whether pre-trained foundational models are
universal visual recognizers. To address this, we formally present the task of
Open-domain Visual Entity recognitioN (OVEN), where a model need to link an
image onto a Wikipedia entity with respect to a text query. We construct
OVEN-Wiki by re-purposing 14 existing datasets with all labels grounded onto
one single label space: Wikipedia entities. OVEN challenges models to select
among six million possible Wikipedia entities, making it a general visual
recognition benchmark with the largest number of labels. Our study on
state-of-the-art pre-trained models reveals large headroom in generalizing to
the massive-scale label space. We show that a PaLI-based auto-regressive visual
recognition model performs surprisingly well, even on Wikipedia entities that
have never been seen during fine-tuning. We also find existing pretrained
models yield different strengths: while PaLI-based models obtain higher overall
performance, CLIP-based models are better at recognizing tail entities. | Hexiang Hu, Yi Luan, Yang Chen, Urvashi Khandelwal, Mandar Joshi, Kenton Lee, Kristina Toutanova, Ming-Wei Chang | 2023-02-22T05:31:26Z | http://arxiv.org/abs/2302.11154v2 | # Open-domain Visual Entity Recognition:
###### Abstract
Large-scale multi-modal pre-training models such as CLIP [35] and PaLI [9] exhibit strong generalization on various visual domains and tasks. However, existing image classification benchmarks often evaluate recognition on a specific domain (e.g., outdoor images) or a specific task (e.g., classifying plant species), which falls short of evaluating whether pre-trained foundational models are _universal visual recognizers_. To address this, we formally present the task of Open-domain Visual Entity recognitioN (Oven), where a model need to link an image onto a Wikipedia entity with respect to a text query. We construct Oven-Wiki1 by re-purposing 14 existing datasets with all labels grounded onto one single label space: Wikipedia entities. Oven-Wiki challenges models to select among six million possible Wikipedia entities, making it a general visual recognition benchmark with the largest number of labels. Our study on state-of-the-art pre-trained models reveals large headroom in generalizing to the massive-scale label space. We show that a PaLI-based auto-regressive visual recognition model performs surprisingly well, even on Wikipedia entities that have never been seen during fine-tuning. We also find existing pre-trained models yield different strengths: while PaLI-based models obtain higher overall performance, CLIP-based models are better at recognizing tail entities.
Footnote 1: Work was done when interned at Google Research.
## 1 Introduction
Pre-trained large language models [4, 12], _inter alia_, have shown strong transferable text processing and generation skills in tackling a wide variety of natural language tasks [43, 50, 54] across languages and task formats, while requiring very few manually labeled per-task examples. At the same time, while there has been equally impressive progress in multi-modal pre-training [9, 35], it remains unclear whether similarly universal visual skills, _i.e_., recognizing millions of coarse-grained and fine-grained visual concepts, have emerged. _Are pre-trained multi-modal models capable of recognizing open-domain visual concepts?_
Answering this question requires a visual recognition dataset with broad coverage of visual domains and tasks, under a universally defined semantic space. Existing recognition benchmarks such as ImageNet [39, 41], Stanford Cars [24], or SUN database [58] represent a large number of visual concepts, but make specific assumptions about the granularity of the target concepts (e.g. building type such as "castle" in ImageNet but not a specific building in the world such as "Windsor Castle"), or limit attention to concepts of the same type such as car models/years. Visual question answering (VQA) datasets test models' abilities to
recognize concepts which can be of more flexible granularities and object types, but in practice existing VQA datasets tend to focus on higher-level categories. We aim to assess models' abilities to recognize visual concepts from a close to universal, unified space of labels that covers nearly all visual concepts known to humankind, and at a flexible level of granularity, specified by a user or a downstream application. Given a short specification of each element in the target space of visual concepts (such as a textual description), multimodal pre-trained models could in principle recognize concepts without seeing labeled instances covering each of them.
Towards evaluating models on such universal visual recognition abilities, we introduce the task of **O**pen-domain **V**isual **E**ntity recognitio**N (Oven), targeting a wide range of entities and entity granularities, including animals, plants, buildings, locations and much more. Particularly, we construct Oven-Wiki by building on existing image recognition and visual QA datasets and unifying their label spaces/granularities and task formulations. For our unified label space, we use English Wikipedia which covers millions of visual entities of various levels of granularity and also includes a specification of each entity via its Wikipedia page (containing entity name, text description, images, etc.). Wikipedia also evolves as new entities appear or become known in the world, and can be used as a first approximation of a universal visual concept space.
We re-purpose 14 existing image classification, image retrieval, and visual QA datasets, and ground all labels to Wikipedia. In addition to unifying labels, we unify input recognition intent specifications, which is necessary when combining specialized datasets with the goal of evaluating universal recognition. Given an image showing a car and a tree behind it, Oven makes the recognition intent explicit via a natural language query such as "What is the model of the car?" or "What is the species of the tree?". Therefore, the Oven task takes as input an image and a text query1 that expresses visual recognition intent with respect to the image. The goal is to provide an answer by linking to the correct entity (e.g. Bugatti Veyron or Batris GASIPAES) out of the millions of possible Wikipedia entities, each coming with descriptions and a relevant set of images from its Wikipedia page (see Figure 1). Importantly, Oven requires recognition of entities that were unseen in the training data. Models can still take advantage of the text description and/or images on the Wikipedia page of the unseen entities, as well as knowledge acquired through pre-training.
Footnote 1: A query can be expressed in different formats; in this paper, we choose to use a question to reflect the intent.
Human annotators were hired to help create Oven-Wiki for two reasons. First, grounding labels from the component datasets into Wikipedia entities is non-trivial due to language ambiguity. For example, 'Tornado' can be a weather phenomenon or a type of airplane (Panavia Tornado). To reduce such ambiguity in the grounding, we take multiple steps to refine the labels, including the use of human annotators, a state-of-the-art textual entity linking system [13], and heavy filtering. Second, creating unambiguous textual query intents is also challenging. In many cases, a text query can lead to multiple plausible answers (e.g. of various granularities), and a human often needs to make revisions to make sure no other objects could be correct answers. For our training and development/test sets we rely on semi-automatic processing, but additionally introduce a gold evaluation set, for which annotators thoroughly corrected entity linking errors and rewrote ambiguous input query intents.
Based on Oven-Wiki, we examine two representative multi-modal pre-trained models, PaLI [9] and CLIP [35], to establish an empirical understanding of the state-of-the-art in universal entity recognition. Particularly, these two models are used for creating an auto-regressive visual entity recognition model (similar to [13]) and a visual entity retrieval model, respectively. Our study suggests that there is a large room for improvement in generalizing to the massive label space. We show that the PaLI-based auto-regressive visual recognition model performs surprisingly well, even on Wikipedia entities that have never been seen during fine-tuning. Digging deeper, we discover that CLIP variants and PaLI-based models make very different kinds of errors. Particularly, PaLI dominates in recognizing popular Wikipedia entities, whereas CLIP models can win consistently on recognizing tail entities.
## 2 Open Domain Visual Entity Recognition
To drive progress in universal entity recognition, we propose the task of Open-domain Visual Entity recognitioN (Oven). There are two desiderata that we would like to meet for the Oven task. First, there should exist a universal label space. In Oven, we make use of a multi-modal knowledge base, such as Wikipedia, to serve as the universal label space, covering millions of entities. Second, the answer label for each Oven input should be unambiguous. This is particularly challenging when the label space is very large and multi-granular. To accomplish this, Oven makes use of input text queries to define the recognition intent (_e.g._, identifying car types or car models), allowing visual concepts from different granularities to be unambiguously specified.
**Task Definition** The input to an Oven model is an image-text pair \(x=(x^{p},x^{t})\), with the text query \(x^{t}\) expressing intent with respect to the corresponding image \(x^{p}\). Given a unified label space \(\mathcal{E}\) which defines the set of all possible entities, the knowledge base \(\mathcal{K}=\{(e,p(e),t(e))\mid e\in\mathcal{E}\}\) is a set of triples, each containing an entity \(e\), its corresponding text description \(t(e)\) (_i.e._, name of the entity, description, etc.) and a (possibly empty) set of relevant
images \(p(e)\). For instance, an entity \(e=\texttt{Q7395937}\) would have a corresponding textual description \(t(e)=\texttt{Name: Sabatia campestris}\); Description:...:2 and a set \(p(e)\) containing one or more images from the corresponding Wikipedia page3 of Sabatia Campestris. We consider the combination of \(t(e)\) and \(p(e)\) the _multi-modal knowledge_ for the entity \(e\). As Oven is a recognition task, we focus on recognizing and linking entities that are _physically_ present in the image.4
Footnote 2: In this paper, we only consider using the name of the entity as its textual representation, despite the fact that more textual descriptions are available.
Footnote 3: [https://en.wikipedia.org/wiki/File:Sabatia_campestris_Arkansas.jpg](https://en.wikipedia.org/wiki/File:Sabatia_campestris_Arkansas.jpg)
Footnote 4: Extending this framework to entities that are not physically present in the image (e.g. the inventor of the airplane) is also valid and useful. See a follow-up works [10] for more details.
The goal of learning for Oven is to optimize a function \(f_{\Theta}\) that predicts the entity \(e\) from a given test example \(x=(x^{p},x^{t})\) and the associated knowledge base of triples \(\mathcal{K}\). There are different ways to utilize the information available in \(\mathcal{K}\), and models may choose to use only a subset of this information. Figure 2 presents two typical ways of modeling Oven. For encoder-decoder models [9, 55], the most straightforward utilization is to memorize the entities of the database \(\mathcal{K}\) into model parameters \(\Theta\) via pre-training and fine-tuning, and then _generate_ entity names directly during inference. Given that the generated name might not appear in the database, BM25 is used to map the prediction to the entity with the closest name in the available database. For dual-encoder models [8, 17, 22, 35], an alternative is to explicitly compare a given test example \(x\) to representations of entities \(e\in\mathcal{E}\), making the prediction an _entity retrieval_ problem. We refer to Section 4 for concrete examples of how to implement Oven models.
**Data Split and Evaluation** Due to Oven's goal of evaluating pre-trained multi-modal models, we only provide a partial set of visual concepts (_i.e._, seen categories) for model training or fine-tuning. For evaluation, an Oven model is tested on generalization to entities not present in the fine-tuning data (thus unseen), without forgetting the seen concepts. The models need to either acquire information from the knowledge base, or make a prediction using knowledge obtained during pretraining. We evaluate Oven with a metric aiming to balance performance between seen and unseen entities using a harmonic mean, as shown below:
\[\texttt{hm}(\texttt{Acc}_{\texttt{seen}},\texttt{Acc}_{\texttt{UNseen}})=2 \ /\ (\frac{1}{\texttt{Acc}_{\texttt{seen}}}+\frac{1}{\texttt{Acc}_{\texttt{UNseen}}}) \tag{1}\]
Harmonic mean equally weighs the importance of the seen and unseen subsets, and penalizes models with a short barrel. Further details are provided in §3.
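For clarity, a minimal sketch of the resulting metric is shown below (a hypothetical helper of ours, not code released with the dataset): the seen/unseen accuracies are combined per Equation 1, and, as detailed in Section 3, the Entity-split and Query-split scores are in turn combined with another harmonic mean.

```python
# A minimal sketch of the Oven evaluation metric (hypothetical helper).
def harmonic_mean(x, y):
    return 0.0 if x == 0 or y == 0 else 2.0 / (1.0 / x + 1.0 / y)

def oven_score(acc_es_seen, acc_es_unseen, acc_qs_seen, acc_qs_unseen):
    acc_es = harmonic_mean(acc_es_seen, acc_es_unseen)   # Entity split
    acc_qs = harmonic_mean(acc_qs_seen, acc_qs_unseen)   # Query split
    return harmonic_mean(acc_es, acc_qs)                 # overall score
```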
**Oven versus recognition benchmarks** Given that an Oven model needs to generalize to unseen entities, it is required to predict over all KB entities, which can exceed 6 million in our experiments (_e.g._, the size of English Wikipedia). This is orders of magnitude larger than existing benchmarks. Second, the large label space has made the generalization to unseen entities the most critical criterion for a successful Oven model, which also allows future open-domain evaluation5. Third, Oven requires models to do multi-modal reasoning, _i.e._, comprehending the text query within its visual context, to predict the answer entity.
Footnote 5: One can collect and label a new set of entities from Wikipedia, to serve as a VQA task because its input format is the same as that of standard VQA models (_e.g._, text query + image). However, Oven is specialized and focuses solely on recognition, with the text input serving mainly for intent disambiguation. Moreover, Oven models are required to generate the name of an entity that exists in a given KB (like models for text entity linking tasks), while VQA models output free-form answers (such as yes/no for verification questions and numbers for counting questions).
**From Oven to Knowledge-Intensive VQA** Although this paper aims to evaluate pre-trained multi-modal models on universal visual entity recognition, we highlight that models that excel at Oven can serve as foundational components for systems that can answer knowledge-intensive questions. For example, given an image and a question "When was the church built?", one could apply an Oven model to link the image to a concrete church's Wikipedia page and then extract the answer from that document. A follow-up work has conducted a thorough study on the value of Wikipedia grounding for answering knowledge-intensive visual questions [10].
## 3 The Oven-Wiki dataset
Figure 2: **Illustration on two Oven Models.**

Based on the task formulation of Oven, we create the Oven-Wiki dataset by combining 14 existing datasets, grounding their labels to Wikipedia, resolving label ambiguities, and providing unambiguous textual query intents for all examples. The 14 datasets were originally created for image recognition/retrieval, and visual question answering. Below is the complete list:
* **Image Recognition Datasets**: ImageNet21k-P [39, 41], iNaturalist2017 [51], Cars196 [24], SUN397 [58], Food101 [2], Sports100 [19], Aircraft [30], Oxford Flower [34], Google Landmarks v2 [56].
* **Visual QA Datasets**: VQA v2 [20], Visual7W [66], Visual Genome [25], OK-VQA [32], Text-VQA [48].
These datasets belong to two groups: image recognition (or retrieval) which provides _diverse visual entities_, defined as the **Entity Split** (ES); and VQA which provides _visually-situated natural language queries_, defined as the **Query split** (QS). For examples that originate from VQA datasets, we employ human annotators to write templated rules and filter out questions that do not lead to visual entity answers that are present in the image. For examples from recognition datasets, we first extract the super-category of their label (using the Wikipedia database), and then apply a templated query generation engine to generate a query with unambiguous intent that leads to the label (details in the Appendix).
**Label Disambiguation and Human Annotation** Grounding the labels of 14 datasets to Wikipedia entities is challenging, and we perform the following steps to accomplish this. We first apply a state-of-the-art textual entity linking system [13] to recognize text labels and map them into Wikipedia. Human annotators are used to write rules that detect bad linking results or unlinkable labels (e.g. numbers), and to correct entity linking errors. The union of original dataset labels was linked to 20,549 unique Wikipedia entities, each with a number of examples for training and evaluation. Meanwhile, we construct the candidate label space using the English Wikipedia snapshot from _Oct. 1 2022_, by removing all disambiguation, redirect, and media file pages. As shown in Figure 1 (right), this left us with 6,063,945 Wikipedia entities in total. Note that we only consider the first Infobox image [57] from each page to serve as the visual support for each Wikipedia entity; these are available for 2,032,340 entities.
We further perform human annotation to create a high-quality evaluation dataset. Specifically, we hired over 30 dedicated annotators to validate the entity links in \(<\)image, query, answer\(>\) triplets sampled from the test split. They were asked to re-annotate the triplets with access to the visual context, ensuring that the query leads to the correct Wikipedia entity answer. Through this process, we collected 24,867 natural language queries, equally distributed over triplets originally sampled from the Entity and Query splits (_i.e._, test splits). We asked the annotators to rewrite the queries so that no other object in the image could be a valid answer. As a result, the percentage of unique queries among the total examples (17,669 out of 24,867), as shown in Figure 3 (mid), is significantly higher in the human set than in the other sets. This poses greater query generalization challenges for the human evaluation set. We report results using the same evaluation metrics on the human data, with respect to seen and unseen entities. Figure 1 provides a glance at the human annotated data.
**Dataset Statistics** Figure 3 (left) presents the general distribution of the super-categories for our final collection of Wikipedia entities that have positive examples. Figure 3 (right) shows detailed statistics for queries and entities for each of the fine-tuning (train), validation, test, and human splits. Note that the models do not know which entities are present in the val/test/human set, and must scan through the whole KB to make predictions. The # of seen/unseen examples indicates the # of examples whose positive entity labels fall in the seen/unseen split.
**Evaluation Details** As aforementioned, we evaluate models by asking them to predict one out of over 6 million English Wikipedia entries. While our data does not cover all 6 million labels as positive examples, models still need to consider all possible outputs due to the presence of unseen entities. We measure the models' performance using both the Entity Split (ES) and Query Split (QS). Specifically, we first compute the harmonic mean of accuracy over examples from the seen and unseen classes, as \(\texttt{Acc}_{\texttt{ES}}=\texttt{hm}(\texttt{Acc}_{\texttt{ES,seen}},\texttt{Acc}_{\texttt{ES,unseen}})\) and \(\texttt{Acc}_{\texttt{QS}}=\texttt{hm}(\texttt{Acc}_{\texttt{QS,seen}},\texttt{Acc}_{\texttt{QS,unseen}})\), following Equation 1. Then we further calculate the harmonic mean between splits, \(\texttt{hm}(\texttt{Acc}_{\texttt{ES}},\texttt{Acc}_{\texttt{QS}})\), to reward models that do well on both splits. We use the validation data, which contains examples from subsets of both seen and unseen entities, for model
Figure 3: Dataset Statistics of the Oven-Wiki. **Left:** Distribution of super-categories of entities that have positive examples (See Appendix for more details). **Mid:** Statistics of different splits of the Oven-Wiki. **Right:** Properties of the Wikipedia dump-2022/10/01.
selection, and we measure performance on the test split and the human evaluation set.
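Combining the pieces, the overall metric can be computed from the four subset accuracies as sketched below, reusing the `harmonic_mean` helper from the earlier sketch; the dictionary keys are our own naming, and the example numbers are PaLI-17B's validation results from Table 1.

```python
def overall_score(acc: dict) -> float:
    """Final metric: hm over splits, each split being hm over seen/unseen accuracies."""
    acc_es = harmonic_mean(acc["entity_seen"], acc["entity_unseen"])
    acc_qs = harmonic_mean(acc["query_seen"], acc["query_unseen"])
    return harmonic_mean(acc_es, acc_qs)

# PaLI-17B validation accuracies (in percent) from Table 1:
print(overall_score({
    "entity_seen": 30.6, "entity_unseen": 12.4,
    "query_seen": 44.2, "query_unseen": 22.4,
}))  # ~22.1, matching the reported overall harmonic mean up to rounding
```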
## 4 Fine-tuning Pre-trained Models for Oven
We evaluate two prominent pre-trained multi-modal models: CLIP [35], a widely-used dual encoder model for image and text, and PaLI [9], a state-of-the-art pre-trained encoder-decoder model. Figure 2 illustrates at a high level how encoder-decoder and dual encoder models can address the Oven task. In the following, we describe in more detail how these two models can be fine-tuned for Oven.
### Dual encoders: CLIP and its variants for Oven
One can naturally apply CLIP to Oven by treating it as an image-to-text retrieval task. For an input image \(x^{p}\), the image encoder is used to form an image embedding. The predicted entity can then be retrieved by finding, over the entire entity database, the entity whose name (text) embedding has the maximum dot product with this image embedding. However, this naive implementation ignores the input intent \(x^{t}\) and the entity images \(p(e)\).
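A minimal sketch of this vanilla CLIP baseline is shown below, using the open-source `clip` package; the checkpoint name, image path, and three-entity candidate list are placeholders, and in practice the ~6M entity-name embeddings would be pre-computed offline.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-L/14", device=device)

entity_names = ["BAe 146", "Dornier 328", "ATR 42"]  # stand-in for ~6M Wikipedia titles

with torch.no_grad():
    # Encode the query image.
    image = preprocess(Image.open("query.jpg")).unsqueeze(0).to(device)
    image_emb = model.encode_image(image)
    image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)

    # Encode every entity name (pre-computed offline at full scale).
    text = clip.tokenize(entity_names).to(device)
    text_emb = model.encode_text(text)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)

# Predicted entity = maximum dot product between the image and entity-name embeddings.
scores = image_emb @ text_emb.T
print(entity_names[scores.argmax().item()])
```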
In the following, we present two variants of CLIP: CLIP Fusion and CLIP2CLIP. The goal of these two variants is to use all of the information provided in the Oven task. Both variants learn a function \(f_{\Theta}\) that maximizes the score of the target entity for the given input image-query pair, using multimodal knowledge from the knowledge base. Given a test example \(x=(x^{p},x^{t})\) and the knowledge base of triples \(\mathcal{K}\), the function is used to make a prediction,
\[e^{\prime}=\operatorname*{arg\,max}_{e\in\mathcal{E}}f_{\Theta}(x^{p},x^{t},p (e),t(e)) \tag{2}\]
**CLIP Fusion** adopts the pre-trained CLIP model as the featurizer and adds a 2-layer Multi-Modal Transformer on top of the CLIP image and text features as a mixed-modality encoder. The left encoder (for an input image-query pair) and the right encoder (for multi-modal knowledge information) use the same architecture, but do not share parameters. We fine-tune all parameters on Oven-Wiki, including both the pre-trained CLIP weights and the randomly initialized Transformer weights.
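A rough PyTorch sketch of the CLIP Fusion idea follows; the hidden size, number of heads, pooling, and training objective are our assumptions, and only the overall shape (a small 2-layer Transformer fused over CLIP features on each side, with no parameter sharing between sides) reflects the description above.

```python
import torch
import torch.nn as nn

class FusionEncoder(nn.Module):
    """2-layer multi-modal Transformer over pre-extracted CLIP image/text features."""
    def __init__(self, dim: int = 768):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, clip_image_feat, clip_text_feat):
        # Treat the two CLIP features as a length-2 token sequence and fuse them.
        tokens = torch.stack([clip_image_feat, clip_text_feat], dim=1)
        fused = self.fusion(tokens)
        return fused.mean(dim=1)            # pooled embedding used for retrieval

query_encoder = FusionEncoder()    # encodes the (input image, text query) pair
entity_encoder = FusionEncoder()   # encodes (entity image, entity name); no weight sharing

q = query_encoder(torch.randn(4, 768), torch.randn(4, 768))
e = entity_encoder(torch.randn(4, 768), torch.randn(4, 768))
scores = q @ e.T                   # fine-tuned so the gold entity receives the highest score
```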
**CLIP2CLIP** relies more heavily on the pre-trained CLIP model and introduces only a minimal set of new parameters (_i.e._, four) to re-weigh and combine CLIP similarity scores. In particular, it computes the cosine similarities between \(<\!x^{p},t(e)\!>\), \(<\!x^{t},p(e)\!>\), \(<\!x^{p},p(e)\!>\), and \(<\!x^{t},t(e)\!>\), using the image and text encoders of CLIP, respectively. It then aggregates these similarities by multiplying them with a learnable vector that reflects importance weights.
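The re-weighting step can be sketched as below; the learnable 4-dimensional weight vector corresponds to the minimal set of new parameters mentioned above, while all CLIP embeddings are assumed to be pre-computed and L2-normalised (tensor shapes are illustrative).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CLIP2CLIPScorer(nn.Module):
    """Combine the four CLIP cosine similarities with a learnable 4-dim weight vector."""
    def __init__(self):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(4))  # the only newly introduced parameters

    def forward(self, x_img, x_txt, e_img, e_txt):
        # Inputs are L2-normalised CLIP embeddings: query image/text, entity image/text.
        sims = torch.stack([
            (x_img * e_txt).sum(-1),   # <x^p, t(e)>
            (x_txt * e_img).sum(-1),   # <x^t, p(e)>
            (x_img * e_img).sum(-1),   # <x^p, p(e)>
            (x_txt * e_txt).sum(-1),   # <x^t, t(e)>
        ], dim=-1)
        return (sims * self.weights).sum(-1)  # one scalar score per candidate entity

scorer = CLIP2CLIPScorer()
x_img = F.normalize(torch.randn(512), dim=-1)
x_txt = F.normalize(torch.randn(512), dim=-1)
cand_img = F.normalize(torch.randn(100, 512), dim=-1)   # 100 candidate entities
cand_txt = F.normalize(torch.randn(100, 512), dim=-1)
scores = scorer(x_img, x_txt, cand_img, cand_txt)        # shape: (100,)
print(scores.argmax().item())
```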
**Scaling to 6 million candidates.** It is expensive to perform dot product scoring with respect to 6 million webpages on-the-fly. Fortunately, there exist approximate algorithms for maximum inner product search whose running time and storage space scale sub-linearly with the number of documents [38, 46, 47]. In all our experiments, we use ScaNN [21] as our library for entity retrieval.
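For reference, the exact maximum inner product search that such libraries approximate can be sketched as below; we deliberately avoid reproducing a specific ScaNN configuration and instead show a chunked brute-force top-k, which an approximate index built offline would replace at the 6M-entity scale.

```python
import torch

def retrieve_topk(query_emb, entity_embs, k=10, chunk=1_000_000):
    """Exact MIPS over a large entity matrix, processed in chunks to bound memory."""
    best_scores, best_ids = [], []
    for start in range(0, entity_embs.shape[0], chunk):
        block = entity_embs[start:start + chunk]
        scores = query_emb @ block.T                      # dot products for this chunk
        s, i = scores.topk(min(k, block.shape[0]))
        best_scores.append(s)
        best_ids.append(i + start)
    scores = torch.cat(best_scores)
    ids = torch.cat(best_ids)
    order = scores.topk(min(k, scores.numel())).indices   # merge the per-chunk winners
    return ids[order], scores[order]

entity_embs = torch.randn(50_000, 512)    # stand-in for ~6M precomputed embeddings
query_emb = torch.randn(512)
ids, scores = retrieve_topk(query_emb, entity_embs, k=5)
print(ids.tolist())
```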
### Encoder-Decoder: PaLI for Oven
PaLI [9] is a sequence-to-sequence model pre-trained on web text, image-text pairs (_e.g._, WebLI) and other sources. PaLI can accept both an image and text as input and generates text as output. In order to map the PaLI predictions to the knowledge base, we run a BM25 [40] model to retrieve the most similar Wikipedia entity name for every generated text output. We found that this slightly but consistently improves the entity recognition results. Note that we directly fine-tune PaLI on the Oven training data, which does not cover all entities and questions appearing in our Dev and Test splits. However, we found that PaLI is still able to handle entities that are unseen during fine-tuning due to the knowledge acquired during pre-training. To make the comparison with CLIP more comprehensive, we report results on both PaLI-3B and PaLI-17B. The former PaLI variant is of the same order of magnitude (in number of parameters) as the largest CLIP model, while the latter is an order of magnitude larger and much stronger based on other evaluations [9].
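The BM25 post-processing step can be sketched with the `rank_bm25` package as below; the candidate entity names and the generated string are placeholders, and the original implementation may differ in tokenisation details.

```python
from rank_bm25 import BM25Okapi

entity_names = ["BAe 146", "Dornier 328", "ATR 42", "Barrel racing"]  # stand-in for the KB
bm25 = BM25Okapi([name.lower().split() for name in entity_names])

def map_to_kb(generated_text: str) -> str:
    """Snap a free-form generated answer to the most similar Wikipedia entity name."""
    scores = bm25.get_scores(generated_text.lower().split())
    return entity_names[int(scores.argmax())]

print(map_to_kb("bae 146 aircraft"))  # -> "BAe 146"
```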
## 5 Experiments
We first describe the essential experimental setups in §5.1, and then present the main benchmark results in §5.2.
### Experimental Setups
**Pre-trained Model Details.** For all the CLIP variants, we employ the largest CLIP checkpoint, _i.e._, ViT-L14, which leverages a Vision Transformer [16, 52] as its visual backbone. For the PaLI model [9], we make use of the 3B and 17B parameter pre-trained models provided by the original authors, for fine-tuning on Oven.
**Data Processing Details.** We process all images in our dataset by resizing them to 224\(\times\)224, linearizing them into a sequence of 14\(\times\)14 patches, and applying the normalization used in each model's pretraining. For natural language text, we perform tokenization based on the adopted pre-trained model's original vocabulary. For CLIP variants that encode Wikipedia images for entity retrieval, we apply the same image processing pipeline whenever the image is available. When a Wikipedia entity does not have an infobox image, we use a black image to represent the visual support.
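A sketch of this preprocessing pipeline with torchvision is given below; the normalisation constants shown are CLIP's published statistics and serve only as an example of the model-specific values mentioned above, and the patching simply reshapes each 224\(\times\)224 image into a 14\(\times\)14 grid of 16\(\times\)16-pixel patches.

```python
import torch
from torchvision import transforms

CLIP_MEAN = (0.48145466, 0.4578275, 0.40821073)   # CLIP's published statistics,
CLIP_STD = (0.26862954, 0.26130258, 0.27577711)   # shown here purely as an example

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=CLIP_MEAN, std=CLIP_STD),
])

def patchify(image_tensor: torch.Tensor, patch: int = 16) -> torch.Tensor:
    """Turn a (3, 224, 224) image into a (196, 768) sequence of 14x14 patches."""
    c, h, w = image_tensor.shape
    patches = image_tensor.unfold(1, patch, patch).unfold(2, patch, patch)
    return patches.permute(1, 2, 0, 3, 4).reshape(-1, c * patch * patch)

dummy = torch.rand(3, 224, 224)
print(patchify(dummy).shape)   # torch.Size([196, 768])
```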
**Main Results** Results on the validation set are presented in Table 1, and include performance on the Entity and Query splits, as well as the overall combined scores.
There are several interesting (perhaps surprising) observations from Table 1. First, while CLIP variants such as CLIP Fusion and CLIP2CLIP utilize more information from Wikipedia (_i.e_., entity names and entity images), they are weaker than the auto-regressive PaLI-3B and PaLI-17B models across most evaluation data splits. This suggests that high-capacity generative multi-modal pre-trained models are capable of recognizing visual entities. Second, this performance gap is more apparent on the query split than on the entity split, potentially due to the VQ2A pre-training objectives [7] and the underlying powerful language models [36] employed by the PaLI model.
Comparing all CLIP-based models, we observe that CLIP Fusion and CLIP2CLIP, which use all Wikipedia information, generally perform better than the vanilla CLIP model, showcasing the benefits of multimodal information from Wikipedia. Meanwhile, we also observe that CLIP Fusion, where two new layers are added on top of pre-trained CLIP, shows very strong results on seen entities for both the Entity and the Query splits, but weak results on unseen entities, thus leading to lower overall performance. The CLIP2CLIP model, on the other hand, is capable of retaining cross-entity generalization while improving its prediction accuracy on seen entities.
Comparing the PaLI models, we observe a drastic improvement as the number of parameters increases. In particular, PaLI-17B has a double-digit gain in overall performance over the PaLI-3B model. This suggests that scaling model capacity is one of the most important factors and should be considered a top priority in future multi-modal research.
**Results on Human Set and Human Performance.** Table 2 shows that the results on the test set and the human set are generally aligned with observations on the validation set. We conduct a study to estimate human performance on Oven-Wiki by asking 3 dedicated human annotators to answer 100 examples (sampled from the human evaluation set; answers are non-overlapping). We allow the annotators to use search engines (_e.g._, Google Image Search, Wikipedia Search, etc.)7, as long as they can provide a valid Wikipedia entity name as the answer. In this study, humans achieve 77.7% harmonic mean accuracy, which is
\begin{table}
\begin{tabular}{l r r r r r r r r} \hline \hline & & \multicolumn{3}{c}{Entity Split\({}_{\texttt{(Dev)}}\)} & \multicolumn{3}{c}{Query Split\({}_{\texttt{(Dev)}}\)} & \multicolumn{1}{c}{Overall\({}_{\texttt{(Dev)}}\)} \\ & \# Params & seen & unseen & hm & seen & unseen & hm & hm \\ \hline
**Dual Encoders:** & & & & & & & & \\ \(\bullet\) CLIP\({}_{\texttt{ViTL14}}\) & 0.42B & 5.4 & 5.3 & 5.4 & 0.8 & 1.4 & 1.0 & 1.7 \\ \(\bullet\) CLIP Fusion\({}_{\texttt{ViTL14}}\) & 0.88B & 32.7 & 4.3 & 7.7 & 33.4 & 2.2 & 4.2 & 5.4 \\ \(\bullet\) CLIP2CLIP\({}_{\texttt{ViTL14}}\) & 0.86B & 12.6 & 10.1 & 11.2 & 4.1 & 2.1 & 2.8 & 4.4 \\
**Encoder Decoder:** & & & & & & & & \\ \(\bullet\) PaLI-3B & 3B & 21.6 & 6.6 & 10.1 & 33.2 & 14.7 & 20.4 & 13.5 \\ \(\bullet\) PaLI-17B & 17B & 30.6 & 12.4 & 17.6 & 44.2 & 22.4 & 29.8 & 22.1 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison between the fine-tuned models on the Oven-Wiki **validation** set.
\begin{table}
\begin{tabular}{l r r r r r r r r r} \hline \hline & & \multicolumn{2}{c}{Entity Split\({}_{\texttt{(Test)}}\)} & \multicolumn{2}{c}{Query Split\({}_{\texttt{(Test)}}\)} & \multicolumn{1}{c}{Overall\({}_{\texttt{(Test)}}\)} & \multicolumn{3}{c}{Human Eval} \\ & \# Params & seen & unseen & seen & unseen & hm & seen & unseen & hm \\ \hline
**Dual Encoders:** & & & & & & & & \\ \(\bullet\) CLIP\({}_{\texttt{ViTL14}}\) & 0.42B & 5.6 & 4.9 & 1.3 & 2.0 & 2.4 & 4.6 & 6.0 & 5.2 \\ \(\bullet\) CLIP Fusion\({}_{\texttt{ViTL14}}\) & 0.88B & 33.6 & 4.8 & 25.8 & 1.4 & 4.1 & 18.0 & 2.9 & 5.0 \\ \(\bullet\) CLIP2CLIP\({}_{\texttt{ViTL14}}\) & 0.86B & 12.6 & 10.5 & 3.8 & 3.2 & 5.3 & 14.0 & 11.1 & 12.4 \\
**Encoder Decoder:** & & & & & & & & & \\ \(\bullet\) PaLI-3B & 3B & 19.1 & 6.0 & 27.4 & 12.0 & 11.8 & 30.5 & 15.8 & 20.8 \\ \(\bullet\) PaLI-17B & 17B & 28.3 & 11.2 & 36.2 & 21.7 & 20.2 & 40.3 & 26.0 & 31.6 \\ \hline
**Human+Search**6 & - & - & - & - & - & - & 76.1 & 79.3 & 77.7 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results of methods on the Oven-Wiki **test** set and **human evaluation** set. Human+Search represents human performances with information retrieval tools such as search engines and others, on a random subset of \(\textsc{Oven-Wiki}_{\texttt{Human\_Eval}}\).
significantly higher than the best comparison systems shown in Table 2.
## 6 Analysis
In this section, we perform empirical studies to analyze the pre-trained CLIP2CLIP and PaLI models and examine these two models' common errors.
**Does fine-tuning always help generalization?** Figure 4 presents the validation scores of the PaLI model (left) and the CLIP2CLIP model (right) during fine-tuning on Oven-Wiki's training split. It shows that a longer training schedule does not lead to better generalization, particularly when evaluated on the unseen entities. Because of this, we employ an early stopping strategy for model selection and pick the model with the best harmonic-mean combined score on the validation set. However, due to this early stopping strategy, both fine-tuned models do not utilize 100% of the examples in Oven's training data, because their unseen performance starts to degenerate within one epoch. This indicates that more advanced fine-tuning strategies, using better regularization techniques to encourage generalization across Wikipedia entities, could be a promising direction to explore in the future.
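Schematically, this model selection amounts to early stopping on the combined validation score; the sketch below uses placeholder callables for training, evaluation, and checkpointing, and the patience value is an arbitrary choice rather than the setting used in our experiments.

```python
from typing import Callable, Iterable

def fine_tune_with_early_stopping(
    train_batches: Iterable,             # yields fine-tuning batches
    train_fn: Callable,                  # performs one update on a batch
    eval_fn: Callable[[], float],        # returns the hm-combined validation score
    save_fn: Callable[[], None],         # checkpoints the current model
    eval_every: int = 1000,
    patience: int = 5,
) -> float:
    best, bad = 0.0, 0
    for step, batch in enumerate(train_batches):
        train_fn(batch)
        if step % eval_every == 0:
            score = eval_fn()
            if score > best:
                best, bad = score, 0
                save_fn()                # keep the best validation checkpoint
            else:
                bad += 1
                if bad >= patience:      # unseen accuracy has started to degenerate
                    break
    return best
```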
**How would the number of entities in the KB influence the model's prediction?** Figure 5 presents the accuracy of CLIP2CLIP as a function of the total # of candidates to retrieve from. Here, we compute the accuracy by sub-sampling the negative candidates from the KB to different sizes. We observe that when the retrieval candidates are only the positive entities (with the # of candidates being 20K), the performance of the CLIP2CLIP model is significantly higher than in the open-domain setting (with 6M entities in total). Beyond this, as the KB size increases, model accuracy decreases; concretely, it shows an approximately linear decline along the log-scale x-axis in Figure 5. This indicates that as the KB size increases, the model's accuracy first drops significantly and then declines gradually. On the other hand, PaLI's performance is generally more steady as the size of the KB grows, potentially because its prediction has already been matched to entity names inside the KB, so narrowing down the set of candidates does not help the BM25 post-processing. One potential direction is to employ constrained decoding for the PaLI-based model, which we leave for future work.
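Constrained decoding, mentioned above only as a potential direction and not evaluated in this paper, would restrict generation to token sequences that spell out a valid entity name, for instance via a prefix trie over tokenised entity names:

```python
class PrefixTrie:
    """Prefix trie over tokenised entity names, used to mask invalid next tokens."""
    def __init__(self, token_sequences):
        self.root = {}
        for seq in token_sequences:
            node = self.root
            for tok in seq:
                node = node.setdefault(tok, {})

    def allowed_next(self, prefix):
        node = self.root
        for tok in prefix:
            if tok not in node:
                return []            # prefix does not spell any entity name
            node = node[tok]
        return list(node.keys())     # tokens that keep the prefix valid

# Toy token IDs standing in for a real tokenizer's output over three entity names.
trie = PrefixTrie([[7, 3, 9], [7, 4], [2, 8]])
print(trie.allowed_next([7]))    # [3, 4] -> only continuations of valid entity names
print(trie.allowed_next([5]))    # []    -> this prefix can be pruned during beam search
```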
**How would models perform on head vs. tail entities?** We evaluate the visual entity recognition performance of CLIP2CLIP and PaLI on entities of different popularity. Specifically, Figure 6 presents a histogram of model performance over entities with different average monthly Wikipedia page views in 2022 [31]. From the comparison, we can see that PaLI is significantly more accurate than CLIP2CLIP on the head entities (those with more than 5K monthly page views). However, we observe that CLIP2CLIP can perform on par with or even outperform PaLI on tail-ish entities (those with less than 2.5K monthly views). This suggests that the retrieval-based visual entity recognition model has its own advantages in recognizing difficult and tail entities. Meanwhile, this result also hints that a frequency-calibrated evaluation could be developed to better reward models with strong recognition capability on the tail entities.
**Error analysis** To better understand the errors that the CLIP2CLIP and PaLI models make, we sampled 100 random examples from the human evaluation set and manually categorized and analyzed the errors of both models. In particular, we categorize the errors of the pre-trained models into four categories: (a) erroneous but relevant predictions, on concepts of the same granularity; (b) errors due to predicting very generic concepts; (c) errors due to misunderstanding the intent behind the query; and (d) other miscellaneous errors. Note that errors of type (d) are mostly mistakes that are unrelated and not easily interpretable. The results are shown in Table 3. Table 4 provides some concrete examples of the above types of mistakes made by CLIP2CLIP and PaLI. Interestingly, it shows that the two models, _i.e._, CLIP2CLIP and PaLI, make very different types of errors in their predictions. In particular, the CLIP-based model is good at capturing the right granularity of the entity, but often fails to understand the true intent of the text query. For instance, Table 4 (c) shows that CLIP2CLIP ignores the text query and starts to predict the name of the barrel racer. In contrast, PaLI is good at following the text query, but often predicts generic concepts when it does not know the answer confidently (see Table 4 (b)).
## 7 Related Works
**Learning to Recognize unseen Categories** There has been a significant amount of prior work [26, 28, 53] focusing on the generalization situation where information of
\begin{table}
\begin{tabular}{l r r} \hline \hline & \multicolumn{1}{c}{PaLI-17B} & \multicolumn{1}{c}{CLIP2CLIP} \\ \hline Correct & 29\% & 15\% \\ Incorrect & 71\% & **85\%** \\ \(\rightarrow\) (a) Wrong But Relevant & 23\% & 27\% \\ \(\rightarrow\) (b) Too Generic & 15\% & 1\% \\ \(\rightarrow\) (c) Misunderstand Query & 7\% & 37\% \\ \(\rightarrow\) (d) Miscellaneous & 24\% & 20\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: Error type distribution for different models. PaLI more often predicts answers with less granularity (too generic), while most of the CLIP errors are due to not understanding the questions.
novel categories is presented at test time. Zero-shot learning (ZSL) is one such attempt, tackling the learning of new categories with zero training images. To achieve such transfer, ZSL methods typically rely on generating classifiers for unseen categories from corresponding semantic representations, in the form of manually labeled attributes [26], unsupervisedly learned word vectors [6], or pre-trained sentence embeddings [23, 35]. Few-shot learning (FSL) [53] proposes a more realistic setup, where learners have access to a limited number of visual exemplars during model deployment. With this goal, FSL methods aim to extract the inductive bias of learning from the seen classes, such that the model can leverage it when learning the unseen classes, to avoid severe over-fitting. In particular, prior works either use adapted non-parametric classifiers [42, 49, 61] or meta-optimized linear classifiers [18, 37] to incorporate the few-shot unseen support examples. Compared to them, our proposed task poses different challenges, as we ask the model to make the best use of open-world Web knowledge (_i.e._, Wikipedia pages with images & figures), which contains textual semantic information and the visual appearance of entities in the open world.
**Vision and Language + Knowledge** There have been efforts to incorporate knowledge into vision and language tasks, such as visual QA [5, 11, 32, 44] and entity-focused image captioning [1, 27]. Among them, knowledge-based VQA is most related to Oven, but it also differs in many aspects. Specifically, [5] presents a text QA dataset that requires understanding multi-modal knowledge in a KB. [44] propose knowledge-based question answering tasks centered around questions that resolve relational queries over public figures. Meanwhile, [32] propose to answer questions whose answers lie outside of the image context, to assess a model's capability in understanding real-world knowledge. More recently, [11] studies the zero-shot visual QA setting where
Figure 4: **Fine-tuning PaLI or CLIP2CLIP for large # of steps** increases the seen entity accuracy but hurts the unseen entity accuracy.
Figure 5: **Impact of # Wikipedia Candidates on PaLI and CLIP2CLIP. Increasing the size of Wikipedia makes the tasks difficult.**
Figure 6: **Comparison of Performances on Head vs. Tail Entities (results on Validation set).** PaLI wins over CLIP2CLIP on popular (_i.e._, high monthly page view) Wikipedia entities, but loses on rare (_i.e._, low monthly page view) Wikipedia entities.
some answers (out of a total of 500 frequent answers of general concepts) are unseen during training, and a KB is supplied to assist the model in producing these unseen answers. Compared to them, Oven steps back to the more fundamental problem of establishing the link between visual content and entities in the KB, but at a larger scale and with broader coverage. We believe that stronger models developed on Oven would benefit such knowledge-intensive visual QA tasks.
**Entity Linking** Entity linking (EL) is the task of grounding entity mentions in the text by linking them to entries in a given knowledge base. Supervised EL [33] has demonstrated
\begin{table}
\begin{tabular}{c c c c} \hline \hline Error Type & (a) Wrong but Relevant & (b) Too Generic & (c) Misunderstand Query \\ \hline Input Query & _What is the name of the model_ of this aircraft? & _What is the species of this animal?_ & _What sports event is displayed in the picture?_ \\ \hline Input Image & & & \\ \hline \hline WikiID: Q589498 & & WikiID: Q255496 & WikiID: Q2529836 \\ & Name: _Bae 146_ & Name: _Buterfly_ & Name: _Barrel racing_ \\ \hline \hline WikiID: Q937949 & & WikiID: Q13510645 & WikiID: Q*****4678 \\ & Name: _Dornier 328_ & Name: _Proteuxoa comma_ & Name: _E. W. (barrel racer)_ \\ \hline \hline WikiID: Q218637 & & WikiID: Q592001 & WikiID: Q2529836 \\ & Name: _ATR 42_ & Name: _Hoary comma_ & Name: _Barrel racing_ \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Visualization of mistakes made by the CLIP2CLIP and PaLI-17B Model.** We visualize the Wikipedia infobox images for each of model’s predictions, to provide more context about the visual similarity between the prediction/ground-truth and the input image. Correct predictions are marked as green, whereas incorrect predictions are marked as red. (!: Since no infobox image is available for this Wikipedia entity, a face-anonymized Web image of the entity is visualized for reference.)
its strong performance when all entities are in-distribution during the evaluation. Because the KB is updated all the time, recent works [3, 13, 15, 29, 64] focus on a more realistic setting where entity linking needs to be achieved zero-shot, with a large portion of the entities to be evaluated completely unseen during training. Oven is a visual analog of zero-shot EL, and targets developing generalizable models that recognize entities unseen during training. Among the EL literature, visually assisted EL [65] is most relevant to this work; its goal is to use the image associated with a text to improve the precision of text EL. Oven is different, as its text queries do not mention the names of the entities, which puts visual understanding and reasoning in the central position.
## 8 Discussion
In this paper, we have introduced Oven, a task that aims to unambiguously link visual content to the corresponding entities in a web-scale knowledge base (_i.e_., Wikipedia), covering a total of more than 6 million entities. To facilitate the evaluation of Oven, we created the Oven-Wiki dataset by combining and re-annotating 14 existing visual recognition, retrieval, and visual QA datasets, and by linking over 20K labels to Wikipedia entities. With Oven-Wiki, we evaluate state-of-the-art multi-modal pre-trained models, _i.e_., the CLIP [35]-based entity retrieval models and the PaLI [9]-based entity generation model, by fine-tuning them for the Oven task, to examine their capability of recognizing open-domain visual concepts. As a result, PaLI models present significantly stronger performance than the CLIP variants, even on visual entities unseen during fine-tuning. Meanwhile, although the CLIP-based entity retrieval model is overall weaker, it shows advantages in recognizing tail visual entities.
One additional nice property of Oven-Wiki is its strong extensibility. As a result of grounding all recognition labels to Wikipedia entities, we as a community can keep growing the member recognition datasets of Oven-Wiki by adding positive instances for Wikipedia entities that do not yet have examples. Moreover, successful Oven models can generalize to recognize emerging entities (_e.g._, iPhone 14 Pro), as long as the corresponding Wikipedia page is created. In summary, we hope Oven will drive future research on knowledge-infused multimodal representation learning via visual entity recognition.
## Ethics Statement
As our dataset, _i.e._, Oven-Wiki, is composed of existing image recognition, image retrieval, and visual question answering datasets, we introduce minimal risk of exposing additional social bias in our data. However, Oven-Wiki is still at risk of inheriting existing dataset biases. As a result, we employed existing data curation strategies [60] to reduce such potential risks. Beyond such risks, Oven-Wiki also opens up new possibilities that can alleviate ethical concerns in AI systems. Specifically, Oven-Wiki is a dataset that targets advancing research on establishing stronger grounding between visual content and a knowledge base, which can potentially contribute to building more attributed visual systems, such as a visual question answering model that produces answers based on the linked Wikipedia page, with improved interpretability and controllability.
## Acknowledgement
We thank Boqing Gong and Soravit Changpinyo for reviewing an early version of this paper in depth, with valuable comments and suggestions. We thank Xi Chen for providing different variants of PaLI pre-trained checkpoints. We also thank Radu Soricut, Anelia Angelova, Alan Ritter, Chao-Yuan Wu, and Jiacheng Chen for discussions and feedback on the project.
|
2301.08155 | AI Insights into Theoretical Physics and the Swampland Program: A
Journey Through the Cosmos with ChatGPT | In this case study, we explore the capabilities and limitations of ChatGPT, a
natural language processing model developed by OpenAI, in the field of string
theoretical swampland conjectures. We find that it is effective at paraphrasing
and explaining concepts in a variety of styles, but not at genuinely connecting
concepts. It will provide false information with full confidence and make up
statements when necessary. However, its ingenious use of language can be
fruitful for identifying analogies and describing visual representations of
abstract concepts. | Kay Lehnert | 2023-01-10T16:57:16Z | http://arxiv.org/abs/2301.08155v1 | AI Insights into Theoretical Physics and the Swampland Program: A Journey Through the Cosmos with ChatGPT
###### Abstract
In this case study, we explore the capabilities and limitations of ChatGPT, a natural language processing model developed by OpenAI. We find that it is effective at paraphrasing and explaining concepts in a variety of styles, but not at genuinely connecting concepts. It will provide false information with full confidence and make up statements when necessary. However, its ingenious use of language can be fruitful for identifying analogies and describing visual representations of abstract concepts.
Keywords: -- Cosmology -- Swampland Programme -- String Theory -- String-Theoretical Conjectures -- Swampland Conjecture -- de Sitter Conjecture -- Weak Gravity Conjecture -- ChatGPT -- Artificial Intelligence -- Natural Language Processing Model --
###### Contents
* I Introduction
* II Swampland Conjectures
* II.1 The Weak Gravity Conjecture
* II.2 de Sitter Conjecture
* III Outlook
* IV Conclusion
* V Disclaimer
## I Introduction
ChatGPT1 is a _chatbot_ by OpenAI launched on 30 November 2022 [1]. Five days later, it had over 1 million users [2]. It has since gained a lot of traction--social media is flooded with excerpts of conversations with ChatGPT that are often surprisingly human-like, funny, and insightful. Some scholars are concerned that the end of essays as a meaningful assessment tool might be near [3; 4], since ChatGPT can easily generate convincing essays, paragraphs, or chapters for any given topic. In this letter, we explore how well it copes with questions regarding the rather niche topic of string theoretical swampland conjectures. Swampland conjectures are an intriguing application of ChatGPT, because they synthesise different branches of theoretical physics: general relativity and quantum field theory in the framework of string theory, applied to cosmology. Despite the fact that they contain a heavy mathematical formalism, a lot of the work is still done on a conceptual level. This favours ChatGPT's strong language capabilities. We examine how strong ChatGPT's skills are when it comes to combining, mixing, and synthesising the relevant concepts. Furthermore, we explore how we can use ChatGPT to our advantage in research, outreach, and education.
Footnote 1: [https://chat.openai.com/chat](https://chat.openai.com/chat)
This letter presents excerpts from a conversation2 between the author and ChatGPT, which begins with broader and more general questions about the field of string theory, before narrowing it down to specific swampland conjectures3 and ChatGPT's _understanding_ of them. We attempt to highlight strengths and weaknesses of ChatGPT, present some use cases, and raise some red-flags.
Footnote 2: The full conversation can be found on [https://github.com/kaelehul/ChatGPT](https://github.com/kaelehul/ChatGPT).
Footnote 3: See e.g. Vafa [5], Palti [6] for an overview.
We start with the question 'Can you give me a definition of ChatGPT?'.4
Footnote 4: Throughout the text, ChatGPT’s responses are written in typewriter. ChatGPT is used in its December 15 version.
Footnote 4: Throughout the text, ChatGPT’s responses are written in typewriter. ChatGPT is used in its December 15 version.
[MISSING_PAGE_POST]
customer service chatbots, personal assistants, or educational chatbots. It can generate responses to user input based on the context of the conversation, allowing it to carry on a conversation in a way that is natural and human-like.
ChatGPT has a vast knowledge of a variety of subjects and can produce surprisingly elegant or creative explanations of concepts. It can be used to obtain definitions and explanations of concepts in different styles, e.g. to explain string theory to a high-school student:
String theory is a theoretical framework in physics that attempts to reconcile quantum mechanics and general relativity, two of the fundamental theories that describe the behavior of the universe. It is based on the idea that the fundamental building blocks of the universe are not particles, but rather tiny, vibrating strings.
According to string theory, the fundamental forces of nature (such as gravity, electromagnetism, and the strong and weak nuclear forces) are not caused by the exchange of particles, but rather by the vibration of these strings. Different patterns of vibration correspond to different particles and forces.
One of the key predictions of string theory is the existence of extra dimensions beyond the three dimensions of space and one dimension of time that we are familiar with. These extra dimensions may be hidden from our everyday experience, but they could potentially play a role in the behavior of particles and forces at very small scales.
Although string theory is still being developed and is not yet a fully developed theory, it has the potential to provide a deeper understanding of the fundamental nature of the universe and could potentially resolve some of the outstanding problems in physics.
Using ChatGPT's ability to impersonate a predefined persona can help the user to find the right words for her target audience. ChatGPT can be asked to rephrase a given paragraph in a given style, or can directly be asked to present the answer in the words of a character. The given answer regarding string theory covers all relevant points and is adequate for a high-school student. Furthermore, if there is no simple answer or the answer is less clear, an attempt will be made to provide one. As the following response demonstrates, ChatGPT's response to the question 'What is the most important equation in string theory?' is sound and even includes equations that frequently appear in string theory.
It is difficult to identify a single'most important' equation in string theory, as the theory is still being developed and is not yet a fully formulated and complete theory. String theory is a theoretical framework that attempts to reconcile quantum mechanics and general relativity, and it involves a wide range of mathematical concepts and equations.
One equation that is often associated with string theory is the Euler-Lagrange equation, which is a mathematical expression that describes the motion of a system in terms of its energy. This equation plays a role in the formulation of string theory, as it is used to describe the behavior of strings and the forces that act on them.
Another important equation in string theory is the Schrodinger equation, which is a mathematical expression that describes how the quantum state of a system changes over time. The Schrodinger equation is a fundamental equation of quantum mechanics and is used to describe the behavior of particles on the quantum level.
It is worth noting that string theory is a very complex and interdisciplinary field, and there are many other equations and mathematical concepts that are important to the development of the theory. These include the Einstein field equations, which describe the fundamental nature of gravity, and various equations that describe the behavior of particles and forces at very small scales.
As a follow-up question, we asked ChatGPT what those equations look like. The answer, along with general explanations of what the different terms mean and what the
equations describe, was:
\[L-\frac{\partial L}{\partial q}\frac{\partial q}{\partial t}=\frac{\partial L}{\partial q^{\prime}}\frac{\partial q^{\prime}}{\partial t} \tag{1}\]
\[i\hbar\frac{\partial\Psi}{\partial t}=H\Psi \tag{2}\]
\[G_{\mu\nu}=8\pi T_{\mu\nu} \tag{3}\]
\[G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R+\Lambda g_{\mu\nu}, \tag{4}\]
where the expansion of the Einstein equation in terms of the Ricci tensor and Ricci scalar was only provided on explicit request.
## II Swampland conjectures
To put ChatGPT's knowledge of swampland conjectures to the test, we asked it 'What do you know about the string theoretical swampland program?':
The swampland program is a research program in string theory that aims to identify and study the constraints on the possible low-energy effective theories that can emerge from string theory. It is based on the idea that string theory may contain a vast landscape of possible theories, and that only a small subset of these theories correspond to physically realistic models of the universe.
The swampland program seeks to identify the criteria that must be satisfied by a low-energy effective theory in order for it to be consistent with string theory. These criteria are known as the swampland conditions. The swampland program has led to the development of several important conjectures and criteria, including the swampland distance conjecture, the swampland de Sitter conjecture, and the swampland conjectures on trans-Planckian censorship. The swampland program is a highly active and influential research program in string theory and has attracted a great deal of attention and interest in the scientific community. It has the potential to provide important insights into the fundamental nature of the universe and the structure of string theory.
It provides three examples of swampland conjectures in its response: the distance conjecture [5; 6; 7; 8; 9], the de Sitter conjecture [5; 6; 10; 11; 12; 13] as well as the trans-Planckian censorship conjecture [6; 14; 15]. There are other conjectures. ChatGPT gladly provides a list of 25 conjectures upon request:
1. The distance conjecture
2. The de Sitter conjecture
3. The weak gravity conjecture
4. The trans-Planckian censorship conjecture
5. The scalar weak gravity conjecture
6. The bound on entropy conjecture
7. The no global symmetries conjecture
8. The no parametrically large global symmetries conjecture
9. The sublattice conjecture
10. The gradient conjecture
11. The dS distance conjecture
12. The trans-Planckian censorship principle
13. The dS swampland conjecture
14. The swampland dS conjecture
15. The dS conjecture
16. The averaged null energy condition conjecture
17. The trans-Planckian censorship criterion
18. The quantum gravity conjecture
19. The string swampland conjecture
20. The swampland distance conjecture
21. The swampland de Sitter conjecture
22. The swampland weak gravity conjecture
23. The swampland trans-Planckian censorship conjecture
24. The no global symmetries in quantum gravity conjecture
25. The no parametrically large global symmetries in quantum gravity conjecture
In the previous paragraph, we already offered sources for numbers one, two, and four. Number three is also a widely discussed conjecture, see, e.g., [6; 9; 16; 17; 18; 19; 20; 21; 22; 23]. Number five also exists, see, e.g., [6; 24; 25]. To our knowledge, there is no conjecture named _the bound on entropy conjecture_. However, entropy bounds play an important role in connections between the species scale, the distance conjecture, the weak gravity conjecture, and the de Sitter conjecture [6]. Number seven exists [6; 26; 27; 28; 29]. Number eight is made up. It sounds like a combination of the distance conjecture and the no global symmetries conjecture. Number nine exists and is a combination of
the distance conjecture and the weak gravity conjecture [30; 31; 17]. Number ten is a conjecture in mathematics, but not (yet?) in the swampland programme. Numbers eleven through fifteen are merely combinations of the names of existing conjectures. Number sixteen was new to the author of this letter, but was somewhat discussed roughly a decade ago [32; 33; 34; 35]. The remaining conjectures on the list are just word baublery, i.e. combinations of existing terms but not original conjectures. This example shows two things. First, ChatGPT cannot separate a concept from a word. If several words describe the same concept, it is not necessarily able to identify the synonyms: although it can often provide synonyms for individual words, in this example it lists different names for the same conjecture as different conjectures. Second, ChatGPT presents a fabricated response very confidently. In this example, it tries to comply with the request without indicating that there actually is no list of 25 conjectures and that it made up most of them. Nevertheless, it achieves impressive results in terms of naming, and it makes certain to incorporate well-known conjectures. It is, however, also possible that the prompt was not well-written, since it did not specify that the list has to contain 25 unique and independent conjectures.
To further examine ChatGPT's capabilities, we focus on specific swampland conjectures and ask ChatGPT for definitions and relations in the following subsections.
### The Weak Gravity Conjecture
Gravity is the weakest force, and no observation made so far contradicts this statement. According to the Weak Gravity Conjecture (WGC), this is true for all string theoretical compactifications [16]. More precisely, the WGC says that there is a stable particle whose repulsive gauge force exceeds the attractive gravitational force and which minimises the mass-to-charge ratio \(m/q\), i.e. \(|m/q|<1\)[16].
A more formal definition of the WGC reads as follows: In a theory with a \(U(1)\) gauge symmetry; gauge coupling \(g\); and action
\[S=\int\!\mathrm{d}^{d}X\sqrt{-g}\left[\left(M_{p}^{d}\right)^{d-2}\frac{R^{d }}{2}-\frac{F^{2}}{4g^{2}}+...\right], \tag{5}\]
with \(d\) the number of dimensions of the Effective Field Theory (EFT), \(R^{d}\) the \(d\)-dimensional Ricci scalar, \(F^{2}=\frac{1}{q!}F_{\mu_{1}...\mu_{q}}F^{\mu_{1}...\mu_{q}}\) the strength of the gauge field \(A_{\nu...}\)[17], e.g. with the relation \(F_{\mu\nu}=\frac{1}{2}\partial_{[\mu}A_{\nu]}\), and \(M_{p}^{d}\) the \(d\)-dimensional Planck mass; a particle with mass
\[m\leq\sqrt{\frac{d-2}{d-3}gq}\left(M_{p}^{d}\right)^{\frac{d-2}{2}}, \tag{6}\]
\(q\) the charge of the particle, exists [6; 17].
ChatGPT knows the following about the WGC:
The weak gravity conjecture (WGC)
is a conjecture in theoretical physics that proposes a connection between the strength of a theory's gravitational force and the existence of charged particles or objects with extremely large charge-to-mass ratios. The conjecture was originally proposed by Cumrun Vafa and Alexander Westphal in 2006, and it has since become an important concept in the study of string theory and the swampland program.
The basic idea behind the WGC is that in any theory that contains a consistent low-energy effective description of gravity, there must be at least one particle or object with a charge-to-mass ratio that is larger than the charge-to-mass ratio of a extremal black hole in that theory. This means that in any theory that satisfies the WGC, it should be possible to create a particle or object with a large enough charge-to-mass ratio that it can overcome the gravitational force and escape to infinity.
The WGC has been motivated by the desire to understand the behavior of charged particles and objects in theories with a consistent low-energy effective description of gravity, and it has led to the development of several important results and insights in the field of theoretical physics. However, the conjecture is still being actively studied and is not yet fully understood.
Nothing in this statement is obviously wrong, as it remains rather vague while naming the relevant concepts that one might expect being mentioned. However, the remark 'The conjecture was originally proposed by Cumrun Vafa and Alexander Westphal in 2006,[...]' is interesting. When prompted for a source, ChatGPT responds with
The weak gravity conjecture was originally proposed by Cumrun Vafa and Alexander Westphal in a paper published in 2008 titled 'A Stringy Test of the Fate of the Landscape'. The paper was published in the journal Physical Review D and can be found online at the following link: [https://journals.aps.org/prd/abstract/10.1103/PhysRevD.78.046003](https://journals.aps.org/prd/abstract/10.1103/PhysRevD.78.046003)
ChatGPT even provides a summary of this paper.5 In its previous answer, ChatGPT mentioned a paper written in 2006. Here, it provides a paper written in 2008. It sticks to the authors, though. However, no such paper exists! The two authors never published together, and the title is fictitious. The provided link leads to the paper _Toward an AdS/cold atoms correspondence: A geometric realization of the Schrodinger symmetry_ by D. T. Son (2008). A better answer might have been Arkani-Hamed _et al._ (2016). This again shows that the user of ChatGPT has to be cautious when using ChatGPT's answers. They are grammatically correct but not necessarily factual.
Footnote 5: In this paper, Vafa and Westphal proposed the weak gravity conjecture as a way to address certain problems that arise in the context of string theory, including the existence of a vast landscape of possible vacua and the difficulty of constructing stable de Sitter vacua. The conjecture suggests that in any consistent theory of quantum gravity, there must exist particles or fields with charge-to-mass ratios that are larger than a certain critical value. This has implications for the stability of de Sitter vacua and the nature of black hole entropy. The weak gravity conjecture has since become an important part of the swampland program, and it continues to be studied and refined by researchers in the field.
When asked for the most important, peer-reviewed, and published papers about the weak gravity conjecture, ChatGPT returns the following:
* "The Weak Gravity Conjecture" by N. Arkani-Hamed, L. Motl, A. Nicolis, and C. Vafa (Journal of High Energy Physics, 2006). This paper introduced the concept of the weak gravity conjecture and discussed its implications for quantum gravity.
* "The Weak Gravity Conjecture from Emergence" by J. Kim, B. Vercnocke, and T. Wiseman (Journal of High Energy Physics, 2017). This paper discussed the relationship between the weak gravity conjecture and emergent gravity, and proposed a new version of the conjecture.
* "The Weak Gravity Conjecture and Emergence of Strong Coupling" by J. Kim, B. Vercnocke, and T. Wiseman (Physical Review Letters, 2018). This paper discussed the relationship between the weak gravity conjecture and the emergence of strong coupling, and proposed a new version of the conjecture.
* "The Weak Gravity Conjecture and Swampland Conjectures" by N. Arkani-Hamed, Y. Bai, and R. Sundrum (Journal of High Energy Physics, 2020). This paper discussed the relationship between the weak gravity conjecture and other swampland conjectures, and proposed a new version of the conjecture that incorporates these other conjectures.
All the papers it provides have eerily similar titles -- and not a single one does actually exist. The titles are intriguing. Even the brief summaries sound plausible. Some of the authors even publish in this or similar branches of physics (N. Arkani-Hamed, L. Motl, A. Nicolis, C. Vafa (2016), B. Vercnocke (2017), R. Sundrum (2019)) while others publish in other fields or are made up (J. Kim, T. Wiseman, Y. Bai).
The responses it provides sound convincing and rich in detail, even though everything is wholly fabricated. This might make its answers unreliable, but it can, if used wisely, be turned to our advantage! Specifically, it can be used to find interesting and helpful analogies. For instance, we might be looking for a more down-to-earth analogy and ask ChatGPT to explain the WGC in the words of a farmer:
The weak gravity conjecture is a concept in theoretical physics that proposes that the strength of a theory's gravitational force is related to the existence of charged particles or objects with extremely large charge-to-mass ratios. Essentially, this means that the stronger the gravitational force, the more likely it is that there are charged particles or objects present that can affect the force.
From a farmer's perspective, this concept could be compared to the importance of understanding and managing the weather and climate conditions in a particular region. By understanding how different weather patterns and phenomena (such as storms, wind, and humidity) can affect their crops, farmers can make informed decisions about when to plant, water, and harvest, and can optimize the productivity and sustainability of their fields.
Similarly, the weak gravity conjecture suggests that the strength of a theory's gravitational force can be analyzed and predicted based on the presence of charged particles or objects. This concept can be useful for farmers who are looking to optimize the productivity and sustainability of their fields, by
helping them to understand and manage the various forces and phenomena that can affect their crops.
It is a bit far-fetched to assume that the WGC can be used to optimise agricultural productivity. However, when communicating the concept in outreach or educational activities, it might be helpful to draw parallels between the impact of environmental conditions on the yield and the impact of the description of the gravitational force on the observed particles.
Especially in outreach and education, we like to work with images and representations of concepts. We can use ChatGPT to generate visual representations of a concept's core ideas. We asked ChatGPT to describe how it envisions the weak gravity conjecture. The response served as a prompt for _Stable Diffusion6_, which then creates an image based on ChatGPT's description. The outcome is shown in fig. 1. Admittedly, this is more of an aesthetic picture than a simplified illustration of the concept.
Footnote 6: [https://stablediffusionweb.com](https://stablediffusionweb.com)
### de Sitter Conjecture
Another widely discussed swampland conjecture is the de Sitter conjecture. It states that for a scalar field potential \(V(\phi)\)
\[|\mathbf{\nabla}V|\geq\frac{c}{M_{P}}\cdot V, \tag{7}\]
with \(M_{P}\) the Planck mass and \(c\) a constant of order one [6; 8; 10; 11]. It is a strong conjecture, as it not only rules out metastable de Sitter vacua but also local maxima and saddle points [11]. When the acceleration is driven by dark energy, then dark energy must be dynamical, i.e. the equation of state is time-dependent, if dark energy is represented by a rolling scalar field [6; 12].
When asked what the de Sitter conjecture implies for the standard model of cosmology, ChatGPT replies:
The de Sitter conjecture is a conjecture in theoretical physics that states that it is impossible to construct a stable de Sitter space (a type of space with a positive cosmological constant) within the framework of string theory. This conjecture has been motivated by the desire to understand the behavior of theories with a positive cosmological constant and the challenges that they pose to the standard model of particle physics.
If the de Sitter conjecture is true, it could have significant implications for the standard model of cosmology, which is the current leading theory of the origin, evolution, and ultimate fate of the universe. The standard model of cosmology is based on the idea that the universe is expanding at an accelerating rate, driven by a mysterious form of energy known as dark energy. The standard model of cosmology also includes the hypothesis that the universe underwent a period of rapid expansion known as cosmic inflation in the very early universe.
If the de Sitter conjecture is true, it would suggest that the standard model of cosmology may need to be revised in order to be consistent with string theory. This could involve modifying or replacing the current hypothesis of cosmic inflation or finding an alternative explanation for the
Figure 1: ChatGPT description of an image representing the weak gravity conjecture: ‘The image should depict a bright, glowing orb representing the strength of the gravitational force, surrounded by swirling darkness representing charged particles or objects affected by this force. The overall tone of the image should convey a sense of wonder and excitement, as the viewer approaches the orb and learns more about this mysterious and elusive concept in theoretical physics.’ Image created by _Stable Diffusion_[40].
observed accelerating expansion of the universe. The de Sitter conjecture is an active area of research in the field of theoretical physics, and it is still not fully understood. Further research will be needed to determine the implications of the de Sitter conjecture for the standard model of cosmology and other areas of physics.
ChatGPT is able to provide a relatively vague, surface-level explanation of the key concepts. It identifies accelerated expansion and inflation as the most important aspects of the concordance model that may need to be modified. ChatGPT learns from the written word. In the context of the de Sitter conjecture, the most often discussed concepts are dark energy, accelerated expansion, and inflation. Therefore, ChatGPT can make this connection. However, it cannot genuinely connect concepts and infer implications no one else has yet thought of. This is also illustrated by the following example.
The relationship between the de Sitter conjecture and the weak gravity conjecture is rarely studied. ChatGPT describes the relation between the de Sitter conjecture and the weak gravity conjecture as follows:
The de Sitter conjecture and the weak gravity conjecture (WGC) are two separate conjectures in theoretical physics that have been proposed as part of the swampland program in string theory. However, there is a connection between these two conjectures, as they both involve the concept of a positive cosmological constant and the behavior of charged particles or objects in theories with a consistent low-energy effective description of gravity.
The de Sitter conjecture states that it is impossible to construct a stable de Sitter space (a type of space with a positive cosmological constant) within the framework of string theory. This conjecture has been motivated by the desire to understand the behavior of theories with a positive cosmological constant and the challenges that they pose to the standard model of particle physics.
The WGC, on the other hand, proposes a connection between the strength of a theory's gravitational force and the existence of charged particles or objects with extremely large charge-to-mass ratios. The WGC has been motivated by the desire to understand the behavior of charged particles and objects in theories with a consistent low-energy effective description of gravity.
Although the de Sitter conjecture and the WGC are two separate conjectures, they are related in the sense that they both involve the concept of a positive cosmological constant and the behavior of charged particles or objects in theories with a consistent low-energy effective description of gravity. They are both active areas of research in the field of theoretical physics and have the potential to provide important insights into the fundamental nature of the universe and the structure of string theory.
Since there is no mention of gravity in the de Sitter conjecture and no mention of the cosmological constant in the weak gravity conjecture, the response is largely incorrect. Admittedly, it is challenging to find analogies and connections between the two conjectures. In the literature, one frequently encounters links between the weak gravity conjecture and the distance conjecture [6; 9; 23; 30; 41; 42] and between the distance conjecture and the de Sitter conjecture [43; 44]. It is expected that some analogies could be drawn. However, no direct relations are known and ChatGPT is at this stage not able to identify them.
We find that ChatGPT does not create truly novel knowledge. Nonetheless, it can assist us in thinking about concepts in new terms. We have shown that ChatGPT can draw some elusive parallels between agriculture and the weak gravity conjecture. Here, we provide another example by asking ChatGPT to explain the de Sitter conjecture in the words of a financial advisor:
The de Sitter conjecture is a concept in theoretical physics that relates to the stability of certain spaces known as de Sitter spaces. These spaces are characterized by a positive cosmological constant, which means that they are expanding at an accelerating rate.
In the framework of string theory, de Sitter spaces are believed to be unstable, which has important implications for our understanding of the universe and the fundamental nature of reality. Essentially, it means that certain types of spaces and configurations may not be as stable or long-lasting as we thought,
and that we need to be cautious when considering them as possible models or explanations for the universe.
From a financial perspective, this could be compared to investing in a company or asset that seems stable and promising at first, but ultimately turns out to be unstable or risky. It's important to do thorough research and analysis before making any investments, and to be aware of the potential risks and uncertainties that may be involved. In the same way, it's important to carefully consider the implications of the de Sitter conjecture and to approach it with caution and skepticism.
Both financial markets and de Sitter space require thorough investigation to identify instability points, and both can be in configurations that are (quasi)stable over a limited period. To be cautious is never bad advice, neither when it comes to investment decisions nor when it comes to model universes and theory development.
We conclude this section with fig. 2, which represents the de Sitter conjecture and could be used as a cover image for a book chapter or a slide deck.
Footnote 7: Figure 2 was created by _DALL-E 2_: [https://openai.com/dall-e-2/](https://openai.com/dall-e-2/).
## III Outlook
In the previous paragraphs, we have shown that ChatGPT can synthesise various concepts and present them coherently. It succeeds in highlighting the core ideas and identifying the relevant keywords. It excels at combining what is known. However, it cannot genuinely make new connections or create new knowledge. Nevertheless, it is a useful tool for education and outreach. It helps us to rephrase and find the right words for our target audience. It can be used to formulate analogies and find parallels to other fields. It can summarise what we already know. In the module _Physics in Society8_ at Durham University, ChatGPT scored \(71\pm 2\,\%\), while students score \(71\pm 5\,\%\) on short 300-word essays [4]. The same case study shows low, single-digit percentages for plagiarism using _TurnitIn9_ and _Grammarly10_. It is worth noting that there are tools to indicate the use of GPT11. However, detection could be evaded by using rephrasing tools such as _QuillBot12_. It is up to the teacher to decide whether essays are still a meaningful form of assessment. ChatGPT is a new and powerful tool to craft short essays. It can help non-native speakers overcome language barriers by assisting them in formulating their ideas and insights.
Footnote 8: A module on the history and philosophy of physics and science.
Footnote 9: [https://www.turnitin.com](https://www.turnitin.com)
Footnote 10: [https://www.grammarly.com/plagiarism-checker](https://www.grammarly.com/plagiarism-checker)
Footnote 11: Under [https://openai-openai-detector.hf.space](https://openai-openai-detector.hf.space), an online tool can be found, which uses the GPT 2 dataset [46; 47]
Posters are another application where ChatGPT comes in handy. Assume you want to make a poster for a general science venue to present the swampland conjectures. You can then ask ChatGPT to write short paragraphs on a selection of swampland conjectures, a brief introduction to string theory and the swampland programme, prompt it to suggest a catchy title, and even have it describe some illustrations, which you can then create using other AI tools like _Stable Diffusion_, _DALL-E_, or _MidJourney13_ (see figs. 1 and 2 for examples). Then, you include the relevant formulas you want to discuss and create graphs that adhere to scientific standards.
Figure 2: ChatGPT’s description of an image representing the de Sitter conjecture: ‘The image should depict a bright, glowing sphere representing a de Sitter space, surrounded by a vast and complex landscape of theories and configurations. The sphere should represent the stability of the de Sitter space, and the landscape around it should represent the theories and configurations that are affected by this stability. The overall tone of the image should convey a sense of wonder and excitement, as the viewer approaches the sphere and learns more about this mysterious and elusive concept in theoretical physics.’ Image created by _DALL-E 2_[45].
You can concentrate on the poster's conceptualisation because you spend less time writing the paragraphs.
In the near future, we will see a variety of AI tools. They are already powerful companions that can be used in tandem. A ChatGPT-produced text can be rephrased using _QuillBot_. An image description by ChatGPT can be used as an input prompt for _DALL-E_, _Stable Diffusion_, or _MidJourney_ to create a visualisation. ChatGPT can be used to write a whole script for a video tutorial, which can then be fed into _Fliki_14 to create a video based on the script, including animations and a computer-generated voice. These tools are already very user-friendly, but further development is underway. Moreover, with GPT-4, the framework behind ChatGPT will get a major upgrade in the near future [48].
Footnote 14: [https://fliki.ai](https://fliki.ai)
## IV Conclusion
As a natural language processing model, ChatGPT is naturally good with words. If assigned a task it can only partially fulfil, it uses its abilities to come up with an answer and presents it with full confidence, even if the answer is inaccurate. This can be used to one's advantage if one is able to identify the made-up parts of an answer. Perhaps those parts start a thought process that leads to new insights. Asking ChatGPT to respond in the style of a given persona can help us find new and helpful parallels to other fields. This is of great use for educational and outreach endeavours. ChatGPT is an always-on sparring partner that is eager to assist and collaborate when it comes to testing ideas and concepts. In its current state, it is not well suited to generating new knowledge or answering genuine questions. It will respond with absolute assurance, but the answer might be utterly false and completely fabricated. Nevertheless, its capabilities are astounding. If used with caution, these tools put us on the brink of an AI-aided era of advancement.
## V Disclaimer
The creation of this letter was assisted by the _chatbot_ ChatGPT by OpenAI [1]. Specifically, everything written in typewriter is a ChatGPT output. Furthermore, the title of this letter was proposed by ChatGPT itself. The images were created by _DALL-E 2_[45] and _Stable Diffusion_[40].
Kay Lehnert is a recipient of the John and Pat Hume Scholarship and acknowledges support from the Swiss Study Foundation.
The average power consumption per request of ChatGPT, _DALL-E_, or _Stable Diffusion_ is not public. Furthermore, it is not clear whether the energy used to train the models should also be taken into account when estimating the CO\({}_{2}\)-equivalent of this article. The author's personal computer caused less than 100 g CO\({}_{2}\)e, which is compensated by [https://climeworks.com](https://climeworks.com). To internalise possible additional external factors, 1 kg of CO\({}_{2}\) was compensated to cover this work. Therefore, we consider this work to be _probably_ carbon-neutral.
|
2310.11345 | Solitary solutions to the steady Euler equations with piecewise constant
vorticity in a channel | We consider a two-dimensional, two-layer, incompressible, steady flow, with
vorticity which is constant in each layer, in an infinite channel with rigid
walls. The velocity is continuous across the interface, there is no surface
tension or difference in density between the two layers, and the flow is
inviscid. Unlike in previous studies, we consider solutions which are localised
perturbations rather than periodic or quasi-periodic perturbations of a
background shear flow. We rigorously construct a curve of exact solutions and
give the leading order terms in an asymptotic expansion. We also give a
thorough qualitative description of the fluid particle paths, which can include
stagnation points, critical layers, and streamlines which meet the boundary. | Karsten Matthies, Jonathan Sewell, Miles H. Wheeler | 2023-10-17T15:31:03Z | http://arxiv.org/abs/2310.11345v1 | # Solitary solutions to the steady Euler equations with piecewise constant vorticity in a channel
###### Abstract.
We consider a two-dimensional, two-layer, incompressible, steady flow, with vorticity which is constant in each layer, in an infinite channel with rigid walls. The velocity is continuous across the interface, there is no surface tension or difference in density between the two layers, and the flow is inviscid. Unlike in previous studies, we consider solutions which are localised perturbations rather than periodic or quasi-periodic perturbations of a background shear flow. We rigorously construct a curve of exact solutions and give the leading order terms in an asymptotic expansion. We also give a thorough qualitative description of the fluid particle paths, which can include stagnation points, critical layers, and streamlines which meet the boundary.
###### Contents
* 1 Introduction
* 1.1 The problem
* 1.2 Main results
* 1.3 Related work
* 1.4 Outline of the paper
* 2 Preliminaries
* 3 Reformulation
* 3.1 Pointwise flattening with a Mobius map
* 3.2 Properties of this coordinate change
* 3.3 The evolution equation
* 4 The centre manifold
* 4.1 Analysis of the linearised operator
* 4.2 Applying the centre manifold theorem
* 4.3 Regularity and estimates
* 5 Streamline patterns
* 5.1 Signs of components of the flow
* 5.2 Unbounded critical layer
* 5.3 Bounded critical layer
* 6 Acknowledgements
* A Proof of Proposition 4.10
## 1. Introduction
The incompressible Euler equations have been studied for over 250 years, but even when considering time-independent, two-dimensional flows, there is still much being discovered. See [5], or [48] for a tighter focus on the steady case. The time-independent case is of interest both for its own sake, and to better understand possible end states of the time-dependent problem. For instance, steady states give counterexamples to the phenomenon of _inviscid damping_, where solutions of the time-dependent Euler equations decay in some sense, despite energy being conserved [4, 42].
The study of solutions to the Euler equations with piecewise constant vorticity is classical. There are broadly speaking two cases: _vortex patches_, where the vorticity has compact support, and _vorticity fronts_, with unbounded regions of vorticity. Studies of these phenomena date back to the 19th century, for instance by Rayleigh [47], Kelvin [51], and Kirchhoff [39] with a more recent wave of interest in vortex patches including work such as [9, 31, 11, 6, 18]. Time-dependent results have also been shown for vorticity fronts [33, 7, 32, 34].
A common setting for study of the Euler equations is in a two-dimensional channel. It is shown in [26] that in a channel, any solution to the steady Euler equations is either a shear flow, or has a stagnation point. This contrasts with water waves with a free upper boundary, which exhibit many interesting solutions without stagnation [30]. Indeed, many of the standard techniques in the classical study of water waves assume the stream function is strictly monotone in the vertical variable, so cannot allow for stagnation points. However, there has been great progress in understanding non-monotone stream functions in recent years [53, 43, 54, 56, 13], often using the weaker assumption that the vorticity can be expressed as a _vorticity function_ of the stream function.
We consider a two-dimensional flow in a channel with piecewise constant vorticity, and find explicit expansions of exact solutions, while thoroughly describing their qualitative behaviour. This behaviour includes stagnation points, which complements [26], and since the boundary at which the vorticity changes is free, this problem also fits in with the literature on water waves with stagnation. However, for some values of our parameters, there are solutions for which even the weaker assumption that a vorticity function exists does not hold. We focus on localised solutions, which have many challenges not present in the periodic case. Reformulating our problem as an evolution equation, with the horizontal spatial direction playing the role of time, allows us to use powerful tools from spatial dynamics. These are particularly useful when working outside a periodic regime, especially a centre manifold theorem of Mielke [44], which, for sufficiently small localised solutions, allows us to reduce our PDE to an ODE.
### The problem
We consider a two-dimensional flow which is incompressible and inviscid. The flow has two layers, each with the same constant density, but with different constant vorticities. There is no surface tension between the two layers. The fluid domain is infinite in width, but bounded above and below by rigid walls of constant height. It is also steady, that is, the wave profile moves with constant speed. In other words, if \(X\) is the horizontal coordinate, and \(t\) is time, a steady solution depends on \((X,t)\) only through the combination \(X-ct\), for some constant wave speed \(c\). Then moving to a co-moving frame of reference gives a time-independent flow. We use dimensionless units
for distance and time, so that the channel has height 1, and the difference in vorticity between the two layers is 1.
More precisely, given real constants \(\omega_{0},\omega_{1}\) with \(\omega_{0}-\omega_{1}=1\), and \(h\in(0,1)\), we seek \(\eta\in C^{2}(\mathbb{R})\) which in turn defines regions \(\Omega_{0}=\{(X,Y)\mid 0<Y<h+\eta(X)\}\) and \(\Omega_{1}=\{(X,Y)\mid h+\eta(X)<Y<1\}\). We also seek a velocity field \(U,V\in C^{1}(\overline{\Omega_{0}})\cap C^{1}(\overline{\Omega_{1}})\cap C^{0 }(\overline{\Omega_{0}\cup\Omega_{1}})\). These must satisfy
\[\partial_{X}U+\partial_{Y}V =0 \text{in }\Omega_{0}\cup\Omega_{1} \tag{1.1a}\] \[\partial_{Y}U-\partial_{X}V =\omega_{i} \text{in }\Omega_{i}\text{ for }i=0,1, \tag{1.1b}\]
where (1.1a) is the incompressibility condition, and (1.1b) enforces the piecewise constant vorticity. The vorticity of each fluid element is constant, so fluid cannot enter or leave either of \(\Omega_{0}\) and \(\Omega_{1}\). This gives the kinematic boundary conditions
\[V =0 \text{on }Y =1 \tag{1.1c}\] \[V =0 \text{on }Y =0\] (1.1d) \[\eta_{X}U-V =0 \text{on }Y =h+\eta(X). \tag{1.1e}\]
We call \(\{(X,Y)\mid Y=h+\eta(X)\}\) the _interface_. Points at which \(U=V=0\) are often of interest, and we refer to these as _stagnation points_, or points at which the flow _stagnates_. We use _critical layers_ to mean curves along which the horizontal velocity \(U\) vanishes [53]. A solution is _shear_, or a _shear flow_ if it has no \(X\) dependence. We say a solution is _localised_, or _homoclinic_, or forms a _solitary wave_ if \(\lim_{X\to\infty}\eta(X)=\lim_{X\to-\infty}\eta(X)=0\). A solitary wave with \(\eta(X)>0\) for all \(X\) is called a _wave of elevation_, and one with \(\eta(X)<0\) for all \(X\), a _wave of depression_.
### Main results
We construct localised solutions to (1.1) by bifurcating from a shear flow. Localised solutions present many challenges that periodic solutions do not. For instance, since periodic solutions can be considered to have a compact domain, one has access to compact embeddings between Holder and Sobolev spaces. Furthermore, small-amplitude periodic solutions are a linear phenomenon, whereas small-amplitude solitary waves are weakly-nonlinear; see (4.26) and the surrounding discussion.
Figure 1. (A) The setup of the problem; see (1.1) and (1.2). (B) A shear solution in (2.3).
We now, slightly informally, state our main result. Let the _trivial solution_, which we will bifurcate from, be given by
\[U(X,Y) =\begin{cases}\omega_{0}(Y-h)+\tilde{c}&\text{for}\quad 0\leq Y \leq h\\ \omega_{1}(Y-h)+\tilde{c}&\text{for}\quad h\leq Y\leq 1\end{cases} \tag{1.2}\] \[V(X,Y) =0\] \[\eta(X) =0,\]
where the constant \(\tilde{c}\) is the speed of the flow at the interface; see Figure 1.
**Theorem 1.1**.: _Fix \(N\in\mathbb{N}\) with \(N\geq 4\), \(h\in(0,1)\), and \(\omega_{0},\omega_{1}\in\mathbb{R}\) such that \(\omega_{0}-\omega_{1}=1\) and_
\[\theta:=(3h-1)\omega_{0}-(3h-2)\omega_{1}\neq 0. \tag{1.3}\]
_Let \(U^{*}\) be the horizontal velocity component of the shear flow (1.2) with \(\tilde{c}=h(1-h)\). Then, for \(\varepsilon>0\) sufficiently small, there exists a localised solution \((U^{\varepsilon},V^{\varepsilon},\eta^{\varepsilon})\) of (1.1) that is close to \((U^{*},0,0)\) and is approximated by the functions_
\[\overline{U}^{\varepsilon}(X,Y) =\begin{cases}U^{*}(Y)+\varepsilon-(1-h)\overline{\eta}^{ \varepsilon}(X)&\text{for}\quad Y\in[0,h+\overline{\eta}^{\varepsilon}(X)]\\ U^{*}(Y)+\varepsilon+h\overline{\eta}^{\varepsilon}(X)&\text{for}\quad Y\in[h+ \overline{\eta}^{\varepsilon}(X),1]\end{cases} \tag{1.4}\] \[\overline{V}^{\varepsilon}(X,Y) =\int_{0}^{Y}\overline{U}^{\varepsilon}_{X}(X,\tilde{Y})\;d \tilde{Y}\] \[\overline{\eta}^{\varepsilon}(X) =-\frac{3\varepsilon}{\theta}\operatorname{sech}^{2}\Big{(}\frac {\sqrt{3\varepsilon}}{2h(1-h)}X\Big{)},\]
_in the sense that_
\[\|U^{\varepsilon}-\overline{U}^{\varepsilon}\|_{L^{\infty}}=\mathcal{O}( \varepsilon^{2}),\quad\|V^{\varepsilon}-\overline{V}^{\varepsilon}\|_{L^{ \infty}}=\mathcal{O}(\varepsilon^{\frac{5}{2}}),\quad\|\eta^{\varepsilon}- \overline{\eta}^{\varepsilon}\|_{L^{\infty}}=\mathcal{O}(\varepsilon^{2}). \tag{1.5}\]
_The velocity components \(U^{\varepsilon},V^{\varepsilon}\) are real-analytic away from the interface, and \(\eta^{\varepsilon}\in C^{N}(\mathbb{R})\). Furthermore, \(U^{\varepsilon}\) and \(\eta^{\varepsilon}\) are even in \(X\) while \(V^{\varepsilon}\) is odd in \(X\)._
In the terminology of Lin and Zeng [42], there are steady structures which are an arbitrarily small, but non-zero, localised perturbation from a trivial shear flow.
_Remark 1.2_.: Each of the solutions described in Theorem 1.1 in fact gives rise to an infinite family of solutions through phase shifts, i.e., through transformations of the form \(X\mapsto X+a\).
_Remark 1.3_.: Let \(\Psi^{\varepsilon}\) be the stream function (see Lemma 2.1) associated with \((U^{\varepsilon},V^{\varepsilon})\). If \(\omega_{0}<1-2h\) or \(\omega_{0}>2-2h\), then for \(\varepsilon\) sufficiently small there does not exist a vorticity function \(\gamma^{\varepsilon}\) such that \(\omega=\gamma^{\varepsilon}(\Psi^{\varepsilon})\). If \(1-2h<\omega_{0}<2-2h\), on the other hand, then there does exist a vorticity function. See Proposition 2.4 for more details.
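For readers who want a concrete feel for the approximate solution (1.4), the following short numerical sketch, which is ours and not part of the analysis, evaluates \(\overline{\eta}^{\varepsilon}\) and \(\overline{U}^{\varepsilon}\). The parameter values \(h=0.4\), \(\omega_{0}=0.5\), \(\varepsilon=0.01\) are illustrative assumptions only.

```python
import numpy as np

# Illustrative parameter values (our own choice); theta from (1.3) must be non-zero.
h, om0, eps = 0.4, 0.5, 0.01
om1 = om0 - 1.0
theta = (3*h - 1)*om0 - (3*h - 2)*om1        # here theta = -0.3

def eta_bar(X):
    """Leading-order interface profile from (1.4): a sech^2 solitary wave."""
    return -3.0*eps/theta / np.cosh(np.sqrt(3.0*eps)/(2.0*h*(1.0 - h))*X)**2

def U_bar(X, Y):
    """Leading-order horizontal velocity from (1.4) at a single point (X, Y)."""
    Ustar = (om0 if Y <= h else om1)*(Y - h) + h*(1.0 - h)
    shift = -(1.0 - h)*eta_bar(X) if Y <= h + eta_bar(X) else h*eta_bar(X)
    return Ustar + eps + shift

X = np.linspace(-40.0, 40.0, 401)
print(float(eta_bar(X).max()))   # positive: a wave of elevation, since theta < 0
print(U_bar(0.0, h))             # horizontal velocity at height h under the crest
```

As is visible from the formula, \(\overline{\eta}^{\varepsilon}\) has amplitude of order \(\varepsilon\) and width of order \(\varepsilon^{-1/2}\), the classical KdV-type scaling behind the weakly-nonlinear nature of small solitary waves mentioned in Section 1.2.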
We can also give a relatively complete picture of what the flow looks like, and where and how it stagnates. All the solutions constructed in Theorem 1.1 have a unique interior stagnation point, and some have two further stagnation points on the boundary. Much of the qualitative behaviour can be determined by the sign of \(\theta\), and whether \(\omega_{0}\) is larger or smaller than the critical value \(1-h\).
**Theorem 1.4**.: _Every non-shear solution constructed in Theorem 1.1 has either a unique stagnation point in \(\Omega_{0}\cup\Omega_{1}\), or one stagnation point in \(\Omega_{0}\cup\Omega_{1}\) and two saddle points on the boundary closest to the interior stagnation point. The uniqueness, nature and location of the interior stagnation point, and whether \(\eta^{\varepsilon}\) gives a wave of elevation or depression, depends only on \(\theta\) and \(\omega_{0}\), as laid out in Table 1. Moreover \(\eta^{\varepsilon}\) is strictly monotone on \(X>0\) and on \(X<0\). The solution has a critical layer, which is unbounded if \(\omega_{0}\neq 1-h\), and bounded if \(\omega_{0}=1-h\). If \(\omega_{0}=1-h\), there is a streamline which connects the saddle points on the boundary. See Figure 2._
\begin{table}
\begin{tabular}{r|l|l|l|l} Region & Sign of \(\theta\) & Size of \(\omega_{0}\) & Wave profile & Location and nature \\ \hline (i) & \(\theta<0\) & \(\omega_{0}<1-h\) & Elevation & Upper layer, unique saddle \\ (ii) & \(\theta<0\) & \(\omega_{0}=1-h\) & Elevation & Lower layer, non-unique centre \\ (iii) & \(\theta<0\) & \(\omega_{0}>1-h\) & Elevation & Lower layer, unique centre \\ (iv) & \(\theta>0\) & \(\omega_{0}<1-h\) & Depression & Upper layer, unique centre \\ (v) & \(\theta>0\) & \(\omega_{0}=1-h\) & Depression & Upper layer, non-unique centre \\ (vi) & \(\theta>0\) & \(\omega_{0}>1-h\) & Depression & Lower layer, unique saddle \\ \end{tabular}
\end{table}
Table 1. Behaviour of solutions in different regions of parameter space; see Figure 2. The stagnation point being referred to is the interior stagnation point.
Figure 2. Streamlines of solutions in the six different regions of parameter space from Table 1. The dashed streamlines correspond to the interface \(Y=h+\eta(X)\). The dotted line in parameter space corresponds to \(\theta=0\), we have not constructed solutions here.
### Related work
One body of work that this problem is related to is that of travelling water waves with critical layers, as we are considering the Euler equations with a free boundary. See [30] and the references within for a general overview of the steady water waves literature, including in particular, problems with solutions exhibiting critical layers. Wahlen [53] gave the first exact construction of a rotational flow with critical layers and a free surface when considering the one-layer, constant vorticity case. Since then, new results have taken this line of inquiry in different directions. For example, Ehrnstrom, Escher, Villari, and Wahlen [17, 16] found solutions with an arbitrary number of critical layers, with stagnation points exhibiting "cat's eye" structures. Matioc then moved away from these constant or affine vorticity functions, and considered a two-layer problem, where the two layers had distinct, constant vorticities and densities [43]. Walsh, Buhler, and Shatah considered gravity-vorticity waves, [54], where they took the lower layer to be of finite depth and zero vorticity, and the upper layer to be either of infinite height and constant vorticity, or finite height and a more general vorticity. The study of critical layers is not limited to local perturbations, for instance, Constantin, Strauss, and Varvaruca [13] considered gravity waves of constant vorticity, and used global bifurcation to construct large amplitude, overhanging waves. Wheeler considered a flow with an arbitrary number of layers, with waves of vorticity [56]. Whereas [53, 17, 16, 13] considered the case when a vorticity function exists, we can work outside this scope. More generally, water wave problems usually have a discontinuity in velocity across free surfaces, due to surface tension, or a change in density [43, 54, 56]. We do not have such a "jump" condition.
As already mentioned, our work here is also connected to inviscid damping, where small smooth perturbations from a shear flow decay to a shear flow when evolved under the Euler equations. Bedrossian and Masmoudi were the first to demonstrate this phenomenon at the non-linear level [4]. Recently [49], Sinambela and Zhao constructed less regular flows for which linear inviscid damping occurs. For a general overview see the survey [35]. Inviscid damping is an analogue of Landau damping; see [46].
Results on inviscid damping are often complemented by results on the _flexibility_ or _rigidity_ of a flow. If a given steady solution of the Euler equations, often a shear flow, has nearby steady solutions, it is said to be flexible; if not, it is rigid. If one can show flexibility near a shear flow, that is, that steady non-shear flows exist arbitrarily close to a shear flow, then inviscid damping cannot hold. We go on to show that such a family of solutions exists. Lin and Zeng [42] showed that any periodic travelling steady solution with vorticity sufficiently close to a constant must be a shear flow. They also showed a flexibility result: for arbitrary horizontal period, there exist steady flows in any neighbourhood of the Couette flow in \(H^{\frac{5}{2}}\). Constantin, Drivas, and Ginsburg [14] showed that given modest regularity assumptions, shear, steady, non-stagnating flows in a channel can be perturbed to give non-shear, steady, non-stagnating flows in a perturbed channel. Hamel and Nadirashvili [26] went on to show that in fact, in a channel, any steady flow which does not stagnate is a shear flow, and in the half plane, any flow which does not stagnate and has bounded velocity is a shear flow. Zelati, Elgindi, and Widmayer consider waves of vorticity [15], and construct steady analytic flows arbitrarily close to a shear flow. In [49], Sinambela and Zhao considered a large family of shear flows, and constructed steady flows arbitrarily close in \(H^{s}\) for arbitrarily
large \(s\). The recent work by Franzoi, Masmoudi, and Montalto [21] constructs spatially quasiperiodic steady flows near the Couette flow.
Most of the work mentioned so far considered the periodic case, apart from [21] which considers the quasiperiodic case. As is often done in water wave problems, in order to consider localised solutions instead, we turn to spatial dynamics. This asks us to reformulate our PDE as an evolution equation, with \(X\) playing the role of time, and turns out to yield many tools which are especially good at finding non-periodic solutions. One such spatial dynamical tool is a centre manifold reduction used by Kirchgassner [37]. However, this can only be applied to semi-linear problems. Ours is a quasilinear problem, and for that we need the stronger centre manifold reduction of Mielke [44]. This allows us to reduce to an ODE. Note that this is an exact reduction, not an approximation; the evolution in \(X\) of all sufficiently small bounded solutions to our infinite dimensional problem is confined to a two-dimensional manifold. Although we only encounter two-dimensional centre manifolds in this paper, with more than two layers we would expect to have higher dimensional manifolds such as those in [8]. There are a great many papers applying spatial dynamics to water wave problems; see the surveys [23, 24] and the references therein for those focusing on gravity-capillary waves. In one of the more classical papers, Kirchgassner used spatial dynamics to construct capillary-gravity waves [38]. Groves, Toland, and Buffoni [8] considered the free surface problem and found solitary solutions lying on a four-dimensional manifold obtained from this centre manifold reduction of Mielke. Groves and Wahlen constructed Stokes waves with vorticity using spatial dynamics in [25]. More recently, Wang [55] also uses this reduction when considering a problem similar to ours, except with a free upper boundary and fluids of different densities. Kozlov, Kuznetsov, and Lokharu [40] found solitary gravity waves with vorticity and critical layers. They too considered the rotational case, and like us, they went on to find solutions with streamlines attached to the boundary. In addition to classical spatial dynamics, recently-developed centre manifold techniques 'without a phase space' [20, 10] have been applied to problems related to water waves [52, 36, 12]; also see [1]. There are also many classical constructions of solitary water waves which employ fixed-point methods rather than spatial dynamics [41, 22, 2, 3, 50].
One body of work that also considers solutions to the two-dimensional Euler equations with continuous velocity but a discontinuity in vorticity is that of vortex patches. These are solutions of the Euler equations with compactly supported vorticity. See the work by Burbea [9], by Hmidi, Mateu and Verdera [31], by Hassainia and Wheeler [29], and by Castro, Cordoba and Gomez-Serrano [11]. Hassainia, Masmoudi and Wheeler [28] also find vortex patches with cat's eye structures, which also appear in many papers on water waves with critical layers.
A vorticity front can be thought of as a generalisation of vortex patches, where the vorticity has unbounded support. Hunter, Moreno-Vasquez, Shu and Zhang consider a time dependent problem similar to ours, but without rigid walls [34]: they seek vorticity fronts, as do we, and they also consider perturbations from the same trivial flow, but extended to their infinite-depth domain. It differs in that where we ultimately reduce to a perturbation of the Korteweg-De Vries equation, which is local in \(x\), they reduce to equations involving a Hilbert transform, which is non-local in \(x\). Furthermore, they consider the case where both layers are semi-infinite, and they consider the time-dependent
case. They also find that their problem is non-dispersive. Subsequent work [32, 33, 7] considers an equation that approximates the evolution of the interface between vorticity fronts, and shows that, despite the quadratic nonlinearity in the equation, the approximation holds on cubically non-linear time scales.
### Outline of the paper
Firstly, in Section 2 we prove various preliminary results about the background shear flow, conserved quantities, and the stream function. In Section 3 we change coordinates to reformulate the problem into a system of PDEs and boundary conditions on known domains. We then write our PDEs in the form of an evolution equation: \(\frac{\partial}{\partial x}(u,v,\eta)=\mathcal{F}(u,v,\eta;c)\), where the only derivatives to appear in \(\mathcal{F}\) are with respect to \(y\). Finally, we write \(\mathcal{F}\) as \(L+\mathcal{R}\), where \(L\) is the Frechet derivative of \(\mathcal{F}\), and \(\mathcal{R}\) is the non-linear remainder term. In Section 4 we prove Theorem 1.1. First, in Section 4.1, we verify some hypotheses about the spectrum of \(L\). This allows us to apply the centre manifold theorem in Section 4.2, which lets us reduce the problem to solving an ODE rather than a PDE. Roughly speaking, the centre manifold theorem works by finding two things. The first is a _reduction function_, \(\psi\colon\mathcal{E}_{0}\times\mathbb{R}\to\mathcal{W}\), where \(\mathcal{W}\) is some space of functions in \(y\), and \(\mathcal{E}_{0}\) is the generalised kernel of \(L\). Two functions, \(\xi_{0}(y)\) and \(\xi_{1}(y)\), span \(\mathcal{E}_{0}\), so we often identify \(\mathcal{E}_{0}\) with \(\mathbb{R}^{2}\), and consider \(\psi\) a function on \(\mathbb{R}^{3}\). The second is an ODE of the form \((a^{\prime}(x),b^{\prime}(x))=F(a(x),b(x),\varepsilon)\), where \(\varepsilon\) is a small parameter related to \(c\). These are such that if \((a,b,\varepsilon)\) solves the ODE, then \((x,y)\mapsto a(x)\xi_{0}(y)+b(x)\xi_{1}(y)+\psi(a(x),b(x),\varepsilon)(y)\) is a solution to our problem. The reduction function \(\psi\) is not related to, and not to be confused with, the stream function \(\Psi\). In Section 4.2 we find solutions to leading order, and prove Theorem 1.1.
In Section 5 we prove Theorem 1.4. We show in Section 5.1 that \(\eta\) is monotone on \(x>0\) and \(x<0\), and that the sign of \(V\) can be similarly characterised. In Sections 5.2 and 5.3 we show, given \(h\), \(\omega_{0}\), \(\omega_{1}\), how and where the flow stagnates, and whether \(\eta\) is deflected towards or away from this stagnation. In Section 5.3, we deal with certain critical values, where we have that \(U\) has the same sign in the entire flow, except for a bounded region of space which contains a stagnation point.
In the appendix we have the details of an argument of Kirchgassner which we allude to in Section 4.2.
## 2. Preliminaries
We first introduce some notation and terminology. Sometimes we write a function that depends on, say, \((X,Y,\omega)\). This unsubscripted \(\omega\) should be taken to be equal to \(\omega_{0}\) if \((X,Y)\in\Omega_{0}\), and equal to \(\omega_{1}\) if \((X,Y)\in\Omega_{1}\). For an open set \(\mathcal{U}\), then as usual \(C^{n}(\mathcal{U})\) denotes the set of \(n\)-times continuously differentiable functions with domain \(\mathcal{U}\). By \(C^{n}(\overline{\mathcal{U}})\), we mean the Banach space of functions which have domain \(\mathcal{U}\), and whose derivatives up to order \(n\) exist, can be continuously extended to \(\overline{\mathcal{U}}\), and are bounded. To specify a codomain \(\mathcal{V}\), we use the notations \(C^{n}(\mathcal{U},\mathcal{V})\) and \(C^{n}(\overline{\mathcal{U}},\mathcal{V})\) respectively.
As usual, incompressibility (1.1a) guarantees the existence of a _stream function_\(\Psi\) which is a first integral of the fluid particle motion.
**Lemma 2.1**.: _If \((U,V,\eta)\) solves (1.1), there exists \(\Psi\in C^{2}(\overline{\Omega_{0}})\cap C^{2}(\overline{\Omega_{1}})\cap C^ {1}(\overline{\Omega_{0}\cup\Omega_{1}})\) such that \(\Psi_{X}=-V\), \(\Psi_{Y}=U\), and \(\Psi=0\) on \(Y=h+\eta(X)\)._
Proof.: Define \(\Psi\) as the integral
\[\Psi(X,Y)=\int_{h+\eta(X)}^{Y}U(X,\tilde{Y})\;d\tilde{Y}, \tag{2.1}\]
which clearly vanishes on \(Y=h+\eta(X)\). Using the regularity of \(U\) and \(\eta\), it is straightforward to check that \(\Psi\in C^{0}(\overline{\Omega_{0}\cup\Omega_{1}})\) with \(\Psi_{Y}=U\). Differentiating (2.1) with respect to \(X\) and then integrating by parts using (1.1a), we discover that
\[\Psi_{X}(X,Y)=\int_{h+\eta(X)}^{Y}U_{X}(X,\tilde{Y})\;d\tilde{Y}-U(X,h+\eta(X) )\eta_{X}(X)=-V(X,Y),\]
where the boundary terms at \(Y=h+\eta(X)\) cancel thanks to (1.1e). The regularity \(\Psi\in C^{2}(\overline{\Omega_{0}})\cap C^{2}(\overline{\Omega_{1}})\cap C^{ 1}(\overline{\Omega_{0}\cup\Omega_{1}})\) follows immediately.
One important symmetry this problem has is that it is _reversible_ in \(X\). More precisely, if \((U,V,\eta)\) solves (1.1), then the functions \((\tilde{U},\tilde{V},\tilde{\eta})\) defined by
\[\tilde{U}(X,Y)=U(-X,Y),\quad\tilde{V}(X,Y)=-V(-X,Y),\quad\tilde{\eta}(X)=\eta( -X), \tag{2.2}\]
also solve (1.1). Physically, this corresponds to reflecting the particle motion across the \(Y\) axis and reversing time. Note that the solutions in Theorem 1.1 are invariant under this symmetry.
We next introduce several well-known invariants for (1.1). Since \(X\) plays a time-like role, these are called 'conserved quantities' in what follows.
**Lemma 2.2** (Conserved quantities).: _Let \((U,V,\eta)\) be a solution to (1.1). Then the quantities_
\[Q_{0} =\int_{0}^{h+\eta(X)}U(X,Y)\;dY\] \[Q_{1} =\int_{h+\eta(X)}^{1}U(X,Y)\;dY\] \[S =\int_{0}^{1}\left(\frac{V(X,Y)^{2}-U(X,Y)^{2}}{2}+\omega YU(X,Y) \right)\;dY\]
_are constants independent of \(X\)._
We call \(Q_{0}\) the _mass flux_ of the lower layer and \(Q_{1}\) the mass flux of the upper layer. The _flow force_\(S\) does not play a role in our later arguments, but it is included here for completeness.
Proof.: First consider the mass flux of the lower layer, \(Q_{0}\), and find that
\[\frac{d}{dX}\int_{0}^{h+\eta}U\;dY =\eta_{X}U(X,h+\eta)+\int_{0}^{h+\eta}U_{X}\;dY\] \[=\eta_{X}U(X,h+\eta)-\int_{0}^{h+\eta}V_{Y}\;dY\] \[=\eta_{X}U(X,h+\eta)-V(X,h+\eta)=0.\]
The second equality is due to (1.1a), and the last is due to (1.1e). An almost identical proof shows the conservation of \(Q_{1}\). Now consider the flow force. We see that
\[\frac{dS}{dX} =\frac{d}{dX}\left(\int_{0}^{h+\eta}\left(\frac{V^{2}-U^{2}}{2}+ \omega YU\right)\;dY+\int_{h+\eta}^{1}\left(\frac{V^{2}-U^{2}}{2}+\omega YU \right)\;dY\right)\] \[=\int_{0}^{1}\left(VV_{X}-UU_{X}+\omega YU_{X}\right)\;dY+(\omega _{0}-\omega_{1})(h+\eta)U(X,h+\eta)\eta_{X}\] \[=\int_{0}^{1}\left((U_{Y}-\omega)V+UV_{Y}-\omega YV_{Y}\right)\; dY+(\omega_{0}-\omega_{1})(h+\eta)U(X,h+\eta)\eta_{X},\]
where the last line uses (1.1a) and (1.1b). Performing integration by parts on the last term in the integrand, and using (1.1e) to eliminate the boundary terms, gives us
\[\frac{dS}{dX}=\int_{0}^{1}(VU_{Y}+UV_{Y})\;dY=\int_{0}^{1}(UV)_{Y}\;dY=0.\qed\]
Shear flows, that is, solutions of (1.1) which do not depend on \(X\), play a key role in the analysis. As we are viewing \(X\) as a time-like variable, we call these solutions _equilibria_.
**Lemma 2.3** (Equilibria).: _All solutions to (1.1) with no \(X\) dependence are of the form_
\[U=\omega(Y-h-\eta)+\tilde{c},\quad V=0,\quad\eta\;\mathrm{constant}. \tag{2.3}\]
Proof.: Suppose \((U,V,\eta)\) is an equilibrium solution to (1.1). By (1.1a), (1.1d), and (1.1c), we have that \(V=0\). Furthermore, from the definition of equilibrium, \(\eta\) is a constant. Finally, integrating (1.1b) gives \(U=\omega(Y-h-\eta)+\tilde{c}\), where \(\tilde{c}\) is an arbitrary constant equal to the horizontal velocity at \(y=h+\eta\).
If, as in (1.2), an equilibrium solution has \(\eta=0\), we call it a _trivial_ solution. However, notice that for any general equilibrium of the form in (2.3), \(h\) can be redefined to give an equilibrium of the form in (1.2).
Many studies on steady solutions of the two-dimensional Euler equations assume the existence of a so-called 'vorticity function' \(\gamma\) such that the vorticity \(\omega=U_{Y}-V_{X}\) satisfies \(\omega=\gamma(\Psi)\). This then implies that the stream function \(\Psi\) satisfies the semilinear equation \(\Delta\Psi=\gamma(\Psi)\). Interestingly, as mentioned in Remark 1.3, many of the solutions that we construct in Theorem 1.1 do not possess a vorticity function.
**Proposition 2.4**.: _Let \((U^{*},0,0)\) be the shear solution given by \(U^{*}=\omega(Y-h)+h(1-h)\), and let \((U,V,\eta)\) be a solution to (1.1) with stream function \(\Psi\) satisfying_
\[\|U-U^{*}\|_{L^{\infty}}<\delta,\qquad(1+\|U^{*}\|_{L^{\infty}})\|\eta\|_{L^{ \infty}}<\delta \tag{2.4}\]
_for some sufficiently small \(\delta\) depending only on \(\omega_{0}\) and \(h\). If \(\omega_{0}<1-2h\) or \(\omega_{0}>2-2h\), then there does not exist a single-valued function \(\gamma\) such that \(\omega=\gamma(\Psi)\). If \(1-2h<\omega_{0}<2-2h\), then there does exist such a function._
_Remark 2.5_.: We focus on the shear flow \((U^{*},0,0)\) as it corresponds to \(\varepsilon=0\) in Theorem 1.1. In particular, using (1.4) and (1.5) we see that Proposition 2.4 applies to the solutions \((U^{\varepsilon},V^{\varepsilon},\eta^{\varepsilon})\) in Theorem 1.1, provided \(\omega_{0}\neq 1-2h,2-2h\) and \(\varepsilon\) is sufficiently small. Similar arguments could be made for perturbations of more general shear flows.
Proof of Proposition 2.4.: In what follows all stream functions are defined using (2.1) so that they vanish at their respective interfaces. Thus the stream function corresponding to \((U^{*},0,0)\) is
\[\Psi^{*}=\Psi^{*}(Y)=\tfrac{1}{2}\omega(Y-h)^{2}+h(1-h)(Y-h),\]
and (2.4) implies the estimate
\[\|\Psi^{*}-\Psi\|_{L^{\infty}}<\delta. \tag{2.5}\]
First consider the case \(\omega_{0}<1-2h\). Using (2.5) and \(\omega_{1}=\omega_{0}-1\), we find
\[\Psi(0,1)<\Psi^{*}(1)+\delta=\tfrac{1}{2}(1-h)^{2}(\omega_{0}-1+2h)+\delta<0,\]
provided \(\delta>0\) is sufficiently small, and similarly
\[\Psi(0,0)<\Psi^{*}(0)+\delta=\tfrac{1}{2}h^{2}\omega_{0}-(1-h)h^{2}+\delta<- \tfrac{1}{2}h^{2}+\delta<0.\]
Since \(\Psi(0,h+\eta(0))=0\) by construction, the intermediate value theorem guarantees the existence of \(Y_{1}\in(h+\eta(0),1)\) and \(Y_{0}\in(0,h+\eta(0))\) with \(\Psi(0,Y_{0})=\Psi(0,Y_{1})<0\). As \((0,Y_{0})\in\Omega_{0}\) and \((0,Y_{1})\in\Omega_{1}\), the corresponding vorticities \(\omega(0,Y_{0})=\omega_{0}\) and \(\omega_{1}=\omega(0,Y_{1})\) are distinct, and hence there cannot exist a global vorticity function \(\gamma\) such that \(\omega=\gamma(\Psi)\). The case \(\omega_{0}>2-2h\) follows by an analogous argument.
It remains to consider the case where \(1-2h<\omega_{0}<2-2h\). We claim that \(\Psi<0\) in \(\Omega_{0}\) and \(\Psi>0\) in \(\Omega_{1}\). This will imply that \(\omega=\gamma(\Psi)\) where \(\gamma(t)=\omega_{0}-H(t)\) and \(H\) is the Heaviside step function. For \(|Y-h|\leq\sqrt{\delta}\), these strict signs for \(\Psi\) follow from the fact that \(\Psi(X,h+\eta(X))=0\) together with the uniform lower bound
\[\Psi_{Y}(X,Y)=U(X,Y)>U^{*}(Y)-\delta\geq(1-h)h-\|\omega\|_{L^{\infty}}\sqrt{ \delta}-\delta>0,\]
which holds for \(\delta\) sufficiently small. For \(|Y-h|>\sqrt{\delta}\), suppose first that \(\omega_{0}\geq 1-h\). Then \(\Psi^{*}\) has at most one critical point in \((0,h)\), which is a local minimum, and is strictly increasing on \((h,1)\). Thus, for \(\delta>0\) sufficiently small, we have
\[\max_{\mathbb{R}\times[0,h-\sqrt{\delta}]}\Psi <\max_{[0,h-\sqrt{\delta}]}\Psi^{*}+\delta<\max\{\Psi^{*}(0),\Psi ^{*}(h-\sqrt{\delta})\}+\delta\] \[\leq\max\{-\tfrac{1}{2}h^{2}+\delta,-\sqrt{\delta}h(1-h)+\delta( 2-h)\}<0\]
and similarly
\[\min_{\mathbb{R}\times[h+\sqrt{\delta},1]}\Psi>\min_{[h+\sqrt{\delta},1]}\Psi^{*}-\delta>\min\{\Psi^{*}(h+\sqrt{\delta}),\Psi^{*}(1)\}-\delta\geq\min\{\sqrt{\delta}h(1-h)-\delta(1+h),\tfrac{1}{2}(1-h)^{2}-\delta\}>0.\]
The arguments for \(\omega_{0}\leq 1-h\) are similar but with the role of the two layers reversed, and the claim is proved.
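The first part of the argument can be checked numerically in a few lines. The sketch below is ours; the values \(h=0.4\) and \(\omega_{0}=-0.5<1-2h\) are illustrative assumptions. It shows that the shear stream function \(\Psi^{*}\) is negative at both walls while vanishing at \(Y=h\), so the same negative values are attained in each layer and \(\omega\) cannot be a single-valued function of \(\Psi^{*}\).

```python
import numpy as np

h, om0 = 0.4, -0.5        # illustrative values with om0 < 1 - 2h = 0.2
om1 = om0 - 1.0

def Psi_star(Y):
    """Stream function of the background shear flow (1.2) with c~ = h(1-h)."""
    om = np.where(Y <= h, om0, om1)
    return 0.5*om*(Y - h)**2 + h*(1.0 - h)*(Y - h)

print(Psi_star(np.array([0.0, h, 1.0])))
# approximately [-0.136, 0., -0.126]: Psi* is negative at both walls and zero at
# Y = h, so small negative values are attained in each layer, where the
# vorticities om0 and om1 differ; hence no single-valued gamma with om = gamma(Psi*).
```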
## 3. Reformulation
### Pointwise flattening with a Mobius map
One of main challenges of (1.1) is that it is a free-boundary problem, i.e., the domains \(\Omega_{0}\) and \(\Omega_{1}\) are unknowns. To overcome this, we introduce new coordinates to map the problem onto a known domain, at the cost of making the PDE much more nonlinear.
In particular, we seek a coordinate transformation which flattens interface to a straight line of constant height \(h\), pointwise in \(X\), and fixes the upper and lower boundaries. Thus the new independent variables \(x=x(X,Y)\) and \(y=y(X,Y)\) should satisfy
\[x(X,Y)=X,\quad y(X,0)=0,\quad y(X,h+\eta(X))=h,\quad y(X,1)=1.\]
The obvious choice is to let \(Y\) depend on \(y\) in a piecewise linear fashion1.

Footnote 1: Thus the new independent variables would be

\[\tilde{x}=X,\quad\tilde{y}=\begin{cases}\frac{hY}{h+\eta(X)}&\quad\text{for $0\leq Y\leq h+\eta(X)$}\\ \frac{(1-h)Y-\eta(X)}{1-h-\eta(X)}&\quad\text{for $h+\eta(X)\leq Y\leq 1$},\end{cases}\]

together with \(\tilde{u}(x,y)=U(X,Y)\) and \(\tilde{v}(x,y)=V(X,Y)\).

Instead, we flatten the interface with a Mobius map, which is a smooth function of \(Y\) on the whole channel. We take

\[x=X, \tag{3.1a}\]
\[y=\frac{h(1-h-\eta(X))\,Y}{(h+\eta(X))(1-h)-\eta(X)Y}, \tag{3.1b}\]

and define new dependent variables \(u\) and \(v\) by

\[u(x,y)=Y_{y}\,U(X,Y)-\omega(y-h)-c \tag{3.1c}\]

and

\[v(x,y)=V(X,Y), \tag{3.1d}\]
where \(c\) is a parameter. Notice that \(u=v=\eta=0\) always gives a solution, in particular the trivial solution from (1.2) with \(\tilde{c}=c\).
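As a small sanity check (ours, not part of the analysis), the sketch below evaluates the map in the explicit Mobius form written in (3.1b) above, confirms that it sends \(Y=0,\,h+\eta,\,1\) to \(y=0,\,h,\,1\), and compares a centred finite difference with the formula for \(y_{Y}\) recorded in (3.2) below. The values of \(h\) and \(\eta\) are arbitrary illustrative choices.

```python
import numpy as np

h, eta = 0.4, 0.07     # arbitrary interface height at this fixed x

def y_of_Y(Y):
    """The Mobius map (3.1b) at fixed x, sending the physical height Y to y."""
    return h*(1.0 - h - eta)*Y / ((h + eta)*(1.0 - h) - eta*Y)

def y_Y(y):
    """The derivative y_Y as recorded in (3.2)."""
    return (eta*(y - h) + h*(1.0 - h))**2 / (h*(1.0 - h)*(h + eta)*(1.0 - h - eta))

print(y_of_Y(np.array([0.0, h + eta, 1.0])))          # -> [0, h, 1]
Y0, dY = 0.63, 1e-6
fd = (y_of_Y(Y0 + dY) - y_of_Y(Y0 - dY)) / (2*dY)     # finite-difference dy/dY
print(fd, y_Y(y_of_Y(Y0)))                            # the two values agree
```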
We now differentiate (3.1b) to get
\[y_{X} =-\frac{\eta_{x}y(1-y)}{(h+\eta)(1-(h+\eta))}, y_{Y} =\frac{(\eta y-\eta h+h(1-h))^{2}}{h(1-h)(h+\eta)(1-(h+\eta))},\] \[y_{XY} =\eta_{x}\frac{(\eta y-\eta h+h(1-h))^{2}(2y-1)}{h(1-h)(h+\eta)^ {2}(1-(h+\eta))^{2}}, y_{YY} =\frac{2\eta(\eta y-\eta h+h(1-h))^{3}}{h^{2}(1-h)^{2}(h+\eta)^{2} (1-(h+\eta))^{2}}\, \tag{3.2}\]
and rearrange and differentiate (3.1c) and (3.1d) to find
\[U_{X} =y_{XY}(u+\omega(y-h)+c)+y_{Y}u_{x}+y_{X}y_{Y}(u_{y}+\omega), V_{X} =v_{x}+y_{X}v_{y}, \tag{3.3}\] \[U_{Y} =y_{YY}(u+\omega(y-h)+c)+y_{Y}^{2}(u_{y}+\omega), V_{Y} =y_{Y}v_{y}.\]
Substituting (3.2) and (3.3) into (1.1a)-(1.1b) then gives
\[0 =\eta_{x}(2y-1)(u+\omega(y-h)+c)+(h+\eta)(1-(h+\eta))u_{x}-\eta_{ x}y(1-y)(u_{y}+\omega)\] \[\qquad+(h+\eta)(1-(h+\eta))v_{y}, \tag{3.4a}\] \[\omega =\frac{2\eta(\eta y-\eta h+h(1-h))^{3}(u+\omega(y-h)+c)+(\eta y- \eta h+h(1-h))^{4}(u_{y}+\omega)}{h^{2}(1-h)^{2}(h+\eta)^{2}(1-(h+\eta))^{2}}\] \[\qquad-v_{x}+\frac{\eta_{x}y(1-y)}{(h+\eta)(1-(h+\eta))}v_{y}, \tag{3.4b}\]
for \(y\in(0,h)\cup(h,1)\), while the boundary conditions (1.1c)-(1.1e) become
\[v =0 \text{on }y=1, \tag{3.4c}\] \[v =0 \text{on }y=0,\] (3.4d) \[\eta_{x} =\frac{(h+\eta)(1-(h+\eta))v}{h(1-h)(u+c)} \text{on }y=h, \tag{3.4e}\]
where \(\eta\in C^{2}(\mathbb{R})\), and \(u,v\in C^{1}(\overline{\mathbb{R}\times(0,h)})\cap C^{1}(\overline{\mathbb{R} \times(0,h)})\cap C^{0}\big{(}\overline{\mathbb{R}\times(0,1)}\big{)}\).
### Properties of this coordinate change
One benefit of this change of coordinates is that there are conserved quantities related to the mass fluxes \(Q_{0},Q_{1}\) in Lemma 2.2 which have a particularly simple form. We let
\[q_{0}=\int_{0}^{h}u(x,y)\;dy,\qquad q_{1}=\int_{h}^{1}u(x,y)\;dy,\]
and call \(q_{0}\) the _pseudoflux_ of the lower layer, and \(q_{1}\) the pseudoflux of the upper layer. Notice that for solutions, these are conserved quantities, since
\[\frac{d}{dx}q_{0} =\frac{d}{dx}\int_{0}^{h+\eta(X)}U(X,Y)Y_{y}y_{Y}(X,Y)\;dY+\frac{ d}{dx}\left(\frac{h^{2}\omega_{0}}{2}-hc\right)\] \[=\frac{d}{dX}\int_{0}^{h+\eta(X)}U(X,Y)\;dY=0,\]
and similarly for \(q_{1}\), where in the last step we have used Lemma 2.2.
We will ultimately restrict our attention to solutions with vanishing pseudofluxes, i.e. to solutions satisfying the constraints
\[\int_{0}^{h}u\;dy =0 \tag{3.5}\] \[\int_{h}^{1}u\;dy =0. \tag{3.6}\]
The shear flows from (2.3) with \(\tilde{c}=c\) satisfy (3.5)-(3.6). Furthermore, since the pseudofluxes are conserved quantities, the same is true for any flows which tend to one of these shear flows as \(x\to-\infty\) or \(x\to+\infty\), in particular the solitary waves constructed in Theorem 1.1. For a general solution \((U,V,\eta)\) in the original variables, without well-defined limits as \(x\to\pm\infty\), we can always choose the value of \(c\) in (3.1c) so that one of (3.5) and (3.6) is satisfied, but imposing both is an additional restriction.
While we do not need the flow force \(S\) from Lemma 2.2 in our arguments, we nevertheless record its expression in terms of the new variables for completeness,
\[S=\int_{0}^{1}\left(\frac{1}{2}Y_{y}v^{2}-\frac{(u-\omega(y-h)+c)^{2}}{2Y_{y} }+\frac{\omega y(h+\eta)(1-h)}{\eta(y-h)+h(1-h)}(u-\omega(y-h)+c)\right)\,dy.\]
Note that while the pseudofluxes are related to, but not exactly equal to, the mass fluxes, this is exactly the flow force, just written in the new coordinates.
The reversibility in (2.2) is also preserved. That is, if \((u,v,\eta;c)\) satisfies (3.4)-(3.6), and we define
\[\check{u}(x,y)=u(-x,y),\quad\check{v}(x,y)=-v(-x,y),\quad\check{\eta}(x)=\eta( -x), \tag{3.7}\]
then \((\tilde{u},\tilde{v},\tilde{\eta};c)\) also satisfy (3.4)-(3.6).
Lastly, let us describe the \(x\)-independent solutions. In physical coordinates, with fixed \(\omega\) and \(h\), (1.1) has a two-parameter family of equilibria given by (2.3). However, once we enforce the constraints (3.5)-(3.6) on the pseudofluxes, we are left with at most three equilibria.
**Lemma 3.1**.: _For fixed \(h,\omega,c\), the number of equilibrium solutions of (3.4)-(3.6) is given by the number of real roots \(\eta\) of_
\[\eta(\eta^{2}+\theta\eta+2\varepsilon)=0, \tag{3.8}\]
_where \(\varepsilon=c-h(1-h)\) and \(\theta\) is defined in (1.3)._
_Remark 3.2_.: There is always one equilibrium with \(\eta=0\). Supposing as in Theorem 1.1 that \(\theta\neq 0\), there is one non-zero equilibrium when \(\varepsilon=0\), while for \(0<\varepsilon<\frac{1}{8}\theta^{2}\) there are two non-zero equilibria.
Proof of Lemma 3.1.: By Lemma 2.3, any equilibrium solution of (3.4) must correspond to an equilibrium solution (2.3) of (1.1). Applying the coordinate transformation (3.1) we deduce that \(v=0\), \(\eta\) is constant, and
\[u=Y_{y}\omega(Y-h-\eta)+\tilde{c}Y_{y}-\omega(y-h)-c. \tag{3.9}\]
Enforcing (3.5)-(3.6), i.e., that the pseudofluxes are zero, yields the system of algebraic equations
\[0 =-\tfrac{1}{2}\omega_{0}(h+\eta)^{2}+\tilde{c}(h+\eta)+\tfrac{1} {2}\omega_{0}h^{2}-hc, \tag{3.10a}\] \[0 =\tfrac{1}{2}\omega_{1}(1-h-\eta)^{2}+\tilde{c}(1-h-\eta)- \tfrac{1}{2}\omega_{1}(1-h)^{2}-c(1-h) \tag{3.10b}\]
for \(\eta\in(-h,1-h)\) and \(\tilde{c}\in\mathbb{R}\). Either equation can be uniquely solved for \(\tilde{c}\) as a function of \(\eta\), and eliminating \(\tilde{c}\) yields (3.8) as desired.
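Remark 3.2 can be checked directly from (3.8): the equilibria are the real roots of \(\eta(\eta^{2}+\theta\eta+2\varepsilon)\). The minimal numerical sketch below is ours, with illustrative values of \(\theta\) and \(\varepsilon\).

```python
import numpy as np

def equilibria(theta, eps):
    """Real roots eta of eta*(eta**2 + theta*eta + 2*eps) = 0, cf. (3.8)."""
    roots = np.roots([1.0, theta, 2.0*eps, 0.0])
    return np.sort(roots[np.abs(roots.imag) < 1e-12].real)

print(equilibria(0.7, 0.00))   # eta = -theta and the double root eta = 0
print(equilibria(0.7, 0.01))   # three distinct equilibria for 0 < eps < theta^2/8
print(equilibria(0.7, 0.10))   # only eta = 0 when eps > theta^2/8
```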
### The evolution equation
We now reformulate (3.4) as an evolution equation in \(x\). The first step is to algebraically solve (3.4) for the derivatives \(u_{x}\), \(v_{x}\), and \(\eta_{x}\). Solving the boundary condition (3.4e) for \(\eta_{x}\) gives us
\[\eta_{x}=\frac{(h+\eta)(1-(h+\eta))v(x,h)}{h(1-h)(u(x,h)+c)}. \tag{3.11a}\]

Solving (3.4a) for \(u_{x}\), and substituting (3.11a) to eliminate \(\eta_{x}\), we find

\[u_{x}=\frac{v(x,h)}{h(1-h)(u(x,h)+c)}\Big((1-2y)(u+\omega(y-h)+c)+y(1-y)(u_{y}+\omega)\Big)-v_{y}, \tag{3.11b}\]

while similarly rearranging (3.4b) yields

\[v_{x}=\frac{2\eta(\eta y-\eta h+h(1-h))^{3}(u+\omega(y-h)+c)+(\eta y-\eta h+h(1-h))^{4}(u_{y}+\omega)}{h^{2}(1-h)^{2}(h+\eta)^{2}(1-(h+\eta))^{2}}+\frac{y(1-y)}{h(1-h)(u(x,h)+c)}v(x,h)v_{y}-\omega. \tag{3.11c}\]
We then abbreviate (3.11) as
\[\frac{\partial}{\partial x}\begin{pmatrix}u\\ v\\ \eta\end{pmatrix}=\mathcal{F}(u,v,\eta;c). \tag{3.12}\]
Since the formula for \(\mathcal{F}\) does not contain any \(x\) derivatives, we can think of \(x\) as being fixed, meaning \(u\) and \(v\) are functions of \(y\) only, and \(\eta\) is just a real number.
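To make this change of viewpoint concrete, here is a rough finite-difference transcription of (3.11a)-(3.11c) on a uniform \(y\)-grid. It is purely illustrative and ours; the analysis below works with the exact operator on the function spaces defined next, and all numerical choices in the snippet are our own assumptions.

```python
import numpy as np

def calF(u, v, eta, c, h, om0, om1, y):
    """Crude finite-difference transcription of (3.11a)-(3.11c) on a y-grid;
    u and v are arrays sampled on y, eta is a number."""
    om = np.where(y < h, om0, om1)
    uy = np.gradient(u, y)               # rough derivative; the jump at y = h is smeared
    vy = np.gradient(v, y)
    uh = np.interp(h, y, u)              # trace u(x, h)
    vh = np.interp(h, y, v)              # trace v(x, h)
    eta_x = (h + eta)*(1 - (h + eta))*vh / (h*(1 - h)*(uh + c))                  # (3.11a)
    u_x = vh/(h*(1 - h)*(uh + c)) * ((1 - 2*y)*(u + om*(y - h) + c)
                                     + y*(1 - y)*(uy + om)) - vy                 # (3.11b)
    P = eta*(y - h) + h*(1 - h)
    v_x = (2*eta*P**3*(u + om*(y - h) + c) + P**4*(uy + om)) \
          / (h**2*(1 - h)**2*(h + eta)**2*(1 - (h + eta))**2) \
          + y*(1 - y)/(h*(1 - h)*(uh + c))*vh*vy - om                            # (3.11c)
    return u_x, v_x, eta_x

# The zero state is an equilibrium for any c, as noted above.
y = np.linspace(0.0, 1.0, 401)
zero = np.zeros_like(y)
ux, vx, ex = calF(zero, zero, 0.0, 0.3, 0.4, 0.5, -0.5, y)
print(np.max(np.abs(ux)), np.max(np.abs(vx)), ex)   # all zero
```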
Another way of thinking of this change of viewpoint is that \(\mathcal{F}\) is a map between Banach spaces of functions in \(y\). We now define these function spaces: let
\[\mathcal{X} =\bigg{\{}(u,v,\eta)\in L^{2}((0,1))\times L^{2}((0,1))\times \mathbb{R}\ \bigg{|}\ \int_{0}^{h}u(y)\ dy=\int_{h}^{1}u(y)\ dy=0\bigg{\}} \tag{3.13}\] \[\mathcal{W} =\bigg{\{}(u,v,\eta)\in H^{1}((0,1))\times H^{1}_{0}((0,1)) \times\mathbb{R}\ \bigg{|}\ \int_{0}^{h}u(y)\ dy=\int_{h}^{1}u(y)\ dy=0\bigg{\}}\] \[\mathcal{U} =\big{\{}(u,v,\eta,c)\ \mid\ (u,v,\eta)\in\mathcal{W},\ u(h)\neq-c, \ h(1-h)u(h)\neq-c(h+\eta)(1-h-\eta)\big{\}}.\]
More plainly, \(\mathcal{U}\) is the subset of \(\mathcal{W}\times\mathbb{R}\) in which none of the denominators that appear in (3.11) vanish. We see that \(\mathcal{U}\) is open and for any \(c_{*}\neq 0\), \(\mathcal{U}\) contains \((0,0,0,c_{*})\).
**Lemma 3.3**.: \(\mathcal{F}\colon\mathcal{U}\to\mathcal{X}\) _is analytic._
Proof.: Fix \((u,v,\eta,c)\in\mathcal{U}\). Then \(u,v\in H^{1}((0,1))\subset C^{0}([0,1])\), and so the derivatives \(u_{y}\) and \(v_{y}\) are well-defined, as are the pointwise values \(u(h)\), \(v(0)\), \(v(1)\), and \(v(h)\). Comparing with (3.11), we conclude that \(\mathcal{F}(u,v,\eta;c)\) is a well-defined element of \(L^{2}((0,1))\times L^{2}((0,1))\times\mathbb{R}\). To verify the integral conditions in the definition of \(\mathcal{X}\), let \(\mathcal{F}(u,v,\eta;c)=(f,g,\alpha)\). For the first integral condition, we see that
\[\int_{0}^{h}f\ dy =\int_{0}^{h}\bigg{(}\frac{v(h)\big{(}(1-2y)(u+\omega(y-h)+c)+y(1 -y)(u_{y}+\omega)\big{)}}{h(1-h)(u(h)+c)}-v_{y}\bigg{)}\ dy\] \[=\frac{v(h)}{h(1-h)(u(h)+c)}\int_{0}^{h}\frac{d}{dy}\Big{(}y(1-y )(u+\omega(y-h)+c)\Big{)}\ dy-\int_{0}^{h}v_{y}\ dy\] \[=v(h)-v(h)=0.\]
The second integral condition follows by a similar argument.
Now we show analyticity. Pointwise multiplication by a bounded function, and differentiation are both linear and bounded from \(\mathcal{W}\) to \(\mathcal{X}\), therefore analytic. Similarly, the relevant trace maps are bounded and linear from \(\mathcal{U}\to\mathbb{C}\). As \(\mathcal{F}\) is a composition of these analytic maps with appropriate rational functions, we conclude that it is also analytic.
We now separate \(\mathcal{F}\) into linear and non-linear parts around \((u,v,\eta;c)=(0,0,0;c_{*})\) with \(c=c_{*}\neq 0\), i.e., around a shear solution which does not stagnate on the interface. More precisely, we define a linear mapping \(L\) and nonlinear remainder \(\mathcal{R}\) by
\[L=(D_{u,v,\eta}\mathcal{F})(0,0,0,c_{*}),\quad\mathcal{R}(u,v,\eta,\varepsilon )=\mathcal{F}(u,v,\eta,c_{*}+\varepsilon)-L(u,v,\eta). \tag{3.14}\]
For the moment the value of \(c_{*}\) is unspecified, but we will end up focusing on the case \(c_{*}=h(1-h)\). Calculating \(L\) explicitly, we find
\[L\begin{pmatrix}u\\ v\\ \eta\end{pmatrix}=\begin{pmatrix}p(y)v(h)-v_{y}\\ u_{y}-c_{*}\eta p^{\prime}(y)\\ v(h)/c_{*}\end{pmatrix}\]
where the coefficient function \(p\) is defined by
\[p(y)=\frac{(1-2y)(\omega(y-h)+c_{*})+y(1-y)\omega}{h(1-h)c_{*}}.\]
As \(\mathcal{F}\) is analytic by Lemma 3.3, \(L\colon\mathcal{W}\to\mathcal{X}\) is a bounded linear operator while \(\mathcal{R}\colon\mathcal{U}\to\mathcal{X}\) is analytic.
## 4. The centre manifold
We now state a centre manifold theorem. This is a result due to Mielke [44]; see Theorem 3.3 of [27], and Theorem 2.1 of [45]. It considers the general differential equation
\[\frac{dw}{dx}=Lw+\mathcal{R}(w,\varepsilon). \tag{4.1a}\]

In our application, the variable \(w(x,y)\) corresponds to \((u(x,y),v(x,y),\eta(x))\), and \(\varepsilon=c-c_{*}\).
**Theorem 4.1** (Centre manifold theorem).: _Let \(\mathcal{X},\mathcal{W}\) be Hilbert spaces with \(\mathcal{W}\) continuously embedded in \(\mathcal{X}\). Suppose a bounded linear operator \(L\colon\mathcal{W}\to\mathcal{X}\) has spectrum \(\sigma(L)\), and satisfies the following three hypotheses:_
1. _The centre spectrum_ \(\sigma_{0}=\{z\in\sigma(L)\mid\operatorname{Re}(z)=0\}\) _consists only of finitely many eigenvalues, all of which have finite algebraic multiplicity._
2. _There exist_ \(R>0\)_,_ \(C>0\)_, such that for all_ \(k\in\mathbb{R}\) _with_ \(|k|>R\)_, we have that_ \(L-ikI\colon\mathcal{X}\to\mathcal{X}\) _is invertible, and satisfies_ \[|k|\|(u,v,\eta)\|_{\mathcal{X}}\leq C\|(L-ikI)(u,v,\eta)\|_{\mathcal{X}}.\]
3. _There exists_ \(\delta>0\) _such that there are no_ \(z\in\sigma(L)\) _satisfying_ \(0<|\operatorname{Re}(z)|<\delta\)_._
_Suppose we also have a nonlinear function \(\mathcal{R}\) and an integer \(N\geq 2\), such that there exists a neighbourhood \(\mathcal{U}\subset\mathcal{W}\times\mathbb{R}\) of \(0\) such that \(\mathcal{R}\in C^{N}(\mathcal{U},\mathcal{X})\), and that_
\[\mathcal{R}(0,0)=0,\quad D_{w}\mathcal{R}(0,0)=0.\]
_Define \(\mathcal{E}_{0}\subset\mathcal{W}\) to be the generalised eigenspace corresponding to the purely imaginary eigenvalues of \(L\). Let \(P_{0}\) be a continuous projection onto \(\mathcal{E}_{0}\) which commutes with \(L\), and whose kernel is called \(\mathcal{E}_{h}\)._
_Then there exists a map \(\psi\in C^{N}(\mathcal{E}_{0}\times\mathbb{R},\mathcal{E}_{h})\) with_
\[\psi(0,0)=0,\quad D_{w}\psi(0,0)=0, \tag{4.1b}\]
_and a neighbourhood \(\mathcal{V}_{w}\times\mathcal{V}_{\varepsilon}\) of \((0,0)\) in \(\mathcal{E}_{0}\times\mathbb{R}\), such that for \(\varepsilon\in\mathcal{V}_{\varepsilon}\), the manifold_
\[\mathcal{M}_{0}(\varepsilon)=\{w_{0}+\psi(w_{0},\varepsilon)\mid w_{0}\in \mathcal{V}_{w}\} \tag{4.1c}\]
_has the following properties:_
1. \(\mathcal{M}_{0}(\varepsilon)\) _is locally invariant, i.e., if_ \(w\) _is a solution of (_4.1a_) satisfying_ \(w(0)\in\mathcal{M}_{0}(\varepsilon)\cap\mathcal{V}_{w}\) _and_ \(w(x)\in\mathcal{V}_{w}\) _for all_ \(x\in[0,\hat{x}]\)_, then_ \(w(x)\in\mathcal{M}_{0}(\varepsilon)\) _for all_ \(x\in[0,\hat{x}]\)
2. \(\mathcal{M}_{0}(\varepsilon)\) _contains the set of bounded solutions of (_4.1a_) staying in_ \(\mathcal{V}_{w}\) _for all_ \(x\in\mathbb{R}\)_, i.e., if_ \(w\) _is a solution of (_4.1a_) satisfying_ \(w(x)\in\mathcal{V}_{w}\) _for all_ \(x\in\mathbb{R}\)_, then_ \(w(0)\in\mathcal{M}_{0}(\varepsilon)\)_._
3. _Suppose_ \(w_{0}\) _satisfies the reduced equation_ \[\frac{dw_{0}}{dx}=P_{0}\big{(}Lw_{0}+L\psi(w_{0},\varepsilon)+\mathcal{R}(w_{0 }+\psi(w_{0},\varepsilon),\varepsilon)\big{)},\] (4.1d) _and_ \(w_{0}(x)\in\mathcal{V}_{w}\cap\mathcal{E}_{0}\) _for all_ \(x\)_. Then_ \(w_{0}+\psi(w_{0},\varepsilon)\) _is a solution to the full problem, i.e., satisfies (_4.1a_)._
### Analysis of the linearised operator
We show that the linear operator \(L\) defined in (3.14) satisfies Hypotheses (i)-(iii) of Theorem 4.1.
Finding the spectrum of \(L\) is made easier by the fact that \(L\) is Fredholm. To show this, we first consider the slightly simpler operator \(\tilde{L}\) given by
\[\tilde{L}\begin{pmatrix}u\\ v\\ \eta\end{pmatrix}=L\begin{pmatrix}u\\ v\\ \eta\end{pmatrix}+\begin{pmatrix}0\\ (c_{*}p^{\prime}-1)\eta\\ 0\end{pmatrix}.\]
**Lemma 4.2**.: \(\tilde{L}:\mathcal{W}\to\mathcal{X}\) _is invertible._
Proof.: First we show \(\tilde{L}\) has trivial kernel. This is not a particularly complicated calculation, but variants of it will appear frequently, so this will serve as a simple example. Suppose \((u,v,\eta)\in\mathcal{W}\) is in the kernel of \(\tilde{L}\). This means we seek continuous functions of \(y\), namely \(u\) and \(v\), and a real number \(\eta\) satisfying
\[p(y)v(h)-v_{y} =0 \text{ for }y\in(0,h)\cup(h,1) \tag{4.2a}\] \[u_{y}-\eta =0 \text{ for }y\in(0,h)\cup(h,1)\] (4.2b) \[\frac{v(h)}{c_{*}} =0\] (4.2c) \[v(0)=v(1) =0\] (4.2d) \[\int_{0}^{h}u=\int_{h}^{1}u =0, \tag{4.2e}\]
where (4.2a)-(4.2c) are just \(\tilde{L}(u,v,\eta)=0\) rewritten while (4.2d)-(4.2e) and the continuity of \(u\) and \(v\) are imposed by \((u,v,\eta)\in\mathcal{W}\).
The equations (4.2c) and (4.2a) imply that \(v\) is constant on each of \((0,h)\) and \((h,1)\), so then (4.2d) gives that \(v=0\). Solving (4.2b) and appealing to the continuity of \(u\) at \(h\) yields
\[u=\begin{cases}\eta y+C_{0}&0\leq y\leq h\\ \eta y+C_{0}&h\leq y\leq 1,\end{cases}\]
for some constant \(C_{0}\). Inserting this into the constraint (4.2e) yields \(\eta=C_{0}=0\) and hence \(u=0\), so that \((u,v,\eta)=(0,0,0)\) as desired.
Notice also that \(\tilde{L}\) is surjective. In particular, a direct calculation, which we leave to the reader, shows that the solution to \(\tilde{L}(u,v,\eta)=(f,g,\alpha)\) is
\[u(y) =\left(\frac{2}{h}\int_{0}^{h}G(\tilde{y})\;d\tilde{y}-\frac{2}{1- h}\int_{h}^{1}G(\tilde{y})\;d\tilde{y}\right)(y-h)\] \[\qquad+G(y)-\frac{1-h}{h}\int_{0}^{h}G(\tilde{y})\;d\tilde{y}- \frac{h}{1-h}\int_{h}^{1}G(\tilde{y})\;d\tilde{y}\] \[v(y) =\int_{0}^{y}\alpha c_{*}p(\tilde{y})-f(\tilde{y})\;d\tilde{y}\] \[\eta =\frac{2}{h}\int_{0}^{h}G(\tilde{y})\;d\tilde{y}-\frac{2}{1-h} \int_{h}^{1}G(\tilde{y})\;d\tilde{y}\qquad\text{where}\qquad G(y)=\int_{0}^{y} g(\tilde{y})\;d\tilde{y}.\]
It can also be shown straightforwardly that this \((u,v,\eta)\in\mathcal{W}\). Therefore, \(\tilde{L}\) is invertible as desired.
**Corollary 4.3**.: _The spectrum of \(L\) consists only of eigenvalues._
Proof.: We have just shown \(\tilde{L}\) is invertible, and hence in particular that it is Fredholm with index \(0\). Since \(L-\tilde{L}\) has finite-dimensional range, and hence is a compact operator, \(L\) is also Fredholm of index \(0\). Similarly, \(L-zI\) is Fredholm of index \(0\) for all \(z\in\mathbb{C}\). Therefore \(L-zI\) is injective if and only if it is surjective. This means the spectrum of \(L\) is exactly those \(z\in\mathbb{C}\) for which \(L-zI\) has non-trivial kernel.
We are now ready to verify Hypothesis (i) of Theorem 4.1. Writing \(z=ik\), we ask for which \(k\in\mathbb{C}\) the operator \(L-ikI\) has non-trivial kernel. The answer turns out to be largely captured by the _dispersion relation_:
\[\frac{1}{c_{*}} =\mathfrak{d}(k), \tag{4.3}\] \[\text{where}\quad\;\;\mathfrak{d}(k) =k\big{(}\coth(kh)+\coth((1-h)k)\big{)}.\]
However, not all eigenvalues are given by solutions to (4.3). If
\[\sin(hz)=\sin((1-h)z)=0,\ z\neq 0, \tag{4.4}\]
then \(L-zI\) has non-trivial kernel, no matter the values of \(c_{*},\omega_{0},\omega_{1}\). We are primarily concerned with (4.3) for two reasons. Firstly, (4.4) is very rarely satisfied; in particular, it requires \(h\) to be rational, which generically is not true. Secondly, even if there exists a \(z_{*}\) satisfying (4.4), this does not contradict our hypotheses. This is because this \(z_{*}\) must be real with \(|z_{*}|>\pi\), and the hypotheses are only concerned with eigenvalues on, or arbitrarily close to, the imaginary axis.
**Lemma 4.4**.: _If \(k\in\mathbb{C}\setminus\{0\}\) and \(L-ikI\) has non-trivial kernel, then \(k\) satisfies (4.3) or (4.4)._
Proof.: Suppose \((u,v,\eta)\neq 0\) solves \((L-ikI)(u,v,\eta)=0\). Since \(v_{y}=v(h)p(y)-iku\), and \(u\) is in \(H^{1}\), we have that \(v\) is twice differentiable. Substitution yields the second order ODE \(v_{yy}-k^{2}v=0\), which has general solution
\[v=\begin{cases}A\sinh(ky)&\text{for}\quad 0\leq y\leq h\\ B\sinh(k(1-y))&\text{for}\quad h\leq y\leq 1,\end{cases} \tag{4.5a}\]
for some constants \(A\) and \(B\). Therefore, inserting the formula for \(v\) into the first component of \((L-ikI)(u,v,\eta)=0\) yields
\[u=\begin{cases}c_{*}\eta p+iA\cosh(ky)&\text{for}\quad 0\leq y\leq h\\ c_{*}\eta p-iB\cosh(k(1-y))&\text{for}\quad h\leq y\leq 1.\end{cases} \tag{4.5b}\]
The conditions on \(v(h)\) and the continuity of \(u\) give the equations
\[A\sinh(kh) =ikc_{*}\eta \tag{4.6a}\] \[B\sinh(k(1-h)) =ikc_{*}\eta\] (4.6b) \[A\cosh(kh)+B\cosh(k(1-h)) =i\eta. \tag{4.6c}\]
Suppose first that \(\eta=0\). By (4.6), \(A=0\) if and only if \(B=0\). Since we want a non-trivial solution, we must have \(A\neq 0\) and \(B\neq 0\). Therefore \(ikh=n\pi\) and \(ik(1-h)=m\pi\) for some \(m,n\in\mathbb{N}\), meaning \(h\) must be rational, and \(ik=(m+n)\pi\). By assumption \(k\neq 0\), so we infer \(m+n\neq 0\). Combining this with (4.6c) gives \((-1)^{n}A+(-1)^{m}B=0\). Together these give us that
\[v=A\sinh(ky)\quad\text{for }y\in[0,1].\]
Therefore, we have a one-dimensional kernel spanned by
\[u=\cos(iky)\qquad v=-\sin(iky)\qquad\eta=0.\]
These are precisely the eigenvalues which correspond to (4.4), rather than (4.3).
Now suppose \(\eta\neq 0\), and \((u,v,\eta)\neq 0\) solves \((L-ikI)(u,v,\eta)=0\). Examining (4.6a) and (4.6b), we see that \(\sinh(kh)\) and \(\sinh(k(1-h))\) are both non-zero, so (4.6a) and (4.6b) can be solved for \(A\) and \(B\). Substituting these values into (4.6c) yields (4.3).
It is interesting to note that \(\mathfrak{d}\) is a meromorphic function of \(k\) on \(\mathbb{C}\), with a removable singularity at \(k=0\). Using this to evaluate (4.3) at \(k=0\), and then solving for \(c_{*}\), yields \(c_{*}=h(1-h)\). Notice that \(h(1-h)\) is never \(0\), so it is a valid value for \(c_{*}\).
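For the reader's convenience, the elementary limit behind this value is

\[\lim_{k\to 0}\mathfrak{d}(k)=\lim_{k\to 0}\big(k\coth(kh)+k\coth((1-h)k)\big)=\frac{1}{h}+\frac{1}{1-h}=\frac{1}{h(1-h)},\]

so that (4.3) evaluated at \(k=0\) reads \(1/c_{*}=1/(h(1-h))\).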
**Lemma 4.5**.: \(L\) _has an eigenvalue of \(0\) if and only if \(c_{*}=h(1-h)\). Furthermore, in this case, \(0\) is the only purely imaginary eigenvalue, and has algebraic multiplicity two._
Proof.: We seek solutions to \(L(u,v,\eta)=0\). It can be easily shown using the equations that come directly from \(L(u,v,\eta)=0\), the conditions on \(v\), and the integral conditions on \(u\), that if a non-trivial kernel exists, it must be spanned by a vector of the form
\[u=u_{*} =\begin{cases}c_{*}\left(p(y)-h^{-1}\right)&\text{for}\quad 0\leq y \leq h\\ c_{*}\left(p(y)+(1-h)^{-1}\right)&\text{for}\quad h\leq y\leq 1\end{cases} \tag{4.7}\] \[v =0\] (4.8) \[\eta =1. \tag{4.9}\]
The choice \(c_{*}=h(1-h)\) then guarantees \(p\) is such that \(u\) is continuous at \(h\), so that we indeed have a one-dimensional kernel.
We now show that there are no other purely imaginary eigenvalues. Lemma 4.4 means that this is equivalent to showing that (4.3) has no non-zero real roots. We differentiate
\(\mathfrak{d}\) with respect to \(k\), and see
\[\mathfrak{d}^{\prime}(k)=\frac{\sinh(2hk)-2hk}{2\sinh^{2}(hk)}+\frac{\sinh(2(1-h)k )-2(1-h)k}{2\sinh^{2}((1-h)k)}.\]
Therefore \(\mathfrak{d}(k)\) is strictly decreasing for \(k<0\) and strictly increasing for \(k>0\), so it achieves its minimum value only at \(k=0\). Hence, for \(k\in\mathbb{R}\setminus\{0\}\), we have
\[\mathfrak{d}(k)>\frac{1}{c_{*}}.\]
Therefore (4.3) has no nonzero real roots, thus \(0\) is the only purely imaginary eigenvalue.
We now seek second order generalised eigenvectors, i.e., solutions to \(L(u,v,\eta)=(u_{*},0,1)\). Without loss of generality, we can take \(\eta=0\). Therefore we need to solve
\[c_{*}p-v_{y} =u_{*}\] \[u_{y} =0\] \[v(0) =v(1)=0\] \[v(h) =c_{*}\] \[\int_{0}^{h}u\;dy =\int_{h}^{1}u\;dy=0,\]
which has solution \((0,v_{*},0)\) where
\[v_{*}=\begin{cases}c_{*}yh^{-1}&\quad\text{for}\quad 0\leq y\leq h\\ c_{*}(1-y)(1-h)^{-1}&\quad\text{for}\quad h\leq y\leq 1.\end{cases}\]
Performing a similar process to find a third order generalised eigenvector shows that none exists, so we conclude that when \(c_{*}=h(1-h)\), \(0\) is an eigenvalue of algebraic multiplicity \(2\).
**Corollary 4.6**.: _For \(c_{*}=h(1-h)\), \(L\) satisfies Hypothesis_ (i) _of Theorem 4.1._
Proof.: We have shown that for this value of \(c_{*}\), \(L\) has one purely imaginary eigenvalue, and it has finite algebraic multiplicity. Furthermore, since \(L\) is Fredholm, its spectrum consists only of eigenvalues, therefore \(\sigma_{0}=\{0\}\). Thus, Hypothesis (i) is satisfied.
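Though it plays no role in the argument, the monotonicity of \(\mathfrak{d}\) on the real axis used in the proof of Lemma 4.5 is easy to confirm numerically. The following sketch, with the arbitrary illustrative choice \(h=0.3\), evaluates \(\mathfrak{d}\) on a real grid and checks that it stays strictly above \(1/c_{*}=1/(h(1-h))\) away from \(k=0\):

```python
# Numerical illustration only: d(k) = k*(coth(kh) + coth((1-h)k)) attains its
# minimum 1/(h(1-h)) only at k = 0, so (4.3) has no nonzero real roots when
# c_* = h(1-h).  The value h = 0.3 is an arbitrary choice.
import numpy as np

def dispersion(k, h):
    """d(k) for real k != 0; at k = 0 the singularity is removable."""
    return k * (1.0 / np.tanh(k * h) + 1.0 / np.tanh(k * (1.0 - h)))

h = 0.3
c_star = h * (1.0 - h)
# grid of nonzero real wavenumbers, avoiding the removable singularity at 0
k = np.concatenate([np.linspace(-20.0, -0.01, 2000),
                    np.linspace(0.01, 20.0, 2000)])

print("1/c_*              =", 1.0 / c_star)
print("d(k) at k = 1e-8   =", dispersion(1e-8, h))   # approaches 1/(h(1-h))
print("min of d over grid =", dispersion(k, h).min())
print("d(k) > 1/c_* for all sampled k != 0:",
      np.all(dispersion(k, h) > 1.0 / c_star))
```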
We now verify Hypothesis (ii). Since \(\sigma_{0}\) is finite, the inverse \((L-ikI)^{-1}\), which we view as a linear operator \(\mathcal{X}\to\mathcal{X}\), exists for all real \(k\) with \(|k|\) sufficiently large.
**Proposition 4.7**.: _For any value of \(c_{*}\neq 0\), the operator \(L\) satisfies Hypothesis_ (ii) _of Theorem 4.1._
Proof.: Suppose \((L-ikI)(u,v,\eta)=(f,g,\alpha)\). For the remainder of this proof, \(\|\cdot\|\) should be taken to mean the \(L^{2}((0,1))\) norm, and \(P=1+\|p\|\).
Our first step will be to find a bound on \(|v(h)|\) which grows sub-linearly in \(k\). We see that
\[v(h)^{2}=\int_{0}^{h}2vv_{y}\;dy=-2ik\int_{0}^{h}uv\;dy-2\int_{0}^{h}fv\;dy+2v (h)\int_{0}^{h}pv\;dy,\]
and therefore,
\[|v(h)|^{2}\leq 2|v(h)|\|p\|\|v\|+2|k|\|u\|\|v\|+2\|f\|\|v\|.\]
Thinking of this as a quadratic in \(|v(h)|\), then applying Cauchy-Schwarz gives us
\[|v(h)| \leq\|p\|\|v\|+\sqrt{\|p\|^{2}\|v\|^{2}+2\|ku\|\|v\|+2\|f\|\|v\|}\] \[\leq 2\|p\|\|v\|+\sqrt{2\|ku\|\|v\|}+\|f\|+\|v\|\leq 2P\|v\|+ \sqrt{2|k|}\|(u,v)\|+\|f\|\] \[\leq\left(2P+\sqrt{2|k|}\right)\|(u,v)\|+\|f\|, \tag{4.10}\]
which is a bound of the desired form.
Now we multiply the first component of \((L-ikI)(u,v,\eta)=(f,g,\alpha)\) by \(\bar{u}\), the complex conjugate of the second by \(v\), and subtract to get
\[\bar{u}v_{y}+\bar{u}_{y}v+ik(|u|^{2}+|v|^{2})-c_{*}\bar{\eta}p^{\prime}v-v(h)p \bar{u}=-f\bar{u}+\bar{g}v.\]
Integrating over \([0,1]\) yields
\[\int_{0}^{1}(\bar{u}v)_{y}\;dy+ik\|(u,v)\|^{2}=\int_{0}^{1}-\bar{f}u+\bar{g}v+ v(h)p\bar{u}+c_{*}\bar{\eta}p^{\prime}v\;dy.\]
The boundary conditions on \(v\) mean that \(\bar{u}(0)v(0)=\bar{u}(1)v(1)=0\), and so using this and Cauchy-Schwarz,
\[|k|\|(u,v)\|^{2}\leq\|(u,v)\|\|(f,g)\|+|v(h)|\|p\|\|u\|+|\eta||c_{*}|\|p^{ \prime}\|\|v\|,\]
so by (4.10),
\[\left(|k|-\|p\|\sqrt{2|k|}-2P^{2}\right)\|(u,v)\|^{2}\leq\|(u,v)\|\|(f,g)\|+\| p\|\|u\|\|f\|+|\eta||c_{*}|\|p^{\prime}\|\|v\|.\]
Dividing through by \(\|(u,v)\|\) gives us
\[\left(|k|-\|p\|\sqrt{2|k|}-2P^{2}\right)\|(u,v)\|\leq P\|(f,g)\|+|\eta||c_{*}| \|p^{\prime}\|,\]
then adding \((|k|-|c_{*}|\|p^{\prime}\|)\,|\eta|\) to both sides yields
\[\left(|k|-\|p\|\sqrt{2|k|}-2P^{2}\right)\|(u,v)\|+\left(|k|-|c_{*}|\|p^{\prime }\|\right)|\eta|\leq P\|(f,g)\|+|k\eta|.\]
Then, applying Cauchy-Schwarz shows that
\[\frac{1}{2}\left(|k|-\|p\|\sqrt{2|k|}-2P^{2}-|c_{*}|\|p^{\prime}\|\right)\|(u, v,\eta)\|\leq P\|(f,g)\|+|k\eta|.\]
The third component of \((L-ikI)(u,v,\eta)=(f,g,\alpha)\) tells us that \(v(h)/c_{*}-ik\eta=\alpha\). Applying this, then (4.10) to the final term of the previous line yields
\[\frac{1}{2}\left(|k|-\|p\|\sqrt{2|k|}-2P^{2}-|c_{*}|\|p^{\prime }\|\right)\|(u,v,\eta)\| \leq P\|(f,g)\|+|\alpha|+|c_{*}|^{-1}|v(h)|\] \[\leq P\|(f,g)\|+|\alpha|+|c_{*}|^{-1}\|f\|\] \[\quad+|c_{*}|^{-1}\left(2P+\sqrt{2|k|}\right)\|(u,v)\|.\]
Finally, rearranging, we see
\[\|(u,v,\eta)\|\leq\frac{2(P+|c_{*}|^{-1})\|(f,g)\|+2|\alpha|}{|k|-(\|p\|+2|c_ {*}|^{-1})\,\sqrt{2|k|}-2P^{2}-|c_{*}|\|p^{\prime}\|-4P|c_{*}|^{-1}}.\]
For sufficiently large \(|k|\), the denominator is larger than \(\frac{1}{2}|k|\). Therefore there exists \(R>0\) such that for \(|k|>R\),
\[|k|\|(u,v,\eta)\|_{\mathcal{X}}\leq C\|(f,g,\alpha)\|_{\mathcal{X}}\]
as required, where the constant \(C\) depends only on \(R\), \(h\), \(\omega_{0}\), \(\omega_{1}\), and \(c_{*}\).
We now verify Hypothesis (iii), and show a spectral gap around the imaginary axis. This could in principle be done by only studying the solutions to (4.3). However, we use an easier proof which makes use of Hypothesis (ii) as well.
**Proposition 4.8**.: _There exists \(\delta>0\) such that there are no \(z\) in the spectrum of \(L\) satisfying \(0<|\operatorname{Re}(z)|<\delta\)._
Proof.: We show this by contradiction. Suppose this spectral gap does not exist. Then there must exist a sequence \((z_{n})\) in the spectrum such that \(\operatorname{Re}(z_{n})\neq 0\) for all \(n\) and \(\operatorname{Re}(z_{n})\to 0\). Define
\[\mathcal{S}=\sigma(L)\cap\{z\in\mathbb{C}\mid-\pi\leq\operatorname{Re}(z)\leq \pi\},\]
where \(\sigma(L)\) is the spectrum of \(L\). None of the \(z\in\mathcal{S}\) can satisfy (4.4), so for all \(z\in\mathcal{S}\), we have that (4.3) holds, i.e., that \(\mathfrak{d}(-iz)=c_{*}^{-1}\). Since \(\mathfrak{d}(-iz)\) is a meromorphic function in \(z\), \(\mathcal{S}\) is an isolated set. In other words, \((z_{n})\) is a sequence taking values in an isolated set, with \(\operatorname{Re}(z_{n})\to 0\), so we conclude \(|\operatorname{Im}(z_{n})|\to\infty\).
However, by Hypothesis (ii) there exist constants \(C\), \(R\) such that for all \(z\in i\mathbb{R}\) with \(|z|>R\), we have
\[|z|\|(u,v,\eta)\|_{\mathcal{X}}\leq C\|(L-zI)(u,v,\eta)\|_{\mathcal{X}},\]
and \(L-zI\) invertible. Pick \(n_{*}\) such that for all \(n>n_{*}\), we have that \(|\operatorname{Im}(z_{n})|>R\) and \(|\operatorname{Re}(z_{n})|<R/C\). Let \(\tilde{z}_{n}=i\operatorname{Im}(z_{n})\). Then \((z_{n}-\tilde{z}_{n})(L-\tilde{z}_{n}I)^{-1}\colon\mathcal{X}\to\mathcal{X}\) has operator norm strictly less than \(1\). This means \(I-(z_{n}-\tilde{z}_{n})(L-\tilde{z}_{n}I)^{-1}\colon\mathcal{X}\to\mathcal{X}\) is injective, so \(I-(z_{n}-\tilde{z}_{n})(L-\tilde{z}_{n}I)^{-1}\colon\mathcal{W}\to\mathcal{W}\) is injective. Hence
\[L-z_{n}I=(L-\tilde{z}_{n}I)(I-(L-\tilde{z}_{n}I)^{-1}(z_{n}-\tilde{z}_{n})I)\]
is also injective. Therefore, for all \(n>n_{*}\), we have that \(z_{n}\) is in the resolvent set, but we assumed it was in the spectrum, and so arrive at our contradiction.
We have now verified Hypotheses (i)-(iii) of Theorem 4.1 in the case \(c_{*}=h(1-h)\), and so proceed for the rest of the paper with \(c_{*}=h(1-h)\). This specific value of \(c_{*}\) is of particular interest, as it corresponds to \(k=0\), so we might expect to find waves of infinite period, i.e., solitary waves.
The final step in our analysis of \(L\) is to seek the projection \(P_{0}\) from Theorem 4.1. Notice that because \(0\) is the only purely imaginary element of the spectrum, \(\mathcal{E}_{0}\) is the generalised kernel of \(L\). Lemma 4.5 gives us a basis \(\{\xi_{0},\xi_{1}\}\) of \(\mathcal{E}_{0}\), where
\[\xi_{0}=(u_{*},0,1)\qquad\xi_{1}=(0,v_{*},0). \tag{4.11}\]
Thus we can write \(P_{0}\) as
\[P_{0}(u,v,\eta)=A(u,v,\eta)\xi_{0}+B(u,v,\eta)\xi_{1}, \tag{4.12}\]
where \(A,B\colon\mathcal{X}\to\mathbb{R}\) are bounded linear functions. Calculating
\[LP_{0}=\begin{pmatrix}u_{*}\\ 0\\ 1\end{pmatrix}B\quad\text{and}\quad P_{0}L=\begin{pmatrix}u_{*}\\ 0\\ 1\end{pmatrix}AL+\begin{pmatrix}0\\ v_{*}\\ 0\end{pmatrix}BL,\]
we see that \(L\) and \(P_{0}\) commute if and only if \(A\circ L=B\) and \(B\circ L=0\).
**Proposition 4.9**.: _The projection \(P_{0}\) from Theorem 4.1 is given by (4.12), where_
\[A(u,v,\eta) =\frac{3}{2h^{2}(1-h)}\int_{0}^{h}y^{2}u(y)\;dy-\frac{3}{2h(1-h)^{ 2}}\int_{h}^{1}(1-y)^{2}u(y)\;dy \tag{4.13a}\] \[\qquad+\frac{(8(1-h)^{4}-15(1-h)^{3}+5(1-h))\omega_{1}-(8h^{4}-15 h^{3}+5h)\omega_{0}}{20h^{2}(1-h)^{2}}\eta\] \[B(u,v,\eta) =\frac{3}{h^{2}(1-h)}\int_{0}^{h}yv(y)\;dy+\frac{3}{h(1-h)^{2}} \int_{h}^{1}(1-y)v(y)\;dy, \tag{4.13b}\]
_and \(\xi_{0}=(u_{*},0,1)\) and \(\xi_{1}=(0,v_{*},0)\), as defined in (4.11)._
Proof.: This can be verified by brute force. However, we now show a little of the construction, in order to give some intuition about why the projection is of this form. We first seek \(B\). Notice that \(L(u,v,\eta)=(f,g,\alpha)\) implies that \(u-h(1-h)\eta p=G+\text{constant}\), where \(G\) is any primitive of \(g\). This implies
\[\frac{1}{h}\int_{0}^{h}G(y)\;dy-\frac{1}{1-h}\int_{h}^{1}G(y)\;dy=0. \tag{4.14}\]
Now construct \(B\) such that it satisfies \(B\circ L=0\). Motivated by (4.14), let \(V\) be a primitive of \(v\), and define
\[B(u,v,\eta) =\frac{3}{h(1-h)}\left(-\frac{1}{h}\int_{0}^{h}V\;dy+\frac{1}{1-h }\int_{h}^{1}V\;dy\right)\] \[=\frac{3}{h(1-h)}\left(-\frac{1}{h}\int_{0}^{h}(h-y)v\;dy+\frac{1 }{1-h}\int_{h}^{1}(1-y)v\;dy+\int_{0}^{h}v\;dy\right)\] \[=\frac{3}{h^{2}(1-h)}\int_{0}^{h}yv\;dy+\frac{3}{h(1-h)^{2}}\int _{h}^{1}(1-y)v\;dy.\]
If \(L\) had an inverse, solving \(A\circ L=B\) for \(A\) would give a formula for \(A\). Doing exactly this is impossible, as \(L\) is not invertible, but we can use the fact that \(B\) depends only on \(v\) to do something very similar. Notice that \(L(u,v,\eta)=(f,g,\alpha)\) implies that
\(h(1-h)\alpha p-f=v_{y}\). This motivates the definition
\[A(u,v,\eta) =B\left(u,\int_{h}^{y}h(1-h)\eta p(\tilde{y})-u(\tilde{y})\;d \tilde{y}+h(1-h)\eta,\eta\right)\] \[=-\frac{3}{2h^{2}(1-h)}\int_{0}^{h}y^{2}(h(1-h)\eta p(y)-u(y))\;dy\] \[\qquad+\frac{3}{2h(1-h)^{2}}\int_{h}^{1}(1-y)^{2}(h(1-h)\eta p(y) -u(y))\;dy+\frac{3}{2}\eta\] \[=\frac{3}{2h^{2}(1-h)}\int_{0}^{h}y^{2}u(y)\;dy-\frac{3}{2h(1-h)^ {2}}\int_{h}^{1}(1-y)^{2}u(y)\;dy\] \[\qquad+\frac{(8(1-h)^{4}-15(1-h)^{3}+5(1-h))\omega_{1}-(8h^{4}-1 5h^{3}+5h)\omega_{0}}{20h^{2}(1-h)^{2}}\eta.\]
Notice that
\[A(L(u,v,\eta)) =B\left(v(h)p-v_{y},\int_{h}^{y}v(h)p-v(h)p+v_{y}\;d\tilde{y}+v(h ),\frac{v(h)}{h(1-h)}\right)\] \[=B(u,v,\eta).\]
Therefore \(A\) and \(B\) exhibit the required behaviour when composed with \(L\). It remains only to check that \(P_{0}\) is indeed a projection onto the generalised kernel of \(L\). We see that
\[A(u_{*},0,1)=B(0,v_{*},0)=1,\]
and
\[B(u_{*},0,1)=A(0,v_{*},0)=0,\]
so we are done.
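As a small numerical cross-check of the normalisation (again not needed for the proof), the identities \(B(\xi_{1})=1\) and \(B(\xi_{0})=0\) can be verified by quadrature using only the explicit \(v_{*}\) above; the remaining identities involve \(u_{*}\), and hence \(p\), so they are not checked in this sketch. The value \(h=0.3\) is an arbitrary choice.

```python
# Quadrature check (illustrative only) of B(xi_1) = 1 and B(xi_0) = 0,
# with c_* = h(1-h) and v_* as given above.
from scipy.integrate import quad

h = 0.3
c_star = h * (1.0 - h)

def v_star(y):
    return c_star * y / h if y <= h else c_star * (1.0 - y) / (1.0 - h)

def B(v):
    """The functional B from (4.13b); it depends only on the v-component."""
    lower, _ = quad(lambda y: y * v(y), 0.0, h)
    upper, _ = quad(lambda y: (1.0 - y) * v(y), h, 1.0)
    return 3.0 * lower / (h ** 2 * (1.0 - h)) + 3.0 * upper / (h * (1.0 - h) ** 2)

print("B(xi_1) =", B(v_star))            # = 1 up to quadrature error
print("B(xi_0) =", B(lambda y: 0.0))     # xi_0 has zero v-component, so 0
```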
### Applying the centre manifold theorem
We quickly verify the hypotheses of Theorem 4.1 on the non-linear operator \(\mathcal{R}\). First, fix an integer \(N\geq 4\) which will not change for the rest of this paper. Although Theorem 4.1 only needs this \(N\) to be greater than or equal to \(2\), we need it to be greater than or equal to \(4\) to make some arguments about the regularity of the solutions we find. See, for example, (4.23b) and Appendix A.
The remainder function \(\mathcal{R}\) defined in (3.14) is analytic, so is indeed in \(C^{N}(\mathcal{U},\mathcal{X})\), where \(\mathcal{U}\) is the open set defined in (3.13). Furthermore, we obtained \(\mathcal{R}\) as a remainder after linearising in \((u,v,\eta)\), so by construction it satisfies
\[\mathcal{R}(0,0,0,0)=0,\quad D_{(u,v,\eta)}\mathcal{R}(0,0,0,0)=0.\]
We are now ready to apply Theorem 4.1 and find solutions to (3.4) on the centre manifold. Let \(\varepsilon=c-c_{*}=c-h(1-h)\), and let \(w=(u,v,\eta)\). By Theorem 4.1, there exists \(\delta>0\), \(\psi\in C^{N}(\mathcal{E}_{0}\times\mathbb{R},\mathcal{E}_{h})\) such that for all \(\varepsilon\in(-\delta,\delta)\), if \(w\) satisfies
\[\frac{dw}{dx}=Lw+\mathcal{R}(w,\varepsilon),\quad\sup_{x}\lVert w(x,\,\cdot\,)- (\varepsilon,0,0)\rVert_{\mathcal{W}}<\delta \tag{4.15}\]
then it must be of the form
\[w(x,y)=a(x)\xi_{0}(y)+b(x)\xi_{1}(y)+\psi\big{(}a(x)\xi_{0}+b(x)\xi_{1}, \varepsilon\big{)}(y), \tag{4.16}\]
where \(\xi_{0},\xi_{1}\) are defined in (4.11) and \(a\) and \(b\) are scalar functions.
We now introduce and clarify some points of notation. Firstly, since \(\mathcal{E}_{0}\times\mathbb{R}\), the domain of \(\psi\), is a 3-dimensional space, we abuse notation slightly and sometimes consider \(\psi\) to be a function on \(\mathbb{R}^{3}\) given by
\[\psi(a,b,\varepsilon)=\psi(a\xi_{0}+b\xi_{1},\varepsilon).\]
Secondly, since \(\psi\) takes values in \(\mathcal{W}\), we denote its \(u\), \(v\) and \(\eta\) components as \(e_{1}\cdot\psi\), \(e_{2}\cdot\psi\), and \(e_{3}\cdot\psi\) respectively. This means (4.16) can be written completely equivalently as
\[\begin{pmatrix}u(x,y)\\ v(x,y)\\ \eta(x)\end{pmatrix}=a(x)\begin{pmatrix}u_{*}(y)\\ 0\\ 1\end{pmatrix}+b(x)\begin{pmatrix}0\\ v_{*}(y)\\ 0\end{pmatrix}+\begin{pmatrix}e_{1}\cdot\psi(a(x),b(x),\varepsilon)\\ e_{2}\cdot\psi(a(x),b(x),\varepsilon)\\ e_{3}\cdot\psi(a(x),b(x),\varepsilon)\end{pmatrix}(y). \tag{4.17}\]
Reversibility immediately gives a symmetry on \(\psi\). Recall we defined reversibility in (2.2). Using Theorem 3.15 of [27], we see that
\[\psi(a,-b,\varepsilon)=\begin{pmatrix}e_{1}\cdot\psi(a,b,\varepsilon)\\ -e_{2}\cdot\psi(a,b,\varepsilon)\\ e_{3}\cdot\psi(a,b,\varepsilon)\end{pmatrix}, \tag{4.18}\]
i.e., that the first and third components of \(\psi\) are even in \(b\), and the second component is odd in \(b\).
In order for a solution of the form in (4.16) to exist, \(a\) and \(b\) must satisfy a differential equation, which is found using \(P_{0}\). Inserting (4.16) into (4.15) yields
\[L(w)+\mathcal{R}(w,\varepsilon)=w_{x}=a_{x}\xi_{0}+b_{x}\xi_{1}+(D\psi(w, \varepsilon))(a_{x}\xi_{0}+b_{x}\xi_{1},\varepsilon). \tag{4.19}\]
We now apply \(P_{0}\) to find the reduced equation (4.1d). The composition \(P_{0}\circ\psi=0\), therefore \(P_{0}\circ D\psi=0\). By construction, \(P_{0}\) commutes with \(L\), therefore applying \(P_{0}\) to (4.19) gives
\[\begin{split} a_{x}\xi_{0}+b_{x}\xi_{1}&=L(a\xi_{0} +b\xi_{1}+P_{0}\psi(a,b,\varepsilon))+P_{0}\mathcal{R}(w,\varepsilon)=b\xi_{ 0}+P_{0}\mathcal{R}(w,\varepsilon)\\ &=b\xi_{0}+P_{0}\mathcal{R}(a\xi_{0}+b\xi_{1}+\psi(a,b, \varepsilon),\varepsilon).\end{split} \tag{4.20}\]
We can rewrite (4.20) more compactly as
\[(a_{x},b_{x})=F(a,b,\varepsilon). \tag{4.21}\]
Let \(e_{1}\cdot F\) and \(e_{2}\cdot F\) denote the first and second components of \(F\) respectively. Since \(F\) is a composition of \(C^{N}\) functions, it follows that \(F\in C^{N}(\mathbb{R}^{3},\mathbb{R}^{2})\). Note that \(F\) inherits the reversibility of \(\psi\) shown in (4.18). Specifically, \(e_{1}\cdot F\) is odd in \(b\), and \(e_{2}\cdot F\) is even in \(b\). This means that if \(a(x)\) and \(b(x)\) solve (4.21) and we define
\[\check{a}(x)=a(-x),\quad\check{b}(x)=-b(-x),\]
then \(\check{a}(x)\) and \(\check{b}(x)\) also satisfy (4.21).
We now find expansions for \(\psi\) and \(F\). Since for all \(\varepsilon\), the zero function \(u=v=\eta=0\) is a solution of (3.4), Theorem 4.1 tells us that \(\psi(0,0,\varepsilon)=0\). Since \(\psi_{a}(0,0,0)=\psi_{b}(0,0,0)=0\), we have
\[\psi(a,b,\varepsilon)=\mathcal{O}((|a|+|b|)(|a|+|b|+|\varepsilon|)). \tag{4.22a}\]
Therefore,
\[P_{0}\mathcal{R}(a\xi_{0}+b\xi_{1}+\psi(a,b,\varepsilon),\varepsilon) =P_{0}\mathcal{R}\big{(}a\xi_{0}+b\xi_{1}+\mathcal{O}((|a|+|b|)(|a| +|b|+|\varepsilon|)),\varepsilon\big{)}\] \[=P_{0}\mathcal{R}(a\xi_{0}+b\xi_{1},\varepsilon)+\mathcal{O} \big{(}(|a|+|b|)(a^{2}+b^{2}+\varepsilon^{2})\big{)}. \tag{4.22b}\]
Notice that we know \(P_{0}\mathcal{R}(a\xi_{0}+b\xi_{1},\varepsilon)\) explicitly. Substituting (4.22b) into (4.20), we obtain an expansion for \(F\). However, appealing to parity can sharpen the order of the error term. Since \(e_{1}\cdot F\) is odd in \(b\), and \(e_{2}\cdot F\) is even in \(b\), we deduce
\[e_{1}\cdot F =b(1+\mathcal{O}(|a|+|b|+|\varepsilon|)) \tag{4.23a}\] \[e_{2}\cdot F =\frac{3}{h^{2}(1-h)^{2}}\varepsilon a+\frac{3\theta}{2h^{2}(1-h) ^{2}}a^{2}+\mathcal{O}\big{(}b^{2}+(|a|+b^{2})(a^{2}+b^{2}+\varepsilon^{2}) \big{)}, \tag{4.23b}\]
where we recall that \(\theta\) was defined in (1.3). Notice that these really are the dominating terms, in that their coefficients are non-zero. The terms written here can be found by examining \(P_{0}\mathcal{R}(a\xi_{0}+b\xi_{1},\varepsilon)\). Higher order terms can be found explicitly by a recursive method, but for our purposes this is unnecessary.
We now solve (4.21) for \(a\) and \(b\). For fixed \(\varepsilon\), by the Picard-Lindelöf theorem, (4.21) with initial condition \((a(0),b(0))=(a_{*},0)\) has a unique solution. By reversibility, this solution curve in the \(a,b\) plane is unchanged under reflection about the \(a\) axis. In other words, for fixed \(\varepsilon\), solutions to (4.21) that cross the \(a\) axis are symmetric about the \(a\) axis. This motivates us to focus on solutions where \(a\) is even and \(b\) is odd.
We now rescale with the aim of eventually eliminating higher order terms, and proceed having picked \(\varepsilon\) to be strictly positive. Let
\[\tilde{x}=\frac{\sqrt{3\varepsilon}}{h(1-h)}x,\quad a(x)=\frac{2\varepsilon}{ \theta}\tilde{a}(\tilde{x}),\quad b(x)=\frac{2\sqrt{3}\varepsilon^{\frac{3}{2 }}}{h(1-h)\theta}\tilde{b}(\tilde{x}). \tag{4.24}\]
When rescaling, we have that \(\tilde{a}=\mathcal{O}(\varepsilon^{-1})a\), but \(\tilde{b}=\mathcal{O}(\varepsilon^{-\frac{3}{2}})b\). For conciseness, we write the rescaled version of (4.21) as
\[(\tilde{a}_{\tilde{x}},\tilde{b}_{\tilde{x}})=\tilde{F}(\tilde{a},\tilde{b}, \varepsilon). \tag{4.25}\]
Notice that \(\tilde{F}\) satisfies
\[\tilde{F}(\tilde{a},\tilde{b},\varepsilon)=\begin{pmatrix}\tilde{b}(1+ \mathcal{O}(\varepsilon))\\ \tilde{a}+\tilde{a}^{2}+\mathcal{O}\big{(}\varepsilon(|\tilde{a}|+|\tilde{b}| ^{2})\big{)}\end{pmatrix}. \tag{4.26}\]
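To see where the terms displayed in (4.26) come from, it may help to record the bookkeeping behind the rescaling; a brief sketch, using only (4.23) and (4.24). Differentiating (4.24) gives

\[a_{x}=\frac{2\sqrt{3}\,\varepsilon^{\frac{3}{2}}}{\theta h(1-h)}\tilde{a}_{\tilde{x}},\qquad b_{x}=\frac{6\varepsilon^{2}}{\theta h^{2}(1-h)^{2}}\tilde{b}_{\tilde{x}},\]

while, by (4.23), and since \(a=\mathcal{O}(\varepsilon)\) and \(b=\mathcal{O}(\varepsilon^{\frac{3}{2}})\) for \(\tilde{a},\tilde{b}\) in a bounded set,

\[e_{1}\cdot F=\frac{2\sqrt{3}\,\varepsilon^{\frac{3}{2}}}{\theta h(1-h)}\tilde{b}\,(1+\mathcal{O}(\varepsilon)),\qquad e_{2}\cdot F=\frac{6\varepsilon^{2}}{\theta h^{2}(1-h)^{2}}\big(\tilde{a}+\tilde{a}^{2}\big)+\mathcal{O}\big(\varepsilon^{3}(|\tilde{a}|+\tilde{b}^{2})\big);\]

cancelling the common prefactors yields (4.26).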
When \(\varepsilon=0\), (4.26) is exactly the KdV equation for travelling waves, a nonlinear equation. We expect the solution to persist as \(\varepsilon\) moves away from \(0\), and describe such a solution as _weakly nonlinear_. There are several ways to show that the solutions do indeed persist, for example by considering the stable and unstable manifolds of \(\tilde{F}\) at the fixed point at the origin. Instead, we adapt an argument of Kirchgässner [38].
**Proposition 4.10**.: _For all sufficiently small \(\varepsilon\geq 0\), (4.25) has a solution \((\tilde{a}^{\varepsilon},\tilde{b}^{\varepsilon})\) with the following properties._
(a) \(\tilde{a}^{\varepsilon}\) _is even and_ \(\tilde{b}^{\varepsilon}\) _is odd._
(b) \((\tilde{a}^{\varepsilon},\tilde{b}^{\varepsilon})\) _is homoclinic to 0._
(c) \((\tilde{a}^{\varepsilon},\tilde{b}^{\varepsilon})\in C^{N}(\mathbb{R})\)_._
(d) _The explicit formula for_ \((\tilde{a}^{0},\tilde{b}^{0})\) _is given by_ \[\tilde{a}^{0}(\tilde{x})=-\tfrac{3}{2}\operatorname{sech}^{2}\left(\tfrac{1}{2}\tilde{x}\right),\quad\tilde{b}^{0}(\tilde{x})=\tfrac{3}{2}\tanh\left(\tfrac{1}{2}\tilde{x}\right)\operatorname{sech}^{2}\left(\tfrac{1}{2}\tilde{x}\right).\] (4.27)
(e) _The map_ \(\varepsilon\mapsto\tilde{a}^{\varepsilon}\) _is in_ \(C^{N-1}(\mathbb{R},C^{2}(\mathbb{R}))\)_._
(f) \((\tilde{a}^{\varepsilon},\tilde{b}^{\varepsilon})=(\tilde{a}^{0},\tilde{b}^{0})+\mathcal{O}(\varepsilon)\)_, where the_ \(\mathcal{O}\) _is with respect to the_ \(C^{1}(\mathbb{R})\) _norm._
Proof.: We argue as in Proposition 5.1 of [38]. For more details, see Appendix A.
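A quick numerical consistency check, not needed for the proof, that the explicit profile (4.27) solves the \(\varepsilon=0\) limit of (4.26), namely \(\tilde{a}_{\tilde{x}}=\tilde{b}\), \(\tilde{b}_{\tilde{x}}=\tilde{a}+\tilde{a}^{2}\); a sketch using finite differences:

```python
# Illustrative check that (4.27) satisfies a' = b, b' = a + a^2 (the
# epsilon = 0 form of (4.26)); residuals are at the finite-difference level.
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)

a0 = -1.5 / np.cosh(0.5 * x) ** 2
b0 = 1.5 * np.tanh(0.5 * x) / np.cosh(0.5 * x) ** 2

da = np.gradient(a0, x)   # centred differences
db = np.gradient(b0, x)

print("max |a0' - b0|        =", np.max(np.abs(da - b0)))
print("max |b0' - a0 - a0^2| =", np.max(np.abs(db - a0 - a0 ** 2)))
```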
_Remark 4.11_.: Since the phase portrait of \(\tilde{F}\) is symmetric about the line \(\tilde{b}=0\), the only zero of \(\tilde{b}^{\varepsilon}\) is at \(\tilde{x}=0\). Indeed, if \(\tilde{b}^{\varepsilon}\) vanished at two distinct points, then \(\tilde{b}^{\varepsilon}\) would be periodic rather than homoclinic; the oddness of \(\tilde{b}^{\varepsilon}\) implies that \(\tilde{b}^{\varepsilon}(0)=0\), so this zero is unique.
### Regularity and estimates
We now undo the scaling from (4.24) to find functions \(a^{\varepsilon}(x)\), \(b^{\varepsilon}(x)\) satisfying (4.21). By conclusion (c) of Theorem 4.1, inserting these into the reduction function does indeed give a solution to (3.4). Then undoing the coordinate change (3.1) gives us solutions to (1.1).
We explicitly find the leading order terms in \(C^{0}(\mathbb{R},H^{1}((0,1)))\), and show that the solution is small in \(C^{1}(\mathbb{R},H^{1}((0,1)))\).
**Proposition 4.12**.: _For all sufficiently small \(\varepsilon\geq 0\), there exist functions \(u^{\varepsilon}(x,y)\), \(v^{\varepsilon}(x,y)\), \(\eta^{\varepsilon}(x)\) such that:_
1. \((u^{\varepsilon},v^{\varepsilon},\eta^{\varepsilon};h(1-h)+\varepsilon)\) _solves (_3.4_)._
2. \(u^{\varepsilon}\) _and_ \(\eta^{\varepsilon}\) _are even in_ \(x\)_,_ \(v^{\varepsilon}\) _is odd in_ \(x\)_._
3. _In_ \(C^{0}(\mathbb{R},H^{1}((0,1)))\)_, we have_ \[u^{\varepsilon}(x,y) =-\frac{3\varepsilon}{\theta}\operatorname{sech}^{2}\Big{(}\frac {\sqrt{3\varepsilon}}{2h(1-h)}x\Big{)}u_{*}(y)+\mathcal{O}(\varepsilon^{2})\] (4.28a) \[v^{\varepsilon}(x,y) =\frac{3\sqrt{3}\varepsilon^{\frac{3}{2}}}{h(1-h)\theta}\tanh \Big{(}\frac{\sqrt{3\varepsilon}}{2h(1-h)}x\Big{)}\operatorname{sech}^{2} \Big{(}\frac{\sqrt{3\varepsilon}}{2h(1-h)}x\Big{)}v_{*}(y)+\mathcal{O}( \varepsilon^{\frac{5}{2}})\] (4.28b) \[\eta^{\varepsilon}(x) =-\frac{3\varepsilon}{\theta}\operatorname{sech}^{2}\Big{(}\frac {\sqrt{3\varepsilon}}{2h(1-h)}x\Big{)}+\mathcal{O}(\varepsilon^{2}).\] (4.28c)
4. _These functions satisfy the estimates_ \[\sup_{x}\lVert u^{\varepsilon}_{x}(x,\,\cdot\,)\rVert_{H^{1}}= \mathcal{O}(\varepsilon^{\frac{3}{2}}),\quad\sup_{x}\lVert v^{\varepsilon}_{x }(x,\,\cdot\,)\rVert_{H^{1}}=\mathcal{O}(\varepsilon^{2}),\quad\sup_{x}|\eta^ {\varepsilon}_{x}(x)|=\mathcal{O}(\varepsilon^{\frac{3}{2}})\] (4.29a) \[\sup_{x,y}|u^{\varepsilon}_{y}(x,y)|=\mathcal{O}(\varepsilon),\quad\sup_{x,y}| v^{\varepsilon}_{y}(x,y)|=\mathcal{O}(\varepsilon).\] (4.29b)
5. _These solutions are homoclinic to 0 in_ \(H^{1}\)_, i.e., for any fixed_ \(\varepsilon\)_, we have_ \[\lim_{x\to\infty}\lVert(u^{\varepsilon}(x,\,\cdot\,),v^{\varepsilon}(x,\, \cdot\,),\eta^{\varepsilon}(x))\rVert_{H^{1}}=\lim_{x\to-\infty}\lVert(u^{ \varepsilon}(x,\,\cdot\,),v^{\varepsilon}(x,\,\cdot\,),\eta^{\varepsilon}(x)) \rVert_{H^{1}}=0.\] (4.30)
Proof.: Applying the inverse of the scaling from (4.24), and appealing to Proposition 4.10(f) yields
\[a^{\varepsilon}(x) =-\frac{3\varepsilon}{\theta}\operatorname{sech}^{2}\Big{(}\frac{ \sqrt{3\varepsilon}}{2h(1-h)}x\Big{)}+\mathcal{O}(\varepsilon^{2})\] \[b^{\varepsilon}(x) =\frac{3\sqrt{3}\varepsilon^{\frac{3}{2}}}{h(1-h)\theta}\tanh \Big{(}\frac{\sqrt{3\varepsilon}}{2h(1-h)}x\Big{)}\operatorname{sech}^{2} \Big{(}\frac{\sqrt{3\varepsilon}}{2h(1-h)}x\Big{)}+\mathcal{O}(\varepsilon^{ \frac{5}{2}}),\]
where the \(\mathcal{O}\) is with respect to the \(C^{1}\) norm. We now consider (4.17). This, the parity of the components of \(\psi\) with respect to \(b\), and the parity of \(a^{\varepsilon}\) and \(b^{\varepsilon}\) give us the required parity of \(u^{\varepsilon}\), \(v^{\varepsilon}\) and \(\eta^{\varepsilon}\). Using (4.22a), together with the fact that the oddness of \(e_{2}\cdot\psi\) in \(b\) gives \(e_{2}\cdot\psi=b\,\mathcal{O}(|a|+|b|+|\varepsilon|)\), we see that
\[u^{\varepsilon}(x,y) =-\frac{3\varepsilon}{\theta}\operatorname{sech}^{2}\Big{(}\frac {\sqrt{3\varepsilon}}{2h(1-h)}x\Big{)}u_{*}(y)+\mathcal{O}(\varepsilon^{2})\] \[v^{\varepsilon}(x,y) =\frac{3\sqrt{3}\varepsilon^{\frac{3}{2}}}{h(1-h)\theta}\tanh \Big{(}\frac{\sqrt{3\varepsilon}}{2h(1-h)}x\Big{)}\operatorname{sech}^{2} \Big{(}\frac{\sqrt{3\varepsilon}}{2h(1-h)}x\Big{)}v_{*}(y)+\mathcal{O}( \varepsilon^{\frac{5}{2}})\] \[\eta^{\varepsilon}(x) =-\frac{3\varepsilon}{\theta}\operatorname{sech}^{2}\Big{(}\frac {\sqrt{3\varepsilon}}{2h(1-h)}x\Big{)}+\mathcal{O}(\varepsilon^{2}),\]
where the \(\mathcal{O}\)s are with respect to the \(C^{0}(\mathbb{R},H^{1}((0,1)))\) norm. Differentiating (4.17) with respect to \(x\) gives
\[\begin{pmatrix}u_{x}(x,y)\\ v_{x}(x,y)\\ \eta_{x}(x)\end{pmatrix} =a^{\varepsilon}_{x}(x)\begin{pmatrix}u_{*}(y)\\ 0\\ 1\end{pmatrix}+b^{\varepsilon}_{x}(x)\begin{pmatrix}0\\ v_{*}(y)\\ 0\end{pmatrix}+\ a^{\varepsilon}_{x}(x)\psi_{a}(a^{\varepsilon}(x),b^{ \varepsilon}(x),\varepsilon)(y)\] \[\qquad+b^{\varepsilon}_{x}(x)\psi_{b}(a^{\varepsilon}(x),b^{ \varepsilon}(x),\varepsilon)(y). \tag{4.31}\]
We deduce (4.29a) from this, and the fact \(D\psi=\mathcal{O}(|a|+|b|+|\varepsilon|)\). For the other two estimates, notice that we can solve (3.4a) and (3.4b) for \(u_{y}\) and \(v_{y}\), as the determinant of the linear system is nonzero so long as \(\eta,\eta_{x}\) are sufficiently small. Doing so shows that \((u^{\varepsilon}_{y}(x,y),v^{\varepsilon}_{y}(x,y))\) is an analytic function of \(\varepsilon\), \(u^{\varepsilon}(x,y)\), \(u^{\varepsilon}(x,h)\), \(u^{\varepsilon}_{x}(x,y)\), \(v^{\varepsilon}(x,y)\), \(v^{\varepsilon}(x,h)\), \(v^{\varepsilon}_{x}(x,y)\), \(\eta^{\varepsilon}(x)\), and \(\eta^{\varepsilon}_{x}(x)\). Calling this function \(f\colon\mathbb{R}^{9}\to\mathbb{R}^{2}\), we know all the arguments of \(f\) are \(\mathcal{O}(\varepsilon)\) uniformly in \(x\) and \(y\). Since \(f(0)=0\), we see we have the estimates (4.29b).
Finally, these solutions are homoclinic to \(0\) because \((\tilde{a}^{\varepsilon},\tilde{b}^{\varepsilon})\) is homoclinic to \(0\), thus \((a^{\varepsilon},b^{\varepsilon})\) is homoclinic to \(0\). Since \(\psi(0,0,\varepsilon)=0\), we have \((u^{\varepsilon},v^{\varepsilon},\eta^{\varepsilon})\) is homoclinic to \(0\).
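The small-amplitude, long-wave character of these solutions is easy to see from the leading-order interface profile (4.28c). The sketch below (dropping the \(\mathcal{O}(\varepsilon^{2})\) remainder, and with the arbitrary nonzero choices \(h=0.3\) and \(\theta=0.5\), the latter standing in for the constant from (1.3)) shows the amplitude scaling like \(\varepsilon\) while the width grows like \(\varepsilon^{-1/2}\):

```python
# Leading-order interface profile from (4.28c), remainder dropped.
# h and theta are arbitrary illustrative values.
import numpy as np

h, theta = 0.3, 0.5

def eta_leading(x, eps):
    c = np.sqrt(3.0 * eps) / (2.0 * h * (1.0 - h))
    return -(3.0 * eps / theta) / np.cosh(c * x) ** 2

x = np.linspace(-400.0, 400.0, 8001)
for eps in (1e-2, 1e-3, 1e-4):
    prof = eta_leading(x, eps)
    amp = np.abs(prof).max()
    width = np.ptp(x[np.abs(prof) > 0.5 * amp])   # width at half the amplitude
    print(f"eps={eps:.0e}: amplitude={amp:.3e} (= 3*eps/|theta|), half-width={width:.1f}")
```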
We would like to convert Proposition 4.12 into statements about \(U^{\varepsilon}\), \(V^{\varepsilon}\), and \(\eta^{\varepsilon}\). Before we do this, however, it is useful to determine their regularity. Viewing \((u,v,\eta)\) as a function of \(x\), the centre manifold theorem tells us \((u^{\varepsilon},v^{\varepsilon},\eta^{\varepsilon})\in C^{N}(\mathbb{R}, \mathcal{X})\) for \(\varepsilon\) sufficiently small. This gives us immediately that \(\eta^{\varepsilon}\in C^{N}(\mathbb{R})\). Showing that \(U^{\varepsilon}\) and \(V^{\varepsilon}\) are analytic is a little more involved. First we show that \(u^{\varepsilon}\) and \(v^{\varepsilon}\) are in \(H^{1}\). We can then use the smoothness of the coordinate change (3.1) to show that \(U^{\varepsilon}\) and \(V^{\varepsilon}\) are in \(H^{1}\). This is enough regularity to apply elliptic regularity results: in particular, by examining (1.1), we see that \(U^{\varepsilon}\) and \(V^{\varepsilon}\) are harmonic, and thus real analytic, away from the interface.
We begin with the following lemma.
**Lemma 4.13**.: _For any \(M>0\), we have \(u^{\varepsilon},v^{\varepsilon}\in H^{1}((-M,M)\times(0,1))\)._
Proof.: We know that for each \(x\), the function \(y\mapsto u^{\varepsilon}(x,y)\) has a weak derivative in \(L^{2}((0,1))\), which we call \(u^{\varepsilon}_{y}(x,y)\). We also know from the centre manifold theorem that \(\|u^{\varepsilon}(x,\,\cdot\,)\|_{H^{1}((0,1))}\) varies continuously with \(x\). We have that \(u^{\varepsilon}_{y}(\,\cdot\,,\,\cdot\,)\in L^{2}((-M,M)\times(0,1))\), as
\[\int_{-M}^{M}\int_{0}^{1}u^{\varepsilon}_{y}(x,y)^{2}\;dy\;dx \leq\int_{-M}^{M}\!\!\|u^{\varepsilon}(x,\,\cdot\,)\|_{H^{1}((0,1) )}^{2}\;dx\] \[\leq 2M\sup_{x\in[-M,M]}\!\!\|u^{\varepsilon}(x,\,\cdot\,)\|_{H^{ 1}((0,1))}^{2}<\infty\]
by continuity. A straightforward application of Fubini's theorem then shows that \(u^{\varepsilon}_{y}\) is the weak derivative of \(u^{\varepsilon}\) in \(H^{1}((-M,M)\times(0,1))\).
We now find the \(x\) derivative. The fact that \(u^{\varepsilon}\in C^{1}(\mathbb{R},H^{1}((0,1)))\) means that for each \(x\), there exists a function of \(y\) in \(H^{1}((0,1))\), which we call \(u^{\varepsilon}_{x}(x,y)\), such that \(\|u^{\varepsilon}_{x}(x,\,\cdot\,)\|_{H^{1}((0,1))}\) varies continuously with \(x\), and that
\[\lim_{h\to 0}\frac{\|u^{\varepsilon}(x+h,\,\cdot\,)-u^{\varepsilon}(x,\, \cdot\,)-u^{\varepsilon}_{x}(x,\,\cdot\,)h\|_{H^{1}((0,1))}}{|h|}=0.\]
Therefore, since the trace map is bounded on \(H^{1}((0,1))\), we have that for all \(y\),
\[\frac{|u^{\varepsilon}(x+h,y)-u^{\varepsilon}(x,y)-u^{\varepsilon}_{x}(x,y)h| }{|h|}\to 0,\]
or in other words, \(u^{\varepsilon}_{x}(x,y)\) is the classical partial derivative of \(u^{\varepsilon}(x,y)\), so it is certainly the weak derivative as well. We have that \(u^{\varepsilon}_{x}(\,\cdot\,,\,\cdot\,)\in L^{2}((-M,M)\times(0,1))\), as
\[\int_{-M}^{M}\int_{0}^{1}u^{\varepsilon}_{x}(x,y)^{2}\;dy\;dx =\int_{-M}^{M}\!\!\|u^{\varepsilon}_{x}(x,\,\cdot\,)\|_{L^{2}(( 0,1))}^{2}\;dx \tag{4.32}\] \[\leq\int_{-M}^{M}\!\!\|u^{\varepsilon}_{x}(x,\,\cdot\,)\|_{H^{1} ((0,1))}^{2}\;dx\] \[\leq 2M\sup_{x\in[-M,M]}\!\!\|u^{\varepsilon}_{x}(x,\,\cdot\,)\|_{ H^{1}((0,1))}^{2}<\infty\]
by continuity. Rewriting (4.32) with \(u^{\varepsilon}_{x}\) replaced by \(u^{\varepsilon}\) shows that \(u^{\varepsilon}\) is in \(L^{2}((-M,M)\times(0,1))\) as well. Therefore, since first order weak derivatives of \(u^{\varepsilon}\) exist, and they and \(u^{\varepsilon}\) are in \(L^{2}((-M,M)\times(0,1))\), we have that \(u^{\varepsilon}\in H^{1}((-M,M)\times(0,1))\) as required.
The case for \(v^{\varepsilon}\) follows similarly.
We now use this to show a similar result for \(U^{\varepsilon}\) and \(V^{\varepsilon}\).
**Lemma 4.14**.: _For all \(M>0\), we have \(U^{\varepsilon},V^{\varepsilon}\in H^{1}((-M,M)\times(0,1))\)._
Proof.: Using Lemma 4.13, it is straightforward to show that
\[(u^{\varepsilon}+\omega(y-h)+c)Y_{y}^{-1}\in H^{1}((-M,M)\times(0,1)).\]
Now in order to show that \(U^{\varepsilon}\) and \(V^{\varepsilon}\) are in \(H^{1}((-M,M)\times(0,1))\), all we need to show is that composition with the change of coordinates gives a bounded linear map from
\(H^{1}((-M,M)\times(0,1))\) to itself. Recall the change of coordinates (3.1a)-(3.1b). Given \(\mathfrak{u}\in H^{1}\cap C^{\infty}\), we let
\[U(X,Y)=\mathfrak{u}(x(X,Y),y(X,Y)).\]
It can be seen that
\[\begin{split}\|U\|_{H^{1}}^{2}&\leq\frac{2\sup(1+y _{X}^{2}+y_{Y}^{2})}{\inf|y_{Y}|}\|\mathfrak{u}\|_{H^{1}}^{2}\\ &=\frac{2(1+\mathcal{O}(\varepsilon^{3})+1+\mathcal{O}( \varepsilon))}{1-\mathcal{O}(\varepsilon)}\|\mathfrak{u}\|_{H^{1}}^{2}\leq 5 \|\mathfrak{u}\|_{H^{1}}^{2},\end{split} \tag{4.33}\]
where the second equality uses the formula for \(y\), (3.1b), and the estimates on \(\eta^{\varepsilon}\) in (4.28c) and (4.29a). We deduce the change of coordinates is a bounded linear map on \(H^{1}\cap C^{\infty}\), but this is a dense subspace of \(H^{1}\), so we have our required result.
We are now ready to apply elliptic regularity to show \(U^{\varepsilon}\) and \(V^{\varepsilon}\) are analytic.
**Proposition 4.15**.: _The velocity components \(U^{\varepsilon}\) and \(V^{\varepsilon}\) are analytic on \(\Omega_{0}\cup\Omega_{1}\)._
Proof.: Notice that \(U^{\varepsilon}\) and \(V^{\varepsilon}\) are weakly harmonic in \(\Omega_{0}\) and in \(\Omega_{1}\), as for all \(\phi\in C_{c}^{\infty}(\Omega_{i})\), we have
\[\begin{split}\int_{\Omega_{i}}\nabla U^{\varepsilon}\cdot \nabla\phi\;dX\;dY&=\int_{\Omega_{i}}U_{X}^{\varepsilon}\phi_{X}+ U_{Y}^{\varepsilon}\phi_{Y}\;dX\;dY\\ &=\int_{\Omega_{i}}-V_{Y}^{\varepsilon}\phi_{X}+(\omega+V_{X}^{ \varepsilon})\phi_{Y}\;dX\;dY\\ &=\int_{\Omega_{i}}V^{\varepsilon}\phi_{XY}-V^{\varepsilon}\phi_ {XY}\;dX\;dY=0,\end{split}\]
and similarly for \(V^{\varepsilon}\). By standard elliptic regularity arguments, \(U^{\varepsilon}\) and \(V^{\varepsilon}\) are therefore smooth harmonic functions, and hence real-analytic.
We are now ready to prove results on \(U^{\varepsilon}\) and \(V^{\varepsilon}\) of a similar form to those in Proposition 4.12.
**Corollary 4.16**.:
(a) \(U^{\varepsilon},\eta^{\varepsilon}\) _are even in_ \(x\)_, and_ \(V^{\varepsilon}\) _is odd in_ \(x\)_._
(b) \((U^{\varepsilon},V^{\varepsilon},\eta^{\varepsilon})\) _is homoclinic to_ \((\omega(Y-h)+h(1-h)+\varepsilon,0,0)\) _with respect to the_ \(C^{0}\) _norm._
(c) \(U^{\varepsilon}_{X}\) _and_ \(\eta^{\varepsilon}_{X}\) _are_ \(\mathcal{O}(\varepsilon^{\frac{3}{2}})\)_, and_ \(V^{\varepsilon}_{X}\) _is_ \(\mathcal{O}(\varepsilon^{2})\) _with respect to the_ \(C^{0}\) _norm._
Proof.: Conclusion (a) is immediate. Conclusion (b) follows by arguing as in Lemma 4.14. We see that the coordinate change (3.1) gives a bounded linear map from \(H^{1}((0,1))\) to itself. Therefore (4.30) implies
\[\lim_{X\to\pm\infty}\|U^{\varepsilon}(X,\,\cdot\,)-\omega(\,\cdot\,-h)-h(1-h)- \varepsilon\|_{H^{1}}=0,\qquad\lim_{X\to\pm\infty}\|V^{\varepsilon}(X,\,\cdot \,)\|_{H^{1}}=0,\]
as required, and we already know \(\eta^{\varepsilon}(X)\) is homoclinic to \(0\).
For conclusion (c), the estimate on \(\eta_{X}^{\varepsilon}\) comes immediately from (4.29a). As for the other two, we again argue as in Lemma 4.14, and consider (3.3) to see
\[U_{X}^{\varepsilon} =Y_{XY}\mathcal{O}(1)+y_{Y}u_{x}^{\varepsilon}=(\eta_{X}^{ \varepsilon}+u_{x}^{\varepsilon})\mathcal{O}(1)=\mathcal{O}(\varepsilon^{ \frac{3}{2}})\] \[V_{X}^{\varepsilon} =v_{x}+Y_{X}v_{y}=\mathcal{O}(\varepsilon^{2}),\]
as required.
We now have everything we need to show that \(\overline{U}^{\varepsilon},\overline{V}^{\varepsilon},\overline{\eta}^{\varepsilon}\) satisfy the estimates in Theorem 1.1.
**Theorem 4.17**.: _The estimates on \(\overline{U}^{\varepsilon},\overline{V}^{\varepsilon},\overline{\eta}^{ \varepsilon}\) in Theorem 1.1 are true._
Proof.: Recall (3.1), and the definition of \(\overline{\eta}^{\varepsilon}\) from Theorem 1.1. It will be useful here to define the quantities
\[H =\begin{cases}-h^{-1}&\quad\text{for}\quad 0\leq y\leq h\\ (1-h)^{-1}&\quad\text{for}\quad h\leq y\leq 1\end{cases}\] \[h^{+} =\sup_{X\in\mathbb{R}}y(X,h+\overline{\eta}^{\varepsilon}(X))\] \[h^{-} =\inf_{X\in\mathbb{R}}y(X,h+\overline{\eta}^{\varepsilon}(X)).\]
Notice that \(h^{+}=h+\mathcal{O}(\varepsilon^{2})\), and similarly for \(h^{-}\). We have just shown the required result for \(\eta\) in Conclusion (c) of Proposition 4.12. We deal with \(U\) first. In what follows, \(X\) and \(Y\) should be understood to mean \(X(x,y)\) and \(Y(x,y)\) respectively. Using (3.1c) and (4.28), we see that
\[U^{\varepsilon}(X,Y) =y_{Y}(\omega(y-h)+h(1-h))+\varepsilon+\eta^{\varepsilon}(x)u_{ *}(y)+\mathcal{O}(\varepsilon^{2})\] \[=\omega(y-h)+h(1-h)+\varepsilon+\frac{\eta^{\varepsilon}(x)}{h(1 -h)}\left(\omega y(1-y)+h^{2}(1-h)^{2}H\right)+\mathcal{O}(\varepsilon^{2}). \tag{4.34}\]
Notice that despite the discontinuities in \(\omega\) and \(H\), we have that \(\omega y(1-y)+h^{2}(1-h)^{2}H\) is continuous at \(y=h\), and it is in fact Lipschitz continuous. Therefore, for \(y\) between \(0\) and \(h^{+}\), replacing \(\omega\) with \(\omega_{0}\) and \(H\) with \(-1/h\) in (4.34) introduces errors of \(\mathcal{O}(\varepsilon^{2})\). More precisely, for \((x,y)\in\mathbb{R}\times[0,h^{+}]\), we have that
\[U^{\varepsilon}(X,Y)=\omega_{0}(y-h)+h(1-h)+\varepsilon+\frac{\eta^{ \varepsilon}(x)}{h(1-h)}\left(\omega_{0}Y(1-Y)-h(1-h)^{2}\right)+\mathcal{O}( \varepsilon^{2}),\]
where as usual, the error term is uniform in \((x,y)\). Notice that if \((X,Y)\) satisfies \(Y\leq h+\overline{\eta}^{\varepsilon}(X)\), then it also satisfies \(Y\leq Y(x,h^{+})\). This means that when we apply the coordinate transformation (3.1b), we see that for all \(X,Y\) satisfying \(Y\leq h+\overline{\eta}^{\varepsilon}(X)\), we have
\[U^{\varepsilon}(X,Y) =\omega_{0}(Y-h)+h(1-h)+\varepsilon-(1-h)\eta^{\varepsilon}(x)+ \mathcal{O}(\varepsilon^{2})\] \[=\overline{U}^{\varepsilon}(X,Y)+\mathcal{O}(\varepsilon^{2}).\]
We argue similarly to conclude that for all \(X,Y\) satisfying \(Y\geq h+\overline{\eta}^{\varepsilon}(X)\), we have
\[U^{\varepsilon}(X,Y)=\omega_{1}(Y-h)+h(1-h)+\varepsilon+h\overline{\eta}^{ \varepsilon}(x)+\mathcal{O}(\varepsilon^{2})=\overline{U}^{\varepsilon}(X,Y) +\mathcal{O}(\varepsilon^{2}).\]
Showing the result for \(V\) follows a similar but more straightforward argument. We see for \((X,Y)\) with \(Y\leq h+\overline{\eta}^{\varepsilon}(X)\), we have
\[V^{\varepsilon}(X,Y) =\frac{3\sqrt{3}\varepsilon^{\frac{3}{2}}}{\theta}\tanh\Big{(} \frac{\sqrt{3\varepsilon}}{2h(1-h)}x\Big{)}\operatorname{sech}^{2}\Big{(} \frac{\sqrt{3\varepsilon}}{2h(1-h)}x\Big{)}(1-H(y-h))+\mathcal{O}(\varepsilon ^{\frac{5}{2}})\] \[=-\frac{3\sqrt{3}\varepsilon^{\frac{3}{2}}}{\theta h}\tanh\Big{(} \frac{\sqrt{3\varepsilon}}{2h(1-h)}x\Big{)}\operatorname{sech}^{2}\Big{(} \frac{\sqrt{3\varepsilon}}{2h(1-h)}x\Big{)}y+\mathcal{O}(\varepsilon^{\frac{5 }{2}})\] \[=-\frac{3\sqrt{3}\varepsilon^{\frac{3}{2}}}{\theta h}\tanh\Big{(} \frac{\sqrt{3\varepsilon}}{2h(1-h)}X\Big{)}\operatorname{sech}^{2}\Big{(} \frac{\sqrt{3\varepsilon}}{2h(1-h)}X\Big{)}Y+\mathcal{O}(\varepsilon^{\frac{5 }{2}})\] \[=\overline{V}^{\varepsilon}(X,Y)+\mathcal{O}(\varepsilon^{\frac{5 }{2}}),\]
and similarly, for \((X,Y)\) with \(Y\geq h+\overline{\eta}^{\varepsilon}(X)\), we have
\[V^{\varepsilon}(X,Y) =\frac{3\sqrt{3}\varepsilon^{\frac{3}{2}}}{\theta(1-h)}\tanh \Big{(}\frac{\sqrt{3\varepsilon}}{2h(1-h)}X\Big{)}\operatorname{sech}^{2} \Big{(}\frac{\sqrt{3\varepsilon}}{2h(1-h)}X\Big{)}(1-Y)+\mathcal{O}(\varepsilon ^{\frac{5}{2}})\] \[=\overline{V}^{\varepsilon}(X,Y)+\mathcal{O}(\varepsilon^{\frac{5 }{2}}).\qed\]
In a sense, (4.28) is a more precise approximation of the solution than \(\overline{U}^{\varepsilon},\overline{V}^{\varepsilon},\overline{\eta}^{\varepsilon}\). This is because in (4.28) the discontinuity in the derivatives occurs at \(y=h\), which is where it occurs in the exact solution, whereas the discontinuity in the derivatives of \(\overline{U}^{\varepsilon}\) and \(\overline{V}^{\varepsilon}\) occurs at \(Y=h+\overline{\eta}^{\varepsilon}\).
## 5. Streamline patterns
In this section we will prove Theorem 1.4. Both the qualitative nature of the results we show and the general method used are similar in spirit to those in, for example, [53]. From now on we assume without loss of generality that \(\omega_{0}\leq 1-h\). No generality is lost: if this does not hold, then performing the vertical reflection
\[y\mapsto 1-y,\quad h\mapsto 1-h,\quad V\mapsto-V \tag{5.1}\]
and relabelling \(\Omega_{0}\), \(\Omega_{1}\) puts us in the regime where once again \(\omega_{0}\leq 1-h\) holds. After performing this reflection, the vorticity of the lower layer is still one greater than that of the upper layer.
### Signs of components of the flow
One thing we can deduce quite straightforwardly is a sign for \(U^{\varepsilon}\) on the interface, as
\[U^{\varepsilon}(X,h+\eta^{\varepsilon}(X))=h(1-h)+\mathcal{O}(\varepsilon)>0. \tag{5.2}\]
We next show that the interface is monotone, that is, that \(\eta^{\varepsilon}_{x}(x)\) has one sign for \(x>0\) and the opposite sign for \(x<0\).
**Proposition 5.1**.: _For all \(x\), \(\eta^{\varepsilon}_{x}(x)\) has the same sign as \(\theta x\)._
Proof.: At \(x=0\) we have that \(a^{\varepsilon}_{x}(0)=\eta^{\varepsilon}_{x}(0)=0\) by evenness of \(a^{\varepsilon}\) and \(\eta^{\varepsilon}\). We now show the \(x\neq 0\) case. A corollary of Proposition 4.10 is that for all \(x\), \(\tilde{b}^{\varepsilon}(x)\) shares a sign with \(x\). Therefore, for \(x\neq 0\),
\[\frac{b^{\varepsilon}(x)}{\theta x}>0. \tag{5.3}\]
In particular, by Remark 4.11, \(b^{\varepsilon}(x)=0\) if and only if \(x=0\). We also know using (4.23a), that for \(x\neq 0\),
\[\frac{a_{x}^{\varepsilon}(x)}{b^{\varepsilon}(x)}=1+\mathcal{O}(|a^{\varepsilon }(x)|+|b^{\varepsilon}(x)|+|\varepsilon|)>0. \tag{5.4}\]
Therefore, when \(x\neq 0\), we know that \(a_{x}^{\varepsilon}(x)\neq 0\).
Consider (4.31). We are interested in \(\eta_{x}^{\varepsilon}\) so we take the third component of each vector, then divide the resulting equation by \(a_{x}^{\varepsilon}\). Since \(\psi_{a}\) and \(\psi_{b}\) are 3-component vectors, we continue our notational convention, and denote their third components by \(e_{3}\cdot\psi_{a}\) and \(e_{3}\cdot\psi_{b}\) respectively. We also know from (4.18) that \(e_{3}\cdot\psi\) is even in \(b\) and \(C^{N}\), hence \(e_{3}\cdot\psi_{b}\) is odd in \(b\) and \(C^{N-1}\), therefore
\[e_{3}\cdot\psi_{b}(a^{\varepsilon},b^{\varepsilon},\varepsilon)=b^{\varepsilon }(1+\mathcal{O}(|a^{\varepsilon}|+|b^{\varepsilon}|+|\varepsilon|))=b^{ \varepsilon}(1+\mathcal{O}(\varepsilon)).\]
Putting all this together, we get
\[\frac{\eta_{x}^{\varepsilon}(x)}{a_{x}^{\varepsilon}(x)}=1+e_{3}\cdot\psi_{a} (a^{\varepsilon},b^{\varepsilon},\varepsilon)+\frac{b_{x}^{\varepsilon}(x)b ^{\varepsilon}(x)}{a_{x}^{\varepsilon}(x)}(1+\mathcal{O}(\varepsilon)). \tag{5.5}\]
We know from differentiability of \(\psi_{a}\) that \(e_{3}\cdot\psi_{a}=\mathcal{O}(|a^{\varepsilon}|+|b^{\varepsilon}|+| \varepsilon|)=\mathcal{O}(\varepsilon)\), hence, multiplying (5.5) by \(a_{x}^{\varepsilon}/b^{\varepsilon}\), we see
\[\frac{\eta_{x}^{\varepsilon}(x)}{b^{\varepsilon}(x)} =(1+\mathcal{O}(\varepsilon))\frac{a_{x}^{\varepsilon}(x)}{b^{ \varepsilon}(x)}+b_{x}^{\varepsilon}(x)(1+\mathcal{O}(\varepsilon))\] \[=1+\mathcal{O}(\varepsilon)+b_{x}^{\varepsilon}(x)\big{(}1+ \mathcal{O}(|a^{\varepsilon}(x)|+|b^{\varepsilon}(x)|+|\varepsilon|)\big{)}= 1+\mathcal{O}(\varepsilon)>0,\]
where we have used the fact that \(b_{x}^{\varepsilon}(x)=\mathcal{O}(\varepsilon^{2})\), which can be deduced from the form of \(F\) in (4.21), the rescaling (4.24) and the boundedness of \(\tilde{a}^{\varepsilon}\) and \(\tilde{b}^{\varepsilon}\). Therefore, we see that \(\eta_{x}^{\varepsilon}\), \(b^{\varepsilon}\), and \(\theta x\) share a sign.
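As an illustrative leading-order cross-check of this sign (suppressing the \(\mathcal{O}(\varepsilon^{2})\) remainder in (4.28c), so valid on compact sets of \(x\)): writing \(c=\sqrt{3\varepsilon}/(2h(1-h))>0\), we have

\[\eta^{\varepsilon}_{x}(x)\approx\frac{\partial}{\partial x}\left(-\frac{3\varepsilon}{\theta}\operatorname{sech}^{2}(cx)\right)=\frac{6\varepsilon c}{\theta}\operatorname{sech}^{2}(cx)\tanh(cx),\]

which has the sign of \(\theta x\), since \(\tanh(cx)\) has the sign of \(x\) and \(1/\theta\) has the sign of \(\theta\).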
**Corollary 5.2**.: _For all \(x\), we have \(\theta\eta^{\varepsilon}(x)<0\)._
Proof.: We know by conclusion (b) of Corollary 4.16 that \(\eta^{\varepsilon}\) tends to \(0\) as \(x\) tends to \(\pm\infty\). This and Proposition 5.1 give us the result.
We now show that \(V^{\varepsilon}\) also shares a sign with \(\theta X\), except on the upper and lower boundaries, where we know \(V^{\varepsilon}=0\). We use a maximum principle argument, one possible reference for which would be [19, Section 6.4, Theorem 3].
**Proposition 5.3**.: _For all \((X,Y)\in\mathbb{R}\times(0,1)\), we have that \(V^{\varepsilon}(X,Y)\) and \(\theta X\) share a sign._
Proof.: Notice that by oddness, we have immediately that \(V^{\varepsilon}(0,Y)=0\). We now consider the case where \(X>0\) and \(\theta>0\), but the other cases follow an almost identical argument. Let \(\Omega_{i}^{M}=\{(X,Y)\in\Omega_{i}\mid 0<X<M\}\). Since \(V^{\varepsilon}\) is harmonic on \(\Omega_{i}^{M}\), we can apply the strong maximum principle, which says \(V^{\varepsilon}\) attains a minimum on \(\overline{\Omega_{i}^{M}}\), and does so only on \(\partial\Omega_{i}^{M}\). We know that \(V^{\varepsilon}(X,0)=0\) from (1.1c), and that \(V^{\varepsilon}(0,Y)=0\) by oddness. We also know from (1.1e) that
\[V^{\varepsilon}(X,h+\eta^{\varepsilon}(X))=\eta_{X}^{\varepsilon}(X)U^{ \varepsilon}(X,h+\eta^{\varepsilon}(X)),\]
therefore,
\[V^{\varepsilon}(X,h+\eta^{\varepsilon}(X))>0.\]
We know from Corollary 4.16 that \(\|V^{\varepsilon}(M,\,\cdot\,)\|_{H^{1}((0,1))}\) tends to \(0\) as \(M\) tends to infinity. This means we can use an argument by contradiction to show that \(V^{\varepsilon}>0\) on \(\Omega_{i}^{\infty}\).
Suppose there exists \((\hat{X},\hat{Y})\) with \(\hat{X}>0\) such that \(V^{\varepsilon}(\hat{X},\hat{Y})<0\). Since
\[\|V^{\varepsilon}(X,\,\cdot\,)\|_{H^{1}((0,1))}\to 0,\]
there exists \(M>\hat{X}\) such that for all \(Y\in[0,1]\), we have
\[|V^{\varepsilon}(M,Y)|<\tfrac{1}{2}|V^{\varepsilon}(\hat{X},\hat{Y})|.\]
Hence, the minimum \(V^{\varepsilon}\) attains on \(\partial\Omega_{i}^{M}\) is no less than \(\tfrac{1}{2}V^{\varepsilon}(\hat{X},\hat{Y})\), and so \(V^{\varepsilon}\) does not attain its minimum on the boundary, contradicting the maximum principle. Thus, for all \(M>0\), we have that \(V^{\varepsilon}\geq 0\) on \(\overline{\Omega_{i}^{M}}\). However, we know that \(V^{\varepsilon}\) does not attain its minimum in \(\Omega_{i}^{M}\), therefore \(V^{\varepsilon}>0\) on \(\Omega_{i}^{M}\). Therefore, for all \(X>0,Y\in(0,1)\), we have \(V^{\varepsilon}(X,Y)>0\).
The cases where \(\theta<0\) or \(X<0\) follow very similarly, and we conclude that for all \((X,Y)\in\mathbb{R}\times(0,1)\), we have that \(V^{\varepsilon}(X,Y)\) and \(\theta X\) share a sign.
We are now in a position to start describing the critical layers and stagnation points of the flow. Recall that the critical layer of the solution is the set on which \(U^{\varepsilon}=0\). We start by identifying the stagnation points of the trivial solutions. If \(\omega_{0}=1-h\), then the trivial solution stagnates at \(y=0\) and at \(y=1\). Otherwise the trivial solution stagnates in the upper layer at \(y=h-h(1-h)/\omega_{1}\), since we are now assuming \(\omega_{0}\leq 1-h\).
We now investigate the stagnation points of the non-trivial solutions. We separate into two cases, which occupy different regions of parameter space. It turns out that the critical layers are qualitatively very different; in the first case they are unbounded, and in the second, bounded.
### Unbounded critical layer
We first assume that \(1-h>\omega_{0}\), i.e., that stagnation occurs in \(\Omega_{1}\), rather than on the upper or lower boundaries. Notice that if we also have \(1-2h>\omega_{0}\), then we are outside the \(\omega=\gamma(\Psi)\) regime.
We now show there is a critical layer, and investigate some of its properties.
**Lemma 5.4**.: _Suppose \(1-h>\omega_{0}\). Given \(\varepsilon\), there exists a unique function \(Y_{*}(X)\) such that \(U^{\varepsilon}(X,Y_{*}(X))=0\). This function \(Y_{*}\) is analytic._
Proof.: First notice that in \(\Omega_{0}\), we have
\[U^{\varepsilon}(X,Y)=\omega_{0}(Y-h)+h(1-h)+\mathcal{O}(\varepsilon)>h\min(1-h,1-h-\omega_{0})+\mathcal{O}(\varepsilon)>0,\]
so any zeros of \(U^{\varepsilon}\) must be in \(\Omega_{1}\). Notice for all \(X\), we have
\[U^{\varepsilon}(X,1)=\omega_{1}(1-h)+\mathcal{O}(\varepsilon)<0,\]
and (5.2) tell us that
\[U^{\varepsilon}(X,h+\eta^{\varepsilon}(X))>0.\]
Therefore, by the intermediate value theorem, for all \(X\), there exists \(Y_{*}\in(h+\eta^{\varepsilon}(X),1)\) such that \(U^{\varepsilon}(X,Y_{*})=0\).
We now show this \(Y_{*}\) is unique. By (1.1b), in \(\Omega_{1}\),
\[U^{\varepsilon}_{Y}=\omega_{1}+V^{\varepsilon}_{X}\omega_{1}+\mathcal{O}( \varepsilon^{2})<0.\]
This implies that for each \(X\), \(U^{\varepsilon}\) is a strictly decreasing function of \(Y\), so in fact, for every \(X\), there exists a unique \(Y_{*}(X)\in(h+\eta^{\varepsilon}(X),1)\) such that \(U^{\varepsilon}(X,Y_{*})=0\). Since \(U^{\varepsilon}\) is an analytic function, and \(U^{\varepsilon}_{Y}(X,Y_{*}(X))\neq 0\), we can apply the analytic implicit function theorem to see that \(Y_{*}\) is an analytic function of \(X\).
_Remark 5.5_.: Solving (3.1c), we see that if \(U^{\varepsilon}(X,Y)=0\) for some \((X,Y)\in\Omega_{1}\), then \(y^{\varepsilon}(X,Y)=h-h(1-h)/\omega_{1}+\mathcal{O}(\varepsilon)\). By smoothness of the coordinate transformations, we see that \(Y=h-h(1-h)/\omega_{1}+\mathcal{O}(\varepsilon)\); in other words, the distance between the critical layer of our solution and that of the background shear flow is of order \(\varepsilon\), uniformly in \(X\). In particular, for \(\omega_{0}<1-h\), the critical layer \(Y_{*}\) does not touch the boundary \(Y=1\) or the interface.
We can deduce from this lemma that we have a stagnation point at \((0,Y_{*}(0))\), and will now examine its behaviour.
**Theorem 5.6**.: _Suppose \(1-h>\omega_{0}\). The flow given by \((U^{\varepsilon},V^{\varepsilon})\) has a unique stagnation point at \((0,Y_{*}(0))\), which is a centre if \(\theta>0\), and a saddle if \(\theta<0\)._
Proof.: We know that \(U^{\varepsilon}(0,Y)=0\) if and only if \(Y=Y_{*}(0)\), and that for \(Y\neq 0,1\), the vertical velocity \(V^{\varepsilon}(X,Y)=0\) if and only if \(X=0\), by Proposition 5.3. Therefore we have a unique stagnation point at \((0,Y_{*}(0))\). We now consider the nature of this stagnation point. Let \(s,s^{\prime}\) be the eigenvalues of the derivative of \((U^{\varepsilon},V^{\varepsilon})\) at a stagnation point. We know from incompressibility, and the fact that the solutions are real, that \(s+s^{\prime}=0\) and \(ss^{\prime}\in\mathbb{R}\). Hence, if the determinant of the derivative of \((U^{\varepsilon},V^{\varepsilon})\) is positive, \(s\) and \(s^{\prime}\) are both purely imaginary, and we have a centre. If the determinant is negative, they are both real, and we have a saddle. Notice that by evenness of \(U^{\varepsilon}\), we have \(U^{\varepsilon}_{X}(0,Y)=0\) for all \(Y\). Using this and (1.1b), we have
\[\det(D(U^{\varepsilon},V^{\varepsilon})(0,Y))=U^{\varepsilon}_{X}V^{ \varepsilon}_{Y}-U^{\varepsilon}_{Y}V^{\varepsilon}_{X}=-V^{\varepsilon}_{X} (\omega_{1}+V^{\varepsilon}_{X}).\]
We know that in \((0,\infty)\times(0,1)\), we have \(\theta V^{\varepsilon}>0\), that \(V^{\varepsilon}(0,Y)=0\), and that \(V^{\varepsilon}\) is harmonic, so we can apply the Hopf lemma to deduce that for all \(Y\in(0,1)\), we have \(\theta V^{\varepsilon}_{X}(0,Y)>0\). We also see from Conclusion (c) of Corollary 4.16 that
\[\omega_{1}+V^{\varepsilon}_{X}=\omega_{1}+\mathcal{O}(\varepsilon^{2})<0,\]
so therefore
\[\theta\det(D(U,V)(0,Y)))>0,\]
and in particular, this is true at \(Y=Y_{*}(0)\), the stagnation point. In other words, if \(\theta>0\), we have a centre, if \(\theta<0\), we have a saddle.
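The determinant criterion used above is a generic fact about planar incompressible flows: a trace-free \(2\times 2\) Jacobian has eigenvalues \(\pm\sqrt{-\det}\). The sketch below is purely illustrative, with arbitrary values \(\omega_{1}=-0.6\) and \(V^{\varepsilon}_{X}=\pm 0.05\) standing in for the two signs of \(\theta\):

```python
# Illustration of the centre/saddle criterion for a trace-free Jacobian
# J = [[0, omega_1 + V_X], [V_X, 0]], as in the determinant computed above.
import numpy as np

def classify(J):
    J = np.asarray(J, dtype=float)
    assert abs(np.trace(J)) < 1e-12          # incompressibility: trace-free
    det = float(np.linalg.det(J))
    kind = "centre" if det > 0 else "saddle"
    return kind, det, np.linalg.eigvals(J)

omega_1 = -0.6                                # arbitrary illustrative value
for V_X in (+0.05, -0.05):                    # stands in for sign(theta)
    J = [[0.0, omega_1 + V_X], [V_X, 0.0]]
    print(f"V_X={V_X:+.2f}:", classify(J))
```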
### Bounded critical layer
We now consider the case \(\omega_{0}=1-h\), \(\omega_{0}-\omega_{1}=1\). This implies \(\theta=2h-1\). Notice that it is sufficient to consider the case \(h\neq\frac{1}{2}\), as \(\theta=0\) if and only if \(h=\frac{1}{2}\). Notice also that we can reflect in \(Y\), as we did in (5.1), to insist that \(h>\frac{1}{2}\), which implies \(\theta>0\).
We will now show that the critical layer is bounded, and ends where it intersects with the upper boundary.
**Theorem 5.7**.: _Suppose \(\omega_{0}=1-h\), \(\omega_{1}=-h\), \(\frac{1}{2}<h<1\). There exists \(X_{*}>0\) such that the set on which \(U^{\varepsilon}=0\) is given by the graph of an analytic function \(Y_{*}\colon[-X_{*},X_{*}]\to[0,1]\), satisfying \(Y_{*}(X_{*})=Y_{*}(-X_{*})=1\). Let \(R\) be the region_
\[R=\{(X,Y)\mid|X|<X_{*},\ Y_{*}(X)<Y<1\}. \tag{5.6}\]
_Then \(U^{\varepsilon}<0\) in \(R\), and \(U^{\varepsilon}>0\) outside \(\overline{R}\). Furthermore, the flow given by \((U^{\varepsilon},V^{\varepsilon})\) has three stagnation points: a centre located at \((0,Y_{*}(0))\), and saddle points at \((\pm X_{*},1)\)._
Proof.: We first see that \(U^{\varepsilon}(0,1)<0\), and \(U^{\varepsilon}(0,0)>0\), by observing that
\[U^{\varepsilon}(0,1) =y_{Y}(0,1)(u^{\varepsilon}(0,1)+\omega_{1}(1-h)+h(1-h)+\varepsilon)\] \[=y_{Y}(0,1)(u^{\varepsilon}(0,1)+\varepsilon).\]
Using (3.2) and (4.17), we see that
\[U^{\varepsilon}(0,1)=(1+\mathcal{O}(\varepsilon))(a^{\varepsilon}(0)u_{*}(1) +\varepsilon+\mathcal{O}(\varepsilon^{2})),\]
and from (4.7) and (4.24), we have
\[U^{\varepsilon}(0,1) =(1+\mathcal{O}(\varepsilon))\left(\frac{2h\varepsilon}{\theta} \tilde{a}^{\varepsilon}(0)+\varepsilon+\mathcal{O}(\varepsilon^{2})\right)\] \[=\frac{-3h\varepsilon}{\theta}+\varepsilon+\mathcal{O}( \varepsilon^{2})=\frac{-1-h}{2h-1}\varepsilon+\mathcal{O}(\varepsilon^{2})<0.\]
A very similar argument shows that \(U^{\varepsilon}(0,0)>0\).
Given (3.1c) and the limiting behaviour of \(u^{\varepsilon}\) in Proposition 4.12, we see that if \(|X|\) is sufficiently large, we have for all \(Y\) that
\[U^{\varepsilon}(X,Y)=y_{Y}(\omega(y-h)+h(1-h)+\varepsilon+u^{\varepsilon}(x,y) )\geq y_{Y}(\varepsilon+u^{\varepsilon}(x,y))>0. \tag{5.7}\]
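Indeed, for the present choice of vorticities the quantity \(\omega(y-h)+h(1-h)\) is non-negative throughout the fluid: in \(\Omega_{0}\) we have \(\omega=\omega_{0}=1-h\) and
\[\omega(y-h)+h(1-h)=(1-h)y\geq 0,\]
while in \(\Omega_{1}\) we have \(\omega=\omega_{1}=-h\) and \(\omega(y-h)+h(1-h)=h(1-y)\geq 0\).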
Note that this is not true in the unbounded critical layer case. We conclude, by the intermediate value theorem and the evenness of \(U^{\varepsilon}\), that there exists \(X_{*}>0\) such that
\[U^{\varepsilon}(X_{*},1)=U^{\varepsilon}(-X_{*},1)=0.\]
Using (1.1c), Proposition 5.3, and the Hopf lemma, we see that for \(X\neq 0\),
\[XV^{\varepsilon}_{Y}(X,1)<0,\]
therefore applying (1.1a) gives us that \(XU^{\varepsilon}_{X}(X,1)>0\), so \(X_{*}\) is the only positive solution of \(U^{\varepsilon}(X,1)=0\). We also see that if \(|X|>X_{*}\), we have \(U^{\varepsilon}(X,1)>0\), and if \(|X|<X_{*}\), we have \(U^{\varepsilon}(X,1)<0\).
We now note that for all \(X\in(-X_{*},X_{*})\), we have \(U^{\varepsilon}(X,1)<0<U^{\varepsilon}(X,h+\eta(X))\), therefore there exists \(Y_{*}(X)\in(h+\eta^{\varepsilon}(X),1)\) such that \(U^{\varepsilon}(X,Y_{*}(X))=0\). We see from (1.1b) and the fact that \(\omega_{1}<0\) that in \(\Omega_{1}\),
\[U^{\varepsilon}_{Y}(X,Y)=\omega_{1}+V^{\varepsilon}_{X}(X,Y)=\omega_{1}+ \mathcal{O}(\varepsilon^{2})<0,\]
and similarly in \(\Omega_{0}\), we have \(U^{\varepsilon}_{Y}(X,Y)>0\). Therefore, since \(U^{\varepsilon}_{Y}(X,0)>0\), for each \(X\) this \(Y_{*}(X)\) is unique. We can apply the analytic implicit function theorem to conclude that \(Y_{*}(X)\) is an analytic function. Next, we seek to show that \(Y_{*}(X)\to 1\) as \(X\to X_{*}\) and as \(X\to-X_{*}\). Notice that
\[\begin{pmatrix}-1\\ -1\end{pmatrix}\cdot\nabla U^{\varepsilon}(X_{*},1)=-U^{\varepsilon}_{X}(X_{* },1)-U^{\varepsilon}_{Y}(X_{*},1)=-\omega_{1}+\mathcal{O}(\varepsilon^{\frac{ 3}{2}})>0.\]
Thus, there exists \(\delta>0\) such that for all \(t\in(0,\delta)\), we have \(U^{\varepsilon}(X_{*}-t,1)<0<U^{\varepsilon}(X_{*}-t,1-t)\), therefore \(1-t<Y_{*}(X_{*}-t)<1\), so we are done.
Define the bounded region \(R\) as in (5.6). We know that \(U^{\varepsilon}(X,Y_{*}(X))=0\), and that \(U(X,1)<0\) for \(|X|<X_{*}\). Therefore we can apply the maximum principle to show that in \(R\), we have \(U^{\varepsilon}<0\). We can also show that \(U^{\varepsilon}>0\) outside \(\overline{R}\). To do this we first recall that we deduced in (5.7) that there exists \(M>X_{*}\) such that for all \(X\) with \(|X|\geq M\), and all \(Y\), we have \(U^{\varepsilon}(X,Y)>0\).
For \(X\in(-M,M)\), we can apply the Hopf lemma to \(V^{\varepsilon}\) at \((X,0)\), and deduce with (1.1a) that \(XU^{\varepsilon}_{X}(X,0)>0\). We already showed at the very start of this proof that \(U^{\varepsilon}(0,0)>0\), and so we see that for all \(X\), \(U^{\varepsilon}(X,0)>0\). Therefore we apply the maximum principle to \(U^{\varepsilon}\) on \(((-M,M)\times(0,1))\setminus\overline{R}\), and conclude that on this set too, \(U^{\varepsilon}>0\).
Finally, we discuss the stagnation point. We deduce from what we have just discussed that there is a stagnation point at \((0,Y_{*}(0))\), and that there are no others away from the boundaries, and apply the same arguments as in Theorem 5.6 to conclude that since the stagnation happens in \(\Omega_{1}\), and \(\theta>0\), we have a centre.
We now show that in this region of parameter space, we have a streamline which is attached to the upper layer, as shown in Figure 2.
**Proposition 5.8**.: _Suppose \(\omega_{0}=1-h\), \(\omega_{1}=-h\), \(\frac{1}{2}<h<1\). Recall \(X_{*}\) from Theorem 5.7, the \(X\) coordinate of where the critical layer meets the upper boundary. There exists a streamline with endpoints \((-X_{*},1)\) and \((X_{*},1)\)._
Proof.: We first recall that streamlines are level curves of the stream function \(\Psi^{\varepsilon}\). Pick the stream function such that \(\Psi^{\varepsilon}(-X_{*},1)=0\). For notational convenience, let \(\tilde{X}=-U^{\varepsilon}_{Y}(-X_{*},1)\), \(\tilde{Y}=-U^{\varepsilon}_{X}(-X_{*},1)\). We have that \(\tilde{X},\tilde{Y}>0\): indeed, \(U^{\varepsilon}_{Y}(-X_{*},1)=\omega_{1}+\mathcal{O}(\varepsilon^{2})<0\), and since \(XU^{\varepsilon}_{X}(X,1)>0\) for \(X\neq 0\), we have \(U^{\varepsilon}_{X}(-X_{*},1)<0\). Notice that for all sufficiently small \(t\),
\[\Psi^{\varepsilon}(-X_{*}+t\tilde{X},1-\tfrac{3}{2}t\tilde{Y}) =-\tfrac{3}{2}t^{2}\tilde{X}\tilde{Y}U^{\varepsilon}_{X}(-X_{*}, 1)+\tfrac{9}{8}t^{2}\tilde{Y}^{2}U^{\varepsilon}_{Y}(-X_{*},1)+\mathcal{O}(t^ {3})\] \[=\tfrac{3}{2}t^{2}\tilde{X}\tilde{Y}^{2}-\tfrac{9}{8}t^{2}\tilde {X}\tilde{Y}^{2}+\mathcal{O}(t^{3})>0,\]
and
\[\Psi^{\varepsilon}(-X_{*}+t\tilde{X},1-3t\tilde{Y}) =-3t^{2}\tilde{X}\tilde{Y}U^{\varepsilon}_{X}(-X_{*},1)+\tfrac{9}{2}t^{2}\tilde{Y}^{2}U^{\varepsilon}_{Y}(-X_{*},1)+\mathcal{O}(t^{3})\] \[=3t^{2}\tilde{X}\tilde{Y}^{2}-\tfrac{9}{2}t^{2}\tilde{X}\tilde{Y}^{2}+\mathcal{O}(t^{3})<0.\]
Therefore, by the intermediate value theorem, for all \(t>0\) sufficiently small, there exists \(Y_{**}\in(1-3t\tilde{Y},1-\tfrac{3}{2}t\tilde{Y})\) such that \(\Psi^{\varepsilon}(-X_{*}+t\tilde{X},Y_{**})=0\).
We also have that \(\Psi^{\varepsilon}\) is increasing in \(Y\), in that for all \(s\in(\frac{3}{2},3)\), we have
\[\Psi^{\varepsilon}_{Y}(-X_{*}+t\tilde{X},1-st\tilde{Y}) =U^{\varepsilon}(-X_{*}+t\tilde{X},1-st\tilde{Y}) \tag{5.8}\] \[=t\tilde{X}U^{\varepsilon}_{X}(-X_{*},1)-st\tilde{Y}U^{\varepsilon }_{Y}(-X_{*},1)\] \[=-t\tilde{X}\tilde{Y}+st\tilde{Y}\tilde{X}>0.\]
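(Strictly speaking, the final line of (5.8) holds up to an error of size \(\mathcal{O}(t^{2})\); since \(s-1\geq\tfrac{1}{2}\) and \(\tilde{X}\tilde{Y}>0\), this does not affect the sign once \(t\) is sufficiently small.)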
Thus, for each \(X\), this \(Y_{**}\) is unique in \((1-3(X_{*}+X)\tilde{Y}\tilde{X}^{-1},1-\tfrac{3}{2}(X_{*}+X)\tilde{Y}\tilde{X }^{-1})\). We can now apply the implicit function theorem to deduce that we have some small \(\delta>0\), and a smooth function \(Y_{**}(X)\), defined for \(X\in[-X_{*},-X_{*}+\delta]\), such that
\(\Psi^{\varepsilon}(X,Y_{**}(X))=0\), and \(Y_{**}(X)\to 1\) as \(X\to-X_{*}\). In other words, \((X,Y_{**}(X))\) gives part of a streamline which touches the upper boundary at \((-X_{*},1)\).
We now seek the rest of the streamline. Consider the motion of a fluid particle starting from \((-X_{*}+\delta,Y_{**}(-X_{*}+\delta))\). Notice that (5.8) shows that \(U^{\varepsilon}\) is positive along \(Y_{**}\), so \(Y_{**}\) cannot lie in \(R\), the region defined in (5.6) where \(U^{\varepsilon}<0\), which lies above \(Y_{*}\). Therefore, as \(t\to-\infty\), the particle will approach \((-X_{*},1)\).
We now show that the particle reaches the line \(X=0\), so by the parity of \(U^{\varepsilon}\) and \(V^{\varepsilon}\), as \(t\to\infty\), the particle will approach \((X_{*},1)\). For \(X<0\), we see \(U^{\varepsilon}(X,Y_{*}(X))=0\), and \(V^{\varepsilon}(X,Y_{*}(X))>0\). Hence, for \(X<0\), fluid particles cannot enter \(R\), only leave it. Thus, since the fluid particle started outside \(R\), it cannot enter \(R\) before crossing the line \(X=0\). Therefore, while the fluid particle is in \((-\infty,0]\times[0,1]\), it cannot travel upwards (using Proposition 5.3), or to the left. Note that the particle cannot approach the centre at \((0,Y_{*}(0))\), so must remain at least some minimum distance \(\mu\) from it at all times.
Consider the compact set \(K=[-X_{*}+\delta,0]\times[0,1]\setminus B_{\mu}((0,Y_{*}(0)))\). This region has no stagnation points in it, therefore, the fluid flow in this region has some minimum speed. The fluid particle cannot move up or left, travels with some minimum speed, and cannot pass through the lower boundary, therefore must leave the right hand edge of \(K\). It cannot do so by coming within \(\mu\) of the centre, therefore it must leave at some point on the line \(X=0\). Therefore by the parity of \(U^{\varepsilon}\) and \(V^{\varepsilon}\) we are done.
## 6. Acknowledgements
KM received partial support through The Leverhulme Trust RPG-2020-107. JS received support through EPSRC, EP/T518013/1.
## Appendix A Proof of Proposition 4.10
Proof.: Recall we have fixed \(N\geq 4\), and have that \(F\in C^{N}\). Let the terms of the Taylor polynomial of \(e_{1}\cdot F\) be given by \(\mu_{ijk}a^{i}b^{j}\varepsilon^{k}\), and of \(e_{2}\cdot F\) by \(\lambda_{ijk}a^{i}b^{j}\varepsilon^{k}\). Throughout the following, we have a family of remainder functions, indexed by \(j\), such that \(r^{i}_{j}(\alpha,\beta,\varepsilon)=\mathcal{O}(|\alpha|^{i}+|\beta|^{i}+ \varepsilon^{i})\), and is even in \(\beta\).
We know
\[a_{x}=b(1+\mu_{110}a+\mu_{011}\varepsilon+r^{2}_{1}(a,b,\varepsilon)).\]
Therefore, by the implicit function theorem and implicit differentiation,
\[b =a_{x}(1-\mu_{110}a-\mu_{011}\varepsilon+r^{2}_{2}(a,a_{x}, \varepsilon))\] (A.1a) \[b_{x} =a_{xx}(1+r^{1}_{3}(a,a_{x},\varepsilon))-\mu_{110}a^{2}_{x}+r^{ 3}_{4}(a,a_{x},\varepsilon).\] (A.1b)
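Here (A.1a) amounts to inverting the factor multiplying \(b\) in the previous display, using \((1+u)^{-1}=1-u+\mathcal{O}(u^{2})\), and (A.1b) then follows, for instance, by differentiating (A.1a) in \(x\) and substituting for \(b\).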
Therefore, since we also know that \(b_{x}=\lambda_{101}a\varepsilon+\lambda_{200}a^{2}+r^{3}_{5}(a,b,\varepsilon)\), we have
\[a_{xx}=\lambda_{101}a\varepsilon+\lambda_{200}a^{2}+\lambda_{020}b^{2}+\mu_{11 0}a^{2}_{x}+r^{3}_{6}(a,a_{x},\varepsilon).\]
Applying the scaling, we see
\[\tilde{a}_{\tilde{x}\tilde{x}}=\tilde{a}+\tilde{a}^{2}+\varepsilon R(\tilde{ a},\tilde{a}_{\tilde{x}},\varepsilon).\]
Here \(R\in C^{N-1}(\mathbb{R}^{3})\) is even in \(\tilde{a}_{\tilde{x}}\).
Now we let \(z^{\varepsilon}=\tilde{a}^{\varepsilon}-\tilde{a}^{0}\). We see that
\[z^{\varepsilon}_{\tilde{x}\tilde{x}}=z^{\varepsilon}+2\tilde{a}^{0}z^{ \varepsilon}+(z^{\varepsilon})^{2}+\varepsilon R(\tilde{a}^{0}+z^{\varepsilon},\tilde{a}^{0}_{\tilde{x}}+z^{\varepsilon}_{\tilde{x}},\varepsilon).\] (A.2)
Let \(C^{n}_{\mathrm{b,e}}(\mathcal{U})\) be the set of even functions with domain \(\mathcal{U}\), and with finite \(C^{n}\) norm. We now use the implicit function theorem to show that for \(\varepsilon,\|z\|_{C^{2}}\) sufficiently small, we have that for each \(\varepsilon\), (A.2) has a unique solution \(z^{\varepsilon}\in C^{2}_{\mathrm{b,e}}\), such that the map \(\varepsilon\mapsto z^{\varepsilon}\) is in \(C^{N-1}(\mathbb{R},C^{2}(\mathbb{R}))\). First, we let
\[K_{1}\colon C^{0}_{\mathrm{b,e}}(\mathbb{R})\to C^{2}_{\mathrm{b,e}}( \mathbb{R}),\qquad f(\,\cdot\,)\mapsto\int_{-\infty}^{\infty}-\frac{1}{2}e^{- \mid\,\cdot\,-t\mid}f(t)\;dt.\]
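(The kernel \(-\tfrac{1}{2}e^{-|\,\cdot\,-t|}\) is the Green's function of \(\partial_{x}^{2}-1\) on the line: it is annihilated by \(\partial_{x}^{2}-1\) away from \(x=t\), and its derivative jumps by \(1\) across \(x=t\); integrating by parts twice then gives the identity \(K_{1}(f^{\prime\prime}-f)=f\) stated below.)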
This satisfies \(K_{1}(f^{\prime\prime}-f)=f\), and is bounded from \(C^{0}_{\mathrm{b,e}}\) to \(C^{2}_{\mathrm{b,e}}\). We also define
\[r\colon C^{2}_{\mathrm{b,e}}(\mathbb{R})\times\mathbb{R}\to C^{0}_{\mathrm{b,e }}(\mathbb{R}),\qquad(f,\varepsilon)\mapsto f^{2}+\varepsilon R(\tilde{a}^{0}+ f,\tilde{a}^{0}_{\tilde{x}}+f_{\tilde{x}},\varepsilon).\]
Notice, \(r\) does indeed have codomain consisting of even functions, as \(\tilde{a}^{0}\) is even, and \(R\) is even in its second argument. Thus, (A.2) can be written equivalently as
\[z^{\varepsilon}=2K_{1}(\tilde{a}^{0}z^{\varepsilon})+K_{1}(r(z^{\varepsilon},\varepsilon)).\]
Let \(K_{2}z=2K_{1}(\tilde{a}^{0}z)\). An easy extension of Arzela-Ascoli shows that \(K_{2}\) is compact from \(C^{0}_{\mathrm{b,e}}(\mathbb{R})\) to \(C^{0}_{\mathrm{b,e}}(\mathbb{R})\). Therefore its spectrum is made only of eigenvalues and possibly \(0\), which means in particular that if \(\ker(K_{2}-I)=\{0\}\), then \(K_{2}-I\) is continuously invertible from \(C^{0}_{\mathrm{b,e}}(\mathbb{R})\) to \(C^{0}_{\mathrm{b,e}}(\mathbb{R})\). If \((K_{2}-I)z=0\), we see \(2\tilde{a}^{0}z=z_{\tilde{x}\tilde{x}}-z\). Multiplying by \(\tilde{a}^{0}_{\tilde{x}}\) then integrating by parts gives \((\tilde{a}^{0})^{2}z=\tilde{a}^{0}_{\tilde{x}}z_{\tilde{x}}-\tilde{a}^{0}z\), which can be rearranged to \(\tilde{a}^{0}_{\tilde{x}\tilde{x}}z=\tilde{a}^{0}_{\tilde{x}}z_{\tilde{x}}\). Hence \(z\) is a multiple of \(\tilde{a}^{0}_{\tilde{x}}\). But this is an odd function, therefore \(\ker(K_{2}-I)=\{0\}\) as required. Therefore \(K_{2}-I\) is continuously invertible from \(C^{0}_{\mathrm{b,e}}(\mathbb{R})\) to \(C^{0}_{\mathrm{b,e}}(\mathbb{R})\), and it is easy to then show that \(K_{2}-I\) is continuously invertible from \(C^{2}_{\mathrm{b,e}}(\mathbb{R})\) to \(C^{2}_{\mathrm{b,e}}(\mathbb{R})\).
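In the argument above, the relation \(\tilde{a}^{0}_{\tilde{x}\tilde{x}}z=\tilde{a}^{0}_{\tilde{x}}z_{\tilde{x}}\) says precisely that \((z/\tilde{a}^{0}_{\tilde{x}})_{\tilde{x}}=0\) away from the zeros of \(\tilde{a}^{0}_{\tilde{x}}\); note also that \(\tilde{a}^{0}_{\tilde{x}}\) does solve the linearised equation, since differentiating \(\tilde{a}^{0}_{\tilde{x}\tilde{x}}=\tilde{a}^{0}+(\tilde{a}^{0})^{2}\) gives
\[(\tilde{a}^{0}_{\tilde{x}})_{\tilde{x}\tilde{x}}=\tilde{a}^{0}_{\tilde{x}}+2\tilde{a}^{0}\tilde{a}^{0}_{\tilde{x}}.\]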
We can now write (A.2) as \(\Phi(z,\varepsilon)=0\), where
\[\Phi\colon C^{2}_{\mathrm{b,e}}(\mathbb{R})\times\mathbb{R}\to C^{2}_{\mathrm{ b,e}}(\mathbb{R}),\qquad(z,\varepsilon)\mapsto z-(I-K_{2})^{-1}(K_{1}(r(z, \varepsilon)))\]
is a \(C^{N-1}\) function. Taking a Frechet derivative with respect to \(z\) at \((0,0)\), we have that
\[D_{z}\Phi(0,0)=I-(I-K_{2})^{-1}(K_{1}(D_{z}r(0,0)))=I,\]
an isomorphism; here \(D_{z}r(0,0)=0\), since the term \(f^{2}\) has vanishing derivative at \(f=0\) and the term \(\varepsilon R\) vanishes identically at \(\varepsilon=0\). Therefore, we have by the implicit function theorem that \(z^{\varepsilon}\) exists for \(\varepsilon\) sufficiently small, and that the map \(\varepsilon\mapsto\tilde{a}^{\varepsilon}\) is in \(C^{N-1}(\mathbb{R},C^{2}(\mathbb{R}))\). Note that in fact \(z^{\varepsilon}\) exists for \(\varepsilon\) small and negative, but since our rescaling does not make sense for negative \(\varepsilon\), we ignore these solutions. Applying the rescaling (4.24) to (A.1a), we see that \(\tilde{b}^{\varepsilon}=\tilde{a}^{\varepsilon}_{\tilde{x}}(1+\mathcal{O}(\varepsilon))\), where the \(\mathcal{O}\) is with respect to the \(C^{0}\) norm. However, considering the second component of (4.25), we see that \(\tilde{b}^{\varepsilon}_{\tilde{x}}(\tilde{x})\) has \(C^{N}\) dependence on \(\tilde{a}^{\varepsilon}(\tilde{x})\), \(\tilde{a}^{\varepsilon}_{\tilde{x}}(\tilde{x})\), and \(\varepsilon\), so we see that in fact,
\[\tilde{b}^{\varepsilon}=\tilde{a}^{\varepsilon}_{\tilde{x}}+\mathcal{O}( \varepsilon)=\tilde{a}^{0}_{\tilde{x}}+\mathcal{O}(\varepsilon),\]
where the \(\mathcal{O}\) is with respect to the \(C^{1}\) norm.
2303.09432 | Chromatic aberrations of geometric Satake over the regular locus | Sanath K. Devalapurkar | 2023-03-16T16:08:45Z | http://arxiv.org/abs/2303.09432v1

# Chromatic aberrations of geometric Satake over the regular locus
###### Abstract.
Let \(G\) be a connected and simply-connected semisimple group over \(\mathbf{C}\), let \(G_{c}\) be a maximal compact subgroup of \(G(\mathbf{C})\), and let \(T\) be a maximal torus. The derived geometric Satake equivalence of Bezrukavnikov-Finkelberg localizes to an equivalence between a full subcategory of \(\operatorname{Loc}_{G_{c}}(\Omega G_{c};\mathbf{C})\) and \(\operatorname{QCoh}(\check{\mathfrak{g}}^{\operatorname{reg}}[2]/\check{G})\), which can be thought of as a version of the geometric Satake equivalence "over the regular locus". In this article, we study the story when \(\operatorname{Loc}_{T_{c}}(\Omega G_{c};\mathbf{C})\) is replaced by the \(\infty\)-category of \(T\)-equivariant local systems of \(A\)-modules over \(\operatorname{Gr}_{G}(\mathbf{C})\), where \(A\) is a complex-oriented even-periodic \(\mathbf{E}_{\infty}\)-ring equipped with an oriented group scheme \(\mathbf{G}\). We show that upon rationalization, \(\operatorname{Loc}_{T_{c}}(\Omega G_{c};A)\), which was studied variously by Arkhipov-Bezrukavnikov-Ginzburg and Yun-Zhu when \(A=\mathbf{C}[\beta^{\pm 1}]\), can be described in terms of the spectral geometry of various Langlands-dual stacks associated to \(A\) and \(\mathbf{G}\). For example, this implies that if \(A\) is an elliptic cohomology theory with elliptic curve \(E\), then \(\operatorname{Loc}_{T_{c}}(\Omega G_{c};A)\otimes\mathbf{Q}\) can be described via the moduli stack of \(\check{B}\)-bundles of degree \(0\) on \(E^{\vee}\).
Part of this work was done when the author was supported by the PD Soros Fellowship and NSF DGE-2140743. The present article is a preliminary version, so any comments and suggestions for improving it are greatly appreciated! I'll post major updates to the arXiv, but I'll upload minor edits to my website; so please see my website for the most up-to-date version.
## 1. Introduction
Let \(G\) be a simply-connected semisimple algebraic group or a torus over \(\mathbf{C}\). Many deep results in geometric representation theory are concerned with describing the "topological"/A-side category of D-modules on algebraic (ind-)schemes associated to \(G\) (such as the flag variety, the nilpotent cone, the affine Grassmannian, the affine flag variety, etc.) in terms of representation-theoretic/algebro-geometric B-side data associated to \(\check{G}\), the Langlands dual. These equivalences can be interpreted as refinements of the Fourier/Mellin transform. By the Riemann-Hilbert equivalence, the A-side category of D-modules on \(X\) may be interpreted instead as categories of constructible sheaves of \(\mathbf{C}\)-vector spaces on \(X(\mathbf{C})\). The goal of this manuscript is to study analogues of some of these equivalences when we instead consider the category of constructible sheaves of \(A\)-module spectra on \(X(\mathbf{C})\), where \(A\) is a complex-oriented even-periodic \(\mathbf{E}_{\infty}\)-ring (such as topological K-theory KU, or an elliptic cohomology theory).
### Summary of content
In this article, we take a few steps towards establishing a chromatic homotopy-theoretic analogue of the derived geometric Satake equivalence. Let \(B\) be a Borel subgroup of \(G\). Let \(\mathscr{K}\) denote \(\mathbf{C}(\!(t)\!)\), and let \(\mathscr{O}\) denote \(\mathbf{C}[\![t]\!]\). The affine Grassmannian \(\mathrm{Gr}_{G}\) is defined as the sheafification of the functor of points \(\mathrm{CAlg}_{\mathbf{C}}\ni R\mapsto G(R\otimes_{\mathbf{C}}\mathscr{K})/G( R\otimes_{\mathbf{C}}\mathscr{O})\). It has the property that \(\mathrm{Gr}_{G}(\mathbf{C})\) is homotopy equivalent to \(\Omega G_{c}\simeq\Omega^{2}BG_{c}\), where \(G_{c}\) is a maximal compact subgroup of \(G(\mathbf{C})\); see [10]. (Note that \(G_{c}\) is homotopy equivalent to \(G(\mathbf{C})\), so for most of the topological parts of this article, the distinction between them will be irrelevant.) The classical geometric Satake equivalence says:
**Theorem 1.1.1** (Classical geometric Satake, [11]).: _The abelian category \(\mathrm{Perv}_{G(\mathscr{O})}(\mathrm{Gr}_{G};\mathbf{Q})\) of \(G(\mathscr{O})\)-equivariant perverse sheaves on \(\mathrm{Gr}_{G}\) is equivalent to \(\mathrm{Rep}(\check{G}_{\mathbf{Q}})\), where \(\check{G}_{\mathbf{Q}}\) is the Langlands dual group1 over \(\mathbf{Q}\)._
Footnote 1: This denotes the base-change to \(\mathbf{Q}\) of the Chevalley scheme over \(\mathbf{Z}\), i.e., the split reductive group scheme whose root datum coincides with the root datum of \(\check{G}_{\mathbf{C}}\).
In [1], building on work of Ginzburg [12], Bezrukavnikov-Finkelberg proved a _derived_ analogue of the geometric Satake equivalence:
**Theorem 1.1.2** (Derived geometric Satake, [1]).: _There is an equivalence \(\mathrm{DMod}_{G(\mathscr{O})}(\mathrm{Gr}_{G})\simeq\mathrm{QCoh}(\check{ \mathfrak{g}}_{\mathbf{C}}[2]/\check{G}_{\mathbf{C}})\) of \(\mathbf{C}\)-linear \(\infty\)-categories, where \(\check{\mathfrak{g}}_{\mathbf{C}}[2]\) is the derived \(\mathbf{C}\)-scheme \(\mathrm{Spec}\,\mathrm{Sym}_{\mathbf{C}}(\check{\mathfrak{g}}_{\mathbf{C}}^{*} [-2])\)._
**Remark 1.1.3**.: The Bezrukavnikov-Finkelberg equivalence leads to a simpler equivalence on the level of local systems: \(\mathrm{Loc}_{G_{c}}(\Omega G_{c};\mathbf{C})\simeq\mathrm{QCoh}(\check{ \mathfrak{g}}_{\mathbf{C}}^{\mathrm{reg}}[2]/\check{G}_{\mathbf{C}})\). This can be proved using [13, Proposition 2.2.1] and [1, Proposition 2.8]. This statement over the regular locus in fact plays a key role in proving the derived geometric Satake equivalence.
Our goal in this article (partly inspired by Adams' quote above, the work [1] of Hopkins-Kuhn-Ravenel corresponding to the diametric case of \(G\) being a _finite_ group, and the discussion in [15] and Appendix B) is to begin exploring the analogous story when \(\mathbf{C}\) is replaced by a generalized cohomology theory. Specifically, we will replace \(\mathbf{C}\) with an even-periodic \(\mathbf{E}_{\infty}\)-ring equipped with specific additional data. The idea of considering other coefficient cohomology theories in the context of geometric representation theory is not new; see [1]
for an early discussion of such ideas, as well as [12, 13, 14] for more recent work in this direction.
**Remark 1.1.4**.: Part of the reason the derived contributions are vital to generalizing the geometric Satake equivalence is that when one considers sheaves with coefficients in a 2-periodic \(\mathbf{E}_{\infty}\)-ring (or any \(\mathbf{E}_{\infty}\)-ring with nonzero homotopy in positive degrees), contributions from higher cohomology are circulated to degree 0. For instance, the result of Bezrukavnikov-Finkelberg implies that \(\operatorname{Shv}^{c}_{G(\mathscr{O})}(\operatorname{Gr}_{G}(\mathbf{C}); \mathbf{C}[\beta^{\pm 1}])\simeq\operatorname{QCoh}(\tilde{\mathfrak{g}}_{\mathbf{C}}[2] /\check{G}_{\mathbf{C}})\otimes_{\mathbf{C}}\mathbf{C}[\beta^{\pm 1}]\) where \(|\beta|=2\); but this is in turn equivalent to \(\operatorname{QCoh}(\tilde{\mathfrak{g}}_{\mathbf{C}}/\check{G}_{\mathbf{C}}) \otimes_{\mathbf{C}}\mathbf{C}[\beta^{\pm 1}]\), which is _not_ the 2-periodification of \(\operatorname{Rep}(\check{G}_{\mathbf{C}})\). However, let us note that in the setting of relative geometric Langlands (as discussed in [11]), 2-periodification is a rather destructive procedure: the particular shifts involved on the coherent side are extremely important, since they provide a geometric analogue of the point of evaluation of the Langlands dual L-function.
We will study a variant of a result of Arkhipov-Bezrukavnikov-Ginzburg (ABG) from [1], which is closely related to the geometric Satake equivalence. Namely, let \(I=G(\mathscr{O})\times_{G}B\) denote the Iwahori subgroup of \(G(\mathscr{O})\). Then:
**Theorem 1.1.5** (Arkhipov-Bezrukavnikov-Ginzburg).: _There is an equivalence \(\operatorname{DMod}_{I}(\operatorname{Gr}_{G})\simeq\operatorname{IndCoh}(( \widetilde{\mathscr{N}}\times_{\tilde{\mathfrak{g}}}\{0\})/\check{G})\), where \(\widetilde{\mathscr{N}}=T^{*}(\check{G}/\check{B})\) is the Springer resolution. This is in turn equivalent to \(\operatorname{QCoh}(\widetilde{\tilde{\mathfrak{g}}}_{\mathbf{C}}[2]/\check{G }_{\mathbf{C}})\) by Koszul duality, where \(\widetilde{\tilde{\mathfrak{g}}}_{\mathbf{C}}[2]=\check{G}\times^{\check{B}} \check{\mathfrak{b}}[2]\) is a shifted analogue of the Grothendieck-Springer resolution._
**Remark 1.1.6**.: As in Remark 1.1.3, the ABG equivalence leads to a simpler equivalence on the level of local systems: \(\operatorname{Loc}_{T_{c}}(\Omega G_{c};\mathbf{C})\simeq\operatorname{QCoh}( \widetilde{\tilde{\mathfrak{g}}}_{\mathbf{C}}^{\operatorname{reg}}/\check{G}_{ \mathbf{C}})\). Upon 2-periodification, we therefore see that \(\operatorname{Loc}_{T_{c}}(\Omega G_{c};\mathbf{C}[\beta^{\pm 1}])\simeq \operatorname{QCoh}(\widetilde{\tilde{\mathfrak{g}}}_{\mathbf{C}}^{ \operatorname{reg}}/\check{G}_{\mathbf{C}})\otimes_{\mathbf{C}}\mathbf{C}[ \beta^{\pm 1}]\). Again, this statement over the regular locus in fact plays a key role in proving the ABG equivalence.
Note that pullback along the inclusion of a point into \(\operatorname{Gr}_{G}(\mathbf{C})\) defines a symmetric monoidal functor \(\operatorname{Shv}^{c}_{I}(\operatorname{Gr}_{G}(\mathbf{C});\mathbf{C}[ \beta^{\pm 1}])\to\operatorname{Shv}^{c}_{I}(*;\mathbf{C}[\beta^{\pm 1}])\), and there is an equivalence \(\operatorname{Loc}_{T_{c}}(G_{c};\mathbf{C}[\beta^{\pm 1}])\simeq \operatorname{End}_{\operatorname{Loc}_{T_{c}}(\Omega G_{c};\mathbf{C}[\beta^ {\pm 1}])}(\operatorname{Loc}_{T_{c}}(*;\mathbf{C}[\beta^{\pm 1}]))\). Using the ABG theorem, one can prove an equivalence
\[\operatorname{Loc}_{T_{c}}(G_{c};\mathbf{C}[\beta^{\pm 1}])\simeq \operatorname{QCoh}(\check{\mathfrak{t}}\times_{\widetilde{\tilde{\mathfrak{g}} }/\check{G}}\check{\mathfrak{t}})\otimes_{\mathbf{C}}\mathbf{C}[\beta^{\pm 1}], \tag{1}\]
where the map \(\check{\mathfrak{t}}\to\widetilde{\tilde{\mathfrak{g}}}/\check{G}\) is given by the Kostant slice, and \(T_{c}\) acts on \(G_{c}\) by conjugation.
The goal of this article is to study a generalization of Remark 1.1.6 and (1). Fix a complex-oriented even-periodic \(\mathbf{E}_{\infty}\)-ring \(A\), and let \(\mathbf{G}\) be an oriented group scheme in the sense of [15]. If \(T\) is a torus and \(X\) is a sufficiently nice \(T\)-space, one can define a \(\pi_{0}A\)-linear \(\infty\)-category \(\operatorname{Loc}^{\operatorname{gr}}_{T}(X;A)\) of "genuine \(T\)-equivariant \(\operatorname{Mod}_{A}\)-valued local systems2 on \(X\)"; see Section 2.2 and Notation 2.3.6. Let \(\mathscr{M}_{T}\) denote the \(\operatorname{Hom}\)-stack \(\operatorname{Hom}(\mathbb{X}^{\bullet}(T),\mathbf{G})\), and let \(\mathscr{M}_{T,0}\) denote its underlying stack over \(\pi_{0}A\). For instance, if \(\mathbf{G}_{0}\) is an elliptic curve, \(\mathscr{M}_{T,0}\) can be identified with the moduli stack (scheme) of \(T\)-bundles on \(E\) of degree 0 equipped with a trivialization at the
zero section. Let \(\mathbf{G}_{0}^{\vee}\) denote the group scheme \(\operatorname{Hom}(\mathbf{G}_{0},B\mathbf{G}_{m})\) (this is a slight variant of the construction studied in [10]). Then, one of our main results is the following; we will unwind the statement in special cases below.
**Theorem** (See Corollary 4.5.5 for a precise statement).: _Suppose that \(G\) is a connected and simply-connected semisimple algebraic group or a torus over \(\mathbf{C}\), and let \(T\) act on \(G\) by conjugation. Let \(G_{c}\) denote the maximal compact subgroup of \(G(\mathbf{C})\), and fix a principal nilpotent element of \(\check{\mathfrak{n}}\). Fix a complex-oriented even-periodic \(\mathbf{E}_{\infty}\)-ring \(A\), and let \(\mathbf{G}\) be an oriented group scheme in the sense of [11]. Assume that the underlying \(\pi_{0}A\)-scheme \(\mathbf{G}_{0}\) is \(\mathbf{G}_{a}\), \(\mathbf{G}_{m}\), or an elliptic curve \(E\). Let \(\operatorname{Bun}^{0}_{\check{B}}(\mathbf{G}_{0,\mathbf{Q}}^{\vee})^{\operatorname{reg}}\) denote the moduli stack of regular \(\check{B}\)-bundles of degree zero on \(\mathbf{G}_{0,\mathbf{Q}}^{\vee}\). Then, there is an \(\mathbf{E}_{2}\)-monoidal equivalence of \(\pi_{0}A_{\mathbf{Q}}\)-linear \(\infty\)-categories_
\[\operatorname{Loc}_{T_{c}}^{\operatorname{gr}}(\Omega G_{c};A)\otimes\mathbf{Q}\simeq\operatorname{QCoh}(\operatorname{Bun}^{0}_{\check{B}}(\mathbf{G}_{0,\mathbf{Q}}^{\vee})^{\operatorname{reg}}).\]
We view the above result as a first step towards describing \(\operatorname{Shv}^{c}_{T_{c}}(\Omega G_{c};A)\otimes\mathbf{Q}\) in a manner analogous to [1]. We hope to complete this description in a sequel to this article, and further use the above result to revisit (the \(2\)-periodification of) the ABG equivalence. The basic point in the proof of Corollary 4.5.5 is the computation of the \(T_{c}\)-equivariant \(A\)-homology \(\pi_{0}C_{*}^{T_{c}}(\Omega G_{c};A)\) in terms of the Langlands dual \(\tilde{G}\). It is likely that the rationalization in Corollary 4.5.5 is unnecessary, but we have not attempted to verify this.
**Remark 1.1.7**.: Essentially the same argument shows that there is an \(\mathbf{E}_{2}\)-monoidal equivalence of \(\pi_{0}A_{\mathbf{Q}}\)-linear \(\infty\)-categories
\[\operatorname{Loc}_{G_{c}}^{\operatorname{gr}}(\Omega G_{c};A)\otimes \mathbf{Q}\simeq\operatorname{QCoh}(\operatorname{Bun}^{0,\operatorname{ss} }_{\tilde{G}}(\mathbf{G}_{0,\mathbf{Q}}^{\vee})^{\operatorname{reg}}),\]
where \(\operatorname{Bun}^{0,\operatorname{ss}}_{\tilde{G}}(\mathbf{G}_{0,\mathbf{Q} }^{\vee})^{\operatorname{reg}}\) denotes the moduli stack of regular semistable \(\tilde{G}\)-bundles of degree zero. For simplicity, we will only focus on \(T_{c}\)-equivariant local systems.
**Example 1.1.8**.: When \(G\) is a torus, it is easy to establish an analogue of the geometric Satake equivalence, even before rationalization: if \(T\) is a torus over \(\mathbf{C}\), let \(\check{T}_{A}:=\operatorname{Spec}A[\mathbb{X}_{*}(T)]\) denote the dual torus over \(A\). Then there is an \(\mathbf{E}_{2}\)-monoidal \(A\)-linear equivalence \(\operatorname{Loc}_{T}(\operatorname{Gr}_{T}(\mathbf{C});A)\simeq\operatorname{QCoh}(\mathscr{L}_{\mathbf{G}}B\check{T}_{A})\); see Proposition 4.6.1. One can also "quantize" by considering loop-rotation equivariance, which results in a \(\mathbf{G}\)-analogue of the algebra of differential operators on \(\check{T}\); see Section 3.3 for more. In Section 4.6, we discuss the story for a torus where \(A\) is replaced by the sphere spectrum \(S\) -- already in this case, homotopy-theoretic considerations prevent one from describing \(\operatorname{Loc}_{T}(\operatorname{Gr}_{T}(\mathbf{C});S)\) in terms of the algebraic geometry of some spectral stack over the sphere spectrum.
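For instance, if \(T=\mathbf{G}_{m}\), then \(\mathbb{X}_{*}(T)\cong\mathbf{Z}\), so \(\check{T}_{A}=\operatorname{Spec}A[\mathbf{Z}]\cong\operatorname{Spec}A[x^{\pm 1}]\), while \(\operatorname{Gr}_{T}(\mathbf{C})\) is homotopy equivalent to the discrete set \(\mathbb{X}_{*}(T)\cong\mathbf{Z}\).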
**Remark 1.1.9**.: The reason that the left-hand side of Corollary 4.5.5 is not merely \(\operatorname{Loc}_{T_{c}}(\Omega G_{c};\mathbf{Q})\otimes_{\mathbf{Q}}A_{ \mathbf{Q}}\) (which could then be described by (1)) is that the rationalization of equivariant \(A\)-(co)homology is essentially never isomorphic to equivariant \(A\otimes\mathbf{Q}\)-(co)homology. This is the key reason for why Corollary 4.5.5 is not a consequence of the results of Arkhipov-Bezrukavnikov-Ginzburg. This perspective also features in [11]. For example, if \(X\) is a finite CW-complex equipped with an action of a group \(H\), then \(\operatorname{KU}^{*}(X)\otimes\mathbf{Q}\cong\operatorname{H}^{*}(X;\mathbf{ Q})\otimes_{\mathbf{Q}}\mathbf{Q}[\beta^{\pm 1}]\), but \(\operatorname{KU}^{*}_{H}(X)\otimes\mathbf{Q}\) is generally not isomorphic to \(\operatorname{H}^{*}_{H}(X;\mathbf{Q})\otimes_{\mathbf{Q}}\mathbf{Q}[\beta^{ \pm 1}]\). Indeed, they already differ if \(X\) is a point: in this case, \(\operatorname{KU}^{*}_{H}(X)\otimes\mathbf{Q}\) is the rationalization of the representation ring of \(H\), which is generally not isomorphic to \(\operatorname{H}^{*}_{H}(X;\mathbf{Q})\otimes_{\mathbf{Q}}\mathbf{Q}[\beta^{ \pm 1}]\) (for instance, if \(H\) is finite, the latter is \(\mathbf{Q}[\beta^{\pm 1}]\)).
The above theorem is closely related to the following instantiation of Langlands duality:
**Theorem 1.1.10**.: _In the above setup (so that the underlying \(\pi_{0}A\)-scheme \(\mathbf{G}_{0}\) is \(\mathbf{G}_{a}\), \(\mathbf{G}_{m}\), or an elliptic curve \(E\)), there is a "\(\mathbf{G}\)-Kostant slice" \(\kappa:(\mathscr{M}_{T,0})_{\mathbf{Q}}\to\operatorname{Bun}^{0}_{\tilde{B}}( \mathbf{G}^{\vee}_{0,\mathbf{Q}})^{\operatorname{reg}}\) over \(\pi_{0}A_{\mathbf{Q}}\) such that there is an equivalence of \(\pi_{0}A_{\mathbf{Q}}\)-linear \(\infty\)-categories:_
\[\operatorname{Loc}^{\operatorname{gr}}_{T_{c}}(G_{c};A)\otimes\mathbf{Q}\simeq \operatorname{QCoh}((\mathscr{M}_{T,0})_{\mathbf{Q}}\times_{\operatorname{ Bun}^{0}_{\tilde{B}}(\mathbf{G}^{\vee}_{0,\mathbf{Q}})}(\mathscr{M}_{T,0})_{ \mathbf{Q}}).\]
_Here, \(T_{c}\) acts on \(G_{c}\) by conjugation._
**Remark 1.1.11**.: Let \(K_{0}(\operatorname{Rep}(G_{c}))\) denote the (complex) representation ring of \(G_{c}\). In [1], Brylinski and Zhang proved that there is an isomorphism \(\operatorname{KU}^{*}_{G_{c}}(G_{c})\cong\Omega^{*}_{K_{0}(\operatorname{ Rep}(G_{c}))/\mathbf{Z}}\otimes_{\mathbf{Z}}\mathbf{Z}[\beta^{\pm 1}]\). When \(A=\operatorname{KU}\), one can use the Hochschild-Kostant-Rosenberg theorem to view the variant of Theorem 1.1.10 for \(\operatorname{Loc}^{\operatorname{gr}}_{G_{c}}(G_{c};\operatorname{KU})\otimes \mathbf{Q}\) as a categorification of the Brylinski-Zhang isomorphism. See Appendix A for further discussion. In Remark A.5, we also use Hochschild homology to describe a generalization of the \(\hbar=0\) case of [1, Theorem 1], which computes the equivariant _co_homology of \(\Omega G_{c}\).
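For instance, for \(G_{c}=\mathrm{SU}(2)\) one has \(K_{0}(\operatorname{Rep}(G_{c}))\cong\mathbf{Z}[V]\), with \(V\) the defining \(2\)-dimensional representation, so the Brylinski-Zhang isomorphism reads
\[\operatorname{KU}^{*}_{\mathrm{SU}(2)}(\mathrm{SU}(2))\cong(\mathbf{Z}[V]\oplus\mathbf{Z}[V]\,dV)\otimes_{\mathbf{Z}}\mathbf{Z}[\beta^{\pm 1}],\]
with \(\mathbf{Z}[V]\) contributing in even degrees and \(\mathbf{Z}[V]\,dV\) in odd degrees.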
**Remark 1.1.12**.: Motivated by [1, Theorem 1.1], one can heuristically interpret Theorem 1.1.10 as describing a version of mirror symmetry for the wrapped Fukaya category of the symplectic orbifold \(T^{*}(G_{c}/_{\operatorname{ad}}T_{c})\), albeit with coefficients in the complex-oriented even-periodic \(\mathbf{E}_{\infty}\)-ring \(A\).
Let us discuss Corollary 4.5.5 individually for each case \(\mathbf{G}_{0}=\mathbf{G}_{a},\mathbf{G}_{m}\), and an elliptic curve.
(a) When \(A=\mathbf{Q}[\beta^{\pm 1}]\), Corollary 4.5.5 describes an equivalence between \(\operatorname{Loc}^{\operatorname{gr}}_{T_{c}}(\Omega G_{c};\mathbf{Q}[\beta^{\pm 1}])\) and \(\operatorname{QCoh}(\widetilde{\check{\mathfrak{g}}}^{\operatorname{reg}}/\check{G})\). This is a rather formal consequence of the following observation proved in Proposition 4.1.5:
**Observation 1.1.13**.: There is a "Kostant section" \(\kappa:\check{\mathfrak{t}}\to\widetilde{\check{\mathfrak{g}}}/\check{G}\) and a Cartesian square
where \((T^{*}\check{T})^{\operatorname{bl}}\) is a particular affine blowup of \(T^{*}\check{T}\cong\check{T}\times\mathfrak{t}\).
This can be viewed as an analogue of [1, Proposition 2.2.1] and [1, Proposition 2.8], and it can be used to reprove the rationalization of [1, Theorem 6.1]. There is an isomorphism \(\widetilde{\check{\mathfrak{g}}}/\check{G}\cong\check{\mathfrak{b}}/\check{B}\), and in characteristic zero, this can be identified with \(\operatorname{Bun}^{0}_{\check{B}}(B\mathbf{G}_{a})\), viewed as the shifted tangent bundle of \(B\check{B}\). Moreover, there is an isomorphism \(\operatorname{Spec}\operatorname{H}^{T}_{0}(\operatorname{Gr}_{G}(\mathbf{C});\mathbf{Q}[\beta^{\pm 1}])\cong(T^{*}\check{T})^{\operatorname{bl}}\), and \((T^{*}\check{T})^{\operatorname{bl}}\) admits a \(W\)-action (via the \(W\)-action on \(\check{T}\) and \(T^{*}_{\{1\}}\check{T}\cong\mathfrak{t}\)) such that \((T^{*}\check{T})^{\operatorname{bl}}/\!\!/W\cong\operatorname{Spec}\operatorname{H}^{G}_{0}(\operatorname{Gr}_{G}(\mathbf{C});\mathbf{Q})\), which is isomorphic to the group scheme of regular centralizers in \(\check{\mathfrak{g}}\). See [1] for further discussion.
In this case, Theorem 1.1.10 says that if \(T\) acts on \(G\) by conjugation, then there is an equivalence \[\operatorname{Loc}_{T_{c}}^{\operatorname{gr}}(G_{c};\mathbf{Q}[\beta^{\pm 1}])\simeq\operatorname{QCoh}(\check{\mathfrak{t}}_{\mathbf{Q}}\times_{\widetilde{\check{\mathfrak{g}}}_{\mathbf{Q}}/\check{G}_{\mathbf{Q}}}\check{\mathfrak{t}}_{\mathbf{Q}}).\] Similarly, if \(G\) acts on itself by conjugation, one obtains an equivalence \[\operatorname{Loc}_{G_{c}}^{\operatorname{gr}}(G_{c};\mathbf{Q}[\beta^{\pm 1}])\simeq\operatorname{QCoh}(\check{\mathfrak{t}}_{\mathbf{Q}}/\!\!/W\times_{\check{\mathfrak{g}}_{\mathbf{Q}}/\check{G}_{\mathbf{Q}}}\check{\mathfrak{t}}_{\mathbf{Q}}/\!\!/W).\] These equivalences can be de-periodified (Remark 4.4.10). Motivated by [10, Theorem 1.1], these equivalences suggest viewing \(\check{\mathfrak{t}}\times_{\widetilde{\check{\mathfrak{g}}}/\check{G}}\check{\mathfrak{t}}\) (resp. \(\check{\mathfrak{t}}/\!\!/W\times_{\check{\mathfrak{g}}/\check{G}}\check{\mathfrak{t}}/\!\!/W\)) as a (derived) mirror to the symplectic orbifold \(T^{*}(G_{c}/_{\operatorname{ad}}T_{c})\) (resp. \(T^{*}(G_{c}/_{\operatorname{ad}}G_{c})\)). Concretely, these results show that if \(f\) is a regular nilpotent element of \(\check{\mathfrak{g}}\) and \(Z_{f}(\check{B})\) is its centralizer in \(\check{B}\), then there is an equivalence \[\operatorname{Loc}^{\operatorname{gr}}(G_{c};\mathbf{Q}[\beta^{\pm 1}])\simeq\operatorname{QCoh}(Z_{f}(\check{B}_{\mathbf{Q}}));\] therefore, \(Z_{f}(\check{B}_{\mathbf{Q}})\) is a mirror to \(G(\mathbf{C})=T^{*}(G_{c})\) viewed as a symplectic manifold. These results are not new, and can easily be deduced from the work of Bezrukavnikov-Finkelberg [11] and Yun-Zhu [12]. Notice that if \(G_{c}=T_{c}\), then we are simply stating that there is an equivalence \(\operatorname{Loc}(T_{c};\mathbf{Q}[\beta^{\pm 1}])\simeq\operatorname{QCoh}(\check{T})\), given by taking monodromy.
**Remark 1.1.14**.: Upon adding loop rotation equivariance, there is an equivalence between \(\operatorname{Loc}_{T_{c}\times S^{1}_{\operatorname{rot}}}^{\operatorname{gr}} (\Omega G_{c};\mathbf{C})\) and a particular localization of the universal category \(\tilde{\mathscr{C}}^{\operatorname{univ}}=U_{h}(\check{\mathfrak{g}})\text{- mod}^{\check{N},(\check{T},w)}\) from [11, Section 2.4]; this is a consequence of Theorem 4.1.11 and Proposition 4.5.2.
See Example B.3 for an explicit description of \(\operatorname{H}_{s}^{G\times S^{1}_{\operatorname{rot}}}(\operatorname{Gr}_ {G}(\mathbf{C});\mathbf{C})\) when \(G=\operatorname{SL}_{2}\). From the homotopical perspective, the action of \(S^{1}\) by loop rotation on \(\operatorname{Gr}_{G}(\mathbf{C})\) arises by viewing \(\operatorname{Gr}_{G}(\mathbf{C})\simeq\Omega^{\lambda}BG(\mathbf{C})\), where \(\lambda\) is the \(2\)-dimensional rotation representation of \(S^{1}\); in other words, \(\operatorname{Gr}_{G}(\mathbf{C})\) admits the structure of a framed \(\mathbf{E}_{2}\)-algebra, and the action of \(S^{1}\) is via change-of-framing.
(b) When \(A=\operatorname{KU}\), Corollary 4.5.5 describes an equivalence between \(\operatorname{Loc}_{T_{c}}^{\operatorname{gr}}(\Omega G_{c};\operatorname{KU})\otimes\mathbf{Q}\) and \(\operatorname{QCoh}(\widetilde{\check{G}}_{\mathbf{Q}}^{\operatorname{reg}}/\check{G}_{\mathbf{Q}})\), where \(\widetilde{\check{G}}_{\mathbf{Q}}^{\operatorname{reg}}/\check{G}_{\mathbf{Q}}\) is the regular locus in the stacky quotient of the multiplicative Grothendieck-Springer resolution \(\widetilde{\check{G}}_{\mathbf{Q}}=\check{G}_{\mathbf{Q}}\times^{\check{B}_{\mathbf{Q}}}\check{B}_{\mathbf{Q}}\). As above, this is a rather formal consequence of the following observation, which is a _multiplicative_ analogue of [12, Proposition 2.2.1] and [1, Proposition 2.8]: **Observation 1.1.15**.: There is a "Kostant section" \(\kappa:\check{T}\to\widetilde{\check{G}}/\check{G}\) and a Cartesian square \[\operatorname{Spec}\pi_{0}C_{*}^{T}(\operatorname{Gr}_{G}(\mathbf{C});\operatorname{KU})\otimes\mathbf{Q}\cong(\check{T}\times T)^{\operatorname{bl}}\] where \((\check{T}\times T)^{\operatorname{bl}}\) is a particular affine blowup of \(\check{T}\times T\). Moreover, there is an isomorphism between \(\operatorname{Spec}\pi_{0}C_{*}^{T}(\operatorname{Gr}_{G}(\mathbf{C});\operatorname{KU})\otimes\mathbf{Q}\) and \((\check{T}\times T)^{\operatorname{bl}}\). There is also a \(W\)-action on \((\check{T}\times T)^{\operatorname{bl}}\) (by the \(W\)-action on \(T\) and \(\check{T}\)) such that \((\check{T}\times T)^{\operatorname{bl}}/\!\!/W\cong\operatorname{Spec}\pi_{0}C_{*}^{G}(\operatorname{Gr}_{G}(\mathbf{C});\operatorname{KU})\otimes\mathbf{Q}\) is isomorphic to the group scheme of regular centralizers in \(\check{G}\). Again, see [1] for further discussion.
In this case, Theorem 1.1.10 says that if \(T\) acts on \(G\) by conjugation, then there is an equivalence
\[\operatorname{Loc}_{T_{c}}^{\operatorname{gr}}(G_{c};\operatorname{KU})\otimes\mathbf{Q}\simeq\operatorname{QCoh}(\check{T}_{\mathbf{Q}}\times_{\widetilde{\check{G}}_{\mathbf{Q}}/\check{G}_{\mathbf{Q}}}\check{T}_{\mathbf{Q}}).\]
Similarly, if \(G\) acts on itself by conjugation, one obtains an equivalence
\[\operatorname{Loc}_{G_{c}}^{\operatorname{gr}}(G_{c};\operatorname{KU}) \otimes\mathbf{Q}\simeq\operatorname{QCoh}(\check{T}_{\mathbf{Q}}/\!\!/W\times _{\check{G}_{\mathbf{Q}}/\check{G}_{\mathbf{Q}}}\check{T}_{\mathbf{Q}}/\!\!/W).\]
If \(\{f\}\) is a regular unipotent element of \(\check{G}_{\mathbf{Q}}\) (determined by the image of the origin in \(\check{T}_{\mathbf{Q}}/\!\!/W\) under the multiplicative Kostant slice), and \(Z_{f}^{\mu}(\check{B}_{\mathbf{Q}})\) is the centralizer of \(f\in\check{G}_{\mathbf{Q}}\), then the preceding equivalence in turn implies an equivalence
\[\operatorname{Loc}^{\operatorname{gr}}(G_{c};\operatorname{KU})\otimes \mathbf{Q}\simeq\operatorname{QCoh}(Z_{f}^{\mu}(\check{B}_{\mathbf{Q}})).\]
Therefore, \(Z_{f}^{\mu}(\check{B}_{\mathbf{Q}})\) can be viewed as a KU-theoretic mirror to \(G(\mathbf{C})=T^{*}(G_{c})\) viewed as a symplectic manifold. The main input into these results are not new, and can be deduced from the work of Bezrukavnikov-Finkelberg-Mirkovic [1]. Notice that if \(G_{c}=T_{c}\), then we are simply stating that there is an equivalence \(\operatorname{Loc}(T_{c};\operatorname{KU})\simeq\operatorname{QCoh}(\check{T }_{\operatorname{KU}})\), given by taking monodromy.
**Remark 1.1.16**.: We expect (see Conjecture 4.2.9 for a more precise statement) that upon adding loop rotation equivariance, there is an equivalence between \(\operatorname{Loc}_{T_{c}\times S_{\operatorname{rot}}^{1}}^{\operatorname{gr }}(\Omega G_{c};\operatorname{KU})\otimes\mathbf{Q}\) and a particular localization of the quantum universal category \(\check{\mathscr{O}}_{q}^{\operatorname{univ}}\) from [11, Section 2.4]. Using the calculations in this article, this expected equivalence reduces to proving an analogue of [10, Theorem 8.1.2] for the quantum group and the multiplicative nil-Hecke algebra; such a conjecture also appears as [12, Conjecture 3.17].
We also expect (see Conjecture 4.2.12) that there is an equivalence between \(\operatorname{Loc}_{T_{c}\times\mu_{p,\operatorname{rot}}}^{\operatorname{gr }}(\Omega G_{c};\operatorname{KU})[\frac{1}{q-1}]\) and a particular localization of \(\check{\mathscr{O}}_{\check{\zeta}_{p}}^{\operatorname{univ}}\), i.e., the quantum universal category \(\check{\mathscr{O}}\) at a primitive \(p\)th root of unity.
The reader is referred to Example B.5 for an explicit description of \(\pi_{0}C_{*}^{G\times S_{\operatorname{rot}}^{1}}(\operatorname{Gr}_{G}( \mathbf{C});\operatorname{KU})\otimes\mathbf{Q}\) when \(G=\operatorname{SL}_{2}\).
(c) Suppose \(A\) is a complex-oriented even-periodic \(\mathbf{E}_{\infty}\)-ring and \(\mathbf{G}\) is an oriented elliptic curve over \(A\) (in the sense of [11]). Let \(E=\mathbf{G}_{0}\) be the underlying classical scheme of \(\mathbf{G}\) over the classical ring \(\pi_{0}(A)\), so that \(E\) is an elliptic curve, and let \(E^{\vee}\) be the dual elliptic curve. The Cartesian squares from (a) and (b) above can be generalized to this setting (see Theorem 4.4.7). For simplicity, let us explain this in the case \(G=\operatorname{SL}_{2}\), i.e., \(\check{G}=\operatorname{PGL}_{2}\).
**Observation 1.1.17**.: Then, there is a "Kostant section" \(\kappa:E=\operatorname{Pic}^{0}(E^{\vee})\to\operatorname{Bun}_{\check{B}}^{0} (E^{\vee})\) which sends a line bundle \(\mathscr{L}\) to the trivial extension \(\mathscr{O}_{E^{\vee}}\subseteq\mathscr{O}_{E^{\vee}}\oplus\mathscr{L}\) if \(\mathscr{L}\not\cong\mathscr{O}_{E^{\vee}}\), and to the Atiyah extension \(\mathscr{O}_{E^{\vee}}\subseteq\mathscr{F}_{2}\twoheadrightarrow\mathscr{O}_{E^ {\vee}}\) from [1] if \(\mathscr{L}\) is trivial. Note that by construction, the \(\check{G}\)-bundle underlying \(\kappa(\mathscr{L})\) is semistable of degree \(0\). Moreover, Theorem 4.4.7 says that there is a Cartesian
square
where \((\mathbf{G}_{m}\times E)^{\mathrm{bl}}\) is a particular affine blowup of \(\mathbf{G}_{m}\times E\). 3
Footnote 3: The desired affine blowup \((\mathbf{G}_{m}\times E)^{\mathrm{bl}}\) is obtained by blowing up \(\mathbf{G}_{m}\times E\) at the locus cut out by the zero sections of \(\mathbf{G}_{m}\) and \(E\), and deleting the proper preimage of the zero section of \(E\); see also [1, Lemma 4.1].
Notice that \(\mathbf{G}_{m}\times E\) admits an action of \(W=\mathbf{Z}/2\), via inversion on \(\mathbf{G}_{m}\) and \(E\); this extends to an action of \(\mathbf{Z}/2\) on \((\mathbf{G}_{m}\times E)^{\mathrm{bl}}\), and the above diagram suggests viewing \((\mathbf{G}_{m}\times E)^{\mathrm{bl}}/\!\!/(\mathbf{Z}/2)\) as an _elliptic_ analogue of the group scheme of regular centralizers.
Furthermore, there is an isomorphism
\[\Gamma((\mathbf{G}_{m}\times E)^{\mathrm{bl}};\mathscr{O}_{(\mathbf{G}_{m} \times E)^{\mathrm{bl}}})\cong\pi_{0}C_{*}^{T}(\mathrm{Gr}_{G}(\mathbf{C});A) \otimes\mathbf{Q}\]
between the coherent cohomology of \((\mathbf{G}_{m}\times E)^{\mathrm{bl}}\) and the rationalization of the \(T\)-equivariant \(A\)-homology of \(\mathrm{Gr}_{G}(\mathbf{C})\). Using this, Corollary 4.5.5 shows that there is an equivalence between a variant of \(\mathrm{Loc}_{T_{c}}^{\mathrm{gr}}(\Omega G_{c};A)\otimes\mathbf{Q}\) and an explicit full subcategory of \(\mathrm{QCoh}(\mathrm{Bun}_{\tilde{B}}^{0}(E^{\vee}))\).
In this case, Theorem 1.1.10 says that if \(T\) acts on \(G\) by conjugation, then there is an equivalence
\[\mathrm{Loc}_{T_{c}}^{\mathrm{gr}}(G_{c};A)\otimes\mathbf{Q}\simeq\mathrm{QCoh}(E\times_{\mathrm{Bun}_{\check{B}}^{0}(E^{\vee})}E)\otimes_{\pi_{0}A}\pi_{0}A_{\mathbf{Q}}.\]
If \(\{\mathscr{O}_{E^{\vee}}\subseteq\mathscr{F}_{2}\}\in\mathrm{Bun}_{\check{B}}^{0}(E^{\vee})\) denotes the Atiyah bundle, then let \(Z_{f}^{E}(\check{B}):=(\{\mathscr{O}_{E^{\vee}}\subseteq\mathscr{F}_{2}\}\times_{\mathrm{Bun}_{\check{B}}^{0}(E^{\vee})}E)\) be the "centralizer in \(\check{B}\) of the regular 'elli-potent' element \(\{\mathscr{O}_{E^{\vee}}\subseteq\mathscr{F}_{2}\}\in\mathrm{Bun}_{\check{B}}^{0}(E^{\vee})\)". There is then an equivalence
\[\mathrm{Loc}^{\mathrm{gr}}(G_{c};A)\otimes\mathbf{Q}\simeq\mathrm{QCoh}(Z_{f}^{E}(\check{B}))\otimes_{\pi_{0}A}\pi_{0}A_{\mathbf{Q}}.\]
Therefore, \(Z_{f}^{E}(\check{B})\) can be viewed as an \(A\)-theoretic mirror to \(G(\mathbf{C})=T^{*}(G_{c})\) viewed as a symplectic manifold.
**Remark 1.1.18**.: One might hope that these results hold without rationalization, but we do not know how to prove such a statement. In the case of KU, for instance, the key obstruction is that we do not know whether the \(2\)-periodification of \(\widetilde{\check{G}}_{\mathbf{Q}}^{\mathrm{reg}}/\check{G}_{\mathbf{Q}}\) can be lifted to a flat stack \((\widetilde{\check{G}}^{\mathrm{reg}}/\check{G})_{\mathrm{KU}}\) over \(\mathrm{KU}\). If it does lift, then it seems reasonable to expect a KU-linear equivalence of the form \(\mathrm{Loc}_{T_{c}}(\Omega G_{c};A)\simeq\mathrm{QCoh}((\widetilde{\check{G}}^{\mathrm{reg}}/\check{G})_{\mathrm{KU}})\).
In Appendix B, we discuss some motivation for this article stemming from the Coulomb branches of 3d \(\mathscr{N}=4\), 4d \(\mathscr{N}=2\), and 5d \(\mathscr{N}=1\) pure gauge theories (i.e., no matter). We also give explicit generators and relations for the Coulomb branches of 3d \(\mathscr{N}=4\) and 4d \(\mathscr{N}=2\) pure gauge theories with gauge group \(\mathrm{SL}_{2}\) (i.e., \(\pi_{0}C_{*}^{G}(\mathrm{Gr}_{G}(\mathbf{C});\mathbf{Q})\) and \(\pi_{0}C_{*}^{G}(\mathrm{Gr}_{G}(\mathbf{C});\mathrm{KU})\) with \(G=\mathrm{SL}_{2}\)). The \(4\)-dimensional case is a \(q\)-analogue of the quantization of the Atiyah-Hitchin manifold from [1, Equation 5.51].
We will use the following notation throughout; furthermore, the reader should keep in mind that _everything_ in this article will be derived, unless explicitly mentioned otherwise.
**Notation 1.1.19**.: Let \(G\) be a connected (often simply-connected) semisimple group over \(\mathbf{C}\) (or a torus). Fix a maximal torus \(T\subseteq B\) contained in a Borel subgroup of \(G\). Let \(U=[B,B]\) denote the unipotent radical of \(B\), so that \(B/U\cong T\). Let \(\Phi\) be the set of roots of \(G\), \(\Phi^{+}\) the set of positive roots, and \(\Delta\) a set of simple roots. Let \(W\) be the Weyl group; if \(w\in W\), let \(\dot{w}\in N_{G}(T)\) denote a lift of \(w\) to the normalizer of \(T\) in \(G\). Let \(\Lambda\) denote the weight lattice, and \(\Lambda^{+}=\Lambda^{\mathrm{pos}}\) the set of dominant weights. We will also follow other standard notation in homotopy theory: for instance, \(\mathscr{S}\) will denote the \(\infty\)-category of spaces, and \(\mathrm{Sp}\) will denote the \(\infty\)-category of spectra.
There has been some work done previously towards analogues of the geometric Satake equivalence with other coefficients. For instance, when \(A=\mathrm{KU}\), a conjecture was proposed in [1]; in a similar vein, a discussion of the case \(A=\mathrm{KU}\) is the content of the talk [1]. In [13], Yang and Zhao study a higher chromatic analogue of quantum groups, and it would be interesting to study the relationship between the present article and their work. After this paper was written, the preprint [2] was posted on the arXiv; it is concerned with ideas similar to the ones studied here. Our work is closely related to the exciting program of Ben-Zvi-Sakellaridis-Venkatesh (see [1, 2] for an overview); we hope to describe this relationship in future work.
### Acknowledgements
I'd like to acknowledge Lin Chen, Charles Fu, Tom Gannon, and Kevin Lin for helpful conversations and for entertaining my numerous silly questions. I'm also grateful to Victor Ginzburg for a very enlightening discussion, and Pavel Safronov for a useful email. Thanks to Ben Gammage for discussions which helped shape my understanding of some of the topics in Appendix B, and to Hiraku Nakajima for a very informative email exchange on the same topic. Part of this work started after I took a class taught by Roman Bezrukavnikov; I'm very grateful to him for introducing me to [1], which led me down the beautiful road to geometric representation theory. Last, but certainly far from least, the influence, support, advice, and encouragement of my advisors Dennis Gaitsgory and Mike Hopkins is evident throughout this project; I cannot thank them enough.
## 2. Homotopy theory background
### Review of generalized equivariant cohomology
We review the construction of generalized equivariant cohomology via spectral algebraic geometry from [10], in a form suitable for our applications. This review will necessarily be brief, since a detailed exposition may be found in _loc. cit._; there is also some discussion in the early sections of [1] in the setting of ordinary (as opposed to spectral) algebraic geometry.
**Setup 2.1.1**.: Fix an \(\mathbf{E}_{\infty}\)-ring \(A\) and a commutative \(A\)-group \(\mathbf{G}\), so \(\mathbf{G}\) defines a functor \(\operatorname{CAlg}_{A}\to\operatorname{Mod}_{\mathbf{Z},\geq 0}\) which is representable by a _flat_\(A\)-algebra. We will write \(\mathbf{G}_{0}\) to denote the resulting commutative group scheme over \(\pi_{0}A\).
**Remark 2.1.2**.: The equivalence \(\Omega^{\infty}:\operatorname{Sp}_{\geq 0}\xrightarrow{\sim}\operatorname{CAlg}( \mathscr{S}_{*})\) extends to an equivalence between \(\operatorname{Mod}_{\mathbf{Z},\geq 0}\) and topological abelian groups. More precisely, by the Dold-Kan correspondence and the Schwede-Shipley theorem, there are equivalences of categories
\[\operatorname{Mod}_{\mathbf{Z}}^{\geq 0}\simeq\operatorname{Ch}_{\geq 0}( \mathbf{Z})\simeq\operatorname{Fun}(\mathbf{\Delta}^{op},\operatorname{Ab})=s \operatorname{Ab}.\]
The image of \(\operatorname{Mod}_{\mathbf{Z}}^{\geq 0}\) under the equivalence \(\Omega^{\infty}:\operatorname{Sp}_{\geq 0}\xrightarrow{\sim}\operatorname{CAlg}( \mathscr{S}_{*})\) can be characterized as follows. Let us model grouplike infinite loop spaces \(X\) as functors \(X:\operatorname{Fin}_{*}\to\mathscr{S}\) such that \(\pi_{0}\mathrm{Map}_{\mathscr{S}}(Y,X)\) is an abelian group for all spaces \(Y\) (i.e., \(X\) is grouplike) and such that the map \(X([n])\to X([1])^{n}\) is an equivalence. Such an object should be in the image of \(\operatorname{Mod}_{\mathbf{Z}}^{\geq 0}\) iff it is "strictly commutative". One way to make this precise is as follows. Let Lattice denote the full subcategory of the category of abelian groups spanned by the groups \(\mathbf{Z}^{n}\) with \(n\geq 0\), so there is a functor \(\operatorname{Fin}_{*}\to\operatorname{Lattice}\). Then an infinite loop space is in the image of \(\operatorname{Mod}_{\mathbf{Z}}^{\geq 0}\) if and only if the functor \(\operatorname{Fin}_{*}\to\mathscr{S}\) classifying it factors through a finite-product-preserving functor \(\operatorname{Lattice}\to\mathscr{S}\). In other words, \(\operatorname{Mod}_{\mathbf{Z}}^{\geq 0}\) is equivalent to the full subcategory spanned by the grouplike objects in the category \(\operatorname{Fun}^{\pi}(\operatorname{Lattice},\mathscr{S})\). This is a very strong condition to impose on an infinite loop space: it forces the infinite loop space to decompose as a product of Eilenberg-Maclane spaces. For example, \(\mathbf{C}P^{\infty}\) admits such a factorization, but \(\operatorname{BU}\) (with either the additive or multiplicative infinite loop space structure) does not.
**Definition 2.1.3**.: A _preorientation of_\(\mathbf{G}\) is a pointed map \(S^{2}\to\Omega^{\infty}\mathbf{G}(A)\) of spaces, i.e., a map \(\Sigma^{2}\mathbf{Z}\to\mathbf{G}(A)\) of \(\mathbf{Z}\)-modules (by adjunction). This induces a map \(\mathbf{C}P^{\infty}=\Omega^{\infty}\Sigma^{2}\mathbf{Z}\to\Omega^{\infty} \mathbf{G}(A)\) of topological abelian groups, and hence a map \(\operatorname{Spf}A^{\mathbf{C}P^{\infty}}\to\mathbf{G}\) of \(\mathbf{E}_{\infty}\)-\(A\)-group schemes. (Note that \(\operatorname{Spf}A^{\mathbf{C}P^{\infty}}\) need not admit the structure of a commutative \(A\)-group scheme: for instance, \(A^{\mathbf{C}P^{\infty}}\) need not be flat over \(A\).)
**Definition 2.1.4**.: Given a preorientation \(S^{2}\to\Omega^{\infty}\mathbf{G}(A)\), we obtain a map \(\mathscr{O}_{\mathbf{G}}\to C^{*}(S^{2};A)\) of \(\mathbf{E}_{\infty}\)-\(A\)-algebras. On \(\pi_{0}\), this induces a map \(\pi_{0}\mathscr{O}_{\mathbf{G}}=\mathscr{O}_{\mathbf{G}_{0}}\to\pi_{0}C^{*}(S^ {2};A)\). However, the target can be identified with the trivial square-zero extension \(\pi_{0}A\oplus\pi_{-2}A\), so that the preorientation defines a derivation \(\mathscr{O}_{\mathbf{G}_{0}}\to\pi_{-2}A\). This defines a map \(\beta:\omega=\Omega^{1}_{\mathbf{G}_{0}/\pi_{0}A}\to\pi_{-2}A\). The preorientation is called an _orientation_ if \(\mathbf{G}_{0}\) is smooth of relative dimension \(1\) over \(\pi_{0}A\), and the composite
\[\pi_{n}(A)\otimes_{\pi_{0}A}\omega\to\pi_{n}(A)\otimes_{\pi_{0}A}\pi_{-2}A \xrightarrow{\beta}\pi_{n-2}A\]
is an isomorphism for each \(n\in\mathbf{Z}\). This forces \(A\) to be \(2\)-periodic (but does not force its homotopy to be concentrated in even degrees).
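For example, take \(A=\mathrm{KU}\) and \(\mathbf{G}=\mathbf{G}_{m}=\operatorname{Spec}\mathrm{KU}[t^{\pm 1}]\): the standard complex orientation of \(\mathrm{KU}\) identifies \(\operatorname{Spf}\mathrm{KU}^{\mathbf{C}P^{\infty}}\) with the formal completion of \(\mathbf{G}_{m}\) at the identity, and the resulting preorientation is an orientation; this is the example underlying equivariant K-theory.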
**Warning 2.1.5**.: As discussed in [11, Section 3.2], the universal \(\mathbf{E}_{\infty}\)-\(\mathbf{Z}\)-algebra over which the additive group scheme \(\mathbf{G}_{a}\) admits an orientation is given by \(\mathbf{Z}[\mathbf{C}P^{\infty}][\frac{1}{\beta}]=\mathbf{Q}[\beta^{\pm 1}]\). Therefore, we are allowed to let \(\mathbf{G}=\mathbf{G}_{a}\) in the story below only when \(A\) is a \(2\)-periodic \(\mathbf{E}_{\infty}\)-\(\mathbf{Q}\)_-algebra_. (If \(A\) is not an \(\mathbf{E}_{\infty}\)-\(\mathbf{Z}\)-algebra, one cannot in general define \(\mathbf{G}_{a}=\operatorname{Spec}A[t]\) as a commutative \(A\)-group: the coproduct \(A[t]\to A[x,y]\) will in general not be a map of \(\mathbf{E}_{\infty}\)-\(A\)-algebras.)
We can now review the definition of \(T\)-equivariant \(A\)-cohomology when \(T\) is a torus.
**Construction 2.1.6**.: Fix an \(\mathbf{E}_{\infty}\)-ring \(A\) as above and a commutative \(A\)-group \(\mathbf{G}\). Given a compact abelian Lie group \(T\), define an \(A\)-scheme \(\mathscr{M}_{T}\) by the mapping stack \(\operatorname{Hom}(\mathbb{X}^{*}(T),\mathbf{G})\), where \(\mathbb{X}^{*}(T)\) denotes the character lattice of \(T\). We will be particularly interested in the case when \(T\) is a torus. Let \(\mathscr{T}\) be the full subcategory of \(\mathscr{S}\) spanned by those spaces which are homotopy equivalent to \(BT\) with \(T\) being a compact abelian Lie group. By arguing as in [11, Theorem 3.5.5], a preorientation of \(\mathbf{G}\) is equivalent to the data of a functor \(\mathscr{M}:\mathscr{T}\to\operatorname{Aff}_{A}\) along with compatible equivalences \(\mathscr{M}(BT)\simeq\mathscr{M}_{T}\). The \(\mathbf{E}_{\infty}\)-\(A\)-algebra \(\mathscr{O}_{\mathscr{M}_{T}}\) is the \(T\)-equivariant \(A\)-cochains of a point, and will occasionally be denoted by \(A_{T}\).
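For instance (a special case recorded only for orientation): if \(T=U(1)\), then \(\mathbb{X}^{*}(T)\cong\mathbf{Z}\), so
\[\mathscr{M}_{U(1)}\simeq\mathbf{G},\qquad A_{U(1)}\simeq\Gamma(\mathbf{G};\mathscr{O}_{\mathbf{G}}),\]
and more generally \(\mathscr{M}_{U(1)^{n}}\simeq\mathbf{G}^{\times n}\) (the fiber product being taken over \(\operatorname{Spec}A\)). In particular, for a rank-one torus the base of the equivariant theory is \(\mathbf{G}\) itself.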
We can now sketch the construction of the \(T\)-equivariant \(A\)-cochains of more general \(T\)-spaces; see [11, Theorem 3.2]. Let \(T\) be a torus over \(\mathbf{C}\) for the remainder of this discussion, and let \(\mathbf{G}\) be an _oriented_ commutative \(A\)-group. Let \(\mathscr{S}(T)\) denote the \(\infty\)-category of finite \(T\)-spaces, i.e., the smallest subcategory of \(\operatorname{Fun}(BT,\mathscr{S})\) which contains the quotients \(T/T^{\prime}\) for closed subgroups \(T^{\prime}\subseteq T\), and which is closed under finite colimits. There is a functor \(\mathscr{F}_{T}:\mathscr{S}(T)^{\operatorname{op}}\to\operatorname{QCoh}( \mathscr{M}_{T})\) which is uniquely characterized by the requirement that it preserve finite limits and sends \(T/T^{\prime}\mapsto q_{*}\mathscr{O}_{\mathscr{M}_{T^{\prime}}}\). Here, \(q:\mathscr{M}_{T^{\prime}}\to\mathscr{M}_{T}\) is the canonical map induced by the inclusion \(T^{\prime}\subseteq T\). If \(X\in\mathscr{S}(T)\), then the \(T\)_-equivariant \(A\)-cochains of \(X\)_ is the global sections \(\Gamma(\mathscr{M}_{T};\mathscr{F}_{T}(X))\); we will denote it by \(C^{*}_{T}(X;A)\).
**Remark 2.1.7**.: We will denote the functor \(\Gamma(\mathscr{M}_{T};\mathscr{F}_{T}(-)):\mathscr{S}(T)^{\operatorname{op} }\to\operatorname{Mod}(\Gamma(\mathscr{M}_{T};\mathscr{O}_{\mathscr{M}_{T}}))\) by \(C^{*}_{T}(-;A):\mathscr{S}(T)^{\operatorname{op}}\to\operatorname{Mod}(A_{T})\).
**Definition 2.1.8**.: If \(X\in\mathscr{S}(T)\), then the \(T\)_-equivariant \(A\)-chains of \(X\)_ is the quasicoherent sheaf on \(\mathscr{M}_{T}\) given by the \(\mathscr{O}_{\mathscr{M}_{T}}\)-linear dual \(\mathscr{F}_{T}(X)^{\vee}\). We will denote its global sections by \(C^{T}_{*}(X;A)\). Note that \(C^{T}_{*}(*;A)\simeq A_{T}\), which completes to the \(A\)-cochains (_not_\(A\)-chains) of \(BT\).
**Warning 2.1.9**.: Let \(A\) be an \(\mathbf{E}_{\infty}\)-\(\mathbf{Z}\)-algebra, and let \(\mathbf{G}=\mathbf{G}_{a}\); then Warning 2.1.5 says that \(A\) must be an \(\mathbf{E}_{\infty}\)-\(\mathbf{Q}[\beta^{\pm 1}]\)-algebra. Suppose for simplicity that \(T=\mathbf{G}_{m}\); then \(\pi_{*}C_{*}(BT;A)\) may therefore be identified with the divided power algebra \(\Gamma_{\pi_{*}(A)}(\hbar^{\vee})\) with \(|\hbar^{\vee}|=2\). Since \(A\) is rational, this may further be identified with the polynomial ring \(\pi_{*}(A)[\hbar^{\vee}]\). Unfortunately, this can be confused with \(\pi_{*}(A_{T})\), albeit with the reversed grading. Although this identification is technically correct, it is rather abusive: there is no canonical way to identify \(A_{T}\) with \(C_{*}(BT;A)\) when \(A\) is an \(\mathbf{E}_{\infty}\)-\(\mathbf{Q}[\beta^{\pm 1}]\)-algebra. We will therefore refrain from making this identification, since it is not valid for more general \(\mathbf{E}_{\infty}\)-rings \(A\).
**Notation 2.1.10**.: Let \(\lambda:T\to\mathbf{G}_{m}\) be a character, and let \(T_{\lambda}=\ker(\lambda)\). Then the map \(q:\mathscr{M}_{T_{\lambda}}\to\mathscr{M}_{T}\) is a closed immersion, and we will denote the ideal in
\(\mathscr{O}_{\mathscr{M}_{T}}\) defined by this closed immersion by \(\mathscr{I}_{\lambda}\). Equivalently, let \(V_{\lambda}\) denote the one-dimensional \(T\)-representation obtained via the character \(\lambda:T\to\mathbf{G}_{m}\). Then \(\mathscr{I}_{\lambda}\) is given by the line bundle \(\mathscr{F}_{T}(S^{V_{\lambda}})\).
It is trickier to extend the definition of equivariant cochains to nonabelian groups, but a construction is sketched in [10, Section 3.5], and a detailed construction is given in [11]. We recall this for completeness; in this article, we will only be concerned with torus-equivariance. The methods of this article should work for more general compact Lie groups, but we have not studied this here.
**Construction 2.1.11**.: Let \(G\) be a reductive group scheme over \(\mathbf{C}\). Let \(\mathscr{S}(G)\) denote the smallest subcategory of \(\operatorname{Fun}(BG,\mathscr{S})\) which contains the quotients \(G/T^{\prime}\) for closed _commutative_ subgroups \(T^{\prime}\subseteq G\), and which is closed under finite colimits. Then there is a functor \(C^{*}_{G}(-;A):\mathscr{S}(G)^{\operatorname{op}}\to\operatorname{Mod}(A)\) which is uniquely characterized by the requirement that it preserve finite limits and sends \(G/T^{\prime}\mapsto A_{T^{\prime}}\). According to [10, End of Section 3.5] and [11, Section 3], when \(G\) is connected, there is a flat \(A\)-scheme \(\mathscr{M}_{G}\) and a functor \(\mathscr{F}_{G}:\mathscr{S}(G)^{\operatorname{op}}\to\operatorname{QCoh}( \mathscr{M}_{G})\), such that composition with the forgetful functor \(\operatorname{QCoh}(\mathscr{M}_{G})\to\operatorname{Mod}(A)\) is the functor \(C^{*}_{G}(-;A)\). If \(X\in\mathscr{S}(G)\), we will write \(\mathscr{F}_{G}(X)^{\vee}\) to denote the linear dual of \(\mathscr{F}_{G}(X)\) in \(\operatorname{QCoh}(\mathscr{M}_{G})\), and refer to it as the _\(G\)-equivariant \(A\)-chains_ on \(X\).
**Remark 2.1.12**.: Let \(X\) be an ind-finite space with a \(G\)-action, so that \(X\) can be written as the filtered colimit of a diagram \(\{X_{i}\}\) of subspaces, each of which are in \(\mathscr{S}(G)\). Write \(C^{*}_{G}(X;A)\) to denote \(\varprojlim_{i}C^{*}_{G}(X_{i};A)\). Similarly for \(\mathscr{F}_{G}(X)\).
**Example 2.1.13**.: Let \(G\) be a connected compact Lie group, and let \(T\) be a maximal torus in \(G\). The flag variety \(G/T\) is a \(G\)-space whose stabilizers are commutative, and therefore \(G/T\in\mathscr{S}(G)\). Therefore, \(C^{*}_{G}(G/T;A)=A_{T}\). For the remainder of this text, we will make the following _assumption_: after inverting \(|W|\), there is a (homotopy-coherent) \(W\)-action on \(A_{T}\) by maps of \(\mathbf{E}_{\infty}\)-\(A\)-algebras, and \(A_{G}:=C^{*}_{G}(*;A)\) is equivalent to \(A^{hW}_{T}\) as an \(\mathbf{E}_{\infty}\)-\(A\)-algebra.
### Categories of equivariant local systems
Fix a complex-oriented even-periodic \(\mathbf{E}_{\infty}\)-ring \(A\) and an oriented \(A\)-group scheme \(\mathbf{G}\). Let \(T\) be a compact torus. Let \(X\in\mathscr{S}(T)\) be a finite \(T\)-space. The following categorifies the \(T\)-equivariant \(A\)-cochains \(C^{*}_{T}(X;A)\).
**Construction 2.2.1**.: Let \(\operatorname{Loc}_{T}(*;A)\) denote the \(\infty\)-category \(\operatorname{QCoh}(\mathscr{M}_{T})\). Let \(T^{\prime}\subseteq T\) be a closed subgroup, so that there is an associated morphism \(q:\mathscr{M}_{T^{\prime}}\to\mathscr{M}_{T}\). This defines a symmetric monoidal functor \(\operatorname{QCoh}(\mathscr{M}_{T})\to\operatorname{QCoh}(\mathscr{M}_{T^{ \prime}})\), which equips \(\operatorname{QCoh}(\mathscr{M}_{T^{\prime}})\) with the structure of a \(\operatorname{QCoh}(\mathscr{M}_{T})\)-module. Let \(\mathscr{L}\mathrm{oc}_{T}(-;A):\mathscr{S}(T)^{\operatorname{op}}\to \operatorname{CAlg}(\operatorname{ShvCat}(\mathscr{M}_{T}))\) be the functor uniquely characterized by the requirement that it preserve finite limits and send \(T/T^{\prime}\mapsto\operatorname{QCoh}(\mathscr{M}_{T^{\prime}})\). If \(X\in\mathscr{S}(T)\), then the \(\infty\)-category \(\operatorname{Loc}_{T}(X;A)\) of _\(T\)-equivariant local systems of \(A\)-modules on \(X\)_ is defined to be the global sections of the quasicoherent stack \(\mathscr{L}\mathrm{oc}_{T}(X;A)\) on \(\mathscr{M}_{T}\). If \(f:X\to Y\) is a map in \(\mathscr{S}(T)\), the associated symmetric monoidal functor \(f^{*}:\operatorname{Loc}_{T}(Y;A)\to\operatorname{Loc}_{T}(X;A)\) (induced by taking global sections of the morphism \(f^{*}:\mathscr{L}\mathrm{oc}_{T}(Y;A)\to\mathscr{L}\mathrm{oc}_{T}(X;A)\) of \(\mathbf{E}_{\infty}\)-algebras in quasicoherent stacks over \(\mathscr{M}_{T}\)) will be called the _pullback_. One can show that \(\operatorname{Loc}_{T}(X;A)\) is a presentable stable \(\infty\)-category, and that \(f^{*}\) preserves small colimits (so it has a right adjoint \(f_{*}\), which will be called _pushforward_).
**Example 2.2.2**.: If \(T=\{1\}\), then \(\operatorname{Loc}_{T}(X;A)\) is equivalent to the \(\infty\)-category \(\operatorname{Loc}(X;A):=\operatorname{Fun}(X,\operatorname{Mod}_{A})\) of local systems on \(X\).
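Similarly, for a free orbit the construction collapses: since \(\mathscr{M}_{\{1\}}\simeq\operatorname{Spec}A\), Construction 2.2.1 gives
\[\operatorname{Loc}_{T}(T;A)\simeq\operatorname{QCoh}(\mathscr{M}_{\{1\}})\simeq\operatorname{Mod}_{A},\]
i.e., a \(T\)-equivariant local system on the free orbit is just an \(A\)-module. (This is only an unwinding of the definitions, recorded for orientation.)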
**Remark 2.2.3**.: Let \(X\) be a finite \(T\)-space. The _constant local system_\(\underline{A}_{T}\) is defined to be the image of \(\mathscr{O}_{\mathscr{M}_{T}}\) under the symmetric monoidal functor \(\operatorname{Loc}_{T}(*;A)\simeq\operatorname{QCoh}(\mathscr{M}_{T})\to \operatorname{Loc}_{T}(X;A)\) induced by pullback along \(f:X\to*\). Observe that if \(\underline{A}_{T}\) denotes the constant local system, then \(\operatorname{End}_{\operatorname{Loc}_{T}(X;A)}(\underline{A}_{T})\simeq C _{T}^{*}(X;A)\). Indeed, \(\operatorname{End}_{\operatorname{Loc}_{T}(X;A)}(\underline{A}_{T})\simeq \Gamma(\mathscr{M}_{T};f_{*}f^{*}\mathscr{O}_{\mathscr{M}_{T}})\), but it is easy to see that \(f_{*}f^{*}\mathscr{O}_{\mathscr{M}_{T}}=\mathscr{F}_{T}(X)\in\operatorname{ QCoh}(\mathscr{M}_{T})\). The desired claim then follows from Construction 2.1.6.
**Remark 2.2.4**.: If \(T\) were a _finite_ diagonalizable group scheme (such as \(\mu_{n}\)), the desired category \(\operatorname{Loc}_{T}(X;A)\) is closely related to the \(\infty\)-category of _\(\mathbf{G}\)-tempered local systems_ on the orbispace \(X/\!\!/T\), as described in [11]. Our understanding is that Lurie is planning to describe an extension of the work in [11] and its connections to equivariant homotopy theory in a future article. We warn the reader that Construction 2.2.1 is somewhat _ad hoc_; so the resulting category of equivariant local systems may or may not agree with the output of forthcoming work of Lurie.
**Remark 2.2.5**.: Let \(X\) be a \(T\)-space with a chosen presentation as a filtered colimit of finite \(T\)-spaces \(X_{\alpha}\). Then we will write \(\operatorname{Loc}_{T}(X;A)\) to denote \(\lim\operatorname{Loc}_{T}(X_{\alpha};A)\).
If \(Y\) is a _connected_ space, the \(\infty\)-category \(\operatorname{Loc}(Y;A)=\operatorname{Fun}(Y,\operatorname{Mod}_{A})\) of local systems on \(Y\) is equivalent by Koszul duality to \(\operatorname{LMod}_{C_{*}(\Omega Y;A)}\). This property of local systems is very useful, since it allows one to study local systems using (derived) algebra. A similar property is true for \(\operatorname{Loc}_{T}(X;A)\):
**Proposition 2.2.6**.: _Let \(X\) be a connected finite \(T\)-space. Then there is an equivalence \(\operatorname{Loc}_{T}(X;A)\simeq\operatorname{LMod}_{\mathscr{F}_{T}(\Omega X )^{\vee}}(\operatorname{QCoh}(\mathscr{M}_{T}))\)._
Proof.: Let \(s:*\to X\) denote the inclusion of a point. We claim that \(s^{*}:\operatorname{Loc}_{T}(X;A)\to\operatorname{QCoh}(\mathscr{M}_{T})\) admits a left adjoint \(s_{!}\). Indeed, the statement for general \(X\) follows formally from the case of \(X=T/T^{\prime}\) for some closed subgroup \(T^{\prime}\subseteq T\) (so \(s\) is the inclusion of the trivial coset). In this case, \(s^{*}\) is the functor \(\operatorname{QCoh}(\mathscr{M}_{T^{\prime}})\to\operatorname{QCoh}(\mathscr{M }_{T})\) given by pushforward along the associated morphism \(q:\mathscr{M}_{T^{\prime}}\to\mathscr{M}_{T}\), so it has a left adjoint \(s_{!}\) given by \(q^{*}\). Note that \(s^{*}\) also has a right adjoint; in particular, it preserves small limits and colimits. Observe now that \(s_{!}\mathscr{O}_{\mathscr{M}_{T}}\) is a compact generator of \(\operatorname{Loc}_{T}(X;A)\): indeed, suppose \(\mathscr{F}\in\operatorname{Loc}_{T}(X;A)\) such that \(\operatorname{Map}_{\operatorname{Loc}_{T}(X;A)}(s_{!}\mathscr{O}_{\mathscr{ M}_{T}},\mathscr{F})\simeq 0\) as an object of \(\operatorname{QCoh}(\mathscr{M}_{T})\). Because \(s^{*}\mathscr{F}\simeq\operatorname{Map}_{\operatorname{Loc}_{T}(X;A)}(s_{!} \mathscr{O}_{\mathscr{M}_{T}},\mathscr{F})\) in \(\operatorname{QCoh}(\mathscr{M}_{T})\), we see that \(s^{*}\mathscr{F}\simeq 0\). Using the connectivity of \(X\), we see that \(\mathscr{F}\) itself must be zero, which implies that \(s_{!}\mathscr{O}_{\mathscr{M}_{T}}\) is a compact generator of \(\operatorname{Loc}_{T}(X;A)\). It follows from the Barr-Beck-Lurie theorem [11, Theorem 4.7.3.5] that \(\operatorname{Loc}_{T}(X;A)\) is equivalent to the \(\infty\)-category of left \(\operatorname{End}_{\operatorname{Loc}_{T}(X;A)}(s_{!}\mathscr{O}_{\mathscr{M}_ {T}})\)-modules in \(\operatorname{QCoh}(\mathscr{M}_{T})\). But \(\operatorname{End}_{\operatorname{Loc}_{T}(X;A)}(s_{!}\mathscr{O}_{\mathscr{M}_ {T}})\simeq s^{*}s_{!}\mathscr{O}_{\mathscr{M}_{T}}\), which identifies with \(\mathscr{F}_{T}(\Omega X)^{\vee}\).
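To illustrate Proposition 2.2.6 in the most classical case (a consistency check only): take \(T=\{1\}\) and \(X=S^{1}\). Then \(\Omega S^{1}\simeq\mathbf{Z}\) and \(\mathscr{F}_{T}(\Omega S^{1})^{\vee}\simeq C_{*}(\mathbf{Z};A)\simeq A[t^{\pm 1}]\), so the proposition recovers the familiar equivalence
\[\operatorname{Loc}(S^{1};A)\simeq\operatorname{LMod}_{A[t^{\pm 1}]},\]
i.e., a local system on the circle is an \(A\)-module equipped with an invertible monodromy automorphism.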
**Remark 2.2.7**.: Modifying the preceding argument shows that if \(X\) is a connected finite \(T\)-space, there is an equivalence \(\operatorname{Loc}_{T}(X;A)\simeq\operatorname{coLMod}_{\mathscr{F}_{T}(X)^{\vee}}(\operatorname{QCoh}(\mathscr{M}_{T}))\). In particular, if \(X\) admits an \(\mathbf{E}_{n}\)-algebra structure (compatible with the \(T\)-action), then \(\mathscr{F}_{T}(X)^{\vee}\) admits the structure of an \(\mathbf{E}_{n}\)-algebra in \(\operatorname{coCAlg}(\operatorname{QCoh}(\mathscr{M}_{T}))\), and the equivalence \(\operatorname{Loc}_{T}(X;A)\simeq\operatorname{coLMod}_{\mathscr{F}_{T}(X)^{\vee}}(\operatorname{QCoh}(\mathscr{M}_{T}))\) is \(\mathbf{E}_{n}\)-monoidal for the
convolution tensor product on both sides. More generally, if \(X\) is a \(T\)-space with a chosen presentation as a filtered colimit of finite \(T\)-spaces \(X_{\alpha}\), there is an equivalence \(\operatorname{Loc}_{T}(X;A)\simeq\operatorname{coLMod}_{\mathscr{F}_{T}(X)^{ \vee}}(\operatorname{QCoh}(\mathscr{M}_{T}))\).
### Filtered deformations
As usual, we will fix a complex-oriented even-periodic \(\mathbf{E}_{\infty}\)-ring \(A\) and an oriented \(A\)-group scheme \(\mathbf{G}\) throughout this section. The main idea of this section (using the double-speed Postnikov filtration) has been used to great effect in [1, 18, 19], but the focus of this section is rather different from _loc. cit._.
Write \(\operatorname{Sp}^{\operatorname{fil}}\) to denote the \(\infty\)-category \(\operatorname{Fun}(\mathbf{Z},\operatorname{Sp})\) of filtered spectra, where \(\mathbf{Z}\) is viewed as a poset via the standard ordering. Similarly, write \(\operatorname{Sp}^{\operatorname{gr}}\) to denote the \(\infty\)-category \(\operatorname{Fun}(\mathbf{Z}^{\operatorname{ds}},\operatorname{Sp})\) of graded spectra, where \(\mathbf{Z}^{\operatorname{ds}}\) denotes the discrete set of integers. There is a functor \(\operatorname{gr}:\operatorname{Sp}^{\operatorname{fil}}\to\operatorname{Sp}^{\operatorname{gr}}\) given by taking the associated graded. See [12, 19] for further discussion of filtered and graded spectra. Recall the following equivalence from [10], which lets us view a filtration as equivalent to a one-parameter deformation.
**Proposition 2.3.1** (Rees construction).: _There is a symmetric monoidal equivalence \(\operatorname{Sp}^{\operatorname{fil}}\simeq\operatorname{QCoh}(\mathbf{A}^{ 1}/\mathbf{G}_{m})\), where \(\mathbf{A}^{1}/\mathbf{G}_{m}\) is the flat spectral stack over the sphere spectrum. Under this equivalence, the functor \(\operatorname{gr}:\operatorname{Sp}^{\operatorname{fil}}\to\operatorname{Sp} ^{\operatorname{gr}}\) is given by pullback along the closed immersion \(B\mathbf{G}_{m}\hookrightarrow\mathbf{A}^{1}/\mathbf{G}_{m}\). In particular, a \(\mathbf{Z}\)-filtered \(\mathbf{E}_{n}\)-algebra in \(\operatorname{Sp}\) defines an \(\mathbf{E}_{n}\)-algebra in \(\operatorname{QCoh}(\mathbf{A}^{1}/\mathbf{G}_{m})\)._
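Concretely (with one common convention for the weights, and only as a reminder): under this equivalence, a filtered spectrum \(\cdots\to F^{n+1}\to F^{n}\to\cdots\) corresponds to its Rees object, the graded module
\[\bigoplus_{n\in\mathbf{Z}}F^{n}\]
over the polynomial ring on a class \(t\) of weight \(-1\) over the sphere spectrum, with \(t\) acting through the transition maps \(F^{n+1}\to F^{n}\). Inverting \(t\) (i.e., restricting to the open substack \(\mathbf{G}_{m}/\mathbf{G}_{m}\)) recovers the underlying object \(\operatorname{colim}_{n}F^{n}\), while setting \(t=0\) (i.e., restricting to \(B\mathbf{G}_{m}\)) recovers the associated graded, as stated above.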
**Notation 2.3.2**.: If \(R\in\operatorname{CAlg}(\operatorname{Sp}^{\operatorname{fil}})\), we will simply write \(\operatorname{Mod}^{\operatorname{fil}}_{R}\) to denote \(\operatorname{Mod}_{R}(\operatorname{Sp}^{\operatorname{fil}})\). Similarly, if \(R\in\operatorname{CAlg}(\operatorname{Sp}^{\operatorname{gr}})\), we will simply write \(\operatorname{Mod}^{\operatorname{gr}}_{R}\) to denote \(\operatorname{Mod}_{R}(\operatorname{Sp}^{\operatorname{gr}})\). If \(\mathscr{C}\) is a \(\operatorname{Sp}^{\operatorname{fil}}\)-linear \(\infty\)-category, write \(\mathscr{C}^{\operatorname{gr}}\) to denote \(\mathscr{C}\otimes_{\operatorname{Sp}^{\operatorname{fil}}}\operatorname{ Sp}^{\operatorname{gr}}\). For \(R\in\operatorname{CAlg}(\operatorname{Sp}^{\operatorname{fil}})\), the \(\infty\)-category \(\operatorname{Mod}^{\operatorname{fil}}_{R}\) is canonically a \(\operatorname{Sp}^{\operatorname{fil}}\)-linear \(\infty\)-category, and there is an equivalence
\[(\operatorname{Mod}^{\operatorname{fil}}_{R})^{\operatorname{gr}}= \operatorname{Mod}^{\operatorname{fil}}_{R}\otimes_{\operatorname{Sp}^{ \operatorname{fil}}}\operatorname{Sp}^{\operatorname{gr}}\simeq\operatorname{ Mod}^{\operatorname{gr}}_{\operatorname{gr}(R)}.\]
**Construction 2.3.3**.: The \(\mathbf{E}_{\infty}\)-ring \(A\) defines a canonical \(\mathbf{Z}\)-filtered \(\mathbf{E}_{\infty}\)-algebra in \(\operatorname{Sp}\), given by \(\tau_{\geq 2\star}A\). Note that since \(\tau_{\geq 2\star}:\operatorname{Sp}\to\operatorname{Fun}(\mathbf{Z},\operatorname{Sp})\) is a lax symmetric monoidal functor, \(\tau_{\geq 2\star}A\) is an \(\mathbf{E}_{\infty}\)-algebra in filtered spectra. The discussion in the preceding section in turn admits a canonical one-parameter deformation. Namely, the spectral \(A\)-scheme \(\mathscr{M}_{T}\) admits a filtered deformation \(\mathscr{M}^{\operatorname{fil}}_{T}\): its underlying \(\pi_{0}A\)-scheme is just the underlying scheme of \(\mathscr{M}_{T}\), and its ring of functions is given by the sheaf \(\tau_{\geq 2\star}\mathscr{O}_{\mathscr{M}_{T}}\) of filtered \(\tau_{\geq 2\star}A\)-algebras. Motivated by the comparison to synthetic spectra in [18], we will write \(\operatorname{QCoh}^{\operatorname{Syn}}(\mathscr{M}_{T})\) to denote the \(\operatorname{Mod}^{\operatorname{fil}}_{\tau_{\geq 2\star}A}\)-linear \(\infty\)-category \(\operatorname{QCoh}(\mathscr{M}^{\operatorname{fil}}_{T})\).
Similarly, if \(X\) is a \(T\)-space, one can also consider filtered deformations of the sheaves \(\mathscr{F}_{T}(X)\) and \(\mathscr{F}_{T}(X)^{\vee}\). For simplicity, we will only consider the case when \(\mathscr{F}_{T}(X)\) (resp. \(\mathscr{F}_{T}(X)^{\vee}\)) has homotopy sheaves concentrated in even degrees; in this case, the filtered deformation of \(\mathscr{F}_{T}(X)\) (resp. \(\mathscr{F}_{T}(X)^{\vee}\)) is simply given by \(\tau_{\geq 2\star}\mathscr{F}_{T}(X)\) (resp. \(\tau_{\geq 2\star}\mathscr{F}_{T}(X)^{\vee}\)). These are quasicoherent sheaves on \(\mathscr{M}^{\operatorname{fil}}_{T}\); since \(\tau_{\geq 2\star}\) is lax symmetric monoidal, \(\tau_{\geq 2\star}\mathscr{F}_{T}(X)\) is an \(\mathbf{E}_{\infty}\)-algebra in \(\operatorname{QCoh}^{\operatorname{Syn}}(\mathscr{M}_{T})\). Similarly, if \(X\) is an \(\mathbf{E}_{n}\)-space (compatible with the \(T\)-action), then \(\tau_{\geq 2\star}\mathscr{F}_{T}(X)^{\vee}\) is an \(\mathbf{E}_{n}\)-algebra in \(\operatorname{QCoh}^{\operatorname{Syn}}(\mathscr{M}_{T})\).
Let \(X\) be a connected finite \(T\)-space such that \(\mathscr{F}_{T}(\Omega X)^{\vee}\) is concentrated in even degrees. Motivated by Proposition 2.2.6, define \(\operatorname{Loc}^{\operatorname{Syn}}_{T}(X;A)\) to denote
\(\operatorname{LMod}_{\tau_{\geq 2\star},\mathscr{F}_{T}(\Omega X)^{\vee}}( \operatorname{QCoh}^{\operatorname{Syn}}(\mathscr{M}_{T}))\). Similarly, if \(Y\) is an \(\mathbf{E}_{n}\)-algebra in connected \(T\)-spaces such that \(\mathscr{F}_{T}(Y)^{\vee}\) is concentrated in even degrees, define \(\operatorname{Loc}_{T}^{\operatorname{Syn}}(Y;A)\) to be \(\operatorname{coLMod}_{\tau_{\geq 2\star},\mathscr{F}_{T}(Y)^{\vee}}( \operatorname{QCoh}^{\operatorname{Syn}}(\mathscr{M}_{T}))\).
**Remark 2.3.4**.: In Construction 2.3.3, the definition of \(\operatorname{Loc}_{T}^{\operatorname{Syn}}(X;A)\) is rather _ad hoc_; we have not attempted to describe a general construction here, because this definition suffices for our purposes.
The key point of the preceding construction is that it allows us to interpolate between spectral and (derived) algebraic geometry. More precisely:
**Lemma 2.3.5**.: _There is an equivalence \((\operatorname{Mod}_{\tau_{\geq 2\star},A}^{\operatorname{fil}})^{\operatorname{gr}} \simeq\operatorname{Mod}_{\pi_{0}A}\)._
Proof.: Base-changing the \(\operatorname{Sp}^{\operatorname{fil}}\)-linear \(\infty\)-category \(\operatorname{Mod}_{\tau_{\geq 2\star},A}^{\operatorname{fil}}\) along \(\operatorname{gr}:\operatorname{Sp}^{\operatorname{fil}}\to\operatorname{ Sp}^{\operatorname{gr}}\) produces the \(\operatorname{Sp}^{\operatorname{gr}}\)-linear \(\infty\)-category \(\operatorname{Mod}_{\pi_{2\star},A}^{\operatorname{gr}}\), where \(\pi_{2\star}A\) is viewed as a graded \(\mathbf{E}_{\infty}\)-ring. However, \(A\) is even-periodic, so \(\pi_{2\star}A\cong\pi_{0}(A)[\beta^{\pm 1}]\) with \(\beta\) in weight \(1\). This implies that \(\operatorname{Mod}_{\pi_{2\star},A}^{\operatorname{gr}}\simeq\operatorname{ Mod}_{\pi_{0}A}\).
Let \(\mathscr{M}_{T,0}\) denote the underlying \(\pi_{0}A\)-scheme of the \(A\)-scheme \(\mathscr{M}_{T}\). Lemma 2.3.5 identifies \(\operatorname{QCoh}^{\operatorname{Syn}}(\mathscr{M}_{T})^{\operatorname{gr}} =\operatorname{QCoh}^{\operatorname{Syn}}(\mathscr{M}_{T})\otimes_{ \operatorname{Sp}^{\operatorname{fil}}}\operatorname{Sp}^{\operatorname{gr}}\) with \(\operatorname{QCoh}(\mathscr{M}_{T,0})\) as \(\pi_{0}A\)-linear \(\infty\)-categories.
**Notation 2.3.6**.: Let \(X\) be a connected finite \(T\)-space such that \(\mathscr{F}_{T}(\Omega X)^{\vee}\) is concentrated in even degrees. The preceding discussion implies that \(\pi_{2\star}\mathscr{F}_{T}(\Omega X)^{\vee}\) defines an \(\mathbf{E}_{1}\)-algebra in \(\operatorname{coCAlg}(\operatorname{QCoh}^{\operatorname{Syn}}(\mathscr{M}_{T })^{\operatorname{gr}})\). Let \(\operatorname{Loc}_{T}^{\operatorname{gr}}(X;A)\) denote \(\operatorname{LMod}_{\pi_{2\star},\mathscr{F}_{T}(\Omega X)^{\vee}}( \operatorname{QCoh}^{\operatorname{Syn}}(\mathscr{M}_{T})^{\operatorname{gr}})\); note that the \(\mathbf{E}_{\infty}\)-coalgebra structure on \(\mathscr{F}_{T}(\Omega X)^{\vee}\) equips \(\operatorname{Loc}_{T}^{\operatorname{gr}}(X;A)\) with the structure of a symmetric monoidal \(\infty\)-category. By \(2\)-periodicity, we can identify
\[\operatorname{Loc}_{T}^{\operatorname{gr}}(X;A)\simeq\operatorname{LMod}_{ \pi_{0}\mathscr{F}_{T}(\Omega X)^{\vee}}(\operatorname{QCoh}(\mathscr{M}_{T,0 })).\]
Similarly, if \(Y\) is an \(\mathbf{E}_{n}\)-algebra in connected \(T\)-spaces such that \(\mathscr{F}_{T}(Y)^{\vee}\) is concentrated in even degrees, \(\pi_{2\star}\mathscr{F}_{T}(Y)^{\vee}\) defines an \(\mathbf{E}_{\infty}\)-coalgebra in \(\operatorname{Alg}_{\mathbf{E}_{n}}(\operatorname{QCoh}^{\operatorname{Syn}}( \mathscr{M}_{T})^{\operatorname{gr}})\). Let \(\operatorname{Loc}_{T}^{\operatorname{gr}}(Y;A)\) denote \(\operatorname{coLMod}_{\pi_{2\star},\mathscr{F}_{T}(Y)^{\vee}}(\operatorname{ QCoh}^{\operatorname{Syn}}(\mathscr{M}_{T})^{\operatorname{gr}})\); note that the \(\mathbf{E}_{n}\)-algebra structure on \(\mathscr{F}_{T}(Y)^{\vee}\) equips \(\operatorname{Loc}_{T}^{\operatorname{gr}}(Y;A)\) with the structure of an \(\mathbf{E}_{n}\)-monoidal \(\infty\)-category. By \(2\)-periodicity, we can identify
\[\operatorname{Loc}_{T}^{\operatorname{gr}}(Y;A)\simeq\operatorname{coLMod}_{ \pi_{0}\mathscr{F}_{T}(Y)^{\vee}}(\operatorname{QCoh}(\mathscr{M}_{T,0})).\]
Both \(\operatorname{Loc}_{T}^{\operatorname{gr}}(X;A)\) and \(\operatorname{Loc}_{T}^{\operatorname{gr}}(Y;A)\) are \(\operatorname{QCoh}(\mathscr{M}_{T,0})\)-linear \(\infty\)-categories, which arise as \(\operatorname{Loc}_{T}^{\operatorname{Syn}}(X;A)^{\operatorname{gr}}\) and \(\operatorname{Loc}_{T}^{\operatorname{Syn}}(Y;A)^{\operatorname{gr}}\), respectively.
### GKM and complex periodic \(\mathbf{E}_{\infty}\)-rings
We review the main result of [1], which proves a generalization of a result of Goresky-Kottwitz-MacPherson to generalized cohomology theories. This is also studied in the forthcoming work [1, Section 3].
**Setup 2.4.1**.: Let \(A\) be a complex-oriented even-periodic \(\mathbf{E}_{\infty}\)-ring, and let \(\mathbf{G}\) be an oriented commutative \(A\)-group. Fix a compact torus \(T\). We will consider (ind-finite; see Remark 2.1.12) \(T\)-spaces \(X\) such that the following assumptions hold.
(a) \(X\) admits a \(T\)-invariant stratification \(\bigcup_{w\in W}X_{w}\) with only _even-dimensional_ cells, with only finitely many in each dimension.

(b) The \(T\)-action on each cell \(X_{w}=\mathbf{A}^{\ell(w)}\) is via a linear action, whose weights are pairwise relatively prime.

(c) For each weight \(\lambda\) of the \(T\)-action on \(X_{w}=\mathbf{A}^{\ell(w)}\), the closure of \(\mathbf{C}_{\lambda}\subseteq X_{w}\) is a sphere \(S^{\lambda}\) such that \(0\) and \(\infty\) are fixed points of the \(T\)-action.
**Definition 2.4.2**.: The _GKM graph_ \(\Gamma\) associated to an \(X\) as in Setup 2.4.1 is defined as follows. The vertices are the (isolated) fixed points of the \(T\)-action, and there is an edge \(x\to y\) labeled by a character \(\lambda\) if \(x=0\) and \(y=\infty\) in the closure \(S^{\lambda}\) of \(D(\lambda)\subseteq D(\mathbf{A}^{2\ell(w)})\). Let \(V\) denote the set of vertices of \(\Gamma\), and \(E\) the set of edges.
**Theorem 2.4.3** ([15, Theorem 3.1], [16, Section 3]).: _In Setup 2.4.1, the map \(\mathscr{F}_{T}(X)\to\operatorname{Map}(V,\mathscr{O}_{\mathscr{M}_{T}})\simeq \mathscr{F}_{T}(X^{T})\) induces an injection on homotopy sheaves, and the following diagram is an equalizer on \(\pi_{0}\):_
\[\mathscr{F}_{T}(X)\to\operatorname{Map}(V,\mathscr{O}_{\mathscr{M}_{T}}) \rightrightarrows\prod_{\alpha\in E}\mathscr{O}_{\mathscr{M}_{T_{\alpha}}}.\]
_Here, the two maps are induced by the inclusion of the source and target of \(\alpha:x\to y\)._
Proof sketch.: The argument is exactly as in [15, Theorem 3.1] (where the spaces denoted \(F_{i}\) are points, corresponding to the origin in \(\mathbf{A}^{\ell(w)}\)), so we only give a sketch. We will work locally on \(\mathbf{G}\). In this case, we need to show that the map \(\mathscr{F}_{T}(X)\to\operatorname{Map}(V,\mathscr{O}_{\mathscr{M}_{T}})\simeq \mathscr{F}_{T}(X^{T})\) is injective on homotopy sheaves, and the following diagram is an equalizer on \(\pi_{0}\):
\[\mathscr{F}_{T}(X)\to\mathscr{F}_{T}(X^{T})\rightrightarrows\prod_{\alpha\in E }\mathscr{F}_{T_{\alpha}}.\]
For the injectivity claim, we first claim that \(\mathscr{F}_{T}(X)^{tT}\simeq\mathscr{F}_{T}(X^{T})^{tT}\). (This is a version of Atiyah-Bott localization.) Since \(X\) is generated by finite colimits from \(T\)-orbits \(T/T^{\prime}\), it suffices to prove this claim when \(X\) is of that form. Then \(\mathscr{F}_{T}(T/T^{\prime})\simeq\mathscr{F}_{T^{\prime}}(*)=q_{*}\mathscr{O }_{\mathscr{M}_{T^{\prime}}}\); this has zero Tate construction if \(T^{\prime}\neq T\). On the other hand, \(X^{T}=\emptyset\) if \(T^{\prime}\neq T\), so \(\mathscr{F}_{T}(X^{T})^{tT}=0\) as desired. If \(T^{\prime}=T\), then \(X^{T}=*\), so that both sides are simply \(A^{tT}\).
Note that \(\mathscr{F}_{T}(X^{T})^{tT}\simeq\mathscr{F}_{T}(X^{T})\otimes_{A}A^{tT}\). Since \(\mathscr{F}_{T}(X)^{tT}\simeq\mathscr{F}_{T}(X)\otimes_{\mathscr{O}_{\mathscr{ M}_{T}}}A^{tT}\) is a localization, it suffices to prove that the map \(\mathscr{F}_{T}(X)\to\mathscr{F}_{T}(X)^{tT}\) induces an injection on homotopy. For this, it suffices to prove that \(\mathscr{F}_{T}(X)\) is a free \(\mathscr{O}_{\mathscr{M}_{T}}\)-module. This is a consequence of the assumptions on \(X\).
To prove the statement about the equalizer diagram, the key case is when \(X=S^{W}\) for a \(T\)-representation \(W\); the general case is obtained by induction on the stratification of \(X\). Let \(\lambda_{1},\cdots,\lambda_{n}\) be the weights of \(W\), so that \(X=\bigotimes_{i=1}^{n}S^{\lambda_{i}}\). Therefore, \(X\) is the quotient of \(\prod_{i=1}^{n}S^{\lambda_{i}}\) by its \((2n-2)\)-skeleton. Using this observation, it is not difficult to reduce to the case when \(W=\lambda\) is a character of \(T\). In this case, \(X=S^{\lambda}\) has \(T\)-fixed points given by \(\{0,\infty\}\). There is a cofiber sequence \(S(\lambda)\to*\to S^{\lambda}\), which induces a pushout square
\[\begin{array}{ccc}S(\lambda) & \longrightarrow & \ast\\ \downarrow & & \downarrow\\ \ast & \longrightarrow & S^{\lambda}.\end{array}\]

Applying \(\mathscr{F}_{T}\) (which carries finite colimits of finite \(T\)-spaces to limits in \(\operatorname{QCoh}(\mathscr{M}_{T})\)), we therefore get an equalizer diagram
\[\mathscr{F}_{T}(S^{\lambda})\to\mathscr{O}_{\mathscr{M}_{T}}\rightrightarrows \mathscr{F}_{T}(S(\lambda)).\]
However, if \(T_{\lambda}=\ker(\lambda:T\to\mathbf{G}_{m})\), then \(S(\lambda)\simeq T/T_{\lambda}\), so that \(\mathscr{F}_{T}(S(\lambda))\simeq q_{*}\mathscr{O}_{\mathscr{M}_{T_{\lambda}}}\). It follows that \(\mathscr{F}_{T}(S^{\lambda})\) is the fiber of the map \(\mathscr{O}_{\mathscr{M}_{T}}\oplus\mathscr{O}_{\mathscr{M}_{T}}\to q_{*} \mathscr{O}_{\mathscr{M}_{T_{\lambda}}}\) given by the following composite:
\[\mathscr{O}_{\mathscr{M}_{T}}\oplus\mathscr{O}_{\mathscr{M}_{T}}\xrightarrow{ (x,y)\mapsto x-y}\mathscr{O}_{\mathscr{M}_{T}}\to q_{*}\mathscr{O}_{\mathscr{ M}_{T_{\lambda}}}.\]
However, the map \(\mathscr{O}_{\mathscr{M}_{T}}\to q_{*}\mathscr{O}_{\mathscr{M}_{T_{\lambda}}}\) is precisely given by quotienting by the ideal \(\mathscr{I}_{\lambda}\) (by Notation 2.1.10). Therefore, \(\mathscr{F}_{T}(S^{\lambda})\) is described by the claimed equalizer diagram.
**Remark 2.4.4**.: Informally, the image on homotopy sheaves of the map \(\mathscr{F}_{T}(X)\to\operatorname{Map}(V,\mathscr{O}_{\mathscr{M}_{T}}) \simeq\mathscr{F}_{T}(X^{T})\) consists of those \(f\in\pi_{*}\mathscr{O}_{\mathscr{M}_{T}}^{V}\) such that \(f(x)\equiv f(y)\pmod{\mathscr{I}_{\alpha}}\) for every edge \(\alpha:x\to y\) in \(\Gamma\). Here, \(\mathscr{I}_{\alpha}\) is as in Notation 2.1.10.
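The simplest instance of Theorem 2.4.3, already visible in its proof and recorded here only for concreteness, is \(X=S^{\lambda}\) for a character \(\lambda\) of \(T\): the GKM graph has two vertices \(0\) and \(\infty\) joined by a single edge labeled \(\lambda\), and the equalizer description says precisely that
\[\pi_{0}\mathscr{F}_{T}(S^{\lambda})\cong\{(f_{0},f_{\infty})\in\pi_{0}\mathscr{O}_{\mathscr{M}_{T}}^{\oplus 2}\;:\;f_{0}\equiv f_{\infty}\pmod{\mathscr{I}_{\lambda}}\}.\]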
## 3. Equivariant topology of the affine Grassmannian
For a topologically minded reader, we recommend the book [1] for a nice introduction to more classical aspects of geometric representation theory.
### Kac-Moody flag varieties
Fix a complex-oriented even-periodic \(\mathbf{E}_{\infty}\)-ring \(A\) and an oriented commutative \(A\)-group \(\mathbf{G}\).
**Observation 3.1.1**.: Let \(\mathscr{G}\) be a Kac-Moody group, and let \(\mathscr{P}\subseteq\mathscr{G}\) be a parabolic subgroup associated to a subset \(J\subseteq\Delta\) of simple roots. Let \(T=T_{\mathscr{G}}/Z(\mathscr{G})\) denote the torus of \(\mathscr{G}/Z(\mathscr{G})\), and let \(W\) be the Weyl group associated to \(\mathscr{G}\). Let \(W_{\mathscr{P}}\) denote the subgroup of \(W\) generated by \(s_{\alpha_{j}}\) for \(\alpha_{j}\in J\), and let \(W^{\mathscr{P}}\) denote the set of minimal-length coset representatives in \(W/W_{\mathscr{P}}\).
Then \((\mathscr{G}/\mathscr{P})^{T}\cong W^{\mathscr{P}}\), and the Schubert decomposition \(\mathscr{G}/\mathscr{P}=\coprod_{w\in W^{\mathscr{P}}}\mathscr{B}\dot{w} \mathscr{P}/\mathscr{P}\) is a \(T\)-invariant stratification, where \(\overline{w}=\dot{w}\mathscr{P}/\mathscr{P}\) is the unique \(T\)-fixed point in the cell \(\mathscr{B}\dot{w}\mathscr{P}/\mathscr{P}\). We claim that \(\mathscr{G}/\mathscr{P}\) satisfies the hypotheses of Setup 2.4.1. Clearly, condition (a) is satisfied. For condition (b), observe that the tangent space to \(\mathscr{B}\overline{w}\) at \(\overline{w}\) is
\[T_{\overline{w}}\mathscr{B}\dot{w}\mathscr{P}/\mathscr{P}=\mathfrak{b}/( \mathfrak{b}\cap w\cdot\mathfrak{p})=\bigoplus_{\alpha\in\Phi^{+}-w\Phi^{+}( \mathfrak{p})}\mathfrak{g}_{\alpha},\]
where each \(\mathfrak{g}_{\alpha}\) is \(1\)-dimensional. The weights are therefore all distinct, so condition (b) in Setup 2.4.1 is satisfied. For condition (c), let \(\alpha\in\Phi^{+}-w\Phi^{+}(\mathfrak{p})\), and let \(i_{\alpha}:\mathrm{SL}_{2}\to\mathscr{G}\) denote the associated subgroup. The closure of \(\mathscr{B}_{\alpha}\overline{w}\) is \(\mathrm{SL}_{2}\overline{w}=\mathbf{P}^{1}\), where the point at \(0\) corresponds to \(\overline{w}\), and the point at \(\infty\) corresponds to \(\overline{s_{\alpha}w}\). Then the GKM graph \(\Gamma\) of \(\mathscr{G}/\mathscr{P}\) has vertices \(W^{\mathscr{P}}\) and edges \(w\to s_{\alpha}w\) labeled by \(s_{\alpha}\in W_{\mathscr{G}}\). See also [1, Section 5].
**Warning 3.1.2**.: In the following, the reader should replace the symbol "\(\mathscr{F}_{T}(\mathscr{G}/\mathscr{P})\)" by \(\mathscr{F}_{T}(X_{\leq w})\) where \(X_{\leq w}\) is a Schubert cell in \(\mathscr{G}/\mathscr{P}\). In this case, \(X_{\leq w}\) is a finite CW-complex, so that \(\mathscr{F}_{T}(X_{\leq w})\) is a _perfect_\(\mathscr{O}_{\mathscr{M}_{T}}\)-module. This implies that the \(T\)-equivariant _homology_\(\mathscr{F}_{T}(X_{\leq w})^{\vee}\) is the \(\mathscr{O}_{\mathscr{M}_{T}}\)-linear dual of \(\mathscr{F}_{T}(X_{\leq w})\); note that this is not true of \(\mathscr{F}_{T}(\mathscr{G}/\mathscr{P})\) when the Kac-Moody group is not of finite type. (In general, homology is a predual of cohomology, but the linear dual of cohomology does not recover homology in the non-finite case.) We _define_\(\mathscr{F}_{T}(\mathscr{G}/\mathscr{P})^{\vee}\) as the direct limit of \(\mathscr{F}_{T}(X_{\leq w})^{\vee}\).
Since \(\mathscr{G}/\mathscr{P}\) satisfies the hypotheses of Setup 2.4.1 by Observation 3.1.1, we may apply Theorem 2.4.3 to calculate \(\mathscr{F}_{T}(\mathscr{G}/\mathscr{P})\). See [1] for a related discussion.
**Theorem 3.1.3**.: _The following diagram is an equalizer on \(\pi_{0}\):_
\[\mathscr{F}_{T}(\mathscr{G}/\mathscr{P})\to\mathrm{Map}(W^{\mathscr{P}}, \mathscr{O}_{\mathscr{M}_{T}})\rightrightarrows\prod_{\alpha:w\to s_{ \alpha}w}\mathscr{O}_{\mathscr{M}_{T_{\alpha}}}.\]
_Here, the two maps are given by restriction and applying \(s_{\alpha}\) to \(W^{\mathscr{P}}\). Therefore, \(\pi_{0}\mathscr{F}_{T}(\mathscr{G}/\mathscr{P})\) is the sub-\(\pi_{0}\mathscr{O}_{\mathscr{M}_{T}}\)-algebra of \(\mathrm{Map}(W^{\mathscr{P}},\pi_{0}\mathscr{O}_{\mathscr{M}_{T}})\) consisting of those maps \(f:W^{\mathscr{P}}\to\pi_{0}\mathscr{O}_{\mathscr{M}_{T}}\) such that_
\[f(s_{\alpha}w)\equiv f(w)\pmod{\mathscr{I}_{\alpha}}\text{ for all }w\in W^{ \mathscr{P}},\alpha\in\Phi. \tag{2}\]
Motivated by Theorem 3.1.3, we may define an algebraic generalization of \(\pi_{0}\mathscr{F}_{T}(\mathscr{G}/\mathscr{P})\) as follows.
**Construction 3.1.4**.: Let \((W,S)\) be a Coxeter system, and let \(V=\mathbf{R}^{S}\) denote the associated geometric representation. For \(s\in S\), let \(\alpha_{s}\) denote the associated vector, let \(\Phi=\{w(\alpha_{s})|s\in S,w\in W\}\) be the set of roots, and let \(\Phi^{+}\subseteq\Phi\) denote the set of positive roots. Let \(\Lambda=\mathbf{Z}\Phi\subseteq V\) denote the associated root lattice. Fix a smooth \(1\)-dimensional affine group scheme \(\mathbf{G}_{0}\) over a commutative ring \(R\), and let \(\mathscr{M}_{T,0}=\operatorname{Hom}(\Lambda^{\vee},\mathbf{G}_{0})\). Given a character \(\lambda\), let \(c_{\lambda}\) denote a function which cuts out the closed subscheme \(\mathbf{G}_{\ker(\lambda)}\hookrightarrow\mathscr{M}_{T,0}\). Define \(\mathbf{K}\) to be the sub-\(\mathscr{O}_{\mathscr{M}_{T,0}}\)-algebra of \(\operatorname{Map}(W,\mathscr{O}_{\mathscr{M}_{T,0}})\) consisting of those maps \(f:W\to\mathscr{O}_{\mathscr{M}_{T,0}}\) satisfying (2), i.e., such that \(f(s_{\alpha}w)\equiv f(w)\pmod{c_{\alpha}}\) for \(\alpha\in\Phi\) and \(w\in W\).
**Remark 3.1.5**.: Note that if \(\lambda\) is a character, then the function \(c_{\lambda}\) on \(\mathscr{M}_{T}\) is given by the \(T\)-equivariant Thom class of the representation of \(T\) given by \(\lambda:T\to\mathbf{G}_{m}\). Moreover, \(c_{\lambda}\) generates \(\mathscr{I}_{\lambda}\).
**Lemma 3.1.6**.: _Let \(s_{\alpha}\in W\), and let \(T_{\alpha}=\ker(\alpha)\subseteq T\). Then we have the following commutative diagram of \(R\)-schemes (where the non-vertical arrows are closed immersions):_
\[\begin{array}{ccc}\mathscr{M}_{T_{\alpha},0} & \hookrightarrow & \mathscr{M}_{T,0}\\ \| & & \downarrow{\scriptstyle s_{\alpha}}\\ \mathscr{M}_{T_{\alpha},0} & \hookrightarrow & \mathscr{M}_{T,0};\end{array}\]

_informally, \(s_{\alpha}\equiv 1\pmod{\mathscr{I}_{\alpha}}\)._
Proof.: This follows from the fact that the character lattice of \(T_{\alpha}\) is the quotient of \(\mathbb{X}^{*}(T)\) by the rank \(1\) sublattice generated by \(\alpha\); therefore, if \(\chi\in\mathbb{X}^{*}(T)\), then \(s_{\alpha}\chi|_{T_{\alpha}}=\chi|_{T_{\alpha}}\).
Theorem 3.1.3 implies the following:
**Corollary 3.1.7**.: _Suppose \(\mathbf{G}_{0}\) is affine. Then there is an equivalence \(\pi_{0}\mathscr{F}_{T}(\mathscr{G}/\mathscr{P})^{\vee}\simeq\mathscr{O}_{ \mathscr{M}_{T,0}}[W^{\mathscr{P}},\frac{s_{\alpha}-1}{c_{\alpha}},\alpha\in \Phi]\) of \(\pi_{0}\mathscr{O}_{\mathscr{M}_{T}}\)-modules._
Recall that if \(w\in W\), then \(\operatorname{inv}(w)\subseteq\Phi^{+}\) denotes the set of positive roots \(\alpha\) such that \(s_{\alpha}w<w\). The following is then the analogue of [1, Lemma 2.3, Lemma 2.5, Proposition 2.6].
**Proposition 3.1.8**.: _Suppose that \(\mathbf{G}_{0}\) is affine. In Construction 3.1.4, \(\mathbf{K}\) is a free \(\mathscr{O}_{\mathscr{M}_{T,0}}\)-module spanned by functions \(\psi_{w}:W\to\mathscr{O}_{\mathscr{M}_{T,0}}\) for \(w\in W\), where \(\psi_{w}\) is uniquely characterized by the requirement that it satisfy (2) together with the following two properties:_
\[\psi_{w}(v) =0\text{ if }v<w,\] \[\psi_{w}(w) =\prod_{\alpha\in\operatorname{inv}(w)}c_{\alpha}.\]
Proof.: The two stated conditions define \(\psi_{w}\) on the interval \([1,w]\subseteq W\). We will now define an extension of \(\psi_{w}\) to the whole of \(W\). We will in fact prove the following more general claim by induction on \(\ell(w)\):
* Let \(w\in W\), and let \([1,w]^{\circ}=[1,w]-\{w\}\). Then any function \(\psi:[1,w]^{\circ}\to\mathscr{O}_{\mathscr{M}_{T,0}}\) satisfying (2) extends to a function \([1,w]\to\mathscr{O}_{\mathscr{M}_{T,0}}\) satisfying (2).
To see this, write \(w=s_{i_{1}}\cdots s_{i_{n}}\), let \(\alpha=\alpha_{i_{1}}\), and let \(w^{\prime}=s_{\alpha}w\) (so that \(w^{\prime}<w\)). Consider the restriction of \(\psi\) to \([1,w^{\prime}]^{\circ}\), so that \(\psi\) itself is an extension to \([1,w^{\prime}]\). Define \(\psi^{\prime}:[1,w^{\prime}]^{\circ}\to\mathscr{O}_{\mathscr{M}_{T,0}}\) by the formula \(\psi^{\prime}(v)=s_{\alpha}\psi(s_{\alpha}v)\). Then \(\psi^{\prime}\) also satisfies (2): indeed, if \(\beta\) is another root, then \(\psi^{\prime}(s_{\beta}v)\equiv\psi^{\prime}(v)\pmod{\mathscr{I}_{\beta}}\) if and only if \(\psi(s_{\alpha}s_{\beta}v)\equiv\psi(s_{\alpha}v)\pmod{s_{\alpha}\mathscr{I}_ {\beta}}\). However, \(s_{\alpha}\mathscr{I}_{\beta}=\mathscr{I}_{s_{\alpha}(\beta)}\), while \(s_{\alpha}s_{\beta}=s_{s_{\alpha}(\beta)}s_{\alpha}\). The claim therefore follows from the assumption that \(\psi\) satisfies (2).
Since \(w^{\prime}<w\), the inductive hypothesis says that \(\psi^{\prime}\) extends to a function \(\psi^{\prime}:[1,w^{\prime}]\to\mathscr{O}_{\mathscr{M}_{T,0}}\) which satisfies (2). If \(v\in[1,w^{\prime}]^{\circ}\), then
\[\psi(v)-\psi^{\prime}(v)=\psi(v)-s_{\alpha}\psi(s_{\alpha}v)\equiv(1-s_{\alpha} )\psi(v)\pmod{\mathscr{I}_{\alpha}}.\]
By Lemma 3.1.6, we see that \(\psi(v)-\psi^{\prime}(v)\equiv 0\pmod{\mathscr{I}_{\alpha}}\), so we may define a function \(p_{v}\in\mathscr{O}_{\mathscr{M}_{T,0}}\) by the formula \(\frac{\psi(v)-\psi^{\prime}(v)}{c_{\alpha}}\). If \(\beta\in\Phi^{+}\) is such that \(s_{\beta}w^{\prime}<w^{\prime}\), then:
\[\psi(w^{\prime})-\psi^{\prime}(w^{\prime}) \equiv\psi(s_{\beta}w^{\prime})-\psi^{\prime}(s_{\beta}w^{\prime} )\pmod{\mathscr{I}_{\beta}}\] \[=c_{\alpha}p_{s_{\beta}w^{\prime}}\pmod{\mathscr{I}_{\beta}}.\]
In particular, there is a function \(p_{w^{\prime}}\in\mathscr{O}_{\mathscr{M}_{T,0}}\) such that
\[\psi(w^{\prime})-\psi^{\prime}(w^{\prime})\equiv c_{\alpha}p_{w^{\prime}} \pmod{\mathscr{I}_{\beta}}\]
for all \(\beta\in\Phi^{+}\) such that \(s_{\beta}w^{\prime}<w^{\prime}\), i.e., \(\beta\in\operatorname{inv}(w^{\prime})\). In particular,
\[\psi(w^{\prime})-\psi^{\prime}(w^{\prime})\equiv c_{\alpha}p_{w^{\prime}} \pmod{\prod_{\beta\in\operatorname{inv}(w^{\prime})}\mathscr{I}_{\beta}}. \tag{3}\]
Note that \(s_{\alpha}\mathrm{inv}(w^{\prime})\) is the set of \(\beta\in\Phi^{+}-\{\alpha\}\) such that \(s_{\beta}w^{\prime}<w^{\prime}\). Define
\[\psi(w)=s_{\alpha}\psi^{\prime}(w^{\prime})+x\prod_{\beta\in s_{\alpha} \mathrm{inv}(w^{\prime})}c_{\beta}\]
for some \(x\) that we will determine in a moment. We check that \(\psi\) satisfies (2). Let \(\alpha^{\prime}\in\Phi^{+}\) be such that \(s_{\alpha^{\prime}}w<w\). Then:
1. If \(\alpha^{\prime}=\alpha\), then \[\psi(w)-\psi(s_{\alpha}w) =s_{\alpha}\psi^{\prime}(w^{\prime})-\psi(w^{\prime})+x\prod_{ \beta\in s_{\alpha}\mathrm{inv}(w^{\prime})}c_{\beta}\] \[\equiv s_{\alpha}(\psi^{\prime}(w^{\prime})-\psi(w^{\prime}))+x \prod_{\beta\in s_{\alpha}\mathrm{inv}(w^{\prime})}c_{\beta}\pmod{\mathscr{I}_ {\alpha}}\] However, (3) implies that \[s_{\alpha}(\psi(w^{\prime})-\psi^{\prime}(w^{\prime}))\equiv c_{-\alpha}s_{ \alpha}(p_{w^{\prime}})\pmod{\prod_{\beta\in s_{\alpha}\mathrm{inv}(w^{ \prime})}\mathscr{I}_{\beta}}\] Therefore, taking \(x\) to be the negative of the residue of \(s_{\alpha}(\psi(w^{\prime})-\psi^{\prime}(w^{\prime}))-c_{-\alpha}s_{\alpha}(p_ {w^{\prime}})\) modulo \(\prod_{\beta\in s_{\alpha}\mathrm{inv}(w^{\prime})}\mathscr{I}_{\beta}\), we see that \[\psi(w)-\psi(s_{\alpha}w)\equiv c_{-\alpha}s_{\alpha}(p_{w^{\prime}})\equiv 0 \pmod{\mathscr{I}_{\alpha}},\] as desired.
2. If \(\alpha^{\prime}\neq\alpha\), then \(\alpha^{\prime}\in s_{\alpha}\mathrm{inv}(w^{\prime})\). Then, we have \[\psi^{\prime}(w^{\prime}) \equiv\psi^{\prime}(s_{s_{\alpha}(\alpha^{\prime})}w^{\prime}) \pmod{\mathscr{I}_{s_{\alpha}(\alpha^{\prime})}}\] \[=s_{\alpha}\psi(s_{\alpha}s_{s_{\alpha}(\alpha^{\prime})}s_{ \alpha}w)\pmod{\mathscr{I}_{s_{\alpha}(\alpha^{\prime})}}\] \[=s_{\alpha}\psi(s_{\alpha^{\prime}}w)\pmod{\mathscr{I}_{s_{ \alpha}(\alpha^{\prime})}}.\] In particular, \(s_{\alpha}\psi^{\prime}(w^{\prime})\equiv\psi(s_{\alpha^{\prime}}w)\pmod{ \mathscr{I}_{\alpha^{\prime}}}\). But this implies that \[\psi(w)-\psi(s_{\alpha^{\prime}}w) \equiv s_{\alpha}\psi^{\prime}(w^{\prime})-\psi(s_{\alpha^{ \prime}}w)\pmod{\mathscr{I}_{\alpha^{\prime}}}\] \[\equiv 0\pmod{\mathscr{I}_{\alpha^{\prime}}},\] as desired.
This finishes the proof of (\(*\)).
To finish the proof of the proposition, note that the two conditions on \(\psi_{w}\) specify it on \([1,w]\), and hence on the subset of \(W\) consisting of elements of length \(<\ell(w)\). By (\(*\)), we may inductively extend \(\psi_{w}\) to the subset of \(W\) consisting of elements of length \(\geq\ell(w)\), and hence to all of \(W\). It remains to show that any \(\psi\in\mathrm{Map}(W,\mathscr{O}_{\mathscr{M}_{T,0}})\) satisfying (2) can be written as an \(\mathscr{O}_{\mathscr{M}_{T,0}}\)-linear combination of the \(\psi_{w}\); see the second half of [10, Proposition 2.6] for the following argument.
Let \(\mathrm{Supp}(\psi)\) denote the subset of \(w\in W\) such that \(\psi(w)\neq 0\). Let \(v\in\mathrm{Supp}(\psi)\) be minimal. If \(\alpha\in\mathrm{inv}(v)\) (so \(s_{\alpha}v<v\)), then \(\psi(v)\equiv\psi(s_{\alpha}v)=0\pmod{\mathscr{I}_{\alpha}}\). This implies that \(\psi(v)\equiv 0\pmod{\psi_{v}(v)}\). Define \(\psi^{\prime}:W\to\pi_{0}\mathscr{O}_{\mathscr{M}_{T,0}}\) by \(\psi^{\prime}(w)=\psi(w)-\frac{\psi(v)}{\psi_{v}(v)}\psi_{v}(w)\); then \(\psi^{\prime}\) satisfies (2) (since \(\psi\) and \(\psi_{v}\) do). By construction, \(v\not\in\mathrm{Supp}(\psi^{\prime})\), and \(\mathrm{Supp}(\psi^{\prime})-\mathrm{Supp}(\psi)\) consists of elements which are strictly larger than \(v\). Therefore, we may repeat this argument for \(\psi^{\prime}\), and induct; this yields the desired result.
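For orientation, consider the rank-one case \(W=\{1,s\}\) with \(\Phi=\{\pm\alpha\}\) (e.g. the Weyl group of \(\mathrm{SL}_{2}\)); this is only an unwinding of the definitions. Writing \(f=(f(1),f(s))\), condition (2) reduces to the single congruence \(f(1)\equiv f(s)\pmod{c_{\alpha}}\) (the roots \(\pm\alpha\) impose the same condition), and the basis of Proposition 3.1.8 is
\[\psi_{1}=(1,1),\qquad\psi_{s}=(0,c_{\alpha}).\]
Indeed, writing \(f(s)-f(1)=g\,c_{\alpha}\), one has \(f=f(1)\,\psi_{1}+g\,\psi_{s}\).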
### The affine Grassmannian
**Setup 3.2.1**.: Fix notation as in Notation 1.1.19, and assume that \(G\) is semisimple. Then we have an associated affine root datum: the affine simple roots are \(\Delta_{\mathrm{aff}}=\Delta\cup\{\alpha_{0}\}\), and the affine weight lattice is given by \(\mathbf{Z}K\oplus\bigoplus_{\alpha_{i}\in\Delta_{\mathrm{aff}}}\mathbf{Z}\alpha_{i}\). (In particular, \(\alpha_{0}\) denotes the affine simple root.) Thus the associated Kac-Moody algebra is \(\mathfrak{\hat{g}}=\mathfrak{g}(\!(t)\!)\oplus\mathbf{C}\alpha_{0}\oplus\mathbf{C}K\), where \(K\) is the central class, and \(\alpha_{0}\) is the scaling element. Let \(\mathscr{G}\) denote the associated Kac-Moody group, and let \(W^{\mathrm{aff}}=\Lambda^{\vee}\rtimes W\) denote the associated affine Weyl group. If \(\lambda^{\vee}\in\Lambda^{\vee}\), we write \(t_{\lambda^{\vee}}\) to denote the associated element of \(W^{\mathrm{aff}}\). If \(\alpha+n\alpha_{0}\) is an affine root and \(x\in\mathfrak{t}\), then
\[s_{\alpha+n\alpha_{0}}(x)=x-(\langle x,\alpha\rangle+n)\alpha^{\vee}=s_{ \alpha}(x)+n\alpha^{\vee}.\]
Let \(\mathscr{B}\) denote the Iwahori subgroup, and \(T^{\mathrm{aff}}\) the maximal torus of \(\mathscr{G}\). Then \(\mathscr{G}/\mathscr{B}\) is the affine flag variety \(\mathrm{Fl}_{G}\); similarly, \(\mathrm{Gr}_{G}\) is the Kac-Moody flag variety associated to the subset \(\Delta\subseteq\Delta_{\mathrm{aff}}\). Up to keeping track of the central torus, we may view \(\mathscr{G}\) as \(G(\!(t)\!)\), and \(\mathscr{B}\) as the Iwahori \(I\). Thus \(T=T^{\mathrm{aff}}\cap G\) is the maximal torus of \(G\). Let \(\widetilde{T}\) denote the extended torus \(T\times\mathbf{G}_{m}^{\mathrm{rot}}\) (where \(\mathbf{G}_{m}^{\mathrm{rot}}\) is the loop rotation torus); we may identify its Lie algebra \(\widetilde{\mathfrak{t}}\) with \(\mathfrak{t}\oplus\mathbf{C}\alpha_{0}\).
**Remark 3.2.2**.: Let \(\alpha\in\Phi\) and \(n\in\mathbf{Z}\). Then \(n\alpha_{0}\) is the \(\mathbf{G}_{m}^{\mathrm{rot}}\)-representation of weight \(n\). Note that \(\alpha+n\alpha_{0}\) defines an ideal sheaf \(\mathscr{I}_{\alpha+n\alpha_{0}}\subseteq\pi_{0}\mathscr{O}_{\mathscr{M}_{ \widetilde{T}}}=\pi_{0}\mathscr{O}_{\mathscr{M}_{T}}\otimes_{\pi_{0}A}\pi_{0} \mathscr{O}_{\mathbf{G}}\).
Theorem 3.1.3 gives an explicit description of \(\pi_{0}\mathscr{F}_{T^{\mathrm{aff}}}(\mathrm{Fl}_{G})\) and \(\pi_{0}\mathscr{F}_{T^{\mathrm{aff}}}(\mathrm{Gr}_{G})\). Using that
\[(\mathrm{Fl}_{G})^{T} =(\mathrm{Fl}_{G})^{\widetilde{T}}=W^{\mathrm{aff}}\] \[(\mathrm{Gr}_{G})^{T} =(\mathrm{Gr}_{G})^{\widetilde{T}}=W^{\mathrm{aff}}/W\cong\Lambda^ {\vee},\]
this further immediately specializes to the following explicit description of \(\pi_{0}\mathscr{F}_{\widetilde{T}}(\mathrm{Fl}_{G})\) and \(\pi_{0}\mathscr{F}_{\widetilde{T}}(\mathrm{Gr}_{G})\):
**Corollary 3.2.3**.: _The following statements are true:_
1. _We may identify_ \(\pi_{0}\mathscr{F}_{\widetilde{T}}(\mathrm{Fl}_{G})\cong\pi_{0}\mathscr{F}_{ \mathbf{G}_{m}^{\mathrm{rot}}}(\mathrm{Fl}_{G}/I)\) _with_ \(\mathbf{K}\) _from Construction_ 3.1.4_, i.e., as the sub-_\(\pi_{0}\mathscr{O}_{\mathscr{M}_{T}}\)_-algebra of_ \(\mathrm{Map}(W^{\mathrm{aff}},\pi_{0}\mathscr{O}_{\mathscr{M}_{T}})\) _consisting of those maps_ \(f:W^{\mathrm{aff}}\to\pi_{0}\mathscr{O}_{\mathscr{M}_{T}}\) _such that_ (4) \[f(s_{\alpha+n\alpha_{0}}(w))\equiv f(w)\pmod{\mathscr{I}_{\alpha+n\alpha_{0}}}\] _for all_ \(w\in W^{\mathrm{aff}},\alpha\in\Phi,n\in\mathbf{Z}\)_._
2. _We may identify_ \(\pi_{0}\mathscr{F}_{\widetilde{T}}(\mathrm{Gr}_{G})\cong\pi_{0}\mathscr{F}_{ \mathbf{G}_{m}^{\mathrm{rot}}}(\mathrm{Gr}_{G}/I)\) _as the sub-_\(\pi_{0}\mathscr{O}_{\mathscr{M}_{T}}\)_-algebra of_ \(\mathrm{Map}(\Lambda^{\vee},\pi_{0}\mathscr{O}_{\mathscr{M}_{T}})\) _consisting of those maps_ \(f:\Lambda^{\vee}\to\pi_{0}\mathscr{O}_{\mathscr{M}_{T}}\) _such that_ (5) \[f(s_{\alpha+n\alpha_{0}}(\lambda))\equiv f(\lambda)\pmod{\mathscr{I}_{\alpha+n \alpha_{0}}}\] _for all_ \(\lambda\in\Lambda^{\vee},\alpha\in\Phi,n\in\mathbf{Z}\)_._
**Corollary 3.2.4**.: _The following statements are true:_
1. _We may identify_ \(\pi_{0}\mathscr{F}_{T}(\mathrm{Fl}_{G})\) _as the sub-_\(\pi_{0}\mathscr{O}_{\mathscr{M}_{T}}\)_-algebra of_ \(\mathrm{Map}(W^{\mathrm{aff}},\pi_{0}\mathscr{O}_{\mathscr{M}_{T}})\) _consisting of those maps_ \(f:W^{\mathrm{aff}}\to\pi_{0}\mathscr{O}_{\mathscr{M}_{T}}\) _such that_ (6) \[f(s_{\alpha+n\alpha_{0}}(w))\equiv f(w)\pmod{\mathscr{I}_{\alpha}}\] _for all_ \(w\in W^{\mathrm{aff}},\alpha\in\Phi,n\in\mathbf{Z}\)_._
2. _We may identify_ \(\pi_{0}\mathscr{F}_{T}(\mathrm{Gr}_{G})\) _as the sub-_\(\pi_{0}\mathscr{O}_{\mathscr{M}_{T}}\)_-algebra of_ \(\mathrm{Map}(\Lambda^{\vee},\pi_{0}\mathscr{O}_{\mathscr{M}_{T}})\) _consisting of those maps_ \(f:\Lambda^{\vee}\to\pi_{0}\mathscr{O}_{\mathscr{M}_{T}}\) _such that_ (7) \[f(s_{\alpha+n\alpha_{0}}(\lambda))\equiv f(\lambda)\pmod{\mathscr{I}_{\alpha}}\] _for all_ \(\lambda\in\Lambda^{\vee},\alpha\in\Phi,n\in\mathbf{Z}\)_._
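To unwind (7) in the simplest case (an illustration only, using that the reflections \(s_{\alpha+n\alpha_{0}}\), as \(n\) varies, carry any given cocharacter to every element of \(\Lambda^{\vee}\)): for \(G=\mathrm{SL}_{2}\) we have \(\Lambda^{\vee}=\mathbf{Z}\alpha^{\vee}\) and \(\Phi=\{\pm\alpha\}\), so Corollary 3.2.4 identifies
\[\pi_{0}\mathscr{F}_{T}(\mathrm{Gr}_{\mathrm{SL}_{2}})\cong\{f:\mathbf{Z}\alpha^{\vee}\to\pi_{0}\mathscr{O}_{\mathscr{M}_{T}}\;:\;f(k\alpha^{\vee})\equiv f(m\alpha^{\vee})\pmod{\mathscr{I}_{\alpha}}\text{ for all }k,m\in\mathbf{Z}\}.\]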
**Observation 3.2.5**.: The image of \(s_{\alpha+n\alpha_{0}}\) under the identification \(W^{\mathrm{aff}}/W\cong\Lambda^{\vee}\) is the right coset \(s_{\alpha+n\alpha_{0}}W\). However, \(s_{\alpha+n\alpha_{0}}s_{\alpha}\) is translation by \(n\alpha^{\vee}\). If \(k\) is a commutative ring, we may view \(k[\Lambda^{\vee}]\) as the \(\mathbf{E}_{\infty}\)-ring of functions on \(\check{T}_{k}\); the element \(n\alpha^{\vee}\in\Lambda^{\vee}\) corresponds to the function \(e^{n\alpha^{\vee}}\). Therefore, (7) can be restated as
\[f((e^{n\alpha^{\vee}}-1)(\lambda))\equiv 0\pmod{\mathscr{I}_{\alpha}}.\]
If \(\mathbf{G}\) is affine, then \(\pi_{0}\mathscr{F}_{T}(\mathrm{Gr}_{G})\) is the \(\pi_{0}\mathscr{O}_{\mathscr{M}_{T}}\)-linear dual of \(\pi_{0}\mathscr{O}_{\mathscr{M}_{T}}[\Lambda^{\vee}][\frac{e^{n\alpha^{\vee}}- 1}{c_{\alpha}}]_{n\geq 1}\). However, note that for any \(n\geq 1\), we may write
\[\tfrac{e^{n\alpha^{\vee}}-1}{c_{\alpha}}=\tfrac{e^{\alpha^{\vee}}-1}{c_{\alpha} }+\tfrac{e^{(n-1)\alpha^{\vee}}-1}{c_{\alpha}}+c_{\alpha}\tfrac{e^{\alpha^{ \vee}}-1}{c_{\alpha}}\tfrac{e^{(n-1)\alpha^{\vee}}-1}{c_{\alpha}}.\]
By induction on \(n\), this implies that
\[\pi_{0}\mathscr{F}_{T}(\mathrm{Gr}_{G})\cong\mathrm{Map}_{\mathrm{QCoh}( \mathscr{M}_{T,0})}(\pi_{0}\mathscr{O}_{\mathscr{M}_{T}}[\Lambda^{\vee}][\tfrac{ e^{\alpha^{\vee}}-1}{c_{\alpha}}],\pi_{0}\mathscr{O}_{\mathscr{M}_{T}}).\]
**Remark 3.2.6**.: Let \(\lambda\in\Lambda^{\vee,\mathrm{pos}}\) be a dominant coweight, and let \(\Lambda^{\vee,\mathrm{pos}}_{\leq\lambda}\) denote the subset of \(\Lambda^{\vee,\mathrm{pos}}\) consisting of those dominant coweights which are at most \(\lambda\). Then we may identify
\[(\mathrm{Gr}_{G}^{\leq\lambda})^{T}=W\cdot\Lambda^{\vee,\mathrm{pos}}_{\leq \lambda}\subseteq\Lambda^{\vee}=(\mathrm{Gr}_{G})^{T},\]
which allows us to calculate that if \(\mathbf{G}\) is affine, then
\[\pi_{0}\mathscr{F}_{T}(\mathrm{Gr}_{G}^{\leq\lambda})\cong\mathrm{Map}_{ \mathrm{QCoh}(\mathscr{M}_{T,0})}(\pi_{0}\mathscr{O}_{\mathscr{M}_{T}}[W\cdot \Lambda^{\vee,\mathrm{pos}}_{\leq\lambda}][\tfrac{e^{\alpha^{\vee}}-1}{c_{ \alpha}}],\pi_{0}\mathscr{O}_{\mathscr{M}_{T}}).\]
In the above expression, \(\alpha\) ranges over \(\Phi\cap W\cdot\Lambda^{\vee,\mathrm{pos}}_{\leq\lambda}\); in other words, \(\alpha\) is of the form \(w\alpha_{i}\) with \(\alpha_{i}\in\Delta\) such that \(\alpha_{i}\leq\lambda\).
**Remark 3.2.7**.: Recall from Warning 3.1.2 that \(\mathscr{F}_{T}(\mathrm{Gr}_{G})^{\vee}\) is defined to be the direct limit of \(\mathscr{F}_{T}(\mathrm{Gr}_{G}^{\leq\lambda})^{\vee}\). We trust the reader to make the appropriate modifications below as needed (which we have not done to avoid an overabundance of notation), so that the calculation of the \(T\)-equivariant homology \(\mathscr{F}_{T}(\mathrm{Gr}_{G})^{\vee}\) in Theorem 3.2.12 by taking the linear dual of \(\mathscr{F}_{T}(\mathrm{Gr}_{G})\) does not suffer from completion issues. This can be done, for instance, by working with the _\(\Lambda^{\vee,\mathrm{pos}}\)-filtered \(\mathscr{O}_{\mathscr{M}_{T}}\)-module \(\{\mathscr{F}_{T}(\mathrm{Gr}_{G}^{\leq\lambda})^{\vee}\}\)_. In order for the colimit \(\mathscr{F}_{T}(\mathrm{Gr}_{G})^{\vee}\) of the \(\Lambda^{\vee,\mathrm{pos}}\)-filtered module \(\{\mathscr{F}_{T}(\mathrm{Gr}_{G}^{\leq\lambda})^{\vee}\}\) to admit the structure of an \(\mathbf{E}_{2}\)-\(\mathscr{O}_{\mathscr{M}_{T}}\)-algebra, it suffices to show that \(\{\mathscr{F}_{T}(\mathrm{Gr}_{G}^{\leq\lambda})^{\vee}\}\) admits the structure of an \(\mathbf{E}_{2}\)-algebra in \(\Lambda^{\vee,\mathrm{pos}}\)-filtered modules; this is proved in Lemma 3.2.8 below.
**Lemma 3.2.8**.: _The \(\Lambda^{\vee,\mathrm{pos}}\)-indexed Schubert filtration \(\{\mathrm{Gr}_{G}^{\leq\lambda}(\mathbf{C})\}\) naturally admits the structure of an \(\mathbf{E}_{2}\)-algebra in \(\mathrm{Fun}(\Lambda^{\vee,\mathrm{pos}},\mathscr{S})\)._
Proof.: This can be proved in essentially the same way as [10, Theorem 3.10]; let us sketch the argument. We will utilize [11, Proposition 5.4.5.15], which states that if \(\mathscr{C}\) is a symmetric monoidal \(\infty\)-category, then a nonunital \(\mathbf{E}_{2}\)-algebra object in \(\mathscr{C}\) is equivalent to the datum of a locally constant \(\mathrm{N}(\mathrm{Disk}(\mathbf{C}))_{\mathrm{nu}}\)-algebra object in \(\mathscr{C}\). Concretely, this amounts to specifying an object \(A(D)\in\mathscr{C}\) for every disk \(D\subseteq\mathbf{C}\) and coherent maps \(\bigotimes_{i=1}^{n}A(D_{i})\to A(D)\) for every inclusion \(\coprod_{i=1}^{n}D_{i}\to D\) of disks, such that for every embedding \(D\subseteq D^{\prime}\) of disks, the induced map \(A(D)\to A(D^{\prime})\) is an equivalence.
In this case, \(\mathscr{C}=\mathrm{Fun}(\Lambda^{\vee,\mathrm{pos}},\mathscr{S})\), and the object \(A(D)\in\mathrm{Fun}(\Lambda^{\vee,\mathrm{pos}},\mathscr{S})\) assigned to a disk \(D\subseteq\mathbf{C}\) may be defined via the Beilinson-Drinfeld Grassmannian \(\mathrm{Gr}_{G,\mathrm{Ran}}\). Namely, the Beilinson-Drinfeld Grassmannian admits (by construction) a morphism \(\mathrm{Gr}_{G,\mathrm{Ran}}\to\mathrm{Ran}_{\mathbf{A}^{1}}\); upon taking complex points, we obtain a map \(\mathrm{Gr}_{G,\mathrm{Ran}}(\mathbf{C})\to\mathrm{Ran}(\mathbf{C})\). If \(S\subseteq\mathbf{C}\) is a subset, then the preimage of \(\mathrm{Ran}(S)\subseteq\mathrm{Ran}(\mathbf{C})\) defines a subspace \(\mathrm{Gr}_{G,\mathrm{Ran}}(S\subseteq\mathbf{C})\subseteq\mathrm{Gr}_{G, \mathrm{Ran}}(\mathbf{C})\). The filtration of \(\mathrm{Gr}_{G}\) via the Bruhat decomposition extends to a filtration \(\mathrm{Gr}_{G,\mathrm{Ran},\leq\mu}\) of \(\mathrm{Gr}_{G,\mathrm{Ran}}\) by dominant coweights \(\mu\in\Lambda^{\vee,\mathrm{pos}}\); see [10, 3.1.11]. Finally, the object \(A(D)\in\mathrm{Fun}(\Lambda^{\vee,\mathrm{pos}},\mathscr{S})\) associated to a disk \(D\subseteq\mathbf{C}\) is the functor \(\Lambda^{\vee,\mathrm{pos}}\to\mathscr{S}\) sending \(\mu\in\Lambda^{\vee,\mathrm{pos}}\) to \(\mathrm{Gr}_{G,\mathrm{Ran},\leq\mu}(D\subseteq\mathbf{C})\).
Suppose \(\coprod_{i=1}^{n}D_{i}\to D\) is an inclusion of disks. The induced map \(\bigotimes_{i=1}^{n}A(D_{i})\to A(D)\) is defined as follows. Let \(\mu\in\Lambda^{\vee,\mathrm{pos}}\); for every \(n\)-tuple \((\mu_{1},\cdots,\mu_{n})\) with \(\sum_{i=1}^{n}\mu_{i}\leq\mu\), we need to exhibit maps \(\bigotimes_{i=1}^{n}A(D_{i})(\mu_{i})\to A(D)(\mu)\) satisfying the obvious coherences. But
\[\bigotimes_{i=1}^{n}A(D_{i})(\mu_{i})=\prod_{i=1}^{n}\mathrm{Gr}_{G,\mathrm{Ran },\leq\mu_{i}}(D_{i}\subseteq\mathbf{C}),\]
so it suffices to show that if \(\mu_{1}+\mu_{2}\leq\mu\), then there are maps \(\operatorname{Gr}_{G,\operatorname{Ran},\leq\mu_{1}}(D_{1}\subseteq\mathbf{C}) \times\operatorname{Gr}_{G,\operatorname{Ran},\leq\mu_{2}}(D_{2}\subseteq \mathbf{C})\to\operatorname{Gr}_{G,\operatorname{Ran},\leq\mu}(D\subseteq \mathbf{C})\). The argument for this is exactly as in [1, Construction 3.15].
We next need to show that the \(\operatorname{N}(\operatorname{Disk}(\mathbf{C}))_{\operatorname{nu}}\)-algebra \(A\) defined above is locally constant, i.e., that if \(D\subseteq D^{\prime}\) is an embedding of disks, then \(A(D)\to A(D^{\prime})\) is an equivalence of functors \(\Lambda^{\vee,\operatorname{pos}}\to\mathscr{S}\). This follows from [1, Proposition 3.17]. To conclude, it suffices (by [1, Theorem 5.4.4.5]) to establish the existence of a quasi-unit for the functor \(A:\Lambda^{\vee,\operatorname{pos}}\to\mathscr{S}\), i.e., a map \(1_{\operatorname{Fun}(\Lambda^{\vee,\operatorname{pos}},\mathscr{S})}\to A\) which is both a left and right unit up to homotopy. Since the unit in \(\operatorname{Fun}(\Lambda^{\vee,\operatorname{pos}},\mathscr{S})\) is the functor sending \(\mu\in\Lambda^{\vee,\operatorname{pos}}\) to the point \(*\), a quasi-unit is the datum of a map \(*\to\operatorname{Gr}_{G,\leq\mu}(\mathbf{C})\) for each \(\mu\in\Lambda^{\vee,\operatorname{pos}}\). As in the proof of [1, Theorem 3.10], this can be taken to be the inclusion of the point corresponding to the trivial \(G\)-bundle over \(\mathbf{A}^{1}\) with the canonical trivialization away from the origin.
With Remark 3.2.7 in mind, we can now use Corollary 3.2.4 to compute the \(T\)-equivariant homology of \(\operatorname{Gr}_{G}\).
**Lemma 3.2.9**.: _There is an equivalence in \(\operatorname{Alg}_{\mathbf{E}_{2}}(\operatorname{coCAlg}(\operatorname{QCoh}( \mathscr{M}_{T})))\):_
\[\mathscr{F}_{T}(\operatorname{Gr}_{T}(\mathbf{C}))^{\vee}\cong\mathscr{O}( \check{T}_{A}\times_{\operatorname{Spec}(A)}\mathscr{M}_{T}).\]
Proof.: Since the action of \(T\) on \(\operatorname{Gr}_{T}(\mathbf{C})\) is trivial, we have a canonical equivalence \(\mathscr{F}_{T}(\operatorname{Gr}_{T}(\mathbf{C}))^{\vee}\simeq\operatorname{ Gr}_{T}(\mathbf{C})_{+}\otimes\mathscr{F}_{T}(*)^{\vee}\). By definition, \(\mathscr{F}_{T}(*)^{\vee}\simeq\mathscr{O}_{\mathscr{M}_{T}}\). We conclude that \(\mathscr{F}_{T}(\operatorname{Gr}_{T}(\mathbf{C}))^{\vee}\) is equivalent as an \(\mathbf{E}_{2}\)-\(A\)-algebra to \(C_{*}(\operatorname{Gr}_{T}(\mathbf{C});A)\otimes_{A}\mathscr{O}_{\mathscr{M}_{ T}}\). Since \(BT(\mathbf{C})\simeq B^{2}\Lambda^{\vee}\), there is an equivalence \(\operatorname{Gr}_{T}(\mathbf{C})\simeq\Lambda^{\vee}\) of \(\mathbf{E}_{2}\)-spaces. Therefore, \(C_{*}(\operatorname{Gr}_{T}(\mathbf{C});A)\simeq A[\Lambda^{\vee}]\) as \(\mathbf{E}_{2}\)-\(A\)-algebras, which is \(\mathscr{O}(\check{T}_{A})\). This implies the desired claim.
**Question 3.2.10**.: Can Lemma 3.2.9 be upgraded to an equivalence of \(\mathbf{E}_{3}\)_-\(A\)-algebras_ for a geometrically defined \(\mathbf{E}_{3}\)-algebra structure on \(\mathscr{F}_{T}(\operatorname{Gr}_{T}(\mathbf{C}))^{\vee}\)? This additional structure is crucial for a statement of the geometric Satake correspondence which is \(\mathbf{E}_{3}\)-monoidal.
**Notation 3.2.11**.: Let \(T_{\mathbf{G}}^{*}\check{T}_{A}\) denote \(\check{T}_{A}\times_{\operatorname{Spec}(A)}\mathscr{M}_{T}\), and let \(T_{\mathbf{G}}^{*}\check{T}\) denote its underlying scheme (over \(\mathscr{M}_{T,0}\)). Note that if \(\mathbf{G}=\mathbf{G}_{a}\), then \(T_{\mathbf{G}}^{*}\check{T}\) is the cotangent bundle of \(\check{T}\), while if \(\mathbf{G}=\mathbf{G}_{m}\), then \(T_{\mathbf{G}}^{*}\check{T}=\check{T}\times T\). If \(\mathfrak{B}_{\mathbf{G}}\) denotes the blowup of \(T_{\mathbf{G}}^{*}\check{T}\) at the closed subscheme given by \(\mathscr{M}_{T_{\alpha},0}\) and the zero set of \(e^{\alpha^{\vee}}-1\) for \(\alpha\in\Phi\), then define \((T_{\mathbf{G}}^{*}\check{T})^{\operatorname{bl}}\) as the complement of the proper preimage of \(\mathscr{M}_{T_{\alpha},0}\) in \(\mathfrak{B}_{\mathbf{G}}\) for \(\alpha\in\Phi\).
**Theorem 3.2.12**.: _Let \(G\) be a connected semisimple algebraic group over \(\mathbf{C}\). Then there is a \(W\)-equivariant isomorphism \(\operatorname{Spec}\pi_{0}\mathscr{F}_{T}(\operatorname{Gr}_{G}(\mathbf{C}))^{ \vee}\cong(T_{\mathbf{G}}^{*}\check{T})^{\operatorname{bl}}\) of schemes over \(\mathscr{M}_{T,0}\), where the left-hand side denotes the relative \(\operatorname{Spec}\)._
Proof.: There is an \(\mathbf{E}_{2}\)-map \(\operatorname{Gr}_{T}(\mathbf{C})\to\operatorname{Gr}_{G}(\mathbf{C})\), which induces an \(\mathbf{E}_{2}\)-map \(\mathscr{F}_{T}(\operatorname{Gr}_{T}(\mathbf{C}))^{\vee}\to\mathscr{F}_{T}( \operatorname{Gr}_{G}(\mathbf{C}))^{\vee}\). This is given by dualizing the map \(r:\mathscr{F}_{T}(\operatorname{Gr}_{G}(\mathbf{C}))\to\mathscr{F}_{T}( \operatorname{Gr}_{T}(\mathbf{C}))\) of \(\mathbf{E}_{2}\)-coalgebras in \(\operatorname{QCoh}(\mathscr{M}_{T})\). The non-\(W\)-equivariant claim now follows from Corollary 3.2.4, since \(r\) induces an injection on \(\pi_{0}\), and the (cocommutative) Hopf algebra structure on \(\pi_{0}\mathscr{F}_{T}(\operatorname{Gr}_{T}(\mathbf{C}))\) is given by the dual of the equivalence of Lemma 3.2.9. Proving \(W\)-equivariance requires a bit more work, but can easily be incorporated by keeping track of the \(W\)-action throughout the above discussion.
**Remark 3.2.13**.: The \(T\)-equivariant and \(G\)-equivariant \(A\)-_co_homologies of \(\operatorname{Gr}_{G}(\mathbf{C})\) are significantly easier to compute in terms of the stack \(\mathscr{M}_{G}\) (without any reference to root data); see Remark A.5. In particular, see Example A.8 for an alternative argument for [1, Theorem 1] using Hochschild homology and the Hochschild-Kostant-Rosenberg theorem.
**Remark 3.2.14**.: Suppose \(A=\operatorname{KU}\), so that \(\mathbf{G}=\mathbf{G}_{m}\) and \(c_{\alpha}\) is \(e^{\alpha}-1\). It follows from Theorem 3.2.12 that replacing \(T\) with \(\check{T}\), we get an isomorphism between \(\pi_{0}\mathscr{F}_{\check{T}}(\operatorname{Gr}_{\check{G}}(\mathbf{C}))^{\vee}\) and \(\pi_{0}(T_{A}\times_{\operatorname{Spec}(A)}\check{T}_{A})[\frac{e^{\alpha}-1}{e^{\alpha^{\vee}}-1},\alpha\in\Phi]\). Therefore, \(\pi_{0}\mathscr{F}_{T}(\operatorname{Gr}_{G}(\mathbf{C}))^{\vee}\) and \(\pi_{0}\mathscr{F}_{\check{T}}(\operatorname{Gr}_{\check{G}}(\mathbf{C}))^{\vee}\) are both obtained from the blowup \(\mathfrak{B}_{\mathbf{G}_{m}}\) of \(T_{\mathbf{G}}^{*}\check{T}\) by deleting the proper preimage of two different closed subschemes which are Langlands dual to each other. In particular, the Langlands self-duality of the blowup \(\mathfrak{B}_{\mathbf{G}_{m}}\) swaps the affine pieces \(\operatorname{Spec}\pi_{0}\mathscr{F}_{T}(\operatorname{Gr}_{G}(\mathbf{C}))^{\vee}\) and \(\operatorname{Spec}\pi_{0}\mathscr{F}_{\check{T}}(\operatorname{Gr}_{\check{G}}(\mathbf{C}))^{\vee}\) in \(\mathfrak{B}_{\mathbf{G}_{m}}\).
**Remark 3.2.15**.: When \(G=\operatorname{SL}_{2}\) or \(\operatorname{PGL}_{2}\), we can explicitly verify Theorem 3.2.12 at least after base-changing along \(C^{*}_{T}(\ast;A)\to C^{*}(\ast;A)\). We will identify \(\operatorname{PGL}_{2}\) with \(\operatorname{SO}_{3}\) (via the \(\operatorname{PGL}_{2}\)-action on \(\mathfrak{psl}_{2}\) which preserves the quadratic form given by the determinant). If \(A=\mathbf{Q}[\beta^{\pm 1}]\), for instance, Theorem 3.2.12 says:
\[\pi_{0}C_{\ast}^{S^{1}}(\Omega S^{3};\mathbf{Q}[\beta^{\pm 1}]) \cong\mathbf{Q}[x,y^{\pm 1},\tfrac{y-1}{x}],\] \[\pi_{0}C_{\ast}^{S^{1}}(\Omega\mathrm{SO}(3);\mathbf{Q}[\beta^{ \pm 1}]) \cong\mathbf{Q}[x,y^{\pm 1},\tfrac{y^{2}-1}{2x}].\]
After killing \(x\), the fraction \(\frac{y-1}{x}\) (resp. \(\frac{y^{2}-1}{2x}\)) defines a polynomial generator, and so we have
\[\pi_{0}C_{\ast}(\Omega S^{3};\mathbf{Q}[\beta^{\pm 1}]) \cong\mathbf{Q}[\tfrac{y-1}{x}],\] \[\pi_{0}C_{\ast}(\Omega\mathrm{SO}(3);\mathbf{Q}[\beta^{\pm 1}]) \cong\mathbf{Q}[y^{\pm 1},\tfrac{y^{2}-1}{2x}]/(y^{2}-1).\]
The second of these isomorphisms is compatible with the identification \(\Omega\mathrm{SO}(3)\simeq\mathbf{Z}/2\times\Omega S^{3}\) arising from the isomorphism \(S^{3}/(\mathbf{Z}/2)\cong\mathrm{SO}(3)\) (but note that the equivalence \(\Omega\mathrm{SO}(3)\simeq\mathbf{Z}/2\times\Omega S^{3}\) is _not_ one of \(\mathbf{E}_{1}\)-spaces). Similarly, if \(A=\operatorname{KU}\), Theorem 3.2.12 says:
\[\pi_{0}C_{\ast}^{S^{1}}(\Omega S^{3};\operatorname{KU}) \cong\mathbf{Z}[x^{\pm 1},y^{\pm 1},\tfrac{y-1}{x-1}],\] \[\pi_{0}C_{\ast}^{S^{1}}(\Omega\mathrm{SO}(3);\operatorname{KU}) \cong\mathbf{Z}[x^{\pm 1},y^{\pm 1},\tfrac{y^{2}-1}{x^{2}-1}].\]
After killing \(x-1\), the fraction \(\frac{y-1}{x-1}\) (resp. \(\frac{y^{2}-1}{x^{2}-1}\)) defines a polynomial generator, and so we have
\[\pi_{0}C_{\ast}(\Omega S^{3};\operatorname{KU}) \cong\mathbf{Z}[\tfrac{y-1}{x-1}],\] \[\pi_{0}C_{\ast}(\Omega\mathrm{SO}(3);\operatorname{KU}) \cong\mathbf{Z}[y^{\pm 1},\tfrac{y^{2}-1}{x^{2}-1}]/(y^{2}-1).\]
Again, this is compatible with the identification \(\Omega\mathrm{SO}(3)\simeq\mathbf{Z}/2\times\Omega S^{3}\).
In the case \(G=\operatorname{SL}_{2}\), we refer the reader to Example B.3 and Example B.5 for an explicit description of \(\mathrm{H}_{\ast}^{G\times S_{\mathrm{rot}}^{1}}(\operatorname{Gr}_{G}( \mathbf{C});\mathbf{C})\) and \(\operatorname{KU}_{0}^{G\times S_{\mathrm{rot}}^{1}}(\operatorname{Gr}_{G}( \mathbf{C}))\otimes\mathbf{C}\).
### Quantized equivariant homology of \(\operatorname{Gr}_{T}\)
We now explore the equivariant homology of \(\operatorname{Gr}_{T}\) in more detail; no GKM theory is required here, but several interesting algebraic structures turn up. Let us begin by recalling that Lemma 3.2.9 gives a \(W\)-equivariant equivalence \(\mathscr{F}_{T}(\operatorname{Gr}_{T}(\mathbf{C}))^{\vee}\cong\mathscr{O}( \check{T}_{A}\times_{\operatorname{Spec}(A)}\mathscr{M}_{T})\), which can
be thought of as giving an equivalence between \(\check{T}_{A}\times_{\operatorname{Spec}(A)}\mathscr{M}_{T}\) and the "\(\mathbf{E}_{2}\)-\(\mathscr{M}_{T}\)-scheme \(\operatorname{Spec}\mathscr{F}_{T}(\operatorname{Gr}_{T}(\mathbf{C}))^{\vee}\)". This admits a natural deformation given by the loop-rotation equivariant homology \(\mathscr{F}_{\widetilde{T}}(\operatorname{Gr}_{T}(\mathbf{C}))^{\vee}\). Since \(\widetilde{T}=T\times\mathbf{G}_{m}^{\operatorname{rot}}\), there is an equivalence \(\mathscr{M}_{\widetilde{T}}\simeq\mathscr{M}_{T}\times\mathbf{G}\), where the second factor is identified as \(\mathscr{M}_{\mathbf{G}_{m}^{\operatorname{rot}}}\).
**Definition 3.3.1**.: Let \(\mathbf{G}_{0}\) be a smooth \(1\)-dimensional group scheme over a base commutative ring, let \(T\) be a compact torus, let \(\Lambda\) (resp. \(\Lambda^{\vee}\)) denote the (co)character lattice of \(T\), and let \(\mathscr{M}_{0,T}=\operatorname{Hom}(\Lambda,\mathbf{G}_{0})\). Let \(\lambda\) be a cocharacter of \(T\), so that \(\lambda\) defines a homomorphism \(\Lambda\to\mathbf{Z}\), and hence a homomorphism \(\lambda^{*}:\mathbf{G}_{0}\to\mathscr{M}_{0,T}\). In turn, this defines a map
\[f^{\lambda}:\mathscr{M}_{0,\widetilde{T}}\simeq\mathscr{M}_{0,T}\times\mathbf{ G}_{0}\xrightarrow{\operatorname{pr}\times\lambda^{*}}\mathscr{M}_{0,T}.\]
In other words, \(f^{\lambda}\) sends a point \((m,g)\) of \(\mathscr{M}_{0,T}\times\mathbf{G}_{0}\) to \(m\cdot\lambda^{*}(g)\) (product taken in the commutative group \(\mathscr{M}_{0,T}\)). If \(y\) is a local section of \(\mathscr{O}_{\mathscr{M}_{0,T}}\), we will write \(\lambda^{*}(y)\) to denote its pullback along \(f^{\lambda}\), a local section of \(\mathscr{O}_{\mathscr{M}_{0,\widetilde{T}}}\). Let \(\mathscr{D}_{\widetilde{T}}^{\mathbf{G}_{0}}\) denote the quotient of the associative \(\mathscr{O}_{\mathbf{G}_{0}}\)-algebra \(\mathscr{O}_{\mathscr{M}_{0,T}}\langle x_{\lambda}\,|\,\lambda\in\Lambda^{\vee}\rangle\) by the relations given locally by
\[x_{\lambda}\cdot x_{\mu}=x_{\lambda+\mu},\ y\cdot x_{\lambda}=x_{\lambda}\cdot \lambda^{*}(y).\]
Here, \(\lambda,\mu\in\Lambda^{\vee}\), and \(y\) is a local section of \(\mathscr{O}_{\mathscr{M}_{0,T}}\). We will call \(\mathscr{D}_{\widetilde{T}}^{\mathbf{G}_{0}}\) the _algebra of \(\mathbf{G}_{0}\)-differential operators_.
**Remark 3.3.2**.: The algebra \(\mathscr{D}_{\widetilde{T}}^{\mathbf{G}_{0}}\) satisfies a Mellin transform: namely, it follows from unwinding the definition that there is an equivalence
\[\operatorname{LMod}_{\mathscr{D}_{\widetilde{T}}^{\mathbf{G}_{0}}}(\operatorname{QCoh}(\mathbf{G}_{0}))\simeq\operatorname{QCoh}(\mathscr{M}_{0,\widetilde{T}}/\Lambda^{\vee}),\]
where \(\lambda\in\Lambda^{\vee}\) acts on \(\mathscr{M}_{0,\widetilde{T}}\) via \(y\mapsto\lambda^{*}(y)\).
**Notation 3.3.3**.: If \(A\) is a complex-oriented even-periodic \(\mathbf{E}_{\infty}\)-ring and \(\mathbf{G}_{0}\) is the \(\pi_{0}(A)\)-group underlying an oriented commutative \(A\)-group \(\mathbf{G}\), we will write \(\mathscr{D}_{\widetilde{T}}^{\mathbf{G}}\) to denote \(\mathscr{D}_{\widetilde{T}}^{\mathbf{G}_{0}}\), and refer to it as the algebra of \(\mathbf{G}\)-differential operators. We hope this does not cause any confusion.
**Proposition 3.3.4** (Quantization of Lemma 3.2.9).: _There is an isomorphism \(\pi_{0}\mathscr{F}_{\widetilde{T}}(\operatorname{Gr}_{T}(\mathbf{C}))^{\vee} \cong\mathscr{D}_{\widetilde{T}}^{\mathbf{G}}\) of \(\pi_{0}\mathscr{O}_{\mathbf{G}}\)-algebras._
Proof.: Since \(\operatorname{Gr}_{T}(\mathbf{C})\simeq\Omega T_{c}\simeq\Lambda^{\vee}\), it is easy to see that \(\pi_{0}\mathscr{F}_{\widetilde{T}}(\operatorname{Gr}_{T}(\mathbf{C}))^{\vee} \cong\bigoplus_{\lambda\in\Lambda^{\vee}}\pi_{0}\mathscr{O}_{\mathscr{M}_{ \widetilde{T}}}\); let \(x_{\lambda}\) be a generator of the summand indexed by \(\lambda\in\Lambda^{\vee}\). If \(\lambda\in\Lambda^{\vee}=\operatorname{Hom}(\Lambda,\mathbf{Z})\), the map \(\Omega T_{c}\to\Omega T_{c}\) given by multiplication-by-\(\lambda\) is \(T\times S^{1}_{\operatorname{rot}}\)-equivariant for the homomorphism \(T\times S^{1}_{\operatorname{rot}}\to T\times S^{1}_{\operatorname{rot}}\) given by \((t,\theta)\mapsto(t\lambda(\theta),\theta)\), where \(\lambda\) is viewed as a homomorphism \(S^{1}\to T\). On weight lattices, this homomorphism induces the map \(\Lambda\times\mathbf{Z}\to\Lambda\times\mathbf{Z}\) which sends \((\mu,n)\mapsto(\mu,n+\lambda^{\vee}(\mu))\). In particular, the composite \(\Lambda\to\Lambda\times\mathbf{Z}\to\Lambda\times\mathbf{Z}\) sends \(\mu\mapsto(\mu,\lambda^{\vee}(\mu))\). Applying \(\operatorname{Hom}(-,\mathbf{G})\) to this composite precisely produces the map \(f^{\lambda}:\mathscr{M}_{\widetilde{T}}\to\mathscr{M}_{T}\) from Definition 3.3.1. This implies the desired identification of \(\pi_{0}\mathscr{F}_{\widetilde{T}}(\operatorname{Gr}_{T}(\mathbf{C}))^{\vee}\).
**Example 3.3.5**.: Let \(T\cong S^{1}\) be a torus of rank \(1\) (for simplicity). Suppose \(A=\mathbf{Q}[\beta^{\pm 1}]\), so \(\mathbf{G}=\hat{\mathbf{G}}_{a}\) and \(\pi_{0}\mathscr{O}_{\mathbf{G}}\cong\mathbf{Q}[\hbar]\). Then the algebra of Definition 3.3.1 is the quotient of the \(\mathbf{Q}[\hbar]\)-algebra \(\mathbf{Q}[\hbar]\langle y,x^{\pm 1}\rangle\) by the relation \(yx=x(y+\hbar)\). In other words, \(y\) acts as the operator \(\hbar x\partial_{x}\), so we simply have that
\[\operatorname{H}_{0}^{\widetilde{T}}(\operatorname{Gr}_{T}(\mathbf{C});\mathbf{ Q}[\beta^{\pm 1}])\cong\operatorname{H}_{*}^{\widetilde{T}}(\operatorname{Gr}_{T}(\mathbf{C}); \mathbf{Q})\cong\mathbf{Q}[\hbar]\langle\hbar x\partial_{x},x^{\pm 1}\rangle.\]
This has been stated previously as [3, Proposition 5.19(2)]. In particular, the localization \(\mathrm{H}_{0}^{\widetilde{T}}(\mathrm{Gr}_{T}(\mathbf{C});\mathbf{Q}[\beta^{\pm 1}])[\hbar^{-1}]\) is isomorphic to the rescaled Weyl algebra \(\mathscr{D}_{\widetilde{T}}^{\hbar}\); this is the motivation behind the terminology in Definition 3.3.1. Note that Remark 3.3.2 simply reduces to the standard Mellin transform, which gives an equivalence between \(\mathrm{DMod}_{\hbar}(\check{T})\) and \(\mathrm{QCoh}(\mathfrak{t}_{\mathbf{Q}[\hbar]}/\Lambda)\).
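As a quick sanity check on the operator description (only unwinding the relation \(yx=x(y+\hbar)\), and not needed in what follows): for any \(f(x)\),
\[(\hbar x\partial_{x})\big(xf(x)\big)=\hbar xf(x)+\hbar x^{2}f'(x)=x\big(\hbar x\partial_{x}+\hbar\big)f(x),\]
so \(y=\hbar x\partial_{x}\) does satisfy the defining relation.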
**Example 3.3.6**.: Again, let \(T\cong S^{1}\) be a torus of rank \(1\) (for simplicity). Suppose \(A=\mathrm{KU}\), so \(\mathbf{G}=\mathbf{G}_{m}\) and \(\pi_{0}\mathscr{O}_{\mathbf{G}}\cong\mathbf{Z}[q^{\pm 1}]\). Then the algebra of Definition 3.3.1 is the quotient of the \(\mathbf{Z}[q^{\pm 1}]\)-algebra \(\mathbf{Z}[q^{\pm 1}]\langle y^{\pm 1},x^{\pm 1}\rangle\) by the relation \(yx=qxy\). (This is also known as the "quantum torus".) In other words, \(y\) acts as the operator \(q^{x\partial_{x}}\) sending \(f(x)\mapsto f(qx)\), so we simply have that
\[\mathrm{KU}_{0}^{\widetilde{T}}(\mathrm{Gr}_{T}(\mathbf{C}))\cong\mathbf{Z}[q ^{\pm 1}]\langle q^{x\partial_{x}},x^{\pm 1}\rangle.\]
This is closely related to the \(q\)-Weyl algebra \(\mathscr{D}_{q}=\mathbf{Z}[q^{\pm 1}]\langle\Theta,x^{\pm 1}\rangle/(\Theta x=x(q\Theta+1))\) for \(\check{T}=\mathbf{G}_{m}\): indeed, since the logarithmic \(q\)-derivative \(\Theta=x\nabla_{q}\) is given by the fraction \(\frac{q^{x\partial_{x}}-1}{q-1}\), the pullback of \(\mathscr{D}_{\widetilde{T}}^{\mathbf{G}}\) along \(\mathbf{G}_{m}-\{1\}\hookrightarrow\mathbf{G}_{m}\) is isomorphic to the algebra \(\mathscr{D}_{q}[\frac{1}{q-1}]\). Note that Remark 3.3.2 gives a "\(q\)-Mellin transform", i.e., an equivalence between \(\mathrm{LMod}_{\mathrm{KU}_{0}^{\widetilde{T}}(\mathrm{Gr}_{T}(\mathbf{C}))}\) and \(\mathrm{QCoh}((\mathbf{G}_{m})_{\mathbf{Z}[q^{\pm 1}]}/\mathbf{Z})\), where \(\mathbf{Z}\) acts on \((\mathbf{G}_{m})_{\mathbf{Z}[q^{\pm 1}]}\) by sending \(y\mapsto qy\).
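As a sanity check on the comparison with \(\mathscr{D}_{q}\) (only unwinding the relations): if \(yx=qxy\) and \(\Theta=\frac{y-1}{q-1}\), then
\[\Theta x=\frac{yx-x}{q-1}=x\cdot\frac{qy-1}{q-1}=x\cdot\frac{q(y-1)+(q-1)}{q-1}=x(q\Theta+1),\]
which is precisely the defining relation of the \(q\)-Weyl algebra.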
**Remark 3.3.7**.: Using Proposition 2.2.6, there is an equivalence \(\mathrm{Loc}_{T_{c}}(T_{c};A)\simeq\mathrm{LMod}_{\mathscr{F}_{T}(\mathrm{Gr}_{T}(\mathbf{C}))^{\vee}}\). Since \(\pi_{0}\mathscr{F}_{\widetilde{T}}(\mathrm{Gr}_{T}(\mathbf{C}))^{\vee}\cong\mathscr{D}_{\widetilde{T}}^{\mathbf{G}}\) is a "quantization" of \(\pi_{0}\mathscr{F}_{T}(\mathrm{Gr}_{T}(\mathbf{C}))^{\vee}\cong\mathscr{O}_{T_{\mathbf{G}}^{*}\check{T}}\) (i.e., an associative deformation of \(T_{\mathbf{G}}^{*}\check{T}\) along \(\mathbf{G}\)), and Proposition 3.3.4 implies an equivalence of \(\mathbf{E}_{1}\)-\(A_{\mathbf{Q}}\)-algebras \(\mathscr{F}_{\widetilde{T}}(\mathrm{Gr}_{T}(\mathbf{C}))^{\vee}\otimes\mathbf{Q}\cong\mathscr{D}_{\widetilde{T}}^{\mathbf{G}}\otimes_{\pi_{0}A}A_{\mathbf{Q}}\), we see that \(\mathrm{LMod}_{\mathscr{D}_{\widetilde{T}}^{\mathbf{G}}}\otimes_{\pi_{0}A}A_{\mathbf{Q}}\) defines a "quantization" of \(\mathrm{Loc}_{T_{c}}(T_{c};A)\otimes\mathbf{Q}\).
Let us briefly outline the relationship between the algebra \(\mathscr{D}_{\widetilde{T}}^{\mathbf{G}_{0}}\) of Definition 3.3.1 and the \(F\)-de Rham complex of [10].
**Notation 3.3.8**.: For the purpose of this discussion, we will assume that \(T\cong S^{1}\) is a torus of rank \(1\), so that \(\check{T}\cong\mathbf{G}_{m}\). We will also fix an invariant differential form on the formal completion \(\hat{\mathbf{G}}_{0}\) of \(\mathbf{G}_{0}\) at the zero section, so that there is an isomorphism \(\hat{\mathbf{G}}_{0}\cong\mathrm{Spf}\,R[\![t]\!]\) of formal \(R\)-schemes. Let \(F(x,y)\) denote the resulting formal group law over \(R\), and define the \(n\)-series of \(F\) by
\[[n]_{F}:=\overbrace{F(t,F(t,F(t,\cdots\overbrace{F(t,t)\cdots)}))}^{n}.\]
We will often write \(x+_{F}y=x+_{\mathbf{G}}y\) to denote \(F(x,y)\). Let \(\hat{\mathscr{D}}_{\widetilde{T}}^{\mathbf{G}_{0}}\) denote the completion of \(\mathscr{D}_{\widetilde{T}}^{\mathbf{G}_{0}}\) at the zero section of \(\mathscr{M}_{0,\widetilde{T}}\cong\mathscr{M}_{0,T}\times\mathbf{G}_{0}\).
**Lemma 3.3.9** (Cartier duality).: _Let \(\hat{\mathbf{G}}_{0}\) be a \(1\)-dimensional formal group over a commutative ring \(R\), and let \(\mathrm{Cart}(\hat{\mathbf{G}}_{0})\) denote its Cartier dual (see [1, Section 3.3] for more on Cartier duals of formal groups). Then there is an equivalence of categories \(\mathrm{QCoh}(\hat{\mathbf{G}}_{0})\simeq\mathrm{QCoh}(B\mathrm{Cart}(\hat{ \mathbf{G}}_{0}))\) sending the convolution tensor product on the left-hand side to the usual tensor product on the right-hand side. Under this equivalence, the functor \(\mathrm{QCoh}(\hat{\mathbf{G}}_{0})\to\mathrm{Mod}_{R}\) given by restriction to the zero section is identified with the functor \(\mathrm{QCoh}(B\mathrm{Cart}(\hat{\mathbf{G}}_{0}))\to\mathrm{Mod}_{R}\) given by pullback along the map \(\mathrm{Spec}(R)\to B\mathrm{Cart}(\hat{\mathbf{G}}_{0})\)._
**Proposition 3.3.10**.: _There is a canonical action of \(\hat{\mathscr{D}}_{\widetilde{T}}^{\mathbf{G}_{0}}\) on \((\mathbf{G}_{m})_{R[\![t]\!]}=\mathrm{Spf}\,R[\![t]\!][x^{\pm 1}]\) such that \(R[\![t]\!][x^{\pm 1}]\otimes_{\hat{\mathscr{D}}_{\widetilde{T}}^{\mathbf{G}_{0}}}R[\![t]\!][x^{\pm 1}]\) is isomorphic to the two-term complex_
\[C^{\bullet}=(R[\![t]\!][x^{\pm 1}]\to R[\![t]\!][x^{\pm 1}]dx),\ x^{n}\mapsto[n]_{F}x^{n}dx\]
_from_[13, Remark 4.3.8].
Proof sketch.: Since \(T\) is of rank 1, there is an isomorphism \(\mathscr{M}_{0,T}\cong\mathbf{G}_{0}\), and hence an isomorphism \(\hat{\mathscr{M}}_{0,T}\cong\hat{\mathbf{A}}^{1}\) of formal \(R\)-schemes, where \(\hat{\mathscr{M}}_{0,T}\) denotes the completion of \(\mathscr{M}_{0,T}\) at the zero section. Let \(y\) be a local coordinate on \(\mathscr{M}_{0,T}\). Then, \(\hat{\mathscr{D}}_{\widetilde{T}}^{\mathbf{G}_{0}}\) is isomorphic to the quotient of the associative \(\hat{\mathscr{O}}_{\mathbf{G}_{0}}\)-algebra \(\hat{\mathscr{O}}_{\mathbf{G}_{0}\times\mathscr{M}_{0,T}}\langle x^{\pm 1}\rangle\) subject to the relation \(yx=x(y+_{\mathbf{G}}t)\). The \(t\)-adic filtration on \(\hat{\mathscr{D}}_{\widetilde{T}}^{\mathbf{G}_{0}}\) therefore has associated graded \(\mathrm{gr}(\hat{\mathscr{D}}_{\widetilde{T}}^{\mathbf{G}_{0}})\cong\hat{\mathscr{O}}_{\mathscr{M}_{0,T}}[x^{\pm 1}][\overline{t}]\), where \(\overline{t}\) lives in weight 1. View \(R\) as an \(\mathscr{O}_{\mathscr{M}_{0,T}}\)-algebra via the zero section, i.e., the augmentation \(\mathscr{O}_{\mathscr{M}_{0,T}}\to R\). Then, the action of \(\mathrm{gr}(\hat{\mathscr{D}}_{\widetilde{T}}^{\mathbf{G}_{0}})\) on \(R[x^{\pm 1}][\overline{t}]\) is induced by the augmentation \(\hat{\mathscr{O}}_{\mathscr{M}_{0,T}}\to R\). The isomorphism \(\hat{\mathscr{M}}_{0,T}\cong\hat{\mathbf{A}}^{1}\) of formal \(R\)-schemes then implies an isomorphism \(R\otimes_{\mathscr{O}_{\mathscr{M}_{0,T}}}R\cong R[\epsilon]/\epsilon^{2}\) with \(\epsilon\) in homological degree 1. It follows that
\[R[\overline{t}][x^{\pm 1}]\otimes_{\mathrm{gr}(\hat{\mathscr{D}}_{\hat{T}}^{ \mathbf{G}_{0}})}R[\overline{t}][x^{\pm 1}]\simeq R[\overline{t}][x^{\pm 1}][ \epsilon]/\epsilon^{2},\]
where \(\overline{t}\) is in weight 1 and degree 0, and \(\epsilon\) is in weight 0 and degree 1.
By Lemma 3.3.9, the \(t\)-adic filtration on \(\hat{\mathscr{D}}_{\widetilde{T}}^{\mathbf{G}_{0}}\) is equivalent to the data of a \(\mathrm{Cart}(\hat{\mathbf{G}}_{0})\)-action on \(R[\overline{t}][x^{\pm 1}]\otimes_{\mathrm{gr}(\hat{\mathscr{D}}_{\widetilde{T}}^{\mathbf{G}_{0}})}R[\overline{t}][x^{\pm 1}]\simeq R[\overline{t}][x^{\pm 1}][\epsilon]/\epsilon^{2}\). This in turn is equivalent to the data of a differential
\[\nabla:R[\overline{t}][x^{\pm 1}]\to R[\overline{t}][x^{\pm 1}]\cdot\epsilon\]
satisfying a \(\hat{\mathbf{G}}_{0}\)-analogue of the Leibniz rule: if5\(\nabla(x^{n})=f(n)x^{n}\epsilon\) for some \(f(n)\in R[\![t]\!]\), then \(f(n+m)=f(n)+_{\mathbf{G}}f(m)\). It therefore suffices to determine \(\nabla(x)\); but the relation \(yx=x(y+_{\mathbf{G}}t)\) forces \(\nabla(x)=tx\epsilon\). This implies that
Footnote 5: Note that \(\nabla\) has to be homogeneous in the degree of the monomial in \(x\), as can be seen by keeping track of the \(x\)-weight.
\[\nabla(x^{n})=(\overbrace{t+_{\mathbf{G}}\cdots+_{\mathbf{G}}t}^{n})x^{n} \epsilon=[n]_{F}x^{n}\epsilon,\]
as desired.
**Example 3.3.11**.: When \(\mathbf{G}_{0}=\hat{\mathbf{G}}_{a}\) over6\(\mathbf{Q}\), the complex \(C^{\bullet}\) is
Footnote 6: Of course, one can work over \(\mathbf{Z}\) too; we just chose \(\mathbf{Q}\) to continue with Example 3.3.5.
\[C^{\bullet}=(\mathbf{Q}[\![\hbar]\!][x^{\pm 1}]\to\mathbf{Q}[\![\hbar]\!][x^{\pm 1 }]dx),\ x^{n}\mapsto n\hbar x^{n}dx.\]
Indeed, since \(yx=x(y+\hbar)\), we have \(yx^{n}=x^{n}(y+n\hbar)\); since \(t=\hbar\) in this case, we have \(x^{n}\mapsto n\hbar x^{n}\epsilon\). This is evidently a \(\hbar\)-rescaling of the classical de Rham complex of \(\mathbf{G}_{m}\).
When \(\mathbf{G}_{0}=\mathbf{G}_{m}\) over \(\mathbf{Z}\), the complex \(C^{\bullet}\) is
\[C^{\bullet}=(\mathbf{Z}[\![q-1]\!][x^{\pm 1}]\to\mathbf{Z}[\![q-1]\!][x^{\pm 1}] dx),\ x^{n}\mapsto(q^{n}-1)x^{n}dx.\]
Indeed, since \(yx=x(qy)\), we have \(yx^{n}=x^{n}(q^{n}y)\), and hence
\[(y-1)x^{n}=x^{n}(q^{n}y-1)=x^{n}((y-1)+_{F}(q^{n}-1)),\]
where \(F(z,w)=z+w+zw\) is the multiplicative formal group law; since \(t=q-1\) in this case, we have \(x^{n}\mapsto(q^{n}-1)x^{n}\epsilon\). The complex \(C^{\bullet}\) is a \((q-1)\)-rescaling of the \(q\)-de Rham complex of \({\bf G}_{m}\) from [17].
**Remark 3.3.12**.: The complex of Proposition 3.3.10 is not quite the \(F\)-de Rham complex of [16, Definition 4.3.6]; rather, if \(\eta_{t}\) denotes the decalage functor of [16] with respect to the ideal \((t)\subseteq R[\![t]\!]\), the \(F\)-de Rham complex is given by the decalage \(\eta_{t}C^{\bullet}\). In particular, the complex of Proposition 3.3.10 is isomorphic to the \(F\)-de Rham complex after inverting \(t\). One can modify the algebra \(\mathscr{D}^{{\bf G}_{0}}_{\widetilde{T}}\) of Definition 3.3.1 (by performing a noncommutative analogue of an affine blowup/deformation to the normal cone7) such that the relative tensor product as in Proposition 3.3.10 is the \(F\)-de Rham complex itself. Since it is not needed for this article, we will not describe this modification here.
Footnote 7: For instance, in the case of Example 3.3.5, this procedure simply adjoins the fraction \(\frac{y}{\hbar}\); in the case of Example 3.3.6, this procedure simply adjoins the fraction \(\frac{y-1}{q-1}\).
**Remark 3.3.13**.: Proposition 3.3.10 says that \(\hat{\mathscr{D}}^{{\bf G}_{0}}_{\widetilde{T}}\) is Koszul dual to the complex \(C^{\bullet}\). Forthcoming work of Arpon Raksit shows that the decalage \(\eta_{t}C^{\bullet}\) can be recovered from the "even filtration" (in the sense of [14]) on the periodic cyclic homology \({\rm HP}(\tau_{\geq 0}A[x^{\pm 1}]/\tau_{\geq 0}A)\). See also the discussion in [18, Section 3.3]. Using similar techniques, one can show that \(C^{\bullet}\) can be recovered from the even filtration on the negative cyclic homology \({\rm HC}^{-}(A[x^{\pm 1}]/A)={\rm HH}(A[x^{\pm 1}]/A)^{hS^{1}}\).
Recalling that \(T=S^{1}\), this \({\bf E}_{\infty}\)-\(A\)-algebra is simply \({\rm HC}^{-}(A[\Omega T]/A)\). The Hochschild homology \({\rm HH}(A[\Omega T]/A)\simeq A\otimes{\rm THH}(S[\Omega T])\) is \(S^{1}\)-equivariantly equivalent to the \(A\)-chains \(C_{*}(\mathscr{L}T;A)\) on the free loop space of \(T\). (For a reference, see [19, Corollary IV.3.3].) The \(A\)-chains \(A[\mathscr{L}T]\) is \(S^{1}\)-equivariantly Koszul dual8 to \(A[\Omega T]^{hT}\); this can be identified as a completion of \(\mathscr{F}_{T}(\Omega T)^{\vee}\) at the zero section of \(\mathscr{M}_{T}\). In other words, \({\rm HC}^{-}(A[\Omega T]/A)\) is Koszul dual to the completion of \(\mathscr{F}_{T\times S^{1}_{\rm rot}}(\Omega T)^{\vee}\) at the zero section of \(\mathscr{M}_{T}\times{\bf G}\). This is the topological source of the Koszul duality of Proposition 3.3.10.
Footnote 8: This Koszul duality essentially stems from the (nonequivariant) decomposition \(\mathscr{L}T\simeq T\times\Omega T\).
**Remark 3.3.14**.: In Remark 3.3.13, we mentioned that the Koszul duality between \({\bf G}\)-differential operators and the \(F\)-de Rham complex manifests in topology as the Koszul duality between \(\mathscr{F}_{T\times S^{1}_{\rm rot}}(\Omega T)^{\vee}\) and \({\rm HC}^{-}(A[\Omega T]/A)\). There is clearly nothing special about \(T\) in this Koszul duality: given a sufficiently robust theory of \(G\)-equivariant \(A\)-(co)homology (see the discussion surrounding Construction 2.1.11), there is also a Koszul duality between \(\mathscr{F}_{G\times S^{1}_{\rm rot}}(\Omega G)^{\vee}\) and \({\rm HC}^{-}(A[\Omega G]/A)=A[\mathscr{L}G]^{hS^{1}}\). When \(A={\bf C}[\beta^{\pm 1}]\), [15, Theorem 3] states that \(\mathscr{F}_{G\times S^{1}_{\rm rot}}(\Omega G)^{\vee}\) can be identified with (the 2-periodification of) the bi-Whittaker reduction \(\check{N}^{-}{}_{\chi}\backslash\mathscr{D}_{\check{G}}/_{\chi}\check{N}^{-}\). Using the results of this article, it is also possible to compute \(A[\mathscr{L}G]^{hS^{1}}\) in this manner, at least if we assume that small primes are inverted: the zeroth graded piece of the "even filtration" on \(A[\mathscr{L}G]^{hS^{1}}\) looks like the 2-periodification of the \(F\)-de Rham complex of \(Z_{f}(\check{B})\) for a chosen principal nilpotent element \(f\in\check{\mathfrak{g}}\). We plan to explain this in future work.
## 4. The coherent side
### Langlands duality over \(\mathbf{Q}[\beta^{\pm 1}]\)
We now turn to the coherent side of the geometric Satake equivalence. For general \(\mathbf{G}\), it is not obvious what the Langlands dual algebraic stack should be; we will discuss this in Section 4.4. As a warmup, we will focus only on \(\mathbf{Q}[\beta^{\pm 1}]\) in this section (this is more for pedagogical purposes than originality).
**Definition 4.1.1** ((Additive) Kostant slice).: Let \(G\) be a connected reductive group over \(\mathbf{C}\), and fix the rest of notation as in Notation 1.1.19. Fix a principal nilpotent element \(e\in\mathfrak{n}\), and let \((e,f,h)\) be the associated \(\mathfrak{sl}_{2}\)-triple in \(\mathfrak{g}\). Let \(\mathfrak{g}^{e}\) be the centralizer of \(e\) (so \(\mathfrak{g}=\mathfrak{g}^{e}\oplus[f,\mathfrak{g}]\)), and let \(\mathscr{S}:=f+\mathfrak{g}^{e}\subseteq\mathfrak{g}^{\mathrm{reg}}\) be the Kostant slice. The composite \(f+\mathfrak{g}^{e}\to\mathfrak{g}\to\mathfrak{g}/\!\!/G\cong\mathfrak{t}/\!\!/W\) is an isomorphism, by [11].
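For instance (only to fix ideas in the simplest case): for \(G=\mathrm{SL}_{2}\) one may take \(e=\left(\begin{smallmatrix}0&1\\ 0&0\end{smallmatrix}\right)\) and \(f=\left(\begin{smallmatrix}0&0\\ 1&0\end{smallmatrix}\right)\), so that \(\mathfrak{g}^{e}=\mathbf{C}e\) and
\[\mathscr{S}=f+\mathfrak{g}^{e}=\left\{\left(\begin{smallmatrix}0&c\\ 1&0\end{smallmatrix}\right):c\in\mathbf{C}\right\};\]
in the coordinate \(-\det\) on \(\mathfrak{g}/\!\!/G\cong\mathbf{A}^{1}\), the composite \(\mathscr{S}\to\mathfrak{g}/\!\!/G\) sends this matrix to \(c\), hence is visibly an isomorphism. This is the parametrization appearing (with \(c=\lambda^{2}\)) in the first proof of Proposition 4.1.5 below.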
Let \(\widetilde{\mathfrak{g}}=\mathfrak{b}\times_{B}G\) be the Grothendieck-Springer resolution, so that \(\widetilde{\mathfrak{g}}/G\simeq\mathfrak{b}/B\). We will often work with \(\widetilde{\mathfrak{g}}^{*}\) instead, defined as \(\mathfrak{b}^{*}\times_{B}G\). There is a map \(\widetilde{\chi}:\widetilde{\mathfrak{g}}\to\mathfrak{t}\) which sends a pair \((x,gB)\) with \(x\in\operatorname{Ad}_{g}(\mathfrak{b})\) to the inverse image, under the isomorphism \(\mathfrak{t}\to\mathfrak{b}\to\mathfrak{b}/\mathfrak{n}\), of the image of \(\operatorname{Ad}_{g^{-1}}(x)\in\mathfrak{b}\). Let \(\widetilde{\mathscr{S}}\) denote the fiber product \(\mathscr{S}\times_{\mathfrak{g}}\widetilde{\mathfrak{g}}\), so that \(\widetilde{\mathscr{S}}\subseteq\widetilde{\mathfrak{g}}^{\mathrm{reg}}=\mathfrak{g}^{\mathrm{reg}}\times_{\mathfrak{g}}\widetilde{\mathfrak{g}}\). Then, Kostant's result on the Kostant slice implies formally that the composite \(\widetilde{\mathscr{S}}\to\widetilde{\mathfrak{g}}\xrightarrow{\widetilde{\chi}}\mathfrak{t}\) is an isomorphism. We will often abusively write the inclusion of \(\widetilde{\mathscr{S}}\) as a map \(\kappa:\mathfrak{t}\to\widetilde{\mathfrak{g}}\).
In fact, we will only care about the composite \(\mathfrak{t}\to\widetilde{\mathfrak{g}}\to\widetilde{\mathfrak{g}}/G\) below, so we will also denote it by \(\kappa\). If we identify \(\widetilde{\mathfrak{g}}/G\cong\mathfrak{b}/B\), then the map \(\kappa\) admits a simple description: it is the composite \(\mathfrak{t}\to\mathfrak{b}\to\mathfrak{b}/B\) which sends \(x\mapsto f+x\). This is proved, for instance, in [11, Proposition 19], where it is shown that there is a unique map \(\mu:f+\mathfrak{t}\to\mathfrak{n}\) such that \(\operatorname{Ad}_{\exp(\mu(x))}(x)\in f+\mathfrak{g}^{e}\); this further implies that the image of any \(x\in\mathfrak{t}\) under the map \(\mathfrak{t}\to\mathfrak{t}/\!\!/W\xrightarrow{\kappa}\mathfrak{g}\) can be identified with \(\operatorname{Ad}_{\exp(\mu(x+f))}(x+f)\).
Fix a nondegenerate invariant bilinear form on \(\mathfrak{g}\), to identify \(\mathfrak{g}\) with \(\mathfrak{g}^{*}\). The first main result of this section is the following; it is essentially equivalent to [1, Proposition 2.8] and the rationalization of [10, Theorem 6.1].
**Theorem 4.1.2**.: _Let \(G\) be a connected and simply-connected semisimple algebraic group over \(\mathbf{C}\). Let \(A\) be an \(\mathbf{E}_{\infty}\)-\(\mathbf{Q}[\beta^{\pm 1}]\)-algebra, and let \(\mathbf{G}=\mathbf{G}_{a}\) (so \(\mathscr{M}_{T}\) is the affine space \(\mathfrak{t}[2]\) over \(A\)). View \(\tilde{\mathfrak{t}}^{*}\), \(\tilde{\mathfrak{n}}\), \(\tilde{\mathfrak{g}}\), and \(\check{B}\) as schemes over \(\mathbf{Q}\). Then \(\operatorname{QCoh}(\tilde{\mathfrak{t}}^{*})\) admits the structure of a module over \(\operatorname{IndCoh}((\widetilde{\mathscr{S}}\times_{\tilde{\mathfrak{g}}} \{0\})/\check{G})\), where the fiber product is (always) derived, such that there is an equivalence_
\[\operatorname{End}_{\operatorname{IndCoh}((\widetilde{\mathscr{S}}\times_{ \tilde{\mathfrak{g}}}\{0\})/\check{G})}(\operatorname{QCoh}(\tilde{\mathfrak{t }}^{*}))\otimes_{\mathbf{Q}}\pi_{0}A\simeq\operatorname{LMod}_{\pi_{0}C_{*}^{T }(\operatorname{Gr}_{G}(\mathbf{C});A)}=\operatorname{Loc}_{T_{c}}^{\mathrm{ gr}}(G_{c};A).\]
**Remark 4.1.3**.: Recall from [1] that there is an Iwahori-Satake equivalence \(\operatorname{IndCoh}((\widetilde{\mathscr{S}}\times_{\tilde{\mathfrak{g}}} \{0\})/\check{G})\simeq\operatorname{Shv}(\operatorname{Gr}_{G})^{I}\) over \(\mathbf{C}\), where the right-hand side is normalized appropriately. One should therefore regard Theorem 4.1.2 as a bar construction of the restriction of this equivalence (lifted from \(\mathbf{C}\) to \(\mathbf{Q}\)) to the regular locus, and more optimistically as a first step towards an alternative proof. See also Example 4.5.6 for the equivalence resulting from "undoing" the bar construction.
We now turn to the proof of Theorem 4.1.2. For the next two results, we only work on one side of Langlands duality, so we drop the "check"s for notational simplicity. Note that \((\widetilde{\mathscr{N}}\times_{\tilde{\mathfrak{g}}}\{0\})/\check{G}\cong( \tilde{\mathfrak{n}}\times_{\tilde{\mathfrak{g}}}\{0\})/\check{B}\); it will be more convenient to work with the latter description.
**Lemma 4.1.4**.: _There is a Koszul duality equivalence \(\operatorname{QCoh}(\widetilde{\mathfrak{g}}^{*}[2]/G)\simeq\operatorname{IndCoh}(( \mathfrak{n}\times_{\mathfrak{g}}\{0\})/B)\)._
We will give two proofs of the following fact.
**Proposition 4.1.5** (Variant of [1, Proposition 2.8]).: _Work over a field \(k\) of characteristic \(0\), and view \(\operatorname{QCoh}(\mathfrak{t}^{*})\) as a \(\operatorname{QCoh}(\widetilde{\mathfrak{g}}^{*}/G)\)-module via the Kostant slice \(\kappa:\mathfrak{t}^{*}\to\widetilde{\mathfrak{g}}^{*}\). Then there is an equivalence \(\operatorname{End}_{\operatorname{QCoh}(\widetilde{\mathfrak{g}}^{*}/G)}( \operatorname{QCoh}(\mathfrak{t}^{*}))\simeq\operatorname{QCoh}((T^{*}T)^{ \operatorname{bl}})\)._
First proof of Proposition 4.1.5.: We may identify \(\operatorname{End}_{\operatorname{QCoh}(\widetilde{\mathfrak{g}}^{*}/G)}( \operatorname{QCoh}(\mathfrak{t}^{*}))\) with \(\operatorname{QCoh}(\mathfrak{t}^{*}\times_{\widetilde{\mathfrak{g}}^{*}/G} \mathfrak{t}^{*})\). We will show, in fact, that there is a Cartesian square
(8)
This is an analogue of [11, Proposition 2.2.1] and [1, Proposition 2.8]. (Note that since \(\mathfrak{t}^{*}\to\widetilde{\mathfrak{g}}^{*}\) lands in the open locus \(\widetilde{\mathfrak{g}}^{*,\operatorname{reg}}\), it does not matter whether we intersect \(\mathfrak{t}^{*}\) with itself in \(\widetilde{\mathfrak{g}}^{*}/G\) or in \(\widetilde{\mathfrak{g}}^{*,\operatorname{reg}}/G\); indeed, the intersection \(\widetilde{\mathfrak{g}}^{*,\operatorname{reg}}\times_{\widetilde{\mathfrak{g} }^{*}}\widetilde{\mathfrak{g}}^{*,\operatorname{reg}}\) is just \(\widetilde{\mathfrak{g}}^{*,\operatorname{reg}}\).) In what follows, it will be convenient (notationally) to use the chosen nondegenerate invariant bilinear form on \(\mathfrak{g}\) to identify \(\mathfrak{b}^{*}\) with the opposite Borel \(\mathfrak{b}^{-}\) and \(N\) with its opposite unipotent, and then to flip the role of \(\mathfrak{b}\) and \(\mathfrak{b}^{-}\), etc.
Recall that the Kostant slice \(\mathscr{S}\subseteq\mathfrak{g}\) is transverse to the regular \(G\)-orbits, and intersects each orbit exactly once; this implies that the image of the map \(\kappa:\mathfrak{t}\to\widetilde{\mathfrak{g}}\) is transverse to the regular \(G\)-orbits on \(\widetilde{\mathfrak{g}}\), and intersects each orbit exactly once. In particular, if \(C\) denotes the locally closed subvariety of \(\widetilde{\mathfrak{g}}\times G\) consisting of pairs \((x,g)\) with \(x\in\widetilde{\mathfrak{g}}^{\operatorname{reg}}\) and \(\operatorname{Ad}_{g}(x)=x\), then \(C/\!\!/G=\mathfrak{t}\times_{\widetilde{\mathfrak{g}}/G}\mathfrak{t}\) (so we may assume without loss of generality that \(x\in\mathfrak{t}\)). To compute \(C/\!\!/G\), one can reduce to the case when \(G\) has semisimple rank \(1\) by the argument of [1, Section 4.3]. To work out this case, we will assume \(G=\operatorname{SL}_{2},\operatorname{PGL}_{2}\).
There are "two" ways to compute in these cases; we will describe both, because each has its own conceptual advantages when generalizing to the multiplicative case (for instance). First, we present the argument which is essentially present in [1]; for this, we will assume \(G=\operatorname{SL}_{2}\). The Grothendieck-Springer resolution \(\widetilde{\mathfrak{g}}=T^{*}(\mathbf{A}^{2}-\{0\})/\mathbf{G}_{m}\) is the total space of \(\mathscr{O}(-1)\oplus\mathscr{O}(-1)\) over \(\mathbf{P}^{1}\); we will think of a point in \(\widetilde{\mathfrak{g}}\) as a pair \((x\in\mathfrak{sl}_{2},\ell\subseteq\mathbf{C}^{2})\) such that \(x\) preserves \(\ell\). The Kostant slice \(\kappa:\mathfrak{t}\cong\mathbf{A}^{1}\to\widetilde{\mathfrak{g}}\) is the map sending \(\lambda\in\mathbf{A}^{1}\) to the pair \((x,\ell)\) with \(x=\left(\begin{smallmatrix}0&\lambda^{2}\\ 1&0\end{smallmatrix}\right)\) and \(\ell=[\lambda:1]\). Indeed, this is essentially immediate from the requirement that the following diagram commutes:
Moreover, the \(\mathrm{SL}_{2}\)-action on \(\widetilde{\mathfrak{g}}\) sends \(g\in\mathrm{SL}_{2}\) and \((x,\ell)\) to \((\mathrm{Ad}_{g}(x),g\ell)\). If \(g=\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\), we compute that
\[\mathrm{Ad}_{g}\begin{pmatrix}0&\lambda^{2}\\ 1&0\end{pmatrix}=\begin{pmatrix}bd-ac\lambda^{2}&(a\lambda)^{2}-b^{2}\\ d^{2}-(c\lambda)^{2}&ac\lambda^{2}-bd\end{pmatrix},\ g\cdot[\lambda:1]=[a \lambda+b:c\lambda+d].\]
From this, we see that \(\mathrm{Ad}_{g}(x)=x\) if and only if \(a=d\) and \(b=c\lambda^{2}\), in which case \(g\) also fixes \([\lambda:1]\). In other words, \(g=\left(\begin{smallmatrix}a&c\lambda^{2}\\ c&a\end{smallmatrix}\right)\) with \(a,c\in k\); in order for \(\det(g)=1\), we need \(a^{2}-c^{2}\lambda^{2}=1\). When \(\lambda\neq 0\), both \(x\) and \(g\) are diagonalized by the matrix \(\frac{1}{2}\left(\begin{smallmatrix}1&-1\\ -\lambda^{-1}&-\lambda^{-1}\end{smallmatrix}\right)\in\mathrm{GL}_{2}\): the diagonalization of \(x\) is \(\left(\begin{smallmatrix}\lambda&0\\ 0&-\lambda\end{smallmatrix}\right)\), and the diagonalization of \(g\) is \(\left(\begin{smallmatrix}t&0\\ 0&w\end{smallmatrix}\right)\) where \(2a=t+w\) and \(2\lambda c=t-w\). Since we have \(\det(g)=a^{2}-(c\lambda)^{2}=1\), we have \(w=t^{-1}\). This shows that if \(k\) is not of characteristic \(2\), then \(\mathfrak{t}\times_{\widetilde{\mathfrak{sl}}_{2}/\mathrm{SL}_{2}}\mathfrak{t}\cong\mathrm{Spec}\,k[\lambda,t^{\pm 1},\frac{t-t^{-1}}{\lambda}]\).
The "second" way to reach this calculation (still with \(G=\mathrm{SL}_{2}\)) is to use the fact that \(\kappa:\mathfrak{t}\to\widetilde{\mathfrak{g}}/G\) can be identified with the composite \(\mathfrak{t}\to\mathfrak{b}\to\mathfrak{b}/B\) sending \(x\mapsto f+x\). Then, \(\mathfrak{t}\times_{\mathfrak{b}/B}\mathfrak{t}\) is isomorphic to the subvariety of \(\mathfrak{t}\times B\) consisting of pairs \((x,g)\) with \(x\in\mathfrak{t}\) and \(\mathrm{Ad}_{g}(x+f)=x+f\). If \(g=\left(\begin{smallmatrix}a&0\\ b&a^{-1}\end{smallmatrix}\right)\in B\), then
\[\mathrm{Ad}_{g}\begin{pmatrix}x&0\\ 1&-x\end{pmatrix}=\begin{pmatrix}x&0\\ 2a^{-1}bx+a^{-2}&-x\end{pmatrix}.\]
Therefore, \(\mathrm{Ad}_{g}(x+f)=x+f\) if and only if
\[2a^{-1}bx+a^{-2}=1,\]
which forces \(b=\frac{a-a^{-1}}{2x}\). This implies that \(\mathfrak{t}\times_{\mathfrak{b}/B}\mathfrak{t}\) is isomorphic to \(\mathrm{Spec}\,k[x,a^{\pm 1},\frac{a-a^{-1}}{x}]\), as desired.
We will now do the calculation with \(G=\mathrm{PGL}_{2}\) via the second method. Again, \(\mathfrak{t}\times_{\mathfrak{b}/B}\mathfrak{t}\) is isomorphic to the subvariety of \(\mathfrak{t}\times B\) consisting of pairs \((x,g)\) with \(x\in\mathfrak{t}\) (identified with the matrix \(\left(\begin{smallmatrix}x&0\\ 0&0\end{smallmatrix}\right)\in\mathfrak{gl}_{2}\)) and \(\mathrm{Ad}_{g}(x+f)=x+f\). If \(g=\left(\begin{smallmatrix}a&0\\ b&1\end{smallmatrix}\right)\in B\), then
\[\mathrm{Ad}_{g}\begin{pmatrix}x&0\\ 1&0\end{pmatrix}=\begin{pmatrix}x&0\\ (bx+1)a^{-1}&0\end{pmatrix}.\]
Therefore, \(\mathrm{Ad}_{g}(x+f)=x+f\) if and only if
\[(bx+1)a^{-1}=1,\]
which forces \(b=\frac{a-1}{x}\). This implies that \(\mathfrak{t}\times_{\mathfrak{b}/B}\mathfrak{t}\) is isomorphic to \(\mathrm{Spec}\,k[x,a^{\pm 1},\frac{1-a}{x}]\), as desired.
Second proof of Proposition 4.1.5.: As in the first proof of Proposition 4.1.5, it will be convenient to use the chosen nondegenerate invariant bilinear form on \(\mathfrak{g}\) to identify \(\mathfrak{b}^{*}\) with the opposite Borel \(\mathfrak{b}^{-}\) and \(N\) with its opposite unipotent, and then to flip the role of \(\mathfrak{b}\) and \(\mathfrak{b}^{-}\), etc. We will prove the following variant of Proposition 4.1.5, which in turn implies the desired result: view \(\mathrm{QCoh}(\mathfrak{t}^{*}/\!\!/W)\) as a \(\mathrm{QCoh}(\mathfrak{g}^{*}/G)\)-module via the Kostant slice. Then there is an equivalence \(\mathrm{End}_{\mathrm{QCoh}(\mathfrak{g}^{*}/G)}(\mathrm{QCoh}(\mathfrak{t}^{*}/\!\!/W))\simeq\mathrm{QCoh}((T^{*}T)^{\mathrm{bl}}/\!\!/W)\).

Let \(\chi\) be a nondegenerate character on \(\mathfrak{n}^{-}\). The action of \(N^{-}\times N^{-}\) on \(G\) by left and right translation induces a Hamiltonian \(N^{-}\times N^{-}\)-action on \(T^{*}G\); let \(N^{-}{}_{\chi}\backslash(T^{*}G)/_{\chi}N^{-}\) denote the bi-Whittaker reduction of \(T^{*}G\) with respect to this action at the character \(\chi\in\mathfrak{n}^{-,*}\) on each factor. Then \((T^{*}T)^{\mathrm{bl}}/\!\!/W\cong N^{-}{}_{\chi}\backslash(T^{*}G)/_{\chi}N^{-}\); see [14, Theorem 6.3], for instance. There is a Morita equivalence between \(\mathrm{QCoh}(\mathfrak{g}^{*}/G)\) and
\(\operatorname{QCoh}(T^{*}G)\) (equipped with the convolution monoidal structure); under this equivalence, the \(\operatorname{QCoh}(\mathfrak{g}^{*}/G)\)-module \(\operatorname{QCoh}(\mathfrak{g}^{*}/_{\chi}N^{-})\) is sent to the \(\operatorname{QCoh}(T^{*}G)\)-module \(\operatorname{QCoh}((T^{*}G)/_{\chi}N^{-})\). We conclude the series of equivalences:
\[\operatorname{QCoh}((T^{*}T)^{\operatorname{bl}}/\!\!/W) \simeq\operatorname{QCoh}(N^{-}{}_{\chi}\backslash(T^{*}G)/_{ \chi}N^{-})\] \[\simeq\operatorname{End}_{\operatorname{QCoh}(T^{*}G)}( \operatorname{QCoh}((T^{*}G)/_{\chi}N^{-}))\] \[\simeq\operatorname{End}_{\operatorname{QCoh}(\mathfrak{g}^{*}/G )}(\operatorname{QCoh}(\mathfrak{g}^{*}/_{\chi}N^{-})).\]
However, Kostant's theorem identifies \(\mathfrak{g}^{*}/_{\chi}N^{-}\) with \(\mathfrak{t}^{*}/\!\!/W\) (viewed as a substack of \(\mathfrak{g}^{*}/G\) via the Kostant slice), which finishes the proof.
Proof of Theorem 4.1.2.: By Theorem 3.2.12, we have \(\operatorname{H}_{0}^{T}(\operatorname{Gr}_{G}(\mathbf{C});A)=\pi_{0}\mathscr{F}_{T}(\operatorname{Gr}_{G}(\mathbf{C}))^{\vee}\cong\mathscr{O}_{(T^{*}\check{T})^{\operatorname{bl}}}\). It follows that \(\operatorname{LMod}_{\operatorname{H}_{*}^{T}(\operatorname{Gr}_{G}(\mathbf{C});A)}\simeq\operatorname{QCoh}((T^{*}\check{T})_{A}^{\operatorname{bl}})\). Since \(\operatorname{End}_{\operatorname{IndCoh}((\widetilde{\mathscr{S}}\times_{\mathfrak{g}}\{0\})/\check{G})}(\operatorname{QCoh}(\tilde{\mathfrak{t}}^{*}))\simeq\operatorname{QCoh}((T^{*}\check{T})^{\operatorname{bl}})\) by Lemma 4.1.4 and Proposition 4.1.5, we conclude the desired result.
**Remark 4.1.6**.: So far, we have not emphasized the role of Whittaker reduction in the above story (except for the second proof of Proposition 4.1.5). However, we take a moment to describe this briefly, since it is a key aspect of Langlands duality. Recall that a theorem of Kostant's gives an isomorphism \((f+\mathfrak{b})/N\cong\mathscr{S}=f+\mathfrak{g}^{e}\). In terms of Whittaker reduction, this says that \(\mathscr{S}\cong\mathfrak{g}/_{\chi}N^{-}\). Since Proposition 4.1.5 is concerned with \(\widetilde{\mathfrak{g}}\) instead of \(\mathfrak{g}\), we need a slight variant of this statement. Namely, recall the map \(\pi:\widetilde{\mathfrak{g}}\to\mathfrak{g}\), let \(\mu:\mathfrak{g}\to\mathfrak{n}\) be the moment map for the adjoint \(N\)-action on \(\mathfrak{g}\), and let \(\widetilde{\mu}\) denote the composite \(\widetilde{\mathfrak{g}}\to\mathfrak{g}\to\mathfrak{n}\). Then \(\mu^{-1}(f)\) is the variety \(f+\mathfrak{b}\), so that \(\widetilde{\mu}^{-1}(f)\) is the subscheme of \(\widetilde{\mathfrak{g}}\) spanned by those pairs \((\mathfrak{b}^{\prime},y\in\mathfrak{b}^{\prime}\cap(f+\mathfrak{b}))\). Kostant's result implies that there is an isomorphism \(\widetilde{\mu}^{-1}(f)/N^{-}\xrightarrow{\sim}\mathfrak{t}\). Whittaker reduction is a key aspect of the Langlands-dual side of Theorem 4.1.2: it is needed to even define the action of \(\operatorname{QCoh}(\widetilde{\mathfrak{g}}^{*}/G)\) on \(\operatorname{QCoh}(\mathfrak{t}^{*})\).
**Example 4.1.7**.: Note that Theorem 4.1.2 implies that \(\operatorname{H}_{0}(\Omega G_{c};\mathbf{Q}[\beta^{\pm 1}])\) can be identified with the ring of functions on the centralizer \(Z_{f}(\check{G})\) of a regular nilpotent element \(f\in\check{\mathfrak{g}}\) over \(\mathbf{Q}\). In type \(A\) at least, one can directly check that there is such an isomorphism. (Exactly the same argument works in the K-theoretic and elliptic cases, too; in the K-theoretic case, one instead considers the centralizer of a regular _unipotent_ element \(f\in\check{G}\).) For instance, if \(\check{G}=\operatorname{SL}_{n}\), the centralizer \(Z_{f}(\check{G})\) is the direct product of \(\mu_{n}\) with a connected (commutative) unipotent group \(U_{n}\). If \((x_{1},\cdots,x_{n-1})\) is a point in \(U_{n}\) (corresponding to the element in \(Z_{f}(\operatorname{SL}_{n})\) given by the \(n\times n\)-matrix whose \(j\)th row is \((0,\cdots,0,1,x_{1},\cdots,x_{n-j})\)), the group operation is given by
\[(x_{1},\cdots,x_{n-1})\cdot(y_{1},\cdots,y_{n-1})=(x_{1}+y_{1},\cdots,x_{n-1}+x _{n-2}y_{1}+\cdots+x_{1}y_{n-2}+y_{n-1}).\]
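(As an aside, one way to make this group law transparent: the matrix above is \(I+\sum_{k=1}^{n-1}x_{k}N^{k}\), where \(N\) is the upper shift matrix with \(N_{j,j+1}=1\) and \(N^{n}=0\); sending it to the truncated power series \(1+x_{1}t+\cdots+x_{n-1}t^{n-1}\) therefore identifies \(U_{n}\) with the group of units
\[1+tR[t]/t^{n}\subseteq(R[t]/t^{n})^{\times}\]
under multiplication, and the displayed formula is simply multiplication of power series modulo \(t^{n}\).)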
The group scheme \(U_{n}\) is isomorphic over \(\mathbf{Q}\) to \(\mathbf{G}_{a}^{\times n-1}\), via Newton's identities, which express the power sum symmetric polynomials in terms of the elementary symmetric polynomials. For instance, the isomorphism between \(U_{6}\subseteq Z_{f}(\operatorname{SL}_{6})\) and \(\mathbf{G}_{a}^{\times 5}\) is given by the map
\[(x_{1},\cdots,x_{5}) \mapsto(x_{1},x_{1}^{2}-2x_{2},x_{1}^{3}-3x_{1}x_{2}+3x_{3},x_{1}^{ 4}+2x_{2}^{2}-4x_{4}-4x_{2}x_{1}^{2}+4x_{1}x_{3}, \tag{9}\] \[x_{1}^{5}-5x_{1}^{3}x_{2}+5x_{1}^{2}x_{3}-5x_{1}(x_{4}-x_{2}^{ 2})-5x_{2}x_{3}+5x_{5}).\]
In general, the transformation can be determined by extracting the coefficient of \(t^{n}/n\) in the power series \(-\log\left(\sum_{j\geq 0}x_{j}(-t)^{j}\right)\).
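In low degrees this is simply Newton's identities; as a sanity check (a direct expansion, not needed in what follows),
\[-\log\big(1-x_{1}t+x_{2}t^{2}-x_{3}t^{3}+\cdots\big)=x_{1}t+\frac{x_{1}^{2}-2x_{2}}{2}\,t^{2}+\frac{x_{1}^{3}-3x_{1}x_{2}+3x_{3}}{3}\,t^{3}+\cdots,\]
so that the coefficients of \(t^{n}/n\) recover the first three components of (9).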
On the other hand, \(G_{c}\) is a maximal compact subgroup of \(\operatorname{PGL}_{n}(\mathbf{C})\), and there is a homotopy equivalence \(\Omega\mathrm{PGL}_{n}(\mathbf{C})\simeq\mathbf{Z}/n\times\Omega\mathrm{SU}(n)\), so that
\[\operatorname{H}_{0}(\Omega\mathrm{PGL}_{n}(\mathbf{C});\mathbf{Q}[\beta^{ \pm 1}])\simeq\mathbf{Q}[x^{\pm 1}]/(x^{n}-1)\otimes_{\mathbf{Z}}\operatorname{H}_{0 }(\Omega\mathrm{SU}(n);\mathbf{Z}[\beta^{\pm 1}]).\]
Under Langlands duality, the \(\mu_{n}\) factor in \(Z_{f}(\mathrm{SL}_{n})\) comes from the first tensor factor. Similarly, \(\operatorname{Spec}\operatorname{H}_{0}(\Omega\mathrm{SU}(n);\mathbf{Z}[\beta^{\pm 1}])\) is a connected unipotent group scheme: for instance, there is a Bott periodicity equivalence \(\Omega\mathrm{SU}\simeq\mathrm{BU}\) (where \(\mathrm{SU}=\operatorname{colim}_{n\to\infty}\mathrm{SU}(n)\)), so \(\operatorname{Spec}\operatorname{H}_{0}(\Omega\mathrm{SU};\mathbf{Z}[\beta^{\pm 1}])\) can be identified with the big Witt ring scheme \(\mathbf{W}\) over \(\mathbf{Z}\). This group scheme is unipotent over \(\mathbf{Z}\), and the ghost components define an isomorphism to \(\prod_{n\geq 1}\mathbf{G}_{a}\) upon rationalization (see [11, Theorem II.6.7] for a textbook reference). The group scheme \(\operatorname{Spec}\operatorname{H}_{0}(\Omega\mathrm{SU}(n);\mathbf{Z}[\beta^{\pm 1}])\) is a quotient of \(\mathbf{W}\) (hence is unipotent): in fact, it is isomorphic to the group scheme \(\mathbf{W}_{n-1}\) of big Witt vectors of length \(n-1\). Since this is rationally isomorphic to \(\mathbf{G}_{a}^{\times n-1}\), we see that
\[\operatorname{Spec}\operatorname{H}_{0}(\Omega\mathrm{PGL}_{n}(\mathbf{C}); \mathbf{Q}[\beta^{\pm 1}])\cong\mu_{n}\times\mathbf{W}_{n-1}\cong\mu_{n} \times\mathbf{G}_{a}^{\times n-1}\cong Z_{f}(\mathrm{SL}_{n}),\]
as desired. Note, however, that the isomorphism \(\mathbf{W}_{n-1}\cong U_{n}\subseteq Z_{f}(\mathrm{SL}_{n})\) is somewhat tricky to write down in coordinates. As an example, using the formula for the ghost components in the big Witt vectors, it is easy to see that the formula (9) implies that the isomorphism \(Z_{f}(\mathrm{SL}_{6})\supseteq U_{6}\xrightarrow{\sim}\mathbf{W}_{5}\) sends \((x_{1},\cdots,x_{5})\) to the Witt vector
\[(x_{1},\cdots,x_{5})\mapsto(x_{1},-x_{2},x_{3}-x_{1}x_{2},x_{1}x_{3}-x_{2}x_{1 }^{2}-x_{4},x_{5}-x_{1}^{3}x_{2}+x_{1}^{2}x_{3}-x_{1}(x_{4}-x_{2}^{2})-x_{2}x _{3}).\]
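As a consistency check (using the standard ghost components \(\mathrm{gh}_{n}(w_{1},w_{2},\cdots)=\sum_{d\mid n}d\,w_{d}^{n/d}\); this is only a low-degree verification), the displayed Witt vector satisfies
\[\mathrm{gh}_{2}=w_{1}^{2}+2w_{2}=x_{1}^{2}-2x_{2},\qquad\mathrm{gh}_{3}=w_{1}^{3}+3w_{3}=x_{1}^{3}-3x_{1}x_{2}+3x_{3},\]
which are exactly the second and third components of (9).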
**Remark 4.1.8**.: One special feature of rational homology which sets it apart from K-theory or elliptic cohomology is that it can be de-periodified. On the Langlands-dual side, this equips the relevant geometric objects with a \(\mathbf{G}_{m}\)-action, i.e., with a grading. Continuing Example 4.1.7, there is still an isomorphism
\[\operatorname{H}_{*}(\Omega\mathrm{PGL}_{n}(\mathbf{C});\mathbf{Q})\simeq \mathbf{Q}[x^{\pm 1}]/(x^{n}-1)\otimes_{\mathbf{Z}}\operatorname{H}_{*}(\Omega \mathrm{SU}(n);\mathbf{Z}),\]
and there is still an isomorphism \(\operatorname{Spec}\operatorname{H}_{*}(\Omega\mathrm{SU}(n);\mathbf{Z})\cong\mathbf{W}_{n-1}\). Here, the grading on \(\operatorname{H}_{*}(\Omega\mathrm{SU}(n);\mathbf{Z})\) by half the homological degree corresponds to the \(\mathbf{G}_{m}\)-action on \(\mathbf{W}_{n-1}\) defined as follows: if we view \(\mathbf{W}_{n-1}(R)=1+tR[t]/t^{n}\subseteq(R[t]/t^{n})^{\times}\), the coordinate \(t\) is given weight \(-1\). This defines a grading on \(Z_{f}(\mathrm{SL}_{n})\), which can also be described directly in general as follows (see [10]). The element \(2\rho=\sum_{\alpha\in\Phi^{+}}\alpha\in\mathbb{X}^{*}(T)\cong\mathbb{X}_{*}(\check{T})\) defines a homomorphism \(2\rho:\mathbf{G}_{m}\to\check{T}\), which defines a \(\mathbf{G}_{m}\)-action on \(\check{\mathfrak{g}}\). This \(\mathbf{G}_{m}\)-action stabilizes the Kostant section \(f+\check{\mathfrak{g}}^{e}\), and hence defines a \(\mathbf{G}_{m}\)-action on \(Z_{f}(\check{G})\); this is the grading on \(\mathscr{O}_{Z_{f}(\check{G})}\) corresponding to half the homological grading on \(\operatorname{H}_{*}(\Omega G_{c};\mathbf{Q})\).
**Remark 4.1.9**.: In [1], the following analogue of (8) is established (over \(\mathbf{C}\), but this does not affect the statement): there is a Cartesian square
(10)
where the top-left corner can be identified with \(\operatorname{Spec}\pi_{0}C_{*}^{G}(\operatorname{Gr}_{G}(\mathbf{C});\mathbf{Q})\). We can take the fiber product of (8) with itself over (10) to obtain a Cartesian square
(11)
Using Theorem 4.1.2 and the above discussion, one can use (11) to show that \(\operatorname{End}_{\operatorname{QCoh}((\widetilde{\mathfrak{g}}\times_{\mathfrak{g}}\widetilde{\mathfrak{g}})/\check{G})}(\operatorname{QCoh}(\mathfrak{t}\times_{\mathfrak{t}/\!\!/W}\mathfrak{t}))\) can be identified with \(\operatorname{LMod}_{\pi_{0}C_{*}^{T}(\operatorname{Fl}_{G}(\mathbf{C});\mathbf{Q}[\beta^{\pm 1}])}\). This can be viewed as a "once-looped" version of Bezrukavnikov's equivalence from [1].
One can quantize Theorem 4.1.2 as follows.
**Definition 4.1.10**.: Following [15], define the (Langlands dual) _universal category_\(\tilde{\mathscr{O}}_{\hbar}^{\operatorname{univ}}\) to be \(\operatorname{DMod}_{\hbar}(\tilde{G}/\tilde{N})^{(\tilde{G}\times\tilde{T},w )}\simeq U_{\hbar}(\tilde{\mathfrak{g}})\text{-}\mathrm{mod}^{\tilde{N},( \tilde{T},w)}\). The \(\infty\)-category \(\tilde{\mathscr{O}}_{\hbar}^{\operatorname{univ}}\) is a quantization of \(\operatorname{QCoh}(\tilde{\mathfrak{b}}^{-}/\tilde{B}^{-})\), since there are isomorphisms
\[\tilde{\mathfrak{b}}^{-}/\tilde{B}^{-}\cong\widetilde{\mathfrak{g}}/\tilde{G} \cong\tilde{T}\backslash T^{*}(\tilde{G}/\tilde{N})/\tilde{G}.\]
**Theorem 4.1.11**.: _Let \(A\) be an \(\mathbf{E}_{\infty}\)-\(\mathbf{C}[\beta^{\pm 1}]\)-algebra, and let \(G\) be a connected and simply-connected semisimple algebraic group or a torus over \(\mathbf{C}\). Then there is a Kostant functor \(\tilde{\mathscr{O}}_{\hbar}^{\operatorname{univ}}\to\operatorname{QCoh}( \tilde{\mathfrak{t}}^{*}\times\mathbf{A}_{\hbar}^{1})\) and a left \(A[\![\hbar]\!]\)-linear equivalence_
\[\operatorname{LMod}_{\pi_{0}C_{*}^{\tilde{T}}(\operatorname{Gr}_{G}( \mathbf{C});A)}\simeq\operatorname{End}_{\tilde{\mathscr{O}}_{\hbar}^{ \operatorname{univ}}}(\operatorname{QCoh}(\tilde{\mathfrak{t}}^{*}\times \mathbf{A}_{\hbar}^{1})).\]
Proof sketch; compare to the second proof of Proposition 4.1.5. We will assume \(A=\mathbf{C}[\beta^{\pm 1}]\), so that \(\pi_{0}C_{*}^{\tilde{T}}(\operatorname{Gr}_{G}(\mathbf{C});A)\) is a \(2\)-periodification of \(\pi_{*}C_{*}^{\tilde{T}}(\operatorname{Gr}_{G}(\mathbf{C});\mathbf{C})\). Let \(\mathscr{H}(\widetilde{\mathfrak{t}}^{*},\widetilde{W}^{\operatorname{aff}})\) be the nil-Hecke algebra associated to \(\widetilde{\mathfrak{t}}^{*}\cong\mathfrak{t}^{*}\oplus\mathbf{C}\alpha_{0}\), and let \(e=\frac{1}{\#W}\sum_{w\in W}w\in\mathbf{Q}[W]\) be the symmetrizer idempotent. Using Corollary 3.2.4, one can then show that \(\operatorname{H}_{*}^{\tilde{T}}(\operatorname{Gr}_{G}(\mathbf{C});\mathbf{C})\) is isomorphic to \(\mathscr{O}_{\tilde{\mathfrak{t}}^{*}}\otimes_{\mathscr{O}_{\tilde{\mathfrak{t}}^{*}/\!\!/W}}e\mathscr{H}(\widetilde{\mathfrak{t}}^{*},\widetilde{W}^{\operatorname{aff}})e\), where the loop rotation parameter \(\hbar\) corresponds to the affine root \(\alpha_{0}\); see [16]. This implies that \(\operatorname{LMod}_{\pi_{0}C_{*}^{\tilde{T}}(\operatorname{Gr}_{G}(\mathbf{C});A)}\) can be identified with \(\operatorname{QCoh}(\tilde{\mathfrak{t}}^{*})\otimes_{\operatorname{QCoh}(\tilde{\mathfrak{t}}^{*}/\!\!/W)}\operatorname{LMod}_{e\mathscr{H}(\widetilde{\mathfrak{t}}^{*},\widetilde{W}^{\operatorname{aff}})e}\).
We now construct the Kostant functor \(\kappa_{\hbar}:\tilde{\mathscr{O}}_{\hbar}^{\operatorname{univ}}\to \operatorname{QCoh}(\tilde{\mathfrak{t}}^{*}\times\mathbf{A}_{\hbar}^{1})\). Recall that the Kostant functor \(\operatorname{HC}_{\hbar}(\tilde{G})\to\operatorname{QCoh}(\tilde{\mathfrak{t }}^{*}/\!\!/W\times\mathbf{A}_{\hbar}^{1})\) is given by the composite
\[\operatorname{HC}_{\hbar}(\tilde{G})=\operatorname{DMod}_{\hbar}(\tilde{G})^{( \tilde{G}\times\tilde{G},w)}\to\operatorname{DMod}_{\hbar}(\tilde{G})^{( \tilde{G},w)}\to\operatorname{DMod}_{\hbar}(\tilde{N}^{-}\backslash_{\chi} \tilde{G})^{(\tilde{G},w)}.\]
However, the final term is equivalent to \(U_{\hbar}(\tilde{\mathfrak{g}})\text{-}\mathrm{mod}^{(\tilde{N}^{-},\chi)}\), which in turn can be identified with \(\operatorname{QCoh}(\tilde{\mathfrak{t}}^{*}/\!\!/W\times\mathbf{A}_{\hbar}^{1})\) by the Skryabin equivalence (see the appendix of [1]). Similarly, the desired Kostant functor on \(\tilde{\mathscr{O}}_{\hbar}^{\operatorname{univ}}\) is also given by Whittaker averaging: there is a composite
\[\tilde{\mathscr{O}}_{\hbar}^{\operatorname{univ}}=\operatorname{DMod}_{\hbar}( \tilde{G}/\tilde{N})^{(\tilde{G}\times\tilde{T},w)}\to\operatorname{DMod}_{ \hbar}(\tilde{G}/\tilde{N})^{(\tilde{T},w)}\xrightarrow{\operatorname{Av}_{ \mathfrak{t}^{\chi}}^{\chi}}\operatorname{DMod}_{\hbar}(\tilde{N}^{-}\backslash_{ \chi}\tilde{G}/\tilde{N})^{(\tilde{T},w)}.\]
However, the final term is equivalent by a standard argument to \(\mathrm{DMod}_{\hbar}(\check{T})^{(\check{T},w)}\simeq\mathrm{QCoh}(\check{\mathfrak{t}}^{*}\times\mathbf{A}^{1}_{\hbar})\). Note that by construction, the following diagram commutes:
Here, the horizontal maps are given by the Kostant functors.
To finish, we need to show that \(\mathrm{QCoh}(\check{\mathfrak{t}}^{*})\otimes_{\mathrm{QCoh}(\check{\mathfrak{t}}^{*}/\!\!/W)}\mathrm{LMod}_{e\mathscr{H}(\check{\mathfrak{t}}^{*},\widetilde{W}^{\mathrm{aff}})e}\) is equivalent to \(\mathrm{End}_{\check{\mathscr{O}}_{\hbar}^{\mathrm{univ}}}(\mathrm{QCoh}(\check{\mathfrak{t}}^{*}\times\mathbf{A}^{1}_{\hbar}))\). There is an equivalence
\[\mathrm{QCoh}(\check{\mathfrak{t}}^{*}\times\mathbf{A}^{1}_{\hbar})\simeq\check{\mathscr{O}}_{\hbar}^{\mathrm{univ}}\otimes_{\mathrm{HC}_{\hbar}(G)}\mathrm{QCoh}(\check{\mathfrak{t}}^{*}/\!\!/W\times\mathbf{A}^{1}_{\hbar}),\]
so that
\[\mathrm{End}_{\check{\mathscr{O}}_{\hbar}^{\mathrm{univ}}}(\mathrm{QCoh}(\check{\mathfrak{t}}^{*}\times\mathbf{A}^{1}_{\hbar}))\simeq\mathrm{QCoh}(\check{\mathfrak{t}}^{*})\otimes_{\mathrm{QCoh}(\check{\mathfrak{t}}^{*}/\!\!/W)}\mathrm{End}_{\mathrm{HC}_{\hbar}(\check{G})}(\mathrm{QCoh}(\check{\mathfrak{t}}^{*}/\!\!/W\times\mathbf{A}^{1}_{\hbar})).\]
The desired claim now follows from the observation that there is an isomorphism \(\check{N}^{-}{}_{\chi}\backslash\mathscr{D}_{\check{G}}/_{\chi}\check{N}^{-}\cong e\mathscr{H}(\widetilde{\mathfrak{t}}^{*},\widetilde{W}^{\mathrm{aff}})e\) given by [1, Theorem 8.1.2], which gives an equivalence between \(\mathrm{End}_{\mathrm{HC}_{\hbar}(\check{G})}(\mathrm{QCoh}(\check{\mathfrak{t}}^{*}/\!\!/W\times\mathbf{A}^{1}_{\hbar}))\) and \(\mathrm{LMod}_{e\mathscr{H}(\widetilde{\mathfrak{t}}^{*},\widetilde{W}^{\mathrm{aff}})e}\).
**Remark 4.1.12**.: In fact, one can quantize the result of [1]: namely, there is an equivalence
\[\mathrm{DMod}_{I\rtimes\mathbf{G}_{m}^{\mathrm{rot}}}(\mathrm{Gr}_{G})\simeq \check{\mathscr{O}}_{\hbar}^{\mathrm{univ}}. \tag{12}\]
We do not have a reference for this fact when \(G\) lives over \(\mathbf{C}\), but it can be deduced using the equivalence of [1, Section 1.6] and the arguments of [1]. I am grateful to Tom Gannon for discussions about this equivalence. (If \(G\) lives over \(\overline{\mathbf{F}_{p}}\) and \(\mathrm{DMod}\) is replaced with \(\overline{\mathbf{Q}_{\ell}}\)-adic sheaves, then (12) can be deduced from [1, Theorem 84] and the parabolic-Whittaker duality for the affine Grassmannian from [1].) Just as with Theorem 4.1.2, Theorem 4.1.11 may be regarded as a "once-looped" version of (12). One can similarly show that there is an equivalence
\[\mathrm{DMod}_{I\rtimes\mathbf{G}_{m}^{\mathrm{rot}}}(\mathrm{Fl}_{G})\simeq \mathrm{DMod}_{\hbar}(\bar{N}\backslash\check{G}/\bar{N})^{(\check{T}\times \check{T},\mathrm{wk})}, \tag{13}\]
which quantizes Bezrukavnikov's equivalence from [1]. Note that \(\check{T}\backslash T^{*}(\check{N}\backslash\check{G}/\check{N})/\check{T}\) is isomorphic to \((\widetilde{\mathfrak{g}}\times_{\mathfrak{g}}\widetilde{\mathfrak{g}})/\check{G}\), so that this equivalence does indeed quantize Bezrukavnikov's equivalence
\[\mathrm{DMod}_{I}(\mathrm{Fl}_{G})\simeq\mathrm{QCoh}((\widetilde{\mathfrak{ g}}[2]\times_{\mathfrak{g}[2]}\widetilde{\mathfrak{g}}[2])/\check{G}).\]
**Remark 4.1.13**.: If \(G\) is a connected and simply-connected semisimple algebraic group or a torus over \(\mathbf{C}\), let \(\mathrm{HC}_{\hbar}(\check{G})\) denote the \(\infty\)-category \(U_{\hbar}(\check{\mathfrak{g}})\)-\(\mathrm{mod}^{\check{G},w}\). Then \(\Gamma(\check{G};\mathscr{D}_{\check{G}})^{\check{G}\times\check{G}}\cong U(\check{\mathfrak{g}})^{\check{G}}\cong\mathrm{Sym}(\check{\mathfrak{t}})^{W}\). An argument very similar to Theorem 4.1.11 proves that there is a Kostant functor \(\mathrm{HC}_{\hbar}(\check{G})\to\mathrm{QCoh}(\check{\mathfrak{t}}^{*}/\!\!/W\times\mathbf{A}^{1}_{\hbar})\) and a left \(A[\![\hbar]\!]\)-linear equivalence
\[\mathrm{LMod}_{\pi_{0}C_{*}^{G\times S_{\mathrm{rot}}^{1}}(\mathrm{Gr}_{G}(\mathbf{C});A)}\simeq\mathrm{End}_{\mathrm{HC}_{\hbar}(\check{G})}(\mathrm{QCoh}(\check{\mathfrak{t}}^{*}/\!\!/W\times\mathbf{A}^{1}_{\hbar})). \tag{14}\]
This is closely related to [1], [1], and [1, Theorem 1.4]. Let \(\check{\mathfrak{t}}/\!\!/\widetilde{W}^{\mathrm{aff}}\) be the coarse quotient as defined in [1]. Then, the aforementioned articles provide a monoidal "Fourier transform" equivalence
\(\operatorname{LMod}_{\operatorname{H}^{G\times S^{1}_{\operatorname{rot}}}_{*}(\operatorname{Gr}_{G}(\mathbf{C});\mathbf{C})}\simeq\operatorname{IndCoh}(\check{\mathfrak{t}}/\!\!/\widetilde{W}^{\operatorname{aff}})\). Note that combined with the preceding discussion, we obtain an equivalence
\[\operatorname{IndCoh}(\check{\mathfrak{t}}/\!\!\!/\widetilde{W}^{ \operatorname{aff}})\simeq\operatorname{End}_{\operatorname{HC}(\check{G})}( \operatorname{QCoh}(\check{\mathfrak{t}}^{*}/\!\!/W)). \tag{15}\]
There is also an equivalence (see [11])
\[\operatorname{End}_{\operatorname{Shv}_{G\times S^{1}_{\operatorname{rot}}}( \operatorname{Gr}_{G};\mathbf{C})}(\operatorname{QCoh}(\check{\mathfrak{t}}^{* }/\!\!/W))\simeq\operatorname{LMod}_{\operatorname{H}^{G\times S^{1}_{ \operatorname{rot}}}_{*}(\operatorname{Gr}_{G}(\mathbf{C});\mathbf{C})}\simeq \operatorname{IndCoh}(\check{\mathfrak{t}}/\!\!/\widetilde{W}^{\operatorname{ aff}}),\]
and its relationship to (15) is explained by the derived loop-rotation equivariant geometric Satake equivalence of [10].
In the same way, we have the following result. We expect that the techniques of [1] can be used to show that this implies the equivalences conjectured in [10, Remark 6.22].
**Proposition 4.1.14**.: _We have:_
\[\operatorname{IndCoh}(\check{\mathfrak{t}}/\!\!\!/\widetilde{W}^{ \operatorname{aff}})\simeq\operatorname{End}_{\operatorname{DMod}(\check{N} \setminus\check{G}/\check{N})^{(T\times T,w)}}(\operatorname{QCoh}(\check{ \mathfrak{t}}^{*})), \tag{16}\]
Proof.: The equivalence (16) is proved via:
\[\operatorname{IndCoh}(\check{\mathfrak{t}}/\!\!\!/\widetilde{W}^{ \operatorname{aff}}) \simeq\operatorname{DMod}(\check{N}^{-}{}_{\chi}\backslash\check{ G}/_{\chi}\check{N}^{-})\] \[\simeq\operatorname{End}_{\operatorname{DMod}(\check{G})}( \operatorname{DMod}(\check{G}/_{\chi}\check{N}^{-}))\] \[\simeq\operatorname{End}_{\operatorname{DMod}(\check{N} \setminus\check{G}/\check{N})^{(T\times T,w)}}(\operatorname{DMod}(\check{N} \backslash\check{G}/_{\chi}\check{N}^{-})^{\hat{T},w})\] \[\simeq\operatorname{End}_{\operatorname{DMod}(\check{N} \setminus\check{G}/\check{N})^{(T\times T,w)}}(\operatorname{QCoh}(\check{ \mathfrak{t}}^{*})).\]
The third equivalence above uses [1, Corollary 1.2], and the fourth equivalence above is the well-known fact that restriction to the big cell in \(\check{G}\) defines an equivalence \(\operatorname{DMod}(\check{N}\backslash\check{G}/_{\chi}\check{N}^{-}) \xrightarrow{\sim}\operatorname{DMod}(\check{N}\backslash\check{B}/_{\chi} \check{N}^{-})\simeq\operatorname{DMod}(\check{T})\); see [10, Proposition 1.8], for instance.
**Remark 4.1.15**.: Since \(\check{\mathfrak{g}}/\check{G}=\operatorname{Map}(B\mathbf{G}_{a},B\check{ G})\), the canonical orientation of \(B\mathbf{G}_{a}\) defines a \(1\)-shifted symplectic structure on \(\check{\mathfrak{g}}/\check{G}\) via [12, Theorem 2.5]. The quasi-classical limit (i.e., \(\hbar\to 0\)) of the quantized equivalence (14) gives the following strengthening of Theorem 4.1.2. The Kostant slice \(\check{\mathfrak{t}}/\!\!\!/W\to\check{\mathfrak{g}}/\check{G}\) is a Lagrangian morphism by [11, Proposition 4.18], so that the self-intersection \(\check{\mathfrak{t}}/\!\!\!/W\times_{\check{\mathfrak{g}}/\check{G}}\check{ \mathfrak{t}}/\!\!\!/W\) admits the structure of a symplectic stack (using [12, Theorem 2.9]). Since this fiber product is isomorphic to \((T^{*}\check{T})^{\operatorname{bl}}/\!\!\!/W\) by (10), we obtain a Poisson bracket on \(\mathscr{O}_{(T^{*}\hat{T})^{\operatorname{bl}}/\!\!\!/W}\cong\operatorname{ H}^{G}_{*}(\operatorname{Gr}_{G}(\mathbf{C});\mathbf{C})\). This structure can be seen topologically, at least after a completion: using one of the main results of [13], the Borel-equivariant analogue/completion \(C_{*}(\operatorname{Gr}_{G}(\mathbf{C});\mathbf{C})^{hG_{c}}\) of \(C^{G}_{*}(\operatorname{Gr}_{G}(\mathbf{C});\mathbf{C})\) can be identified with the \(\mathbf{E}_{3}\)-center of \(C_{*}(\operatorname{Gr}_{G}(\mathbf{C});\mathbf{C})\). This defines a \(2\)-shifted Poisson bracket on \(\operatorname{H}_{*}(\operatorname{Gr}_{G}(\mathbf{C})^{hG_{c}};\mathbf{C})\), which can be identified after \(2\)-periodification with the (\(0\)-shifted) Poisson bracket on \(\mathscr{O}_{(T^{*}\hat{T})^{\operatorname{bl}}/\!\!\!/W}\).
### Rationalized Langlands duality over \(\operatorname{KU}\)
Let us now discuss the \(K\)-theoretic analogue of Theorem 4.1.2. First, we discuss the story where the Kostant slice from Section 4.1 is replaced by the "Steinberg slice"; afterwards, we discuss the story where it is instead replaced by a multiplicative version of the Kostant slice.
**Definition 4.2.1** (Steinberg slice).: Let \(G\) be a simply-connected semisimple algebraic group or a torus. Given \(w\in W\), let \(N_{w}=N\cap w^{-1}N^{-}w\), so that \(N_{w}=\prod_{\alpha\in\Phi_{w}}U_{\alpha}\), where \(\Phi_{w}\) is the set of roots made negative by \(w\). Let \(w=\prod_{\alpha\in\Delta}s_{\alpha}\in W\) be a Coxeter element, and let \(\dot{w}\) be a lift of \(w\) to \(N_{G}(T)\). Define the Steinberg slice \(\Sigma=\dot{w}N_{w}\subseteq G\). Then [10] proved/stated that the composite \(\Sigma\to G\to G/\!\!/G\cong T/\!\!/W\) is an isomorphism. Let \(\widetilde{G}=B\times_{B}G\) be the multiplicative Grothendieck-Springer resolution, so that \(\widetilde{G}/G=B/B\). There is a map \(\widetilde{G}\to T\) sending a pair \(x\in gBg^{-1}\) to \(x\pmod{g[B,B]g^{-1}}\). Let \(\widetilde{\Sigma}\) denote the fiber product \(\Sigma\times_{G}\widetilde{G}\), so that the composite \(\widetilde{\Sigma}\to\widetilde{G}\to T\) is an isomorphism. We will denote the inclusion of \(\widetilde{\Sigma}\) by \(\sigma:T\to\widetilde{G}\).
**Proposition 4.2.2**.: _Let \(G\) be a connected and simply-connected semisimple algebraic group or a torus over \(\mathbf{C}\). Let \(A\) be an \(\mathbf{E}_{\infty}\)-\(\mathrm{KU}\)-algebra, and let \(\mathbf{G}=\mathbf{G}_{m}\) (so \(\mathscr{M}_{T}\) is the torus \(T\) over \(A\)). View \(\widetilde{G}\) as a scheme over \(\mathbf{Q}\). If \(\mathrm{QCoh}(\check{T})\) is viewed as a module over \(\mathrm{QCoh}(\widetilde{G}/\check{G})\) via \(\sigma^{*}\), then there is an equivalence_
\[\mathrm{End}_{\mathrm{QCoh}(\widetilde{G}/\check{G})}(\mathrm{QCoh}(\check{T}))\otimes_{\mathbf{Q}}\pi_{0}A_{\mathbf{Q}}\simeq\mathrm{LMod}_{\pi_{0}C_{*}^{T}(\mathrm{Gr}_{G}(\mathbf{C});A)}\otimes\mathbf{Q}.\]
Proof.: We will assume without loss of generality that \(A=\mathrm{KU}\). By Theorem 3.2.12, there is an equivalence \(\pi_{0}C_{*}^{T}(\mathrm{Gr}_{G}(\mathbf{C});A)=\pi_{0}\mathscr{F}_{T}( \mathrm{Gr}_{G}(\mathbf{C}))^{\vee}\simeq\mathscr{O}_{(T_{\mathbf{G}_{m}}^{*} \check{T})^{\mathrm{bl}}}\). It follows that \(\mathrm{LMod}_{\pi_{0}C_{*}^{T}(\mathrm{Gr}_{G}(\mathbf{C});A)}\simeq\mathrm{ QCoh}((T_{\mathbf{G}_{m}}^{*}\check{T})^{\mathrm{bl}})\). It therefore suffices to show that over a field \(k\) of characteristic zero, there is an equivalence \(\mathrm{End}_{\mathrm{QCoh}(\widetilde{G}/\check{G})}(\mathrm{QCoh}(\check{T }))\simeq\mathrm{QCoh}((T_{\mathbf{G}_{m}}^{*}\check{T})^{\mathrm{bl}})\).
As in Proposition 4.1.5, there is an equivalence \(\mathrm{End}_{\mathrm{QCoh}(\widetilde{G}/\check{G})}(\mathrm{QCoh}(\check{T }))\,\simeq\mathrm{QCoh}(\check{T}\times_{\widetilde{G}/\check{G}}\check{T})\), so it suffices to establish the existence of a Cartesian square
\[\begin{array}{ccc}(T^{*}_{\mathbf{G}_{m}}\check{T})^{\mathrm{bl}}&\longrightarrow&\check{T}\\ \downarrow&&\downarrow\sigma\\ \check{T}&\xrightarrow{\ \sigma\ }&\widetilde{G}/\check{G}\end{array} \tag{17}\]
Again, one can reduce to the case when \(\check{G}\) has semisimple rank \(1\) by the argument of [1, Section 4.3]. Every split reductive group of semisimple rank \(1\) is isomorphic to the product of a split torus with \(\mathrm{SL}_{2}\), \(\mathrm{PGL}_{2}\), or \(\mathrm{GL}_{2}\). We will illustrate the calculation when \(\check{G}=\mathrm{SL}_{2}\), and describe an alternative simpler calculation in the case \(\check{G}=\mathrm{PGL}_{2}\) later.
View a point in \(\widetilde{\check{G}}\) as a pair \((x\in\mathrm{SL}_{2},\ell\subseteq\mathbf{C}^{2})\) such that \(x\) preserves \(\ell\). The Steinberg slice \(\sigma:\check{T}\cong\mathbf{G}_{m}\to\widetilde{\mathrm{SL}}_{2}\) is the map sending \(\lambda\in\mathbf{G}_{m}\) to the pair \((x,\ell)\) with
\[x=\begin{pmatrix}\lambda+\lambda^{-1}&-1\\ 1&0\end{pmatrix},\ \ell=[\lambda:1]\,.\]
Note that this is indeed a well-defined point in \(\widetilde{\mathrm{SL}}_{2}\), since one can check that \(x\) preserves \(\ell\). This calculation of \(\sigma(\lambda)\) is essentially immediate from the requirement
that the following diagram commutes:
Moreover, the \(\operatorname{SL}_{2}\)-action on \(\widetilde{\operatorname{SL}}_{2}\) sends \(g\in\operatorname{SL}_{2}\) and \((x,\ell)\) to \((\operatorname{Ad}_{g}(x),g\ell)\). If \(g=\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\), one can directly compute that \(g\) commutes with \(\left(\begin{smallmatrix}\lambda+\lambda^{-1}&-1\\ 1&0\end{smallmatrix}\right)\) if and only if \(a=c(\lambda+\lambda^{-1})+d\) and \(b=-c\). Therefore, \(g=\left(\begin{smallmatrix}c(\lambda+\lambda^{-1})+d&-c\\ c&d\end{smallmatrix}\right)\) for \(c,d\in k\). In order for \(\det(g)=1\), we need
\[c^{2}+d^{2}+cd(\lambda+\lambda^{-1})=1.\]
As long as \(\lambda\neq\pm 1\), both \(x\) and \(g\) can be simultaneously diagonalized by \(\left(\begin{smallmatrix}\lambda&\lambda^{-1}\\ 1&1\end{smallmatrix}\right)\): the diagonalization of \(x\) is \(\left(\begin{smallmatrix}\lambda&0\\ 0&\lambda^{-1}\end{smallmatrix}\right)\), and the diagonalization of \(g\) is \(\left(\begin{smallmatrix}c\lambda+d&0\\ 0&c\lambda^{-1}+d\end{smallmatrix}\right)\). If \(t=c\lambda+d\), then \(c\lambda^{-1}+d=t^{-1}\) by the above determinant relation. We also have that \(d=t-\frac{\lambda(t-t^{-1})}{\lambda-\lambda^{-1}}\) and \(c=\frac{t-t^{-1}}{\lambda-\lambda^{-1}}\). This shows that \(\mathbf{G}_{m}\times_{\widetilde{\operatorname{SL}}_{2}/\operatorname{SL}_{2}}\mathbf{G}_{m}\cong\operatorname{Spec}k[\lambda^{\pm 1},t^{\pm 1},\frac{t-t^{-1}}{\lambda-\lambda^{-1}}]\) (even if \(k\) is of characteristic \(2\)).
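For instance, the asserted diagonalization of \(g\) amounts to the statement that \((\lambda,1)^{T}\) and \((\lambda^{-1},1)^{T}\) are eigenvectors of \(g\); this is a direct check (recorded here only for convenience):
\[\begin{pmatrix}c(\lambda+\lambda^{-1})+d&-c\\ c&d\end{pmatrix}\begin{pmatrix}\lambda\\ 1\end{pmatrix}=\begin{pmatrix}c\lambda^{2}+d\lambda\\ c\lambda+d\end{pmatrix}=(c\lambda+d)\begin{pmatrix}\lambda\\ 1\end{pmatrix},\]
and similarly the eigenvalue of \(g\) on \((\lambda^{-1},1)^{T}\) is \(c\lambda^{-1}+d\).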
An alternative argument for the Cartesian square (17) can be given using the multiplicative Kostant slice, which gives a _different_ section of the map \(G\to G/\!\!/G\). The multiplicative Kostant slice is significantly more accessible, and the resulting Theorem 4.2.5 is what we will generalize below to other cohomology theories.
**Definition 4.2.3** (Multiplicative Kostant slice).: Let \(e\in\mathfrak{n}\) be a principal nilpotent element. Then the map \(\mathbf{G}_{a}\to G\) corresponding to \(e\) factors through a map \(\operatorname{SL}_{2}\to G\); we will denote the image of the standard generator \((\begin{smallmatrix}1&0\\ 1&1\end{smallmatrix})\in B^{-}\) under the map \(\operatorname{SL}_{2}\to G\) by \(f\in G\). Let \(Z_{G}(e)^{\circ}\) be the connected component of the identity in the centralizer of \(e\) in \(G\). Define the _multiplicative Kostant slice_ \(\mathscr{S}_{\mu}\) by \(Z_{G}(e)^{\circ}\cdot f\subseteq G\). Since \(G\) is assumed to be simply-connected, the composite \(\mathscr{S}_{\mu}\to G\to G/\!\!/G\cong T/\!\!/W\) is an isomorphism. We will often denote the inclusion of the Kostant slice by \(\kappa:T/\!\!/W\to G\). Let \(\widetilde{\mathscr{S}_{\mu}}\) denote the fiber product \(\mathscr{S}_{\mu}\times_{G}\widetilde{G}\), so that the composite \(\widetilde{\mathscr{S}_{\mu}}\to\widetilde{G}\to T\) is an isomorphism; we will denote the inclusion of \(\widetilde{\mathscr{S}_{\mu}}\) as a map \(\kappa:\widetilde{\mathscr{S}_{\mu}}\cong T\to\widetilde{G}\).
As with the additive Kostant slice, we will only care about the composite \(T\to\widetilde{G}\to\widetilde{G}/G\) below, so we will also denote it by \(\kappa\). If we identify \(\widetilde{G}/G\cong B/B\), then the map \(\kappa\) admits a simple description: it is the composite \(T\to B\to B/B\) which sends \(x\mapsto xf\). Just as in [11, Proposition 19], there is a unique map \(\mu:T\cdot f\to N\) such that \(\operatorname{Ad}_{\mu(x)}(x)\in Z_{G}(e)^{\circ}\cdot f\), and the image of any \(x\in T\) under the map \(T\to T/\!\!/W\xrightarrow{\kappa}G\) can be identified with \(\operatorname{Ad}_{\mu(xf)}(xf)\).
**Remark 4.2.4**.: The main result of [10] states that any two sections of the map \(G\to T/\!\!/W\) are conjugate. For instance, the multiplicative Kostant section \(T/\!\!/W\cong\mathbf{A}^{1}\to\operatorname{SL}_{2}\) sending \(\lambda\mapsto\left(\begin{smallmatrix}\lambda-1&\lambda-2\\ 1&1\end{smallmatrix}\right)\) and the Steinberg section \(T/\!\!/W\cong\mathbf{A}^{1}\to\operatorname{SL}_{2}\) sending \(\lambda\mapsto\left(\begin{smallmatrix}\lambda&-1\\ 1&0\end{smallmatrix}\right)\) are conjugated into each other by the matrix \(\left(\begin{smallmatrix}1&-1\\ 0&1\end{smallmatrix}\right)\).
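For instance, the last claim is a direct matrix check (recorded here only for convenience):
\[\begin{pmatrix}1&-1\\ 0&1\end{pmatrix}\begin{pmatrix}\lambda&-1\\ 1&0\end{pmatrix}\begin{pmatrix}1&1\\ 0&1\end{pmatrix}=\begin{pmatrix}\lambda-1&-1\\ 1&0\end{pmatrix}\begin{pmatrix}1&1\\ 0&1\end{pmatrix}=\begin{pmatrix}\lambda-1&\lambda-2\\ 1&1\end{pmatrix}.\]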
**Theorem 4.2.5**.: _Let \(G\) be a connected and simply-connected semisimple algebraic group or a torus over \(\mathbf{C}\). Let \(A\) be an \(\mathbf{E}_{\infty}\)-\(\mathrm{KU}\)-algebra, and let \(\mathbf{G}=\mathbf{G}_{m}\) (so \(\mathscr{M}_{T}\) is the torus \(T\) over \(A\)). View \(\widetilde{G}\) as a scheme over \(\mathbf{Q}\). If \(\mathrm{QCoh}(\check{T})\) is viewed as a module over \(\mathrm{QCoh}(\widetilde{G}/\check{G})\) via \(\kappa^{*}\), then there is an equivalence_
\[\mathrm{End}_{\mathrm{QCoh}(\widetilde{G}/\check{G})}(\mathrm{QCoh}(\check{T}))\otimes_{\mathbf{Q}}\pi_{0}A_{\mathbf{Q}}\simeq\mathrm{LMod}_{\pi_{0}C_{*}^{T}(\mathrm{Gr}_{G}(\mathbf{C});A)}\otimes\mathbf{Q}.\]
Proof.: Following the argument of Proposition 4.2.2, we only need to prove the Cartesian-ness of (17), where the map \(\check{T}\to\widetilde{G}/\check{G}\) is chosen to be the multiplicative Kostant slice instead of the Steinberg slice. Again, we only review the calculation for \(\check{G}=\mathrm{SL}_{2}\); this was done in [1]. For convenience, we will drop the "check"s. As before, there are "two" ways to compute in the case \(G=\mathrm{SL}_{2}\). First, we describe the argument essentially present in [1] (which works over a base field of characteristic not 2). If \(\lambda\in\mathbf{G}_{m}\), we denote \(\lambda+\lambda^{-1}\in\mathbf{A}^{1}\) by \(f(\lambda)\). The Kostant slice \(\kappa:T\cong\mathbf{G}_{m}\to\widetilde{\mathrm{SL}}_{2}\) is the map sending \(\lambda\in\mathbf{G}_{m}\) to the pair \((x,\ell)\) with
\[x=\begin{pmatrix}f(\lambda)-1&f(\lambda)-2\\ 1&1\end{pmatrix},\ \ell=\left[\lambda-1:1\right].\]
Note that this is indeed a well-defined point in \(\widetilde{\mathrm{SL}}_{2}\), since one can check that \(x\) preserves \(\ell\): the key point is the conic relation
\[2\lambda=f(\lambda)-\sqrt{f(\lambda)^{2}-4}.\]
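Concretely, the verification that \(x\) preserves \(\ell\) (with eigenvalue \(\lambda\)) comes down to the identity \((f(\lambda)-1)(\lambda-1)+(f(\lambda)-2)=\lambda^{2}-\lambda\), i.e.,
\[\begin{pmatrix}f(\lambda)-1&f(\lambda)-2\\ 1&1\end{pmatrix}\begin{pmatrix}\lambda-1\\ 1\end{pmatrix}=\begin{pmatrix}\lambda^{2}-\lambda\\ \lambda\end{pmatrix}=\lambda\begin{pmatrix}\lambda-1\\ 1\end{pmatrix};\]
note also that \(f(\lambda)^{2}-4=(\lambda-\lambda^{-1})^{2}\), which is the displayed relation after rearranging and squaring.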
Indeed, this calculation of \(\kappa(\lambda)\) is essentially immediate from the requirement that the following diagram commutes:
Moreover, the \(\mathrm{SL}_{2}\)-action on \(\widetilde{\mathrm{SL}}_{2}\) sends \(g\in\mathrm{SL}_{2}\) and \((x,\ell)\) to \((\mathrm{Ad}_{g}(x),g\ell)\). If \(g=\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\), we directly compute that \(\mathrm{Ad}_{g}(x)=x\) if and only if \(b=c(f(\lambda)-2)\) and \(a-d=(f(\lambda)-2)c\), in which case \(g\) also preserves \(\ell\). Therefore, \(g=\left(\begin{smallmatrix}(f(\lambda)-2)c+d&(f(\lambda)-2)c\\ c&d\end{smallmatrix}\right)\) for \(c,d\in k\). In order for \(\det(g)=1\), we need
\[d^{2}+c(f(\lambda)-2)(d-c)=1.\]
Both \(x\) and \(g\) can be simultaneously diagonalized (if \(f(\lambda)\neq\pm 2\)); note that the eigenvalues of \(x\) are \(\lambda\) and \(\lambda^{-1}\). If \(t\) is the eigenvalue of \(g\) on \(\ell\), then we have \(c=\frac{t-t^{-1}}{\lambda-\lambda^{-1}}\) and \(d=\frac{t^{2}+\lambda}{t(\lambda+1)}\). When \(k\) is not of characteristic 2, this shows that \(\mathbf{G}_{m}\times_{\widetilde{\mathrm{SL}}_{2}/\mathrm{SL}_{2}}\mathbf{G}_{m}\cong\operatorname{Spec}k[\lambda^{\pm 1},t^{\pm 1},\frac{t-t^{-1}}{\lambda-\lambda^{-1}}]\), as desired.
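In more detail (a small bookkeeping computation, recorded here only for convenience): \(\ell=[\lambda-1:1]\) is the \(\lambda\)-eigenline of \(x\), so applying \(g\) to \((\lambda-1,1)^{T}\) gives \(t=c(\lambda-1)+d\), while the eigenvalue of \(g\) on the \(\lambda^{-1}\)-eigenline \([\lambda^{-1}-1:1]\) is \(c(\lambda^{-1}-1)+d=t^{-1}\) (since \(\det(g)=1\)). Subtracting and solving,
\[c=\frac{t-t^{-1}}{\lambda-\lambda^{-1}},\qquad d=t-c(\lambda-1)=\frac{t^{2}+\lambda}{t(\lambda+1)}.\]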
For the "second" method of calculation when \(G=\mathrm{SL}_{2}\) (which works in arbitary characteristic), we use the fact that \(\kappa:T\to\widetilde{G}/G\) can be identified with the composite \(T\to B\to B/B\) sending \(x\mapsto xf\). Then, \(T\times_{B/B}T\) is isomorphic to the subvariety of \(T\times B\) consisting of pairs \((x,g)\) with \(x\in T\) (identified with the matrix \(\left(\begin{smallmatrix}x&0\\ 0&x^{-1}\end{smallmatrix}\right)\)) and \(\mathrm{Ad}_{g}(xf)=xf\). Note that \(xf\) is the matrix \(\left(\begin{smallmatrix}x&0\\ x^{-1}&x^{-1}\end{smallmatrix}\right)\). If \(\kappa\) is a
\(g=\left(\begin{smallmatrix}a&0\\ b&a^{-1}\end{smallmatrix}\right)\in B\), then
\[\operatorname{Ad}_{g}\begin{pmatrix}x&0\\ x^{-1}&x^{-1}\end{pmatrix}=\begin{pmatrix}x&0\\ a^{-2}x^{-1}+ba^{-1}(x-x^{-1})&x^{-1}\end{pmatrix}.\]
Therefore, \(\operatorname{Ad}_{g}(xf)=xf\) if and only if
\[a^{-2}x^{-1}+ba^{-1}(x-x^{-1})=x^{-1},\]
which forces \(b=\frac{a-a^{-1}}{x^{2}-1}\). This implies that \(T\times_{B/B}T\) is isomorphic to \(\operatorname{Spec}k[x^{\pm 1},a^{\pm 1},\frac{a-a^{-1}}{x^{2}-1}]\), as desired.
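Note that this agrees with the presentation obtained from the Steinberg slice in Proposition 4.2.2: since \(x\) is invertible, \(\frac{a-a^{-1}}{x^{2}-1}=x^{-1}\cdot\frac{a-a^{-1}}{x-x^{-1}}\), so
\[\operatorname{Spec}k[x^{\pm 1},a^{\pm 1},\tfrac{a-a^{-1}}{x^{2}-1}]=\operatorname{Spec}k[x^{\pm 1},a^{\pm 1},\tfrac{a-a^{-1}}{x-x^{-1}}],\]
which is the ring appearing there after renaming \((x,a)\) as \((\lambda,t)\). This comparison is not needed for the argument, but serves as a consistency check.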
We can also run this argument in the case \(G=\operatorname{PGL}_{2}\) (again in arbitrary characteristic). Again, \(T\times_{B/B}T\) is isomorphic to the subvariety of \(T\times B\) consisting of pairs \((x,g)\) with \(x\in T\) (identified with the matrix \(\left(\begin{smallmatrix}x&0\\ 0&1\end{smallmatrix}\right)\)) and \(\operatorname{Ad}_{g}(xf)=xf\). Note that \(xf\) is the matrix \(\left(\begin{smallmatrix}x&0\\ 1&1\end{smallmatrix}\right)\). If \(g=\left(\begin{smallmatrix}a&0\\ b&1\end{smallmatrix}\right)\in B\), then
\[\operatorname{Ad}_{g}\begin{pmatrix}x&0\\ 1&1\end{pmatrix}=\begin{pmatrix}x&0\\ ba^{-1}(x-1)+a^{-1}&1\end{pmatrix}.\]
Therefore, \(\operatorname{Ad}_{g}(xf)=xf\) if and only if
\[ba^{-1}(x-1)+a^{-1}=1,\]
which forces \(b=\frac{a-1}{x-1}\). This implies that \(T\times_{B/B}T\) is isomorphic to \(\operatorname{Spec}k[x^{\pm 1},a^{\pm 1},\frac{a-1}{x-1}]\), as desired.
**Observation 4.2.6**.: In the second argument for the Cartesian square (17), we may replace the symbol \(\lambda\) by the symbol \(e^{\lambda}\); then, \(e^{\lambda}-1\) is the exponential of the multiplicative formal group law. In particular, the defining equation for the line \(\ell\) in the cases of \(\mathbf{G}=\mathbf{G}_{a},\mathbf{G}_{m}\) precisely describes the exponential for the formal completion \(\hat{\mathbf{G}}\) of \(\mathbf{G}\) at the identity.
**Remark 4.2.7**.: In [1], the following analogue of (17) is established (over \(\mathbf{C}\), but this does not affect the statement): there is a Cartesian square
(18)
where the top-left corner can be identified with \(\operatorname{Spec}C_{0}^{G}(\operatorname{Gr}_{G}(\mathbf{C});\operatorname{ KU})\otimes\mathbf{Q}\). We can take the fiber product of (17) with itself over (18) to obtain a Cartesian square
(19)
Using Theorem 4.2.5 and the above discussion, one can use (19) to show that \(\operatorname{End}_{\operatorname{QCoh}((\widetilde{G}\times_{\check{G}}\widetilde{G})/\check{G})}(\operatorname{QCoh}(\check{T}\times_{\check{T}/\!\!/W}\check{T}))\) can be identified with \(\operatorname{LMod}_{\pi_{0}C_{*}^{T}(\operatorname{Fl}_{G}(\mathbf{C});\operatorname{KU})}\otimes\mathbf{Q}\). This can be viewed as a "once-looped" version of a K-theoretic analogue of Bezrukavnikov's equivalence from [1].
**Remark 4.2.8**.: We expect that most of the steps of Theorem 4.1.11 can be replicated to study \(\operatorname{LMod}_{C_{*}^{\widetilde{T}}(\operatorname{Gr}_{G}(\mathbf{C});\operatorname{KU})}\otimes\mathbf{Q}\). More precisely, let \(d\in\mathbf{Z}\), and fix a symmetric bilinear form \((-,-):\Lambda\times\Lambda\to\frac{1}{d}\mathbf{Z}\) whose Gram matrix is the associated Cartan matrix (i.e., \((\alpha_{i},\alpha_{j})\) is the \(a_{ij}\) entry of the associated Cartan matrix). We then have the quantum group \(U_{q}(\mathfrak{g})\) defined over \(\mathbf{Z}[q^{\pm 1}]\) associated to the pairing \(\Lambda\times\Lambda\to\mathbf{Z}[q^{\pm 1}]\) sending \(\lambda,\mu\mapsto q^{-(\lambda,\mu)}\). Following [10, Definition 4.24], define the _quantum universal category_ \(\mathscr{O}_{q}^{\operatorname{univ}}\) as the \(\infty\)-category of \((U_{q}(\mathfrak{g}),U_{q}(\mathfrak{t}))\)-bimodules whose diagonal \(U_{q}(\mathfrak{b})\)-action is integrable.
Let \((W,\Delta)\) be a crystallographic root system, let \(\Lambda^{\vee}=\mathbf{Z}\Phi\) denote the associated root lattice, and let \(T=\operatorname{Spec}\mathbf{Z}[\Lambda]\) denote the associated torus. Each \(\alpha\in\Delta\) defines an operator \(s_{\alpha}\) on \(\mathscr{O}_{T}\). Define the _multiplicative nil-Hecke algebra_ \(\mathscr{H}(T,W)\) as the subalgebra of \(\operatorname{Frac}(\mathscr{O}_{T})\rtimes\mathbf{Q}[W]\) generated by \(\mathscr{O}_{T}\) and the operators \(T_{\alpha}=\frac{1}{e^{\alpha}-1}(s_{\alpha}-1)\). (Also see [1] for a study of a multiplicative analogue of Soergel theory.) Then, there are relations
\[T_{\alpha}^{2}=T_{\alpha},\ (T_{\alpha}T_{\beta})^{m_{\alpha,\beta}}=(T_{\beta}T_ {\alpha})^{m_{\alpha,\beta}},\ x\cdot T_{\alpha}=T_{\alpha}\cdot s_{\alpha}(x) +T_{\alpha}(x),\ \alpha\in\Delta.\]
Recall that \(m_{\alpha_{i}\alpha_{j}}\) is 2, 3, 4, 6, \(\infty\) if \(a_{ij}a_{ji}\) is 0, 1, 2, 3, \(\geq 4\) (respectively). This algebra was also studied in [10, Section 2.2]. Note that if \(\lambda\in\Lambda\) (corresponding to the function \(e^{\lambda}\) on \(T\)), we have \(T_{\alpha}(e^{\lambda})=[\langle\alpha^{\vee},\lambda\rangle]_{e^{\alpha}}e^{\lambda}\), where \([\langle\alpha^{\vee},\lambda\rangle]_{e^{\alpha}}\) denotes the \(q\)-integer \(\frac{q^{\langle\alpha^{\vee},\lambda\rangle}-1}{q-1}\) with \(q=e^{\alpha}\).
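For instance, the quadratic relation can be checked directly (a routine verification, recorded here only for convenience). Write \(s=s_{\alpha}\) and \(u=e^{\alpha}\), so that \(s(u)=u^{-1}\) and \(T_{\alpha}(f)=\frac{s(f)-f}{u-1}\) for \(f\in\mathscr{O}_{T}\). Since \(\frac{1}{u^{-1}-1}+\frac{1}{u-1}=-1\), one computes
\[T_{\alpha}^{2}(f)=\frac{s(T_{\alpha}(f))-T_{\alpha}(f)}{u-1}=\frac{-(s(f)-f)\left(\frac{1}{u^{-1}-1}+\frac{1}{u-1}\right)}{u-1}=\frac{s(f)-f}{u-1}=T_{\alpha}(f).\]
A similar direct computation gives \(x\cdot T_{\alpha}(f)=T_{\alpha}(s_{\alpha}(x)f)+T_{\alpha}(x)f\) for \(x,f\in\mathscr{O}_{T}\), which is the last relation displayed above.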
Given the discussion in Section 3.3 relating loop-rotation equivariance in \(K\)-theory to \(q\)-deformations, as well as Theorem 4.1.11, we expect:
**Conjecture 4.2.9**.: _There is a Kostant functor \(\kappa:\mathscr{O}_{q}^{\operatorname{univ}}\to\operatorname{QCoh}(\check{T}_ {\mathbf{Q}}\times\mathbf{G}_{m}^{q})\) (where \(\mathbf{G}_{m}^{q}=\operatorname{Spec}\mathbf{Q}[q^{\pm 1}]\)) such that there is a \(\mathbf{Q}[q^{\pm 1}]\)-linear equivalence_
\[\operatorname{LMod}_{\pi_{0}C_{*}^{\widetilde{T}}(\operatorname{Gr}_{G}(\mathbf{C});\operatorname{KU})}\otimes\mathbf{Q}\simeq\operatorname{End}_{\mathscr{O}_{q}^{\operatorname{univ}}}(\operatorname{QCoh}(\check{T}_{\mathbf{Q}}\times\mathbf{G}_{m}^{q})). \tag{20}\]
_Similarly, if \(\operatorname{HC}_{q}(\check{G})\) denotes the category of [10, Definition 2.24], there is a Kostant functor \(\kappa:\operatorname{HC}_{q}(\check{G})\to\operatorname{QCoh}(\check{T}_{ \mathbf{Q}}/\!\!/W\times\mathbf{G}_{m}^{q})\) and a \(\mathbf{Q}[q^{\pm 1}]\)-linear equivalence_
\[\operatorname{LMod}_{\pi_{0}C_{\ast}^{G\times S^{1}_{\operatorname{rot}}}( \operatorname{Gr}_{G}(\mathbf{C});\operatorname{KU})}\otimes\mathbf{Q}\simeq \operatorname{End}_{\operatorname{HC}_{q}(\check{G})}(\operatorname{QCoh}( \check{T}_{\mathbf{Q}}/\!\!/W\times\mathbf{G}_{m}^{q})). \tag{21}\]
At the moment, we are only able to describe the left-hand side in terms of combinatorial data. Let \(e=\frac{1}{\#W}\sum_{w\in W}w\) be the symmetrizer idempotent. Using Corollary 3.2.4 and [10, Proposition 2.6] (see also Proposition 3.1.8), one can show that \(\pi_{0}C_{\ast}^{\widetilde{T}}(\operatorname{Gr}_{G}(\mathbf{C});\operatorname{KU})\otimes\mathbf{Q}\) is isomorphic to \(\mathscr{O}_{\widetilde{T}}\otimes_{\mathscr{O}_{\widetilde{T}/\!\!/W}}e\mathscr{H}(\widetilde{T},\widetilde{W}^{\operatorname{aff}})e\), where the parameter \(q\in\pi_{0}\operatorname{KU}_{\mathbf{G}_{m}^{\operatorname{rot}}}\cong\mathbf{Z}[q^{\pm 1}]\) corresponds to the coordinate on \(\mathbf{G}_{m}^{q}\subseteq\widetilde{T}\) viewed as an element of \(\mathscr{H}(\widetilde{T},\widetilde{W}^{\operatorname{aff}})e\). Similarly, \(\pi_{0}C_{\ast}^{G\times S^{1}_{\operatorname{rot}}}(\operatorname{Gr}_{G}(\mathbf{C});\operatorname{KU})\otimes\mathbf{Q}\) is isomorphic to \(e\mathscr{H}(\widetilde{T},\widetilde{W}^{\operatorname{aff}})e\). The conjectural equivalence (21) then reduces to proving an (also conjectural) equivalence
\[\operatorname{End}_{\operatorname{HC}_{q}(\check{G})}(\operatorname{QCoh}(\check {T}/\!\!/W\times\mathbf{G}_{m}^{q}))\simeq\operatorname{LMod}_{e\mathscr{H}( \widetilde{T},\widetilde{W}^{\operatorname{aff}})e}. \tag{22}\]
This may be understood as a quantum analogue of [11, Theorem 8.1.2]. Note that the above equivalences are now statements which are squarely on one side of Langlands duality. In the case \(G=\operatorname{SL}_{2}\), we described \(C_{\ast}^{G\times S^{1}_{\operatorname{rot}}}(\operatorname{Gr}_{G}(\mathbf{C}); \operatorname{KU})\otimes\mathbf{Q}\) (and hence \(e\mathscr{H}(\widetilde{T},\widetilde{W}^{\operatorname{aff}})e\)) below in Example B.5; it might be possible to use this
calculation to compare with \(\operatorname{End}_{\tilde{\mathscr{O}}^{\mathrm{univ}}_{q}}(\mathrm{QCoh}(\check{T} \times\mathbf{G}^{q}_{m}))\) for \(\check{G}=\mathrm{PGL}_{2}\). A positive resolution to [11, Conjecture 3.17] should be the key input into proving (22).
For general \(G\), just as \((T\times\check{T})^{\mathrm{bl}}\) is birational to \(T\times\check{T}\), the map from the algebra of \(q\)-difference operators on \(\check{T}\) to \(\mathscr{H}(\widetilde{\check{T}},\widetilde{W}^{\mathrm{aff}})e\) is an isomorphism after a particular localization. One therefore expects \(\tilde{\mathscr{O}}^{\mathrm{univ}}_{q}\) and \(\mathrm{HC}_{q}(\check{T})\) to generically be equivalent. This is indeed true, and can be seen using [12, Theorem 4.33] (although the functor \(\tilde{\mathscr{O}}^{\mathrm{univ}}_{q}\to\mathrm{HC}_{q}(\check{T})\) in _loc. cit._ is not our expected functor \(\kappa\)).
**Remark 4.2.10**.: Since \(\check{G}/\check{G}=\mathrm{Map}(S^{1},B\check{G})\), the canonical orientation of \(S^{1}\) defines a \(1\)-shifted symplectic structure on \(\check{G}/\check{G}\) via [11, Theorem 2.5]. The quasi-classical limit (i.e., \(q\to 1\)) of the conjectural equivalence (21) gives the following strengthening of Theorem 4.2.5. (This strengthening can be proved independently of (21).)
Observe that the Kostant slice \(\check{T}/\!\!/W\to\check{G}/\check{G}\) is a Lagrangian morphism. It follows that the self-intersection \(\check{T}/\!\!/W\times_{\check{G}/\check{G}}\check{T}/\!\!/W\) admits the structure of a symplectic stack by [11, Theorem 2.9]. Since this fiber product is isomorphic to \((T^{*}_{\mathbf{G}_{m}}\check{T})^{\mathrm{bl}}/\!\!/W\) by (18), we obtain a Poisson bracket on \(\mathscr{O}_{(T^{*}_{\mathbf{G}_{m}}\check{T})^{\mathrm{bl}}/\!\!/W}\cong \pi_{0}C^{G}_{*}(\mathrm{Gr}_{G}(\mathbf{C});\mathrm{KU})\). This structure can be seen topologically, at least after a completion: using one of the main results of [10], the Borel-equivariant analogue/completion \(C_{*}(\mathrm{Gr}_{G}(\mathbf{C});\mathrm{KU})^{h_{G_{c}}}\) of \(C^{G}_{*}(\mathrm{Gr}_{G}(\mathbf{C});\mathrm{KU})\) can be identified with the \(\mathbf{E}_{3}\)-center of \(\pi_{0}C_{*}(\mathrm{Gr}_{G}(\mathbf{C});\mathrm{KU})\). This defines a \(2\)-shifted Poisson bracket on \(\pi_{0}C_{*}(\mathrm{Gr}_{G}(\mathbf{C});\mathrm{KU})^{h_{G_{c}}}\), which can be identified with the (\(0\)-shifted, via the \(2\)-periodicity of \(\mathrm{KU}\)) Poisson bracket on \(\mathscr{O}_{(T^{*}_{\mathbf{G}_{m}}\check{T})^{\mathrm{bl}}/\!\!/W}\).
**Remark 4.2.11**.: Following Conjecture 4.2.9, one can also hope for a result analogous to (20) when \(q\rightsquigarrow\zeta_{p}\) is specialized to a primitive \(p\)th root of unity. Namely, consider the \(\infty\)-category \(\mathrm{LMod}_{C_{*}^{T\times\mu_{p,\mathrm{rot}}}(\mathrm{Gr}_{G}(\mathbf{C} );\mathrm{KU})}\), where \(\mu_{p,\mathrm{rot}}\subseteq S^{1}_{\mathrm{rot}}\) acts by loop rotation. Note that \(C_{*}^{T\times\mu_{p,\mathrm{rot}}}(\mathrm{Gr}_{G}(\mathbf{C});\mathrm{KU})\) is a module over \(\mathrm{KU}^{h\mathbf{Z}/p}\), and \(\pi_{*}\mathrm{KU}^{h\mathbf{Z}/p}\cong\mathbf{Z}[\![q-1]\!](q^{\pm 1})/(q^{p}-1)\). Inverting \(q-1\), we find that \(C_{*}^{T\times\mu_{p,\mathrm{rot}}}(\mathrm{Gr}_{G}(\mathbf{C});\mathrm{KU})[ \frac{1}{q-1}]\) is a module over \(\mathrm{KU}^{h\mathbf{Z}/p}[\frac{1}{q-1}]\simeq\mathrm{KU}^{t\mathbf{Z}/p} \simeq\mathbf{Q}(\zeta_{p})[\beta^{\pm 1}]\). We then expect the following (likely simpler) analogues of (20) and (21):
**Conjecture 4.2.12**.: _There are Kostant functors \(\kappa:\tilde{\mathscr{O}}^{\mathrm{univ}}_{\zeta_{p}}\to\mathrm{QCoh}(\check{T} _{\mathbf{Q}(\zeta_{p})})\) and \(\kappa:\mathrm{HC}_{\zeta_{p}}(\check{G})\to\mathrm{QCoh}(\check{T}_{\mathbf{Q }(\zeta_{p})}/\!\!/W)\) such that there are \(\mathbf{Q}(\zeta_{p})\)-linear equivalences_
\[\mathrm{LMod}_{\pi_{0}C_{*}^{T\times\mu_{p,\mathrm{rot}}}(\mathrm{ Gr}_{G}(\mathbf{C});\mathrm{KU})[\frac{1}{q-1}]} \simeq\operatorname{End}_{\tilde{\mathscr{O}}^{\mathrm{univ}}_{\zeta_{p}}}( \mathrm{QCoh}(\check{T}_{\mathbf{Q}(\zeta_{p})})),\] \[\mathrm{LMod}_{\pi_{0}C_{*}^{G\times\mu_{p,\mathrm{rot}}}(\mathrm{ Gr}_{G}(\mathbf{C});\mathrm{KU})[\frac{1}{q-1}]} \simeq\operatorname{End}_{\mathrm{HC}_{\zeta_{p}}(\check{G})}(\mathrm{ QCoh}(\check{T}_{\mathbf{Q}(\zeta_{p})}/\!\!/W)).\]
_Note that there is no rationalization necessary on the left-hand sides._
As with Conjecture 4.2.9, Conjecture 4.2.12 reduces to proving the (also conjectural) equivalence
\[\operatorname{End}_{\mathrm{HC}_{\zeta_{p}}(\check{G})}(\mathrm{QCoh}(\check{T}_{\mathbf{Q}(\zeta_{p})}/\!\!/W))\simeq\mathrm{LMod}_{e\mathscr{H}_{\zeta_{p}}(\widetilde{\check{T}},\widetilde{W}^{\mathrm{aff}})e},\]
where \(\mathscr{H}_{\zeta_{p}}(\widetilde{\check{T}},\widetilde{W}^{\mathrm{aff}})\) denotes the algebra obtained from \(\mathscr{H}(\widetilde{\check{T}},\widetilde{W}^{\mathrm{aff}})\) by setting \(q\) (arising from the loop rotation torus in \(\widetilde{\check{T}}\)) to \(\zeta_{p}\).
### The elliptic Kostant slice
Fix a (classical) \(\mathbf{Q}\)-algebra \(k\) for the remainder of this section. Let \(E\) be a (smooth) elliptic curve over \(k\), let \(\operatorname{Bun}^{0}_{B}(E)\) denote the moduli stack of \(B\)-bundles on \(E\) of degree \(0\), and let \(\operatorname{Bun}^{0}_{T}(E)\) denote the scheme of \(T\)-bundles on \(E\) of degree \(0\). We will also make use of the stack \(\operatorname{Bun}^{\operatorname{ss}}_{G}(E)\) of semistable \(G\)-bundles on \(E\).
**Definition 4.3.1**.: Say that a \(B\)-bundle \(\mathscr{P}_{B}\) on \(E\) is _regular_ if \(\dim\operatorname{Aut}(\mathscr{P}_{B})=\operatorname{rank}(G)\). Let \(\operatorname{Bun}^{0}_{B}(E)^{\operatorname{reg}}\) denote the open substack of \(\operatorname{Bun}^{0}_{B}(E)\) defined by the regular \(B\)-bundles. Similarly, if \(\mathscr{P}\in\operatorname{Bun}^{\operatorname{ss}}_{G}(E)\) is a semistable \(G\)-bundle on \(E\), we say that \(\mathscr{P}\) is _regular_ if \(\dim\operatorname{Aut}(\mathscr{P})=\operatorname{rank}(G)\). Let \(\operatorname{Bun}^{\operatorname{ss}}_{G}(E)^{\operatorname{reg}}\subseteq \operatorname{Bun}^{\operatorname{ss}}_{G}(E)\) denote the open substack of regular semistable \(G\)-bundles.
**Notation 4.3.2**.: For \(\mathscr{P}_{T}\in\operatorname{Bun}^{0}_{T}(E)\), write \(\Delta_{\mathscr{P}}\) to denote the set of those simple roots \(\alpha\in\Delta\) such that the \(\alpha\)-component of \(\mathscr{P}_{T}\) is trivial. We will also write \(N_{\mathscr{P}}=\prod_{\alpha\in\Phi^{-}\cap\Delta_{\mathscr{P}}}N_{\alpha}\subseteq N\).
**Proposition 4.3.3**.: _The map \(\operatorname{Bun}^{0}_{B}(E)\to\operatorname{Bun}^{0}_{T}(E)\) admits a canonical unique section \(\kappa:\operatorname{Bun}^{0}_{T}(E)\to\operatorname{Bun}^{0}_{B}(E)\) landing in \(\operatorname{Bun}^{0}_{B}(E)^{\operatorname{reg}}\)._
Proof.: Let \(\mathscr{P}\) be a semistable \(G\)-bundle on \(E\). By [1, Proposition 5.5.5], the regularity of \(\mathscr{P}\) is equivalent to the condition that for any (or some) \(B\)-reduction \(\mathscr{P}_{B}\) of \(\mathscr{P}\) of degree \(0\), the associated \(N\)-bundle \(\mathscr{P}_{B}/T\) is induced from an \(N_{\mathscr{P}}\)-bundle with nontrivial associated \(N_{\alpha}\)-bundle for each \(\alpha\in\Delta_{\mathscr{P}}\). Moreover, every geometric fiber of the map \(\operatorname{Bun}^{\operatorname{ss}}_{G}(E)\to\operatorname{Hom}(\mathbb{X}^ {*}(T),E)/\!/W\) to the coarse moduli space of \(\operatorname{Bun}^{\operatorname{ss}}_{G}(E)\) contains a unique regular semistable \(G\)-bundle. Also see [1, Proposition 3.9], where a similar result is stated.
Following [1, Definition 4.3.7], set
\[\widetilde{\operatorname{Bun}^{\operatorname{ss}}_{G}}(E)^{\operatorname{reg} }\cong\operatorname{Bun}^{\operatorname{ss}}_{G}(E)^{\operatorname{reg}} \times_{\operatorname{Hom}(\mathbb{X}^{*}(T),E)/\!/W}\operatorname{Hom}( \mathbb{X}^{*}(T),E).\]
Recall that \(\operatorname{Bun}^{0}_{B}(E)^{\operatorname{reg}}\subseteq\operatorname{Bun}^{0}_{B}(E)\) denotes the open substack of regular \(B\)-bundles on \(E\) of degree \(0\). It then follows from the isomorphism \(\widetilde{\operatorname{Bun}^{\operatorname{ss}}_{G}}(E)\cong\operatorname{Bun}^{0}_{B}(E)\) of [1, Proposition 4.1.2] and the equality \(\dim\operatorname{Aut}(\mathscr{P})=\dim\operatorname{Aut}(\mathscr{P}_{B})\) that there is an isomorphism \(\widetilde{\operatorname{Bun}^{\operatorname{ss}}_{G}}(E)^{\operatorname{reg}}\cong\operatorname{Bun}^{0}_{B}(E)^{\operatorname{reg}}\). In particular, every geometric fiber of the map \(\operatorname{Bun}^{0}_{B}(E)\to\operatorname{Hom}(\mathbb{X}^{*}(T),E)=\operatorname{Bun}^{0}_{T}(E)\) contains a unique regular \(B\)-bundle of degree \(0\).
The existence of \(\kappa\) is a consequence of [1, Theorem 4.3.2], which is a refinement of [1, Theorem 5.1.1]. Since we will not need the full strength of [1, Theorem 4.3.2] outside of this proof, we will only briefly recall the necessary notation and statements. In _loc. cit._, the scheme \(\operatorname{Bun}^{0}_{T}(E)\) is denoted by \(Y\). Let \(\widetilde{\operatorname{Bun}}_{G}(E)\) denote the Kontsevich-Mori compactification of \(\widetilde{\operatorname{Bun}}_{G}(E)\cong\operatorname{Bun}^{0}_{B}(E)\); see [1, Definition 2.1.2]. Let \(\Theta\) denote the theta-line bundle over \(\operatorname{Bun}^{0}_{T}(E)\) of [1, Corollary 3.2.10], and let \(\widetilde{\chi}:\widetilde{\operatorname{Bun}}_{G}(E)\to\Theta^{-1}/ \mathbf{G}_{m}\) denote the map constructed in [1, Corollary 3.3.2]. Then, [1, Theorem 4.3.2] shows that there is a map \(\Theta^{-1}\to\widetilde{\operatorname{Bun}}_{G}^{-\operatorname{ss}}(E)\) landing in \(\widetilde{\operatorname{Bun}}_{G}^{-\operatorname{ss}}(E)^{\operatorname{ reg}}\) such that the composite
\[\Theta^{-1}\to\widetilde{\operatorname{Bun}}_{G}^{-\operatorname{ss}}(E)\xrightarrow{\widetilde{\chi}}\Theta^{-1}/\mathbf{G}_{m}\]
is the canonical map. Composing with the zero section of \(\Theta^{-1}\), we obtain a map
\[\operatorname{Bun}^{0}_{T}(E)\cong 0_{\Theta^{-1}}\to\Theta^{-1}\to\widetilde{ \operatorname{Bun}}_{G}^{-\operatorname{ss}}(E)^{\operatorname{reg}}\cong \operatorname{Bun}^{0}_{B}(E).\]
This is the desired map \(\kappa\).
**Definition 4.3.4**.: We will refer to the map \(\kappa:\operatorname{Bun}^{0}_{T}(E)\to\operatorname{Bun}^{0}_{B}(E)\) from Proposition 4.3.3 as the _elliptic Kostant slice_.
**Example 4.3.5**.: Let \(G=\operatorname{SL}_{2}\), so that a \(B\)-bundle on \(E\) is just a rank 2 vector bundle \(\mathscr{V}\) with \(\det(\mathscr{V})=0\), equipped with a full flag. Then, the map \(\kappa:\operatorname{Pic}^{0}(E)\to\operatorname{Bun}^{0}_{B}(E)\) sends a line bundle \(\mathscr{L}\) to the trivial filtration \(\mathscr{O}_{E}\subseteq\mathscr{O}_{E}\oplus\mathscr{L}\) if \(\mathscr{L}^{2}\neq\mathscr{O}_{E}\); and to the Atiyah extension \(\mathscr{L}\subseteq\mathscr{F}_{2}\twoheadrightarrow\mathscr{L}^{-1}\) from [1] if \(\mathscr{L}^{2}\cong\mathscr{O}_{E}\). This extension is defined by a nontrivial element of \(\operatorname{Ext}^{1}_{E}(\mathscr{L},\mathscr{L}^{-1})\cong\operatorname{ H}^{1}(E;\mathscr{L}^{-2})\). This can either be shown by unwinding the construction of the section \(\kappa\) via [1, Theorem 4.3.2], or directly by noting that the description above provides the unique regular \(B\)-bundle lifting \(\mathscr{L}\).
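For the last point, recall the standard computation (recorded here only for convenience): for a degree-zero line bundle \(\mathscr{L}\) on \(E\), Serre duality and the triviality of \(\omega_{E}\) give
\[\operatorname{Ext}^{1}_{E}(\mathscr{L},\mathscr{L}^{-1})\cong\operatorname{H}^{1}(E;\mathscr{L}^{-2})\cong\operatorname{H}^{0}(E;\mathscr{L}^{2}\otimes\omega_{E})^{\vee}\cong\operatorname{H}^{0}(E;\mathscr{L}^{2})^{\vee},\]
which is one-dimensional when \(\mathscr{L}^{2}\cong\mathscr{O}_{E}\) and zero otherwise; so a nonsplit extension \(\mathscr{F}_{2}\) exists precisely when \(\mathscr{L}^{2}\cong\mathscr{O}_{E}\).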
We will need the following lemma below.
**Lemma 4.3.6**.: _Let \(I\subseteq\Phi^{-}\) be a subset, and let \(\operatorname{Bun}^{0}_{T}(E)_{I}\) denote the subscheme of \(\operatorname{Bun}^{0}_{T}(E)\) defined by those bundles \(\mathscr{P}_{T}\) whose \(\alpha\)-component is trivial precisely for \(\alpha\in I\). Let \(N_{I}\subseteq N\) be the smallest unipotent subgroup which is invariant under \(T\)-conjugation and which contains \(N_{\alpha}\) for every \(\alpha\in I\). Then the natural map_
\[\operatorname{Bun}^{0}_{TN_{I}}(E)\times_{\operatorname{Bun}^{0}_{T}(E)} \operatorname{Bun}^{0}_{T}(E)_{I}\to\operatorname{Bun}^{0}_{B}(E)\times_{ \operatorname{Bun}^{0}_{T}(E)}\operatorname{Bun}^{0}_{T}(E)_{I}\]
_is an isomorphism._
Proof.: Let \(\mathscr{P}_{I}\) denote the universal \(T\)-bundle over \(\operatorname{Bun}^{0}_{T}(E)_{I}\), so that \(\operatorname{Bun}^{0}_{B}(E)\times_{\operatorname{Bun}^{0}_{T}(E)}\operatorname {Bun}^{0}_{T}(E)_{I}\) is the stack of \(B\)-bundles \(\mathscr{P}_{B}\) such that \(\mathscr{P}_{B}/N\cong\mathscr{P}_{T}\); therefore, it is isomorphic to the stack \(\operatorname{Bun}^{\mathscr{P}_{I}}_{N}\) in the notation of [1, Section 2.1.1]. Similarly, \(\operatorname{Bun}^{0}_{TN_{I}}(E)\times_{\operatorname{Bun}^{0}_{T}(E)} \operatorname{Bun}^{0}_{T}(E)_{I}\cong\operatorname{Bun}^{\mathscr{P}_{I}}_{N _{I}}\). To show that these stacks are isomorphic, consider the filtration
\[N_{\ell}\subseteq N_{\ell-1}\subseteq\cdots\subseteq N_{2}\subseteq N_{1}=N\]
by root height (recall that the height of a root is the sum of its simple root components), so that it is invariant under \(T\)-conjugation, and there is an induced filtration
\[N_{I,\ell}\subseteq N_{I,\ell-1}\subseteq\cdots\subseteq N_{I,2}\subseteq N _{I,1}=N_{I}.\]
Then, \(N_{j}\subseteq N\) is normal and \(N_{j-1}/N_{j}\) is central in \(N/N_{j}\) (and similarly for \(N_{I,j}\)); this implies that \(\operatorname{Bun}^{\mathscr{P}_{I}}_{N/N_{j}}\) is a \(\operatorname{Bun}^{\mathscr{P}_{I}}_{N_{j-1}/N_{j}}\)-torsor over \(\operatorname{Bun}^{\mathscr{P}_{I}}_{N/N_{j-1}}\). Similar statements hold for \(\operatorname{Bun}^{\mathscr{P}_{I}}_{N_{I}/N_{I,j}}\). To show that \(\operatorname{Bun}^{\mathscr{P}_{I}}_{N_{I}}\to\operatorname{Bun}^{\mathscr{P }_{I}}_{N}\) is an isomorphism, it therefore suffices to show that the induced map \(\operatorname{Bun}^{\mathscr{P}_{I}}_{N_{I,j-1}/N_{I,j}}\to\operatorname{Bun}^{ \mathscr{P}_{I}}_{N_{j-1}/N_{j}}\) is an isomorphism. Let \(\mathscr{N}=\mathscr{P}_{I}\times^{T}N\), \(\mathscr{N}_{I}=\mathscr{P}_{I}\times^{T}N_{I}\), etc., so that \(\mathscr{N}_{j-1}/\mathscr{N}_{j}\) is a direct sum of line bundles of degree zero. By choice of \(N_{I}\), the inclusion of the trivial line bundle summands into \(\mathscr{N}_{j-1}/\mathscr{N}_{j}\) factors through the map \(\mathscr{N}_{I,j-1}/\mathscr{N}_{I,j}\to\mathscr{N}_{j-1}/\mathscr{N}_{j}\). The desired isomorphism then follows from the observation that if \(U\) is a vector group with \(\mathbf{G}_{m}\)-action, then \(\operatorname{Bun}^{\mathscr{L}}_{U}\) is a point if \(\mathscr{L}\) is a nontrivial line bundle of degree zero (because then \(\operatorname{H}^{1}(E;U(\mathscr{L}))=0\)).
**Example 4.3.7**.: For instance, suppose that \(I=\emptyset\), so that \(\operatorname{Bun}^{0}_{T}(E)_{\emptyset}\) denotes the open subscheme of \(T\)-bundles of degree zero whose \(\alpha\)-component is nontrivial for every negative root \(\alpha\). The isomorphism \(\widetilde{\operatorname{Bun}}^{\operatorname{ss}}_{G}(E)\cong\operatorname{Bun}^{0}_{B}(E)\) implies that the map \(\widetilde{\operatorname{Bun}}^{\operatorname{ss}}_{G}(E)\to\operatorname{Bun}^{0}_{T}(E)\) is an isomorphism over \(\operatorname{Bun}^{0}_{T}(E)_{\emptyset}\). In particular, every point of \(\operatorname{Bun}^{0}_{T}(E)_{\emptyset}\) has a canonical associated (regular) semistable \(G\)-bundle. The above results continue to hold if \(E\) is replaced by the constant stack \(S^{1}\) or by \(B\mathbf{G}_{a}\)
(in which case \(\widetilde{\operatorname{Bun}}_{G}^{\operatorname{ss}}(E)\) and \(\operatorname{Bun}_{B}^{0}(E)\) are to be interpreted as \(G/G\) and \(B/B\), and \(\mathfrak{g}/G\) and \(\mathfrak{b}/B\), respectively). In the case of \(S^{1}\), for instance, the semistable \(G\)-bundles obtained in this way from \(\operatorname{Bun}_{T}^{0}(E)_{\mathfrak{g}}\) are precisely those which lie in the regular _semisimple_ locus \(G^{\operatorname{rs}}/G\); similarly for the case of \(B\mathbf{G}_{a}\).
### Rationalized Langlands duality over elliptic cohomology
**Definition 4.4.1**.: Let \(\mathbf{G}_{0}\) be a commutative group scheme over a ring \(A_{0}\) (even an \(\mathbf{E}_{\infty}\)-ring, but we will not need this). Let \(\mathbf{G}_{0}^{\vee}\) denote the stack \(\operatorname{Hom}(\mathbf{G}_{0},B\mathbf{G}_{m})\).
**Example 4.4.2**.: If \(\mathbf{G}_{0}=\mathbf{G}_{m}\), then \(\mathbf{G}_{0}^{\vee}=B\mathbf{Z}\), i.e., is \(S^{1}\) viewed as a constant stack. If \(\mathbf{G}_{0}\) is an abelian variety, then \(\mathbf{G}_{0}^{\vee}\) is the dual abelian variety. If \(\mathbf{G}_{0}=\mathbf{Z}\), then \(\mathbf{G}_{0}^{\vee}\) is \(B\mathbf{G}_{m}\). Let \(W\) denote the commutative group scheme over \(\mathbf{Z}_{(p)}\) of \(p\)-typical Witt vectors. Let \(W[F]\) denote the kernel of Frobenius on \(W\). If \(\hat{\mathbf{G}}_{a}\) denotes the formal completion of \(\mathbf{G}_{a}\) at the origin, then \(\hat{\mathbf{G}}_{a}^{\vee}\cong BW[F]\) (over \(\mathbf{Z}_{(p)}\)). Since \(W[F]\cong\mathbf{G}_{a}\) over a field of characteristic zero, there is an isomorphism \(\hat{\mathbf{G}}_{a,\mathbf{Q}}^{\vee}\cong B\mathbf{G}_{a}\).
**Remark 4.4.3**.: In general, there is a canonical map \(\mathbf{G}_{0}\to(\mathbf{G}_{0}^{\vee})^{\vee}\), and the above examples imply that it is an isomorphism if \(\mathbf{G}_{0}\) is a finite product of abelian varieties, classifying stacks of groups of multiplicative type, and finitely generated abelian groups. If this is the case, \(\mathbf{G}_{0}\) is said to be _dualizable_.
**Remark 4.4.4**.: Note that the pairing \(\mathbf{G}_{0}\times\mathbf{G}_{0}^{\vee}\to B\mathbf{G}_{m}\) defines a line bundle over \(\mathbf{G}_{0}\times\mathbf{G}_{0}^{\vee}\), which we will denote by \(\mathscr{P}\) and call the _Poincare line bundle_. If \(\mathbf{G}_{0}\) is an abelian variety, this is the usual Poincare line bundle over \(\mathbf{G}_{0}\times\mathbf{G}_{0}^{\vee}\). If \(\mathbf{G}_{0}=\mathbf{G}_{m}\), the Poincare line bundle gives the equivalence \(\operatorname{Rep}(\mathbf{Z})\simeq\operatorname{QCoh}(\mathbf{G}_{m})\) obtained by viewing \(\mathbf{G}_{m}\) as the torus associated to the monoid \(\mathbf{Z}\).
**Remark 4.4.5**.: If \(\mathbf{G}_{0}\) is a finite flat, diagonal, or constant group scheme (but not an abelian variety!), then \(\mathbf{G}_{0}^{\vee}\) can be identified with the classifying stack of the Cartier dual of \(\mathbf{G}_{0}\). If \(X\) is an \(A_{0}\)-scheme, let \(\mathscr{L}_{\mathbf{G}_{0}}X\) denote the \(\mathbf{G}_{0}\)_-loop space_ of \(X\), given by the mapping stack \(\operatorname{Map}(\mathbf{G}_{0}^{\vee},X)\). Then, if \(\mathbf{G}_{0}\) is replaced by its formal completion at the zero section, the \(\mathbf{G}_{0}\)-loop space recovers the loop space of [11].
**Assumption 4.4.6**.: Fix an isomorphism \(\mathbb{X}^{\bullet}(T)\cong\mathbb{X}_{\bullet}(T)\) of lattices, which will be used implicitly below without further mention. (Note that we are not asking for a \(W\)-equivariant isomorphism, which would not exist in general.) This gives an isomorphism \(\mathscr{M}_{T}\cong\mathscr{M}_{\check{T}}\), which we will use below as an analogue of the identification between \(\check{\mathfrak{t}}=\mathfrak{t}^{*}\) and \(\mathfrak{t}\) (ubiquitous in geometric representation theory). Although potentially confusing, we will see below in the proof of Theorem 4.4.7 that this identification does not run the risk of conflating different sides of Langlands duality.
We will prove the following at the end of the section, after a discussion of some consequences.
**Theorem 4.4.7**.: _Fix a complex-oriented even-periodic \(\mathbf{E}_{\infty}\)-ring \(A\) and an oriented commutative \(A\)-group \(\mathbf{G}\), as well as a semisimple algebraic group \(\check{G}\) over \(\mathbf{Q}\). Assume that the underlying \(\pi_{0}A\)-scheme \(\mathbf{G}_{0}\) is \(\mathbf{G}_{a}\), \(\mathbf{G}_{m}\), or an elliptic curve \(E\). Given a principal nilpotent \(f\in\mathfrak{n}\), there is a "\(\mathbf{G}\)-Kostant slice" \(\kappa:(\mathscr{M}_{T,0})_{\mathbf{Q}}\to\operatorname{Bun}_{\check{B}}( \mathbf{G}_{0,\mathbf{Q}}^{\vee})\)
over \(\pi_{0}A_{\mathbf{Q}}\). If \(\mathrm{Bun}^{0}_{B}(\mathbf{G}^{\vee}_{0,\mathbf{Q}})=\mathrm{Bun}_{\check{B}}( \mathbf{G}^{\vee}_{0,\mathbf{Q}})\times_{\mathrm{Bun}_{T}}\mathscr{M}_{T}\), there is a Cartesian square_
Combining with Theorem 3.2.12, we obtain the following:
**Corollary 4.4.8**.: _Suppose that \(G\) is a connected and simply-connected semisimple algebraic group or a torus over \(\mathbf{C}\). Assume that the underlying \(\pi_{0}A\)-scheme \(\mathbf{G}_{0}\) is \(\mathbf{G}_{a}\), \(\mathbf{G}_{m}\), or an elliptic curve \(E\). Then there is an equivalence_
\[\mathrm{End}_{\mathrm{QCoh}(\mathrm{Bun}^{0}_{B}(\mathbf{G}^{\vee}_{0,\mathbf{ Q}}))}(\mathrm{QCoh}((\mathscr{M}_{T,0})_{\mathbf{Q}}))\simeq\mathrm{Mod}_{\pi_{0} \mathscr{F}_{T}(\mathrm{Gr}_{G}(\mathbf{C}))^{\vee}}(\mathrm{QCoh}(\mathscr{M} _{T,0}))\otimes\mathbf{Q},\]
_where \(\mathrm{QCoh}((\mathscr{M}_{T,0})_{\mathbf{Q}})\) is regarded as a \(\mathrm{QCoh}(\mathrm{Bun}^{0}_{\check{B}}(\mathbf{G}^{\vee}_{0,\mathbf{Q}}))\)-module via \(\kappa\)._
**Example 4.4.9**.: For example, if \(\mathbf{G}=\hat{\mathbf{G}}_{a}\), then \(\mathbf{G}^{\vee}_{0}=BW[F]\). Therefore, \(\mathbf{G}^{\vee}_{0,\mathbf{Q}}=B\mathbf{G}_{a}\), and \(\mathrm{Bun}^{0}_{\check{B}}(\mathbf{G}^{\vee}_{0,\mathbf{Q}})=\check{\mathfrak{b}}_{\mathbf{Q}}/\check{B}_{\mathbf{Q}}\cong\widetilde{\check{\mathfrak{g}}}_{\mathbf{Q}}/\check{G}_{\mathbf{Q}}\) by [11, Theorem 1.2.4]. In particular, Theorem 4.4.7 was proved above in this case as Theorem 4.1.2. If \(\mathbf{G}=\mathbf{G}_{m}\), then \(\mathbf{G}^{\vee}_{0,\mathbf{Q}}=B\mathbf{Z}=S^{1}\), so that \(\mathrm{Bun}^{0}_{\check{B}}(\mathbf{G}^{\vee}_{0,\mathbf{Q}})=\mathrm{Map}(S^{1}_{\mathrm{KU}_{\mathbf{Q}}},B\check{B}_{\mathrm{KU}_{\mathbf{Q}}})\) is isomorphic to the \(2\)-periodification of \(\check{B}_{\mathbf{Q}}/\check{B}_{\mathbf{Q}}\). In particular, Theorem 4.4.7 was proved above in this case as Theorem 4.2.5. If \(\mathbf{G}_{0}\) is an elliptic curve \(E\), then \(\mathbf{G}^{\vee}_{0}=E^{\vee}\), so that \(\mathrm{Bun}^{0}_{\check{B}}(\mathbf{G}^{\vee}_{0})=\mathrm{Bun}^{0}_{\check{B}}(E^{\vee})\). Theorem 4.4.7 in this case will be proved below.
We also obtain a proof of Theorem 1.1.10 (which we restate for convenience):
**Corollary** (Theorem 1.1.10).: _Suppose that \(G\) is a connected and simply-connected semisimple algebraic group or a torus over \(\mathbf{C}\), and let \(T\) act on \(G\) by conjugation. Let \(G_{c}\) denote the maximal compact subgroup of \(G(\mathbf{C})\). Fix a complex-oriented even-periodic \(\mathbf{E}_{\infty}\)-ring \(A\), and let \(\mathbf{G}\) be an oriented group scheme in the sense of [10]. Assume that the underlying \(\pi_{0}A\)-scheme \(\mathbf{G}_{0}\) is \(\mathbf{G}_{a}\), \(\mathbf{G}_{m}\), or an elliptic curve \(E\). Then there is an equivalence of \(\pi_{0}A_{\mathbf{Q}}\)-linear \(\infty\)-categories:_
\[\mathrm{Loc}^{\mathrm{gr}}_{T_{c}}(G_{c};A)\otimes\mathbf{Q}\simeq\mathrm{ QCoh}((\mathscr{M}_{T,0})_{\mathbf{Q}}\times_{\mathrm{Bun}^{0}_{B}(\mathbf{G}^{ \vee}_{0,\mathbf{Q}})}(\mathscr{M}_{T,0})_{\mathbf{Q}}).\]
Proof.: Note that \(G_{c}\) is connected. By Notation 2.3.6, there is an equivalence \(\mathrm{Loc}^{\mathrm{gr}}_{T_{c}}(G_{c};A)\simeq\mathrm{LMod}_{\pi_{0}\mathscr{ F}_{T}(\Omega G_{c})^{\vee}}(\mathrm{QCoh}(\mathscr{M}_{T,0}))\), so the claim follows from Corollary 4.4.8.
**Remark 4.4.10**.: If \(A=\mathbf{Q}[\beta^{\pm 1}]\), the equivalence resulting from Theorem 1.1.10 is an equivalence of \(2\)-periodic \(\mathbf{Q}\)-linear \(\infty\)-categories. However, the equivalence can be de-periodified, and one obtains an equivalence
\[\mathrm{Loc}_{T_{c}}(G_{c};\mathbf{Q})\simeq\mathrm{QCoh}(\check{\mathfrak{t}}[ 2]_{\mathbf{Q}}\times_{\tilde{\mathfrak{t}}[2]_{\mathbf{Q}}/\check{G}_{ \mathbf{Q}}}\check{\mathfrak{t}}[2]_{\mathbf{Q}}).\]
There is also a \(G_{c}\)-equivariant analogue:
\[\mathrm{Loc}_{G_{c}}(G_{c};\mathbf{Q})\simeq\mathrm{QCoh}(\check{\mathfrak{t}}[ 2]_{\mathbf{Q}}/W\times_{\tilde{\mathfrak{t}}[2]_{\mathbf{Q}}/\check{G}_{ \mathbf{Q}}}\check{\mathfrak{t}}[2]_{\mathbf{Q}}/W).\]
This equivalence can be de-equivariantized, to obtain an equivalence
\[\mathrm{Loc}(G_{c};\mathbf{Q})\simeq\mathrm{QCoh}(Z_{f}(\check{B})),\]
where \(f\in\tilde{\mathfrak{g}}\) is the image of the origin in \(\check{\mathfrak{t}}/\!\!/W\) under the Kostant slice, and \(Z_{f}(\check{B})\) is a shifted analogue of the centralizer of \(f\) in \(\check{B}\). Note that \(T^{*}G_{c}=G(\mathbf{C})\), so that the left-hand side can be interpreted as a relative of the \(\mathbf{Q}\)-linearization of the wrapped Fukaya category of \(T^{*}G_{c}\) by [10, Theorem 1.1]. In particular, this shifted analogue of \(Z_{f}(\check{B})\) is a (derived) mirror to \(G(\mathbf{C})\) viewed as a symplectic manifold.
**Remark 4.4.11**.: The proof of Theorem 1.1.10 above uses the Koszul duality equivalence \(\operatorname{Loc}_{T_{c}}(G_{c};A)\simeq\operatorname{LMod}_{\mathscr{F}_{T}(\Omega G_{c})^{\vee}}(\operatorname{QCoh}(\mathscr{M}_{T,0}))\) of Proposition 2.2.6. The category \(\operatorname{LMod}_{\mathscr{F}_{T}(\Omega G_{c})^{\vee}}(\operatorname{QCoh}(\mathscr{M}_{T,0}))\) (and hence the right-hand side of Theorem 1.1.10) admits a "quantization" parametrized by \(\mathbf{G}\), given by \(\operatorname{LMod}_{\mathscr{F}_{\widetilde{T}}(\Omega G_{c})^{\vee}}(\operatorname{QCoh}(\mathscr{M}_{\widetilde{T}}))\). For instance, if \(A=\mathbf{Q}[\beta^{\pm 1}]\), the right-hand side of Theorem 1.1.10 quantizes to \(\operatorname{End}_{\tilde{\mathscr{O}}^{\mathrm{univ}}_{\hbar}}(\operatorname{QCoh}(\widetilde{\mathfrak{t}}^{*}))\); and if \(A=\operatorname{KU}\), the right-hand side of Theorem 1.1.10 quantizes to \(\operatorname{End}_{\tilde{\mathscr{O}}^{\mathrm{univ}}_{q}}(\operatorname{QCoh}(\widetilde{T}))\). It follows from this discussion that the \(\infty\)-category \(\operatorname{Loc}_{T_{c}}(G_{c};A)\) must itself admit a quantization. We have seen a quantization of this form above in Remark 3.3.7.
In fact, Theorem 4.1.11 and Conjecture 4.2.9 suggest that \(\operatorname{LMod}_{\mathscr{F}_{\widetilde{T}}(\operatorname{Gr}_{G}(\mathbf{C}))^{\vee}}(\operatorname{QCoh}(\mathscr{M}_{\widetilde{T}}))\otimes\mathbf{Q}\) should be viewed as \(\operatorname{End}_{\mathscr{O}_{\mathbf{G}}}(\operatorname{QCoh}(\mathscr{M}_{\widetilde{T}})\otimes\mathbf{Q})\) for some \(A_{\mathbf{Q}}\)-linear \(\infty\)-category \(\mathscr{O}_{\mathbf{G}}\) which is a \(1\)-parameter deformation of \(\operatorname{QCoh}(\operatorname{Bun}_{\check{B}}(\mathbf{G}^{\vee}_{0,\mathbf{Q}}))\). The coordinate on the group scheme \(\mathbf{G}\) defines a "quantization parameter" (i.e., the analogue of \(\hbar\) and \(q\)). This putative \(\infty\)-category \(\mathscr{O}_{\mathbf{G}}\) would be an analogue of the (quantum) universal category \(\mathscr{O}\). We do not know how to define such an \(\infty\)-category \(\mathscr{O}_{\mathbf{G}}\) at the moment; however, in future work, we plan to use the results of [11] to study an "\(F\)-deformation" of \(U(\mathfrak{g})\) for certain formal group laws \(F(x,y)\) (at least for \(G=\operatorname{SL}_{2},\operatorname{PGL}_{2}\)). When \(F\) is the multiplicative formal group, this \(F\)-deformation of \(U(\mathfrak{g})\) recovers the quantum enveloping algebra \(U_{q}(\mathfrak{g})\). We hope that further study of such deformations will point to a good definition of the putative \(\infty\)-category \(\mathscr{O}_{\mathbf{G}}\).
**Remark 4.4.12**.: It is natural to ask for an explicit description of the \(1\)-parameter deformation of \(\operatorname{Loc}_{T_{c}}(G_{c};A)\) over \(\mathbf{G}\) from Remark 4.4.11 (i.e., not in terms of the framed \(\mathbf{E}_{2}\)-structure on \(\Omega G_{c}=\Omega^{2}BG_{c}\)). To describe this, let us view \(\operatorname{Loc}_{T_{c}}(G_{c};A)\) as the \(\infty\)-category of local systems on the orbifold \(G_{c}/_{\text{ad}}T_{c}\). We now need the following:
**Lemma 4.4.13**.: _The orbifold \(G_{c}/_{\text{ad}}T_{c}\) is isomorphic to the moduli stack \(\operatorname{Conn}(S^{1};\mathfrak{g})^{\text{lev}}\) of \(\mathfrak{g}\)-valued smooth connections on \(S^{1}\) equipped with a level structure given by a \(T_{c}\)-reduction at \(\{1\}\in S^{1}\), taken modulo gauge transformations._
Proof.: Write \(G_{c}/_{\text{ad}}T_{c}\simeq*/T_{c}\times_{*/G_{c}}G_{c}/_{\text{ad}}G_{c}\). There is an equivalence
\[G_{c}/_{\text{ad}}G_{c}\simeq*/G_{c}\times_{*/G_{c}\times*/G_{c}}*/G_{c}\]
which exhibits \(G_{c}/_{\text{ad}}G_{c}\) as the free loop space \(\mathscr{L}(*/G_{c})\cong*/\mathscr{L}G_{c}\) in the \(\infty\)-category of orbifolds. To see this, note that \(G_{c}/G_{c}\simeq G_{c}\backslash(G_{c}\times G_{c})/G_{c}\), where \(G_{c}\times G_{c}\) acts on \(G_{c}\times G_{c}\) via
\[(g_{1},g_{2}):(h_{1},h_{2})\mapsto(g_{1}h_{1}g_{2}^{-1},g_{1}h_{2}g_{2}^{-1}).\]
In any case, the above equivalence implies that \(G_{c}/_{\text{ad}}G_{c}\) is isomorphic to the moduli stack \(\operatorname{Conn}(S^{1};\mathfrak{g})/\mathscr{L}G_{c}\), where \(\operatorname{Conn}(S^{1};\mathfrak{g})\) is the moduli space of smooth connections on \(S^{1}\) valued in \(\mathfrak{g}\); see [12, Section 15.1]. This implies the desired claim.
One natural way to quantize \(\operatorname{Loc}_{T_{c}}(G_{c};A)\) is therefore to consider the \(\infty\)-category of "\(S^{1}_{\operatorname{rot}}\ltimes\mathscr{L}G_{c}\)-equivariant \(A\)-valued local systems on \(\operatorname{Conn}(S^{1};\mathfrak{g})^{\operatorname{lev}}\)"; this is a module over \(\operatorname{Loc}_{S^{1}_{\operatorname{rot}}}(*;A)\simeq\operatorname{QCoh}( \mathbf{G})\), and its fiber over the zero section of \(\mathbf{G}\) is \(\operatorname{Loc}_{T_{c}}(G_{c};A)\) itself. However, defining this \(\infty\)-category precisely requires additional effort, since \(S^{1}_{\operatorname{rot}}\ltimes\mathscr{L}G_{c}\) is not a compact group.
Let us now turn to the proof of Theorem 4.4.7; by Example 4.4.9, we only need to consider the case when \(\mathbf{G}\) is a (smooth) elliptic curve \(E\). Since we are working on one side of Langlands duality, we now drop the "check".
Proof of Theorem 4.4.7.: We will work over \(\mathbf{Q}\), and omit it from the notation. Write \(X\) to denote the fiber product in Theorem 4.4.7, so that our goal is to identify \(X\) with \((T^{*}_{\mathbf{G}}\tilde{T})^{\operatorname{bl}}\). (The reader should keep in mind Assumption 4.4.6.) The argument of [1, Section 4.3] can be used to reduce to the case when \(\check{G}\) has semisimple rank 1.
Namely, first note that both \(X\) and \((T^{*}_{\mathbf{G}}\tilde{T})^{\operatorname{bl}}\) are flat over \(\mathscr{M}_{T}\): the only non-trivial case is \((T^{*}_{\mathbf{G}}\tilde{T})^{\operatorname{bl}}\), in which case this follows from [1, Claim in Lemma 4.1]. Let \(\mathscr{M}_{T}^{\circ}\hookrightarrow\mathscr{M}_{T}\) denote the open immersion given by the complement of the union of the divisors \(\mathscr{M}_{T_{\alpha}}\hookrightarrow\mathscr{M}_{T}\) for \(\alpha\in\Phi\). Upon localizing to \(\mathscr{M}_{T}^{\circ}\), both \(X\) and \((T^{*}_{\mathbf{G}}\tilde{T})^{\operatorname{bl}}\) are isomorphic to \(\tilde{T}\times\mathscr{M}_{T}^{\circ}\). Let \(\mathscr{M}_{T}^{\bullet}\) denote the complement of the union of all pairwise intersections of the divisors \(\mathscr{M}_{T_{\alpha}}\hookrightarrow\mathscr{M}_{T}\) for \(\alpha\in\Phi\). Then \(\mathscr{M}_{T}-\mathscr{M}_{T}^{\bullet}\hookrightarrow\mathscr{M}_{T}\) is of codimension \(\geq 2\). It therefore suffices to show (by flatness of both \(X\) and \((T^{*}_{\mathbf{G}}\tilde{T})^{\operatorname{bl}}\) over the normal irreducible scheme \(\mathscr{M}_{T}\)) that the isomorphism \(X|_{\mathscr{M}_{T}^{\circ}}\cong(T^{*}_{\mathbf{G}}\tilde{T})^{\operatorname {bl}}|_{\mathscr{M}_{T}^{\circ}}\) extends across the codimension 1 points of \(\mathscr{M}_{T}-\mathscr{M}_{T}^{\circ}\) (i.e., points of \(\mathscr{M}_{T}^{\bullet}-\mathscr{M}_{T}^{\circ}\)).
If \(y\) is a codimension 1 point of \(\mathscr{M}_{T}\) which lies on the divisor \(\mathscr{M}_{T_{\alpha}}\hookrightarrow\mathscr{M}_{T}\) for some \(\alpha\in\Phi\), let \(Z_{\alpha}(y)\subseteq\check{G}\) denote the reductive subgroup of \(\check{G}\) containing \(\tilde{T}\) and whose nonzero roots are \(\pm\alpha\). This is a connected Levi subgroup of semisimple rank 1. It is easy to see that the localization \((T^{*}_{\mathbf{G}}\tilde{T})^{\operatorname{bl}}_{y}\) depends only on \(Z_{\alpha}(y)\). Let \(\check{B}_{\alpha}^{-}\subseteq\check{B}\) denote the Borel subgroup of \(Z_{\alpha}(y)\) determined by \(\check{B}\). Lemma 4.3.6 with \(I=\{\alpha\}\) implies that the induced map from \((\mathscr{M}_{T,0})_{\mathbf{Q}}\times_{\operatorname{Bun}^{0}_{\check{B}_{ \alpha}^{-}}(E)}(\mathscr{M}_{T,0})_{\mathbf{Q}}\) to \((\mathscr{M}_{T,0})_{\mathbf{Q}}\times_{\operatorname{Bun}^{0}_{\check{B}}(E)} (\mathscr{M}_{T,0})_{\mathbf{Q}}\) defines an isomorphism upon localizing at \(y\). In particular, the localization \(X_{y}\) also depends only on \(Z_{\alpha}(y)\).
We are now reduced to the case when \(\check{G}\) has semisimple rank 1. Every split reductive group of semisimple rank 1 is isomorphic to the product of a split torus with \(\operatorname{SL}_{2}\), \(\operatorname{PGL}_{2}\), or \(\operatorname{GL}_{2}\). Let us illustrate the calculation when \(\check{G}=\operatorname{PGL}_{2}\). The cases \(\check{G}=\operatorname{SL}_{2},\operatorname{GL}_{2}\), and products of tori with these groups can be addressed similarly. For notational convenience, we will drop the "check"s and write \(B\) instead of \(\check{B}\), etc.; also note that since \(T\) is of rank 1, we may identify \(\mathscr{M}_{T}\cong\mathbf{G}\). Let \(\mathscr{V}\) denote the unique indecomposable rank 2 "Atiyah bundle" over \(E^{\vee}\times\mathbf{G}_{0}\); this is an extension of the structure sheaf by the Poincare line bundle \(\mathscr{P}\), which is specified by a nonzero section of \(\operatorname{H}^{1}(E^{\vee}\times\mathbf{G}_{0};\mathscr{P})\cong k\). The bundle \(\mathscr{V}\) sits in a short exact sequence
\[0\to\mathscr{P}\to\mathscr{V}\to\mathscr{O}_{E^{\vee}\times\mathbf{G}_{0}}\to 0.\]
Any fixed basepoint \(p_{0}\in E^{\vee}\) defines an isomorphism \(E^{\vee}\cong\mathbf{G}_{0}\), and allows us to identify \(\mathscr{P}\) with the line bundle on \(E^{\vee}\times E^{\vee}\) corresponding to the divisor \(\Delta-E^{\vee}\times\{p_{0}\}-\{p_{0}\}\times E^{\vee}\), where \(\Delta\) is the diagonal. In particular, \(\mathscr{P}|_{E^{\vee}\times\{x\}}\cong\mathscr{O}_{E^{\vee}}(x-p_{0})\), and is therefore only trivial when \(x=p_{0}\). The fiber of \(\mathscr{V}\) over \(E^{\vee}\times\{x\}\)
is specified by a nonzero element of \(\operatorname{Ext}^{1}_{E^{\vee}}(\mathscr{O},\mathscr{O}(x-p_{0}))\); but if \(\mathscr{L}\) is a nontrivial line bundle of degree \(0\), then \(\operatorname{H}^{1}(E^{\vee};\mathscr{L})=0\). This implies that the map \(\kappa:\mathbf{G}_{0}\to\operatorname{Bun}_{B}(E^{\vee})\) sends a degree \(0\) line bundle \(\mathscr{L}\) on \(E^{\vee}\) to the trivial extension \(\mathscr{O}_{E^{\vee}}\subseteq\mathscr{O}_{E^{\vee}}\oplus\mathscr{L}\) if \(\mathscr{L}\not\cong\mathscr{O}_{E^{\vee}}\), and to the Atiyah extension \(\mathscr{O}_{E^{\vee}}\subseteq\mathscr{F}_{2}\) if \(\mathscr{L}\cong\mathscr{O}_{E^{\vee}}\).
We need to understand \(\operatorname{Aut}_{B}(\{\mathscr{P}\subseteq\mathscr{V}\})\). If \(\mathscr{L}\) is a nontrivial line bundle on \(E^{\vee}\), then \(\mathscr{L}\) has no sections, so \(\operatorname{Aut}_{B}(\{\mathscr{O}_{E^{\vee}}\subseteq\mathscr{O}_{E^{\vee} }\oplus\mathscr{L}\})\cong\mathbf{G}_{m}\). On the other hand, the algebra \(\operatorname{End}(\mathscr{F}_{2})\) of endomorphisms of \(\mathscr{F}_{2}\) as a rank \(2\) vector bundle is isomorphic to \(k[\epsilon]/\epsilon^{2}\) as an algebra; the element \(\epsilon\) acts as the composite \(\mathscr{F}_{2}\twoheadrightarrow\mathscr{O}_{E^{\vee}}\hookrightarrow\mathscr{ F}_{2}\). In particular, the group scheme \(\operatorname{Aut}(\mathscr{F}_{2})\) of automorphisms of \(\mathscr{F}_{2}\) as a rank \(2\) vector bundle is \((k[\epsilon]/\epsilon^{2})^{\times}\). An automorphism of \(\mathscr{F}_{2}\) preserving the flag \(\mathscr{O}_{E^{\vee}}\subseteq\mathscr{F}_{2}\) is defined by a matrix \(\left(\begin{smallmatrix}x&y\\ 0&z\end{smallmatrix}\right)\), where \(x,y,z\in\operatorname{Hom}(\mathscr{O}_{E^{\vee}},\mathscr{O}_{E^{\vee}})\). In order for two maps \(x,z:\mathscr{O}_{E^{\vee}}\to\mathscr{O}_{E^{\vee}}\) to define an automorphism of \(\mathscr{F}_{2}\), we need \(x=z\). Since we are only calculating the automorphisms of \(\mathscr{F}_{2}\) as a \(\operatorname{PGL}_{2}\)-bundle, the factor \(x=z\) can be scaled out, and we find that \(\operatorname{Aut}_{B}(\{\mathscr{O}_{E^{\vee}}\subseteq\mathscr{F}_{2}\}) \cong\mathbf{G}_{a}\). The fiber of the map \(\mathbf{G}_{0}\times_{\operatorname{Bun}_{B}(E^{\vee})}\mathbf{G}_{0}\to \mathbf{G}_{0}\) over \(\mathscr{L}\in\mathbf{G}_{0}\) is therefore \(\mathbf{G}_{m}\) if \(\mathscr{L}\not\cong\mathscr{O}_{E^{\vee}}\) (i.e., away from the zero section), which degenerates to \(\mathbf{A}^{1}\) over the zero section corresponding to \(\mathscr{L}=\mathscr{O}_{E^{\vee}}\).
(In the case \(\check{G}=\operatorname{SL}_{2}\), the same argument shows that the fiber of the map \(\mathbf{G}_{0}\times_{\operatorname{Bun}_{B}(E^{\vee})}\mathbf{G}_{0}\to\mathbf{G}_{0}\) is still \(\mathbf{G}_{m}\) if \(\mathscr{L}^{2}\) is not trivial, but the fiber over any point \(\mathscr{L}\in\mathbf{G}_{0}[2]\) is instead \(\mathbf{G}_{a}\times\mu_{2}\). Indeed, the image of \(\mathscr{L}\in\mathbf{G}_{0}[2]\) under the Kostant slice \(\mathbf{G}_{0}\to\operatorname{Bun}_{B}(E^{\vee})\) is the nontrivial extension
\[0\to\mathscr{L}\to\mathscr{L}\otimes\mathscr{F}_{2}\to\mathscr{L}^{-1}\to 0.\]
Note that the subgroup of \(\check{B}\subseteq\operatorname{SL}_{2}\) given by \(\operatorname{Aut}_{\check{B}}(\{\mathscr{L}\subseteq\mathscr{F}_{2}\otimes \mathscr{L}\})\) is of the form \(\left(\begin{smallmatrix}x&y\\ 0&z\end{smallmatrix}\right)\), where \(x\in\operatorname{Hom}(\mathscr{L},\mathscr{L})\), \(y\in\operatorname{Hom}(\mathscr{L}^{-1},\mathscr{L})\), and \(z\in\operatorname{Hom}(\mathscr{L}^{-1},\mathscr{L}^{-1})\). Not every such matrix defines an automorphism of \(\mathscr{F}_{2}\otimes\mathscr{L}\); for instance, in order for two maps \(x:\mathscr{L}\to\mathscr{L}\) and \(z:\mathscr{L}^{-1}\to\mathscr{L}^{-1}\) to define an automorphism of \(\mathscr{F}_{2}\otimes\mathscr{L}\), we need \(x=z\otimes\mathscr{L}^{2}=z\). In order for the resulting matrix \(\left(\begin{smallmatrix}x&y\\ 0&z\end{smallmatrix}\right)\) to preserve the trivialization of \(\det(\mathscr{V}\otimes\mathscr{L})\), we need \(x^{2}=1\); the function \(y\) can be arbitrary. This discussion implies that \(\operatorname{Aut}_{\check{B}}(\{\mathscr{L}\subseteq\mathscr{F}_{2}\otimes \mathscr{L}\})\cong\mu_{2}\times\mathbf{G}_{a}\), where the \(\mu_{2}\) encodes \(x\), and \(\mathbf{G}_{a}\) encodes \(y\).)
The intersection \(\mathbf{G}_{0}\times_{\operatorname{Bun}_{B}(E^{\vee})}\mathbf{G}_{0}\) consists of \(\mathscr{L},\mathscr{L}^{\prime}\in\mathbf{G}_{0}\) equipped with an isomorphism \(\kappa(\mathscr{L})\cong\kappa(\mathscr{L}^{\prime})\) of \(B\)-bundles over \(E^{\vee}\) (which in particular forces \(\mathscr{L}\cong\mathscr{L}^{\prime}\)). In fact, the discussion above can be used to conclude that \(\mathbf{G}_{0}\times_{\operatorname{Bun}_{B}(E^{\vee})}\mathbf{G}_{0}\) is isomorphic to an affine blowup of \(\mathbf{G}_{0}\times\mathbf{G}_{m}\), defined as the complement \(U\) of the proper preimage of the zero section of \(\mathbf{G}_{0}\) inside the blowup \(\mathfrak{B}\) of \(\mathbf{G}_{0}\times\mathbf{G}_{m}\) at the union of the zero sections of \(\mathbf{G}_{0}\) and \(\mathbf{G}_{m}\). (In the case \(\check{G}=\operatorname{SL}_{2}\), the fiber product \(\mathbf{G}_{0}\times_{\operatorname{Bun}_{B}(E^{\vee})}\mathbf{G}_{0}\) is isomorphic to an affine blowup of \(\mathbf{G}_{0}\times\mathbf{G}_{m}\), defined as the complement \(U\) of the proper preimage of the \(2\)-torsion \(\mathbf{G}_{0}[2]\subseteq\mathbf{G}_{0}\) inside the blowup \(\mathfrak{B}\) of \(\mathbf{G}_{0}\times\mathbf{G}_{m}\) at the union of the \(2\)-torsion sections \(\mathbf{G}_{0}[2]\subseteq\mathbf{G}_{0}\) and \(\mu_{2}\subseteq\mathbf{G}_{m}\).) But \(U\subseteq\mathfrak{B}\) is precisely the affine blowup \((T^{*}_{\mathbf{G}}T)^{\operatorname{bl}}\), as desired.
**Remark 4.4.14**.: The most classical instantiation of the Atiyah bundle is via the Weierstrass functions. The \(\mathbf{G}_{a}\)-torsor \(\mathscr{A}\) over \(E\) associated to \(\mathscr{V}\) is the complement of the section at \(\infty\) of the projective line \(\mathbf{P}(\mathscr{V})\). If we work complex-analytically, \(E^{\operatorname{an}}\) can be identified as the quotient \(\mathbf{C}/\Lambda\) for some rank \(2\) lattice \(\Lambda\subseteq\mathbf{C}\). Associated
to \(\Lambda\) are two Weierstrass functions defined on \(\mathbf{C}\):
\[\wp(z;\Lambda) =\frac{1}{z^{2}}+\sum_{\lambda\in\Lambda-\{0\}}\left(\frac{1}{(z- \lambda)^{2}}-\frac{1}{\lambda^{2}}\right),\] \[\zeta(z;\Lambda) =\frac{1}{z}+\sum_{\lambda\in\Lambda-\{0\}}\left(\frac{1}{z- \lambda}+\frac{1}{\lambda}+\frac{z}{\lambda^{2}}\right).\]
Note that \(\wp(z;\Lambda)\) is doubly-periodic, i.e., \(\wp(z+\lambda;\Lambda)=\wp(z;\Lambda)\) for any \(\lambda\in\Lambda\). Alternatively, \(\wp\) defines a meromorphic function on \(\mathbf{C}\) (with poles along \(\Lambda\)) which descends to a meromorphic function on \(\mathbf{C}/\Lambda=E^{\mathrm{an}}\).
Although \(\zeta(z;\Lambda)\) is not doubly-periodic, an easy calculation shows that \(\wp(z;\Lambda)=-\partial_{z}\zeta(z;\Lambda)\); so if \(\lambda\in\Lambda\), then \(\zeta(z+\lambda;\Lambda)-\zeta(z;\Lambda)=c(\lambda)\) for some constant \(c(\lambda)\). The function \(\lambda\mapsto c(\lambda)\) is evidently additive, and defines a homomorphism \(\Lambda\to\mathbf{C}\), which defines a \(\mathbf{C}\)-bundle over \(E^{\mathrm{an}}=\mathbf{C}/\Lambda\). This \(\mathbf{C}\)-bundle is precisely the analytification \(\mathscr{A}^{\mathrm{an}}\) of the \(\mathbf{G}_{a}\)-torsor \(\mathscr{A}\). It follows that although \(\zeta\) is not defined on \(E^{\mathrm{an}}\), the torsor \(\mathscr{A}^{\mathrm{an}}\) is the universal space over \(E^{\mathrm{an}}\) on which \(\zeta\) is defined.
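The identity \(\wp(z;\Lambda)=-\partial_{z}\zeta(z;\Lambda)\) holds term by term in the defining series, so it can be checked mechanically on any finite truncation of the lattice sums. The following sketch is ours (it is not part of the original discussion); it uses the `sympy` library and an illustrative square lattice \(\Lambda=\mathbf{Z}+\mathbf{Z}i\):

```python
# Minimal sympy sketch: on a truncated lattice, the partial sum defining zeta(z; Lambda)
# differentiates to minus the partial sum defining wp(z; Lambda), term by term.
import sympy as sp

z = sp.symbols('z')
N = 2  # truncation radius; the identity holds for every N, since it holds termwise
lattice = [m + n*sp.I                      # square lattice Z + Z*i, origin removed
           for m in range(-N, N + 1)
           for n in range(-N, N + 1)
           if (m, n) != (0, 0)]

zeta_N = 1/z + sum(1/(z - l) + 1/l + z/l**2 for l in lattice)
wp_N = 1/z**2 + sum(1/(z - l)**2 - 1/l**2 for l in lattice)

# wp_N = -d/dz zeta_N holds exactly for the truncated sums:
assert sp.simplify(sp.diff(zeta_N, z) + wp_N) == 0
```

Passing to the limit over \(N\) (the sums converge absolutely away from \(\Lambda\) once the displayed regularizing terms are included) recovers the identity for \(\wp\) and \(\zeta\) themselves.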
This discussion also describes the total space of the rank \(2\)-bundle \(\mathscr{V}^{\mathrm{an}}\) purely analytically. For instance, if \(q\in\mathbf{C}^{\times}\) is a unit complex number of modulus \(<1\), we can identify \(\mathrm{Tot}(\mathscr{V}^{\mathrm{an}})\) over the Tate curve \(\mathbf{C}^{\times}/q^{\mathbf{Z}}\) with the quotient
\[\mathrm{Tot}(\mathscr{V}^{\mathrm{an}})=\left(\mathbf{C}^{\times}\times \mathbf{C}^{2}\right)/\left((z,x)\sim(qz,(\begin{smallmatrix}1&1\\ 0&1\end{smallmatrix})x)\right).\]
### Putting it together
We will now explore one corollary of Corollary 4.4.8.
**Setup 4.5.1**.: Let \(\mathrm{Pr}^{\mathrm{L,st}}\) be the \(\infty\)-category of compactly generated presentable \(\infty\)-categories and colimit-preserving functors which preserve compact objects. Let \(\mathscr{C}\in\mathrm{CAlg}(\mathrm{Pr}^{\mathrm{L,st}})\), and let \(\mathscr{D}\in\mathrm{CAlg}(\mathrm{LMod}_{\mathscr{C}}(\mathrm{Pr}^{\mathrm{ L,st}}))\) whose underlying object of \(\mathrm{LMod}_{\mathscr{C}}(\mathrm{Pr}^{\mathrm{L,st}})\) is dualizable. The unit map \(i^{*}:\mathscr{C}\to\mathscr{D}\) defines a symmetric monoidal functor \(i^{\prime}{}^{*}:\mathscr{D}\simeq\mathscr{D}\otimes_{\mathscr{C}}\mathscr{C }\to\mathscr{D}\otimes_{\mathscr{C}}\mathscr{D}\), and if \(i_{*}:\mathscr{D}\to\mathscr{C}\) denotes the right adjoint to \(i^{*}\), the following diagram commutes:
**Proposition 4.5.2**.: _In Setup 4.5.1, there is a fully faithful colimit-preserving functor \(\mathrm{Tot}(\mathscr{D}^{\otimes_{\mathscr{C}}\bullet+1})\hookrightarrow\mathscr{C}\); we will denote its essential image by \(\mathscr{C}^{\wedge}_{\mathscr{D}}\)._
Proof.: The assumptions in Setup 4.5.1 imply that the augmented cosimplicial diagram \(N(\mathbf{\Delta}_{+})\to\mathrm{Cat}_{\infty}\) given by \(\mathscr{D}^{\otimes_{\mathscr{C}}\bullet+1}\) satisfies the assumptions of [1, Corollary 4.7.5.3]. Therefore, the functor \(\mathscr{C}\to\mathrm{Tot}(\mathscr{D}^{\otimes_{\mathscr{C}}\bullet+1})\) has a fully faithful left adjoint, as desired.
**Observation 4.5.3**.: Regard \(\mathrm{QCoh}((\mathscr{M}_{T,0})_{\mathbf{Q}})\) as a \(\mathrm{QCoh}(\mathrm{Bun}^{0}_{\tilde{B}}(\mathbf{G}^{\vee}_{0,\mathbf{Q}}))\)-algebra via \(\kappa:(\mathscr{M}_{T,0})_{\mathbf{Q}}\to\mathrm{Bun}^{0}_{\tilde{B}}(\mathbf{ G}^{\vee}_{0,\mathbf{Q}})\). Then the completion \(\mathrm{QCoh}(\mathrm{Bun}^{0}_{\tilde{B}}(\mathbf{G}^{\vee}_{0,\mathbf{Q}}))^{ \wedge}_{\mathrm{QCoh}((\mathscr{M}_{T,0})_{\mathbf{Q}})}\) of Proposition 4.5.2 with respect to \(\mathrm{QCoh}((\mathscr{M}_{T,0})_{\mathbf{Q}})\) can be identified with \(\mathrm{QCoh}(\mathrm{Bun}^{0}_{\tilde{B}}(\mathbf{G}^{\vee}_{0,\mathbf{Q}})^{ \mathrm{reg}})\).
**Example 4.5.4**.: When \(A\) is an \(\mathbf{E}_{\infty}\)-\(\mathbf{Q}[\beta^{\pm 1}]\)-algebra and \(\mathbf{G}=\hat{\mathbf{G}}_{a}\), the Koszul duality equivalence of Lemma 4.1.4 gives \(\mathrm{QCoh}(\mathrm{Bun}^{0}_{\tilde{B}}(\mathbf{G}^{\vee}_{0,\mathbf{Q}}))\simeq \mathrm{IndCoh}((\widetilde{\mathscr{N}}\times_{\hat{\mathfrak{g}}}\{0\})/ \check{G})\); we define \(\mathrm{IndCoh}((\widetilde{\mathscr{N}}\times_{\hat{\mathfrak{g}}}\{0\})/ \check{G})_{\mathrm{Kost}}\) to be the essential image of \(\mathrm{QCoh}(\mathrm{Bun}^{0}_{\tilde{B}}(\mathbf{G}^{\vee}_{0,\mathbf{Q}})^{ \mathrm{reg}})\)
under this equivalence. We remark that in this case, \(\mathrm{QCoh}(\mathrm{Bun}^{0}_{\tilde{B}}(\mathbf{G}^{\vee}_{0,\mathbf{Q}})^{ \mathrm{reg}})\simeq\mathrm{QCoh}(\widetilde{\tilde{\mathfrak{g}}}^{\mathrm{reg} }/\check{G})\). Similarly, if \(A=\mathrm{KU}\) and \(\mathbf{G}=\mathbf{G}_{m}\), then \(\mathrm{QCoh}(\mathrm{Bun}^{0}_{\tilde{B}}(\mathbf{G}^{\vee}_{0,\mathbf{Q}})^{ \mathrm{reg}})\simeq\mathrm{QCoh}(\widetilde{\tilde{G}}^{\mathrm{reg}}/\check{G})\).
**Corollary 4.5.5**.: _Fix a complex-oriented even-periodic \(\mathbf{E}_{\infty}\)-ring \(A\) and an oriented commutative \(A\)-group \(\mathbf{G}\). Assume that the underlying \(\pi_{0}A\)-scheme \(\mathbf{G}_{0}\) is \(\mathbf{G}_{a}\), \(\mathbf{G}_{m}\), or an elliptic curve \(E\). Suppose \(G\) is a connected and simply-connected semisimple algebraic group over \(\mathbf{C}\). Then there is an \(\mathbf{E}_{2}\)-monoidal equivalence_
\[\mathrm{QCoh}(\mathrm{Bun}^{0}_{\tilde{B}}(\mathbf{G}^{\vee}_{0,\mathbf{Q}})^ {\mathrm{reg}})\simeq\mathrm{Loc}^{\mathrm{gr}}_{T_{c}}(\Omega G_{c};A)\otimes \mathbf{Q}.\]
Proof.: The \(\mathbf{E}_{\infty}\)-coalgebra structure on \(\pi_{0}\mathscr{F}_{T}(\mathrm{Gr}_{G}(\mathbf{C}))^{\vee}\) defines a \(\mathrm{QCoh}(\mathscr{M}_{T,0})\)-coalgebra structure on \(\mathrm{LMod}_{\pi_{0}\mathscr{F}_{T}(\mathrm{Gr}_{G}(\mathbf{C}))^{\vee}}(\mathrm{QCoh}(\mathscr{M}_{T,0}))\). The right-hand side of the equivalence of Corollary 4.4.8 also admits a \(\mathrm{QCoh}((\mathscr{M}_{T,0})_{\mathbf{Q}})\)-coalgebra structure, being the tensor product of \(\mathrm{QCoh}((\mathscr{M}_{T,0})_{\mathbf{Q}})\) with itself over \(\mathrm{QCoh}(\mathrm{Bun}^{0}_{\tilde{B}}(\mathbf{G}^{\vee}_{0,\mathbf{Q}}))\); and it is not difficult to check that the equivalence of Corollary 4.4.8 is one of \(\mathrm{QCoh}((\mathscr{M}_{T,0})_{\mathbf{Q}})\)-coalgebras. In particular, there is an equivalence of coalgebras

\[\mathrm{QCoh}\bigl((\mathscr{M}_{T,0})_{\mathbf{Q}}\times_{\mathrm{Bun}^{0}_{\tilde{B}}(\mathbf{G}^{\vee}_{0,\mathbf{Q}})}(\mathscr{M}_{T,0})_{\mathbf{Q}}\bigr)\simeq\mathrm{LMod}_{\pi_{0}\mathscr{F}_{T}(\mathrm{Gr}_{G}(\mathbf{C}))^{\vee}}(\mathrm{QCoh}(\mathscr{M}_{T,0}))\otimes\mathbf{Q},\]
which defines an equivalence of cosimplicial diagrams, and hence of their totalizations. The totalization of the cosimplicial diagram built from the functor \(\mathrm{QCoh}(\mathscr{M}_{T,0})\to\mathrm{Mod}_{\pi_{0}\mathscr{F}_{T}( \mathrm{Gr}_{G}(\mathbf{C}))^{\vee}}(\mathrm{QCoh}(\mathscr{M}_{T,0}))\) defines an equivalence
\[\mathrm{Tot}(\mathrm{LMod}_{(\pi_{0}\mathscr{F}_{T}(\mathrm{Gr}_{G}(\mathbf{C} ))^{\vee})^{\otimes\bullet}}(\mathrm{QCoh}(\mathscr{M}_{T,0})))\simeq\mathrm{ coLMod}_{\pi_{0}\mathscr{F}_{T}(\mathrm{Gr}_{G}(\mathbf{C}))^{\vee}}(\mathrm{QCoh}( \mathscr{M}_{T,0}));\]
note that by Notation 2.3.6, this is in turn equivalent to \(\mathrm{Loc}^{\mathrm{gr}}_{T_{c}}(\Omega G_{c};A)\). By Proposition 4.5.2, we also have
\[\mathrm{Tot}\bigl(\mathrm{QCoh}((\mathscr{M}_{T,0})_{\mathbf{Q}})^{\otimes_{\mathrm{QCoh}(\mathrm{Bun}^{0}_{\tilde{B}}(\mathbf{G}^{\vee}_{0,\mathbf{Q}}))}\bullet+1}\bigr)\simeq\mathrm{QCoh}(\mathrm{Bun}^{0}_{\tilde{B}}(\mathbf{G}^{\vee}_{0,\mathbf{Q}})^{\mathrm{reg}}).\]
This gives the desired equivalence.
**Example 4.5.6**.: When \(A=\mathbf{Q}[\beta^{\pm 1}]\) and \(\mathbf{G}=\mathbf{G}_{a}\), we have \(\mathrm{Bun}^{0}_{\tilde{B}}(\mathbf{G}^{\vee}_{0,\mathbf{Q}})=\widetilde{ \tilde{\mathfrak{g}}}/\check{G}\). Corollary 4.5.5 gives an \(\mathbf{E}_{2}\)-monoidal equivalence
\[\mathrm{IndCoh}((\widetilde{\mathscr{N}}\times_{\tilde{\mathfrak{g }}}\{0\})/\check{G})_{\mathrm{Kost}} \simeq\mathrm{QCoh}(\widetilde{\tilde{\mathfrak{g}}}^{\mathrm{reg} }/\check{G})\] \[\simeq\mathrm{Loc}^{\mathrm{gr}}_{T_{c}}(\Omega G_{c};\mathbf{Q} [\beta^{\pm 1}]).\]
Note that \(\widetilde{\tilde{\mathfrak{g}}}/\check{G}\) is isomorphic to the quotient \(\check{G}\backslash(\check{G}\times^{\tilde{N}}\tilde{\mathfrak{b}})/\check{T}\); and [1, Proposition 3.10] says that \(\check{G}\times^{\tilde{N}}\tilde{\mathfrak{b}}\) is the universal symplectic implosion (i.e., the symplectic implosion of \(T^{*}G\)). The relationship of this perspective to Langlands duality is closely related to the program of Ben-Zvi-Sakellaridis-Venkatesh [16]: namely, the Hamiltonian \(\check{G}\times\check{T}\)-space \(T^{*}(\check{G}/\check{N})\) acts as a "kernel" for the symplectic implosion functor from Hamiltonian \(\check{G}\)-spaces to Hamiltonian \(\check{T}\)-spaces.
Similarly, using Theorem 4.1.11, one can prove an equivalence between \(\mathrm{Loc}^{\mathrm{gr}}_{T_{c}}(\Omega G_{c};\mathbf{Q}[\beta^{\pm 1}])\) and a localization of \(\check{\mathscr{O}}^{\mathrm{univ}}_{\hbar}\). There is also an \(\mathbf{E}_{2}\)-monoidal equivalence
\[\mathrm{QCoh}(\tilde{\mathfrak{g}}^{\mathrm{reg}}/\check{G})\simeq\mathrm{Loc}^{ \mathrm{gr}}_{G_{c}}(\Omega G_{c};\mathbf{Q}[\beta^{\pm 1}]);\]
this follows from the analogue of Remark 2.2.7 for \(G_{c}\)-local systems and [20, Proposition 2.2.1], which says that the classifying stack of the group scheme \(\check{J}=\operatorname{Spec}\operatorname{H}_{*}^{G}(\operatorname{Gr}_{G}(\mathbf{C});\mathbf{Q})\) of regular centralizers is isomorphic to \(\check{\mathfrak{g}}^{\operatorname{reg}}/\check{G}\).
**Remark 4.5.7**.: In the case of rank \(1\), one can use Example 4.5.6 to show that if the circle \(\operatorname{SO}(2)\) acts on \(\operatorname{SO}(3)/\operatorname{SO}(2)=S^{2}\) by left multiplication, there is an equivalence
\[\operatorname{Loc}_{\operatorname{SO}(2)}^{\operatorname{gr}}(\Omega S^{2}; \mathbf{Q}[\beta^{\pm 1}])\simeq\operatorname{QCoh}(T^{*}(\mathbf{A}^{2})^{ \operatorname{reg}}/\mathrm{SL}_{2}). \tag{23}\]
Here, \(\mathbf{A}^{2}\) is equipped with the standard action of \(\mathrm{SL}_{2}\), and \(T^{*}(\mathbf{A}^{2})^{\operatorname{reg}}\) denotes the preimage of the regular locus of \(\mathfrak{sl}_{2}\) under the moment map \(T^{*}(\mathbf{A}^{2})\to\mathfrak{sl}_{2}\). This is because there is an equivalence
\[\operatorname{Loc}_{\operatorname{SO}(2)}^{\operatorname{gr}}( \Omega S^{2};\mathbf{Q}[\beta^{\pm 1}]) \simeq\operatorname{Loc}_{\operatorname{SO}(2)}^{\operatorname{ gr}}(\Omega\operatorname{SO}(3);\mathbf{Q}[\beta^{\pm 1}])\otimes_{\operatorname{Loc}^{ \operatorname{gr}}(\Omega\operatorname{SO}(2);\mathbf{Q}[\beta^{\pm 1}])} \operatorname{Vect}_{\mathbf{Q}}\] \[\simeq\operatorname{QCoh}(\widetilde{\mathfrak{sl}_{2}}^{ \operatorname{reg}}/\mathrm{SL}_{2}\times_{B\mathbf{G}_{m}}\operatorname{ Spec}(\mathbf{Q}))\simeq\operatorname{QCoh}(\mathrm{SL}_{2}\backslash T^{*}( \mathrm{SL}_{2}/N)^{\operatorname{reg}}),\]
where the second equivalence uses Example 4.5.6 (i.e., the Arkhipov-Bezrukavnikov-Ginzburg equivalence over the regular locus). If \(\check{B}\subseteq\mathrm{SL}_{2}\) is a fixed Borel subgroup, the map \(\widetilde{\mathfrak{sl}_{2}}^{\operatorname{reg}}/\mathrm{SL}_{2}\to B \mathbf{G}_{m}\) is given by the composite
\[\widetilde{\mathfrak{sl}_{2}}^{\operatorname{reg}}/\mathrm{SL}_{2}\cong \check{\mathfrak{b}}^{\operatorname{reg}}/\check{B}\to B\check{B}\to B \check{T}=B\mathbf{G}_{m}.\]
However, \(\mathrm{SL}_{2}/N\cong\mathbf{A}^{2}-\{0\}\), and there is an \(\mathrm{SL}_{2}\)-equivariant isomorphism \(T^{*}(\mathbf{A}^{2})^{\operatorname{reg}}\cong T^{*}(\mathbf{A}^{2}-\{0\})^ {\operatorname{reg}}\). Let us remark that (23) can be de-periodified to give an equivalence
\[\operatorname{Loc}_{\operatorname{SO}(2)}(\Omega S^{2};\mathbf{Q})\simeq \operatorname{QCoh}(T^{*}[2](\mathbf{A}^{2})^{\operatorname{reg}}/\mathrm{SL }_{2}).\]
This is in fact related to the program of Ben-Zvi-Sakellaridis-Venkatesh [10] applied to the "Hecke period"; their program predicts a duality between the Hamiltonian \(\mathrm{PGL}_{2}\)-variety \(T^{*}(\mathrm{PGL}_{2}/\mathbf{G}_{m})\) and the Hamiltonian \(\mathrm{SL}_{2}\)-variety \(T^{*}(\mathbf{A}^{2})\).
**Example 4.5.8**.: When \(A=\mathrm{KU}\) and \(\mathbf{G}=\mathbf{G}_{m}\), we have \(\mathrm{Bun}_{\check{B}}^{0}(\mathbf{G}_{0,\mathbf{Q}}^{\vee})=\widetilde{ \check{G}}/\check{G}\). Therefore, Corollary 4.5.5 gives an \(\mathbf{E}_{2}\)-monoidal equivalence
\[\operatorname{QCoh}(\widetilde{\check{G}}^{\operatorname{reg}}/\check{G}) \simeq\operatorname{Loc}_{T_{c}}^{\operatorname{gr}}(\Omega G_{c};\mathrm{KU })\otimes\mathbf{Q}.\]
Note that \(\widetilde{\check{G}}/\check{G}\) is isomorphic to the quotient \(\check{G}\backslash(\check{G}\times^{\check{N}}\check{B})/\check{T}\); and [10, Discussion following Proposition 3.10] says that \(\check{G}\times^{\check{N}}\check{B}\) is the universal group-valued symplectic implosion (i.e., the symplectic implosion of the internal fusion double of \(\check{G}\)). The relationship of this perspective to Langlands duality is closely related to a quasi-Hamiltonian analogue of the program of Ben-Zvi-Sakellaridis-Venkatesh [10], which we will explore in future work.
Similarly, one can show that there is an \(\mathbf{E}_{2}\)-monoidal equivalence
\[\operatorname{QCoh}(\check{G}^{\operatorname{reg}}/\check{G})\simeq \operatorname{Loc}_{G_{c}}^{\operatorname{gr}}(\Omega G_{c};\mathrm{KU}) \otimes\mathbf{Q}.\]
Were there a full KU-theoretic geometric Satake equivalence, the above equivalence would be obtained by localization over the (open) regular locus of \(\check{G}\). The above equivalence is presumably related to [11, Section 1.2].
**Example 4.5.9**.: Suppose \(A\) is a complex-oriented even-periodic \(\mathbf{E}_{\infty}\)-ring and \(\mathbf{G}\) is an oriented elliptic curve over \(A\) (in the sense of [17]). Let \(E\) be the underlying classical scheme of \(\mathbf{G}\) over the classical ring \(\pi_{0}(A)\), so that \(E\) is an elliptic
curve, and let \(E^{\vee}\) be the dual elliptic curve. Then \(\operatorname{Bun}^{0}_{B}(\mathbf{G}^{\vee}_{0})=\operatorname{Bun}^{0}_{B}(E^{ \vee})\), and Corollary 4.5.5 gives an \(\mathbf{E}_{2}\)-monoidal \(\pi_{0}A_{\mathbf{Q}}\)-linear equivalence
\[\operatorname{QCoh}(\operatorname{Bun}^{0}_{B}(E^{\vee})^{\operatorname{reg}}) \simeq\operatorname{Loc}^{\operatorname{gr}}_{T_{c}}(\Omega G_{c};A)\otimes \mathbf{Q}.\]
This may be understood as a step towards a full \(A\)-theoretic analogue of the ABG equivalence.
### Coefficients in the sphere spectrum?
In this brief section, we study the natural question of whether there is an analogue of Theorem 1.1.10 and Corollary 4.5.5 with coefficients in a more general \(\mathbf{E}_{\infty}\)-ring \(R\) (e.g., the sphere spectrum). This is closely related to the discussion in Section 3.3, and already turns out to be rather nontrivial for a torus as soon as \(R\) is not complex-orientable. As a warmup, let us make the following observation.
**Proposition 4.6.1**.: _Fix a complex-oriented even-periodic \(\mathbf{E}_{\infty}\)-ring \(A\), and let \(\mathbf{G}\) be an oriented group scheme in the sense of [10] which is dualizable. Let \(T\) be a torus over \(\mathbf{C}\), and let \(\check{T}_{A}:=\operatorname{Spec}A[\mathbb{X}_{*}(T)]\) denote the dual torus over \(A\). Then there is an \(\mathbf{E}_{2}\)-monoidal \(A\)-linear equivalence \(\operatorname{Shv}_{T}(\operatorname{Gr}_{T}(\mathbf{C});A)\simeq\operatorname {QCoh}(\mathscr{M}_{T}\times B\check{T}_{A})\). Fixing an isomorphism \(\mathscr{M}_{T}\cong\mathscr{M}_{\check{T}}\) as in Assumption 4.4.6 makes this category equivalent to \(\operatorname{QCoh}(\mathscr{L}_{\mathbf{G}}B\check{T}_{A})\)._
Proof.: Note that there is an \(\mathbf{E}_{2}\)-monoidal equivalence \(\operatorname{Shv}_{T}(\operatorname{Gr}_{T}(\mathbf{C});A)\simeq\operatorname {Shv}_{T_{c}}(\Omega T_{c};A)\). Since the \(T_{c}\)-action on \(\Omega T_{c}\) is trivial and \(\Omega T_{c}\cong\mathbb{X}_{*}(T)\) as \(\mathbf{E}_{\infty}\)-spaces, we obtain an \(\mathbf{E}_{2}\)-monoidal equivalence
\[\operatorname{Shv}_{T}(\operatorname{Gr}_{T}(\mathbf{C});A)\simeq\operatorname {Fun}(\mathbb{X}_{*}(T),\operatorname{Loc}_{T_{c}}(*;A))\simeq\operatorname{ Fun}(\mathbb{X}_{*}(T),\operatorname{Mod}_{A})\otimes_{\operatorname{Mod}_{A}} \operatorname{QCoh}(\mathscr{M}_{T}).\]
The first claim now follows from the equivalence \(\operatorname{QCoh}(B\check{T}_{A})\simeq\operatorname{Fun}(\mathbb{X}_{*}(T),\operatorname{Mod}_{A})\). Fixing an isomorphism \(\mathscr{M}_{T}\cong\mathscr{M}_{\check{T}}\) and using that \(\mathscr{L}_{\mathbf{G}}B\check{T}_{A}\cong B\check{T}_{A}\times\mathscr{M}_{ \check{T}}\), we see that \(\operatorname{Shv}_{T}(\operatorname{Gr}_{T}(\mathbf{C});A)\) can be identified with \(\operatorname{QCoh}(\mathscr{L}_{\mathbf{G}}B\check{T}_{A})\), as desired.
Crucial to the argument of Proposition 4.6.1 was the equivalence \(\operatorname{Loc}_{T_{c}}(*;A)\simeq\operatorname{QCoh}(\mathscr{M}_{T})\). If \(R\) is a general \(\mathbf{E}_{\infty}\)-ring, then such a statement will generally _only_ be true (for an appropriate definition of \(\operatorname{Loc}_{T_{c}}(*;R)\)) when \(R\) is close to being complex-oriented. For example:
**Example 4.6.2**.: The methods of this article show that there is an analogue of Theorem 1.1.10 for \(\operatorname{KO}\):
\[\operatorname{Loc}^{\operatorname{gr}}_{T_{c}}(G_{c};\operatorname{KO})\otimes \mathbf{Q}\simeq\operatorname{QCoh}((\mathscr{M}_{\check{T},0})_{\mathbf{Q}} \times_{\operatorname{Bun}^{0}_{B}(\mathbf{G}^{\vee}_{0})}(\mathscr{M}_{\check{ T},0})_{\mathbf{Q}}).\]
Here, \(\mathbf{G}\) is the universal spectral multiplicative group over \(B\mathbf{Z}/2\). Similarly, using the definition of genuine \(T\)-equivariant TMF from [11], one can also obtain an analogue of Theorem 1.1.10 (where \(\mathbf{G}\) is replaced by the universal oriented spectral elliptic curve over the moduli stack of oriented spectral elliptic curves from [10, Proposition 7.2.10]).
See also [14, Section 8.1] for a variant of the following:
**Example 4.6.3**.: Let \(\operatorname{Sp}_{T_{c}}\) denote the \(\infty\)-category of genuine \(T_{c}\)-equivariant spectra, and let \(i^{*}_{T_{c}}:\operatorname{Sp}_{T_{c}}\to\operatorname{Sp}\) be the lax symmetric monoidal right adjoint to the unique symmetric monoidal colimit-preserving functor \(\operatorname{Sp}\to\operatorname{Sp}_{T_{c}}\). Suppose \(R\) is an \(\mathbf{E}_{\infty}\)-ring such that there is an \(\mathbf{E}_{\infty}\)-algebra \(R_{T_{c}}\in\operatorname{CAlg}(\operatorname{Sp}_{T_{c}})\) given by "genuine \(T_{c}\)-equivariant \(R\)-cohomology". Then, \(\operatorname{Loc}_{T_{c}}(*;R)\) might be understood to mean \(\operatorname{Mod}_{R_{T_{c}}}(\operatorname{Sp}_{T_{c}})\). We are interested in the following question: when \(\operatorname{Loc}_{T_{c}}(*;R)\)
is equivalent (as a symmetric monoidal category) to the \(\infty\)-category of modules over some \({\bf E}_{\infty}\)-ring \(B\)? It is not difficult to see that if this happens, then the \({\bf E}_{\infty}\)-ring \(B\) will simply be \(i^{*}_{T_{c}}(R_{T_{c}})\). (One could more generally ask when \({\rm Loc}_{T_{c}}(*;R)\) is equivalent to the \(\infty\)-category of quasicoherent sheaves on some spectral \(R\)-stack; but this obscures the key homotopical point.)
Let us suppose for simplicity that \(T_{c}\) is of rank \(1\), i.e., that \(T_{c}=S^{1}\). Recall that the \(\infty\)-category \({\rm Sp}_{S^{1}}\) is compactly generated by \(S^{0}\) (with the trivial \(S^{1}\)-action) and \((S^{1}/\mu_{n})_{+}\) for \(n\geq 2\). If \(\lambda\) denotes the standard \(1\)-dimensional complex representation of \(S^{1}\), there is a cofiber sequence \((S^{1}/\mu_{n})_{+}\to S^{0}\to S^{\lambda^{n}}\); so \({\rm Sp}_{S^{1}}\) is compactly generated by \(S^{0}\) and \(S^{\lambda^{n}}\) for \(n\geq 2\). It follows that \({\rm Loc}_{S^{1}}(*;R)\simeq{\rm Mod}_{R_{S^{1}}}({\rm Sp}_{S^{1}})\) is compactly generated by \(R_{S^{1}}\) and \(R_{S^{1}}\otimes S^{\lambda^{n}}\) for \(n\geq 2\). If \(R\) is complex-oriented, there is an equivalence \(R_{S^{1}}\otimes S^{\lambda^{n}}\simeq\Sigma^{2}R_{S^{1}}\). This lets us conclude that \({\rm Loc}_{S^{1}}(*;R)\) is compactly generated by the _single_ unit object \(R_{S^{1}}\), so that [10, Lemma 4.4] implies that \({\rm Loc}_{S^{1}}(*;R)\simeq{\rm Mod}_{i^{*}_{S^{1}}(R_{S^{1}})}\).
**Remark 4.6.4**.: In contrast to the above discussion, if \(R\) is not complex-oriented (or more generally does not admit a finite flat cover by a complex-oriented ring), then \({\rm Loc}_{T_{c}}(*;R)\) stands little chance of being compactly generated by the unit object. For example, if \(R\) is the sphere spectrum, then \({\rm Loc}_{T_{c}}(*;R)\simeq{\rm Sp}_{T_{c}}\) is _not_ compactly generated by the unit object. Note, however, that the Barr-Beck-Lurie theorem ([12, Theorem 4.7.3.5]) implies \({\rm Sp}_{S^{1}}\) is equivalent to the \(\infty\)-category of left modules over the \({\bf E}_{1}\)-ring \({\rm End}_{{\rm Sp}_{S^{1}}}\left(S^{0}\oplus\bigoplus_{n\geq 2}(S^{1}/\mu_{n })_{+}\right)\); this is _not_ an \({\bf E}_{\infty}\)-ring.
In particular, if \(\check{T}_{S}:={\rm Spec}\,S[{\mathbb{X}}_{*}(T)]\) denotes the dual torus over the sphere spectrum, then one can run part of the proof of Proposition 4.6.1 to conclude that \({\rm Shv}_{T}({\rm Gr}_{T}({\bf C});S)\simeq{\rm Fun}({\mathbb{X}}_{*}(T),{ \rm Loc}_{T_{c}}(*;S))\simeq{\rm Fun}({\mathbb{X}}_{*}(T),{\rm Sp}_{T_{c}}) \simeq{\rm Sp}_{\check{T}_{c}}\otimes{\rm QCoh}(\check{BT_{S}})\). Here, we have identified \({\rm Sp}_{T_{c}}\simeq{\rm Sp}_{\check{T}_{c}}\) (see Assumption 4.4.6). The discussion in Remark 4.6.4 shows that it is not clear how to view the right-hand side in terms of quasicoherent sheaves on some spectral stack. In particular, we see that already in the case of a torus, the coherent side of "derived geometric Satake with spherical coefficients" starts to deviate from the standard form of derived geometric Satake. It seems as though the appropriate analogue of the coherent side involves some combination of Hausmann's global group laws [11] and the spectral moduli stack of oriented formal groups (see [12, 13]). We hope to approach this in future work via \(T\)-equivariant complex cobordism \({\rm MU}_{T}\).
At the moment, derived geometric Satake with spherical coefficients for a general reductive group over \({\bf C}\) seems to require more technical setup than is currently available in the literature (although a version of the geometric Casselman-Shalika equivalence of [13] was discussed in [12, Section 10]).
## Appendix A Relationship to Brylinski-Zhang
Theorem 1.1.10 is closely related to the results of Brylinski-Zhang (see [2]). To explain this, we begin by recasting the results of [2] in the language of Section 2.1.
**Recollection A.1**.: Let \(G\) be a simply-connected compact Lie group. Then the main result of [2] says that there is an isomorphism \(\operatorname{KU}_{G}^{*}(G)\cong\Omega_{K_{0}(\operatorname{Rep}(G))/\mathbf{ Z}}^{*}\otimes_{\mathbf{Z}}\mathbf{Z}[\beta^{\pm 1}]\), where \(K_{0}(\operatorname{Rep}(G))\) is the (complex) representation ring of \(G\). If \(G\) is not necessarily simply-connected, there is also an isomorphism \(\operatorname{H}_{G}^{*}(G;\mathbf{Q})\cong\Omega_{\operatorname{H}^{*}(BG; \mathbf{Q})/\mathbf{Q}}^{*}\).
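For orientation, here is the simplest case spelled out (this illustration is not taken from [2], but is immediate from the statements just quoted). If \(G=\mathrm{SU}(2)\), then \(K_{0}(\operatorname{Rep}(G))\cong\mathbf{Z}[V]\), where \(V\) is the class of the standard \(2\)-dimensional representation, so that

\[\Omega^{*}_{K_{0}(\operatorname{Rep}(G))/\mathbf{Z}}\cong\mathbf{Z}[V]\oplus\mathbf{Z}[V]\,dV,\]

and the first isomorphism above identifies \(\operatorname{KU}^{*}_{\mathrm{SU}(2)}(\mathrm{SU}(2))\) with \((\mathbf{Z}[V]\oplus\mathbf{Z}[V]\,dV)\otimes_{\mathbf{Z}}\mathbf{Z}[\beta^{\pm 1}]\), with \(dV\) contributing in odd degree. Similarly, \(\operatorname{H}^{*}(B\mathrm{SU}(2);\mathbf{Q})\cong\mathbf{Q}[c]\) with \(|c|=4\), and the second isomorphism reads \(\operatorname{H}^{*}_{\mathrm{SU}(2)}(\mathrm{SU}(2);\mathbf{Q})\cong\mathbf{Q}[c]\oplus\mathbf{Q}[c]\,dc\) with \(dc\) in degree \(3\).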
These can be simultaneously generalized by the following:
**Proposition A.2**.: _Let \(A\) be a complex-oriented even-periodic \(\mathbf{E}_{\infty}\)-ring, and let \(\mathbf{G}\) be an oriented commutative \(A\)-group. Let \(G\) be a simply-connected compact Lie group, and suppose that the functor \(\mathscr{F}_{G}:\mathscr{S}(G)_{\operatorname{conn}}^{\operatorname{op}} \to\operatorname{QCoh}(\mathscr{M}_{G})\) of \(G\)-equivariant \(A\)-cochains on connected finite \(G\)-spaces is symmetric monoidal9. Then there is an equivalence_
Footnote 9: Note that this assumption will fail if \(G\) is not connected!
\[\Gamma(\mathscr{M}_{G};\mathscr{F}_{G}(G))\simeq\operatorname{HH}(\mathscr{M}_ {G}/A).\]
Proof.: Indeed, since \(G\) is connected, we have \(G\simeq\Omega(BG)\). Recall from Lemma 4.4.13 that \(G/G\) is the free loop space of \(*/G\) in the category of orbifolds. The assumption on \(\mathscr{F}_{G}\) now implies that
\[\mathscr{F}_{G}(G)\simeq\mathscr{F}_{G}(*)\otimes_{\mathscr{F}_{G\times G}(*)}\mathscr{F}_{G}(*)\simeq\mathscr{O}_{\mathscr{M}_{G}}\otimes_{\mathscr{O}_{\mathscr{M}_{G}}\otimes_{A}\mathscr{O}_{\mathscr{M}_{G}}}\mathscr{O}_{\mathscr{M}_{G}}.\]
Therefore, \(\Gamma(\mathscr{M}_{G};\mathscr{F}_{G}(G))\) is precisely the Hochschild homology of \(\mathscr{M}_{G}\).
**Remark A.3**.: One can view \(\Gamma(\mathscr{M}_{G};\mathscr{F}_{G}(G))\) as endomorphisms of the unit object in \(\operatorname{Loc}_{G}(G;A)\), so that \(\operatorname{Mod}_{\Gamma(\mathscr{M}_{G};\mathscr{F}_{G}(G))}\) behaves as a completion of \(\operatorname{Loc}_{G}(G;A)\).
**Remark A.4**.: In some cases, the Hochschild-Kostant-Rosenberg spectral sequence degenerates integrally. Then, \(\pi_{*}\mathrm{HH}(\mathscr{M}_{G}/A)\) can be identified with the \(2\)-periodification of the (derived) Hodge cohomology of the underlying stack of \(\mathscr{M}_{G}\) over \(\pi_{0}(A)\). This applies, for instance, when \(A=\operatorname{KU}\); in this case, \(\mathscr{M}_{G}\) is a lift to \(\operatorname{KU}\) of \(\operatorname{Spec}K_{0}(\operatorname{Rep}(G))\cong T/\!/W\), and Proposition A.2 is precisely the calculation of [2].
**Remark A.5**.: Proposition A.2 can be continued further to study the \(G\)-equivariant \(A\)-cohomology of \(\Omega G\), if we additionally assume that the functor \(\mathscr{F}_{G}:\operatorname{Ind}(\mathscr{S}(G))_{\operatorname{conn}}^{ \operatorname{op}}\to\operatorname{QCoh}(\mathscr{M}_{G})\) on connected _ind-finite_\(G\)-spaces is symmetric monoidal. Indeed, observe that there is an equivalence
\[G\backslash\Omega G\simeq G\backslash\mathscr{L}G/G\simeq*/G\times_{*/ \mathscr{L}G}*/G\]
of orbifolds. But \(*/\mathscr{L}G\simeq\mathscr{L}(*/G)\simeq*/G\times_{*/G\times_{*/G}}*/G\), so that \(G\backslash\Omega G\simeq\operatorname{Map}(S^{2},*/G)\), i.e., the cotensoring of \(*/G\) by \(S^{2}\) (in _unpointed_ orbifolds). Using the assumption on \(\mathscr{F}_{G}\), we therefore conclude that the \(G\)-equivariant \(A\)-cohomology of \(\Omega G\) can be identified with the factorization homology
\[\Gamma(\mathscr{M}_{G};\mathscr{F}_{G}(\Omega G))\simeq\int_{S^{2}}\mathscr{ M}_{G}\in\operatorname{CAlg}_{A}\]
taken internally to \(A\)-modules.
The preceding discussion also computes the \(T\)-equivariant \(A\)-cohomology of \(\Omega G\). To explain this, write \(p:\mathscr{M}_{T}\to\mathscr{M}_{G}\) to denote the canonical map. The above discussion shows that there is an equivalence
\[T\backslash\Omega G\simeq*/T\times_{*/G\times_{*/G\times*/G}*/G}*/G\]
of orbifolds, so that \(p_{*}\mathscr{F}_{T}(\Omega G)\) can be identified with the factorization homology over \(S^{2}\) of \(\mathscr{M}_{G}\) with coefficients in the \(\mathbf{E}_{2}\)-module \(p_{*}\mathscr{O}_{\mathscr{M}_{T}}\). In other words, there is an equivalence
\[\Gamma(\mathscr{M}_{T};\mathscr{F}_{T}(\Omega G))\simeq\int_{S^{2}}(\mathscr{ M}_{G};p_{*}\mathscr{O}_{\mathscr{M}_{T}})\in\mathrm{CAlg}_{A}.\]
**Remark A.6**.: This approach is rather robust: for instance, if \(K\subseteq G\) is a closed subgroup such that \(G/K\) is a finite space, there are equivalences of orbifolds
\[G\backslash\mathscr{L}(G/K)\simeq\Omega(G/K)/K\simeq(*\times_{*\times_{*/G}*/K }*)/K\simeq*/K\times_{*/K\times_{*/G}*/K}*/K.\]
Under the same hypotheses as Remark A.5, this implies that \(\Gamma(\mathscr{M}_{G};\mathscr{F}_{G}(\mathscr{L}(G/K)))\) is isomorphic to the relative Hochschild homology \(\mathrm{HH}(\mathscr{M}_{K}/\mathscr{M}_{G})\). One can recover Remark A.5 by noting that if \(H\) is a simply-connected compact Lie group and \(K=H\subseteq H\times H=G\), the Hochschild homology \(\mathrm{HH}(\mathscr{M}_{H}/\mathscr{M}_{H}\times\mathscr{M}_{H})\) of the diagonal embedding \(\Delta:\mathscr{M}_{H}\hookrightarrow\mathscr{M}_{H}\times\mathscr{M}_{H}\) is precisely the factorization homology \(\int_{S^{2}}\mathscr{M}_{H}\).
The relationship of the Brylinski-Zhang isomorphism to Theorem 1.1.10 can now be explained as follows.
**Example A.7**.: Continue to assume that \(G\) is a simply-connected compact Lie group. If \(A=\mathbf{Q}[\beta^{\pm 1}]\), then there is an equivalence
\[\mathrm{Loc}_{G}^{\mathrm{gr}}(G;\mathbf{Q}[\beta^{\pm 1}])\simeq\mathrm{QCoh}(\check{\mathfrak{l}}/W\times_{\check{\mathfrak{g}}/\check{G}}\check{\mathfrak{l}}/W),\]
where all objects on the coherent side are defined over \(\mathbf{Q}\). Since \(\check{\mathfrak{l}}/W\times_{\check{\mathfrak{g}}/\check{G}}\check{\mathfrak{l}}/W\cong(T^{*}\check{T})^{\mathrm{bl}}/W\) is isomorphic to the group scheme of regular centralizers in \(\check{\mathfrak{g}}\), we will write \(\check{J}_{\mathbf{G}_{a}}\) to denote \(\check{\mathfrak{l}}/W\times_{\check{\mathfrak{g}}/\check{G}}\check{\mathfrak{l}}/W\). The above equivalence therefore states that
\[\mathrm{Loc}_{G}^{\mathrm{gr}}(G;\mathbf{Q}[\beta^{\pm 1}])\simeq\mathrm{ QCoh}(\check{J}_{\mathbf{G}_{a}}). \tag{24}\]
On the other hand, by [10, Theorem 3.4.2], the Lie algebra of \(\check{J}_{\mathbf{G}_{a}}\) over \(\check{\mathfrak{l}}/W\) is isomorphic to \(T^{*}(\check{\mathfrak{l}}/W)\). Therefore, Proposition A.2 and the Hochschild-Kostant-Rosenberg theorem give an isomorphism
\[\mathrm{H}_{G}^{*}(G;\mathbf{Q}[\beta^{\pm 1}])\cong\pi_{*}\mathrm{HH}(\check{ \mathfrak{l}}/W/\mathbf{Q})\otimes_{\mathbf{Q}}\mathbf{Q}[\beta^{\pm 1}] \cong\mathscr{O}_{T[-1](\check{\mathfrak{l}}/W)}\otimes_{\mathbf{Q}}\mathbf{Q}[ \beta^{\pm 1}].\]
In particular, there is an equivalence
\[\mathrm{Mod}_{\mathrm{H}_{G}^{0}(G;\mathbf{Q}[\beta^{\pm 1}])}\simeq\mathrm{ QCoh}(T[-1](\check{\mathfrak{l}}/W))\otimes_{\mathbf{Q}}\mathbf{Q}[\beta^{ \pm 1}]. \tag{25}\]
By Koszul duality, the right-hand side is equivalent to the \(2\)-periodification of the \(\infty\)-category of ind-coherent sheaves over the formal completion of \(\check{J}_{\mathbf{G}_{a}}\) at the zero section. One can view the resulting description of \(\mathrm{Mod}_{\mathrm{H}_{G}^{0}(G;\mathbf{Q}[\beta^{\pm 1}])}\) as an infinitesimal version of the equivalence (24). By construction, the equivalence (25) is just a restatement of the Brylinski-Zhang isomorphism \(\mathrm{H}_{G}^{*}(G;\mathbf{Q})\cong\Omega^{*}_{\operatorname{H}^{*}(BG;\mathbf{Q})/\mathbf{Q}}\).
**Example A.8**.: We can also specialize Remark A.5 to this case: we have
\[\operatorname{H}_{G}^{*}(\Omega G;\mathbf{Q})\cong\pi_{*}\left(\int_{S^{2}} \mathscr{M}_{G}\right). \tag{26}\]
Here, \(\mathscr{M}_{G}=\operatorname{Spec}C^{*}(BG;\mathbf{Q})\) is the derived \(\mathbf{Q}\)-scheme whose underlying graded \(\mathbf{Q}\)-scheme is \(\check{\mathfrak{l}}[2]/\!\!/W=\operatorname{Spec}\operatorname{H}^{*}(BG; \mathbf{Q})\). Since \(\mathbf{Q}\) is a field of characteristic zero and \(G\) is assumed to be connected, \(\operatorname{H}^{*}(BG;\mathbf{Q})\) is a polynomial algebra on generators in even degrees; this implies that \(C^{*}(BG;\mathbf{Q})\) is formal as an \(\mathbf{E}_{\infty}\)-\(\mathbf{Q}\)-algebra10. In particular, we may identify \(\mathscr{M}_{G}=\check{\mathfrak{l}}[2]/\!\!/W\). Just as the Hochschild homology of \(\check{\mathfrak{l}}[2]/\!\!/W\) can be identified with the ring of functions on \(T[-1](\check{\mathfrak{l}}[2]/\!\!/W)\), a version of the Hochschild-Kostant-Rosenberg theorem implies that the factorization homology over \(S^{2}\) can be identified with the ring of functions on the \((-2)\)-shifted tangent bundle
Footnote 10: This follows from the fact that the free \(\mathbf{E}_{\infty}\)-\(\mathbf{Q}\)-algebra on classes in even degrees can be identified with the polynomial \(\mathbf{Q}\)-algebra, i.e., is itself formal.
\[T[-2](\check{\mathfrak{l}}[2]/\!\!/W)=\operatorname{Spec}\operatorname{Sym}_{ \check{\mathfrak{l}}[2]/\!\!/W}(\Omega^{1}_{\check{\mathfrak{l}}[2]/\!\!/W}[2]).\]
Now11, if \(R\) is a (simplicial) commutative ring and \(M\) is a connective \(R\)-module, there is a decalage isomorphism12 (see [11, Sec. I.4.3.2]) \(\operatorname{Sym}_{R}^{j}(M[2])\cong\Gamma_{R}^{j}(M)[2j]\), where \(\Gamma^{j}\) denotes (the left derived functor of) the \(j\)th divided power construction. Therefore, we see that \(\operatorname{Sym}_{\check{\mathfrak{l}}[2]/\!\!/W}(\Omega^{1}_{\check{ \mathfrak{l}}[2]/\!\!/W}[2])\) can be identified with a shearing (which we will simply denote by \([\bullet]\)) of the divided power algebra \(\Gamma_{\check{\mathfrak{l}}[2]/\!\!/W}(\Omega^{1}_{\check{\mathfrak{l}}[2]/ \!\!/W})\). In other words, there is an isomorphism
Footnote 11: We will not need such a general statement, but we recall it since it is very useful in many other contexts, too.
\[\pi_{*}\left(\int_{S^{2}}\check{\mathfrak{l}}[2]/\!\!/W\right)\cong\Gamma_{ \check{\mathfrak{l}}[2]/\!\!/W}(\Omega^{1}_{\check{\mathfrak{l}}[2]/\!\!/W}) [2\bullet];\]
the shearing on the right-hand side is undone by \(2\)-periodifying the left-hand side. Therefore, we obtain an isomorphism
\[\operatorname{H}_{G}^{*}(\Omega G;\mathbf{Q})\otimes_{\mathbf{Q}}\mathbf{Q}[ \beta^{\pm 1}]\cong\Gamma_{\check{\mathfrak{l}}/\!\!/W}(\Omega^{1}_{\check{ \mathfrak{l}}/\!\!/W}).\]
Up to this point, the fact that the coefficients are \(\mathbf{Q}\) (as opposed to a general \(\mathbf{Z}\)-algebra with some small primes inverted) has not been used outside of the formality of \(C^{*}(BG;\mathbf{Q})\). Using it now, we see that the divided power algebra can be identified with a symmetric algebra, in which case the above formula implies that \(\operatorname{H}_{G}^{*}(\Omega G;\mathbf{Q})\otimes_{\mathbf{Q}}\mathbf{Q}[ \beta^{\pm 1}]\) can be identified with the ring of functions on the tangent bundle \(T(\check{\mathfrak{l}}/\!\!/W)\). This should be compared to [1, Theorem 1] with \(\hbar=0\); see [1, Section 2.6] and [1, Section 1.7]. A similar argument using the \(S^{1}\)-action on \(S^{2}\) by rotation can be used to recover (the \(2\)-periodification of) the full quantized statement of [1, Theorem 1].
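For orientation (a sanity check we include here; it is not carried out in the text above): take \(G=\mathrm{SU}(2)\), so that \(\check{\mathfrak{l}}/\!\!/W\cong\mathbf{A}^{1}=\operatorname{Spec}\mathbf{Q}[c]\) and the tangent bundle \(T(\check{\mathfrak{l}}/\!\!/W)\cong\mathbf{A}^{2}\) has ring of functions \(\mathbf{Q}[c,\dot{c}]\). On the other side, \(\Omega G\simeq\Omega S^{3}\) has \(\operatorname{H}^{*}(\Omega S^{3};\mathbf{Q})\cong\mathbf{Q}[u]\) with \(|u|=2\), and \(\operatorname{H}^{*}_{\mathrm{SU}(2)}(\Omega S^{3};\mathbf{Q})\) is free over \(\operatorname{H}^{*}(B\mathrm{SU}(2);\mathbf{Q})\cong\mathbf{Q}[c]\), hence isomorphic to \(\mathbf{Q}[c,u]\); the identification above matches \(u\) with the fiber coordinate \(\dot{c}\).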
**Remark A.9**.: The above discussion implies a more general statement. Namely, suppose that \(R\) is a (classical) commutative ring such that Remark A.5 applies to \(G\)-equivariant \(R\)-cohomology -- in particular, such that there is an isomorphism
\[\operatorname{H}_{G}^{*}(\Omega G;R)\cong\pi_{*}\left(\int_{S^{2}}\mathscr{M}_{ G}\right)\in\operatorname{CAlg}_{\pi_{*}R} \tag{27}\]
as in (26). (This assumption is likely to hold for rather general rings \(R\).) As usual, \(\mathscr{M}_{G}=\operatorname{Spec}C^{*}(BG;R)\) is an \(\mathbf{E}_{\infty}\)-\(R\)-scheme with underlying graded \(R\)-scheme \(\check{\mathfrak{t}}_{R}[2]/\!\!/W\); here, \(\check{\mathfrak{t}}_{R}\) denotes the base-change of \(\check{\mathfrak{t}}\) from \(\mathbf{Z}\) to \(R\). Suppose that \(C^{*}(BG;R)\) is formal as an \(\mathbf{E}_{n}\)-\(R\)-algebra (i.e., there is an equivalence \(C^{*}(BG;R)\simeq\operatorname{H}^{*}(BG;R)\) as \(\mathbf{E}_{n}\)-\(R\)-algebras); by obstruction theory, this can always be guaranteed if \(n=2\) and \(\operatorname{H}^{*}(BG;R)\) is a polynomial algebra on generators in even degrees. Then (27) implies that \(\operatorname{H}_{G}^{*}(\Omega G;R)\) is equivalent to \(\pi_{*}\left(\int_{S^{2}}\check{\mathfrak{t}}_{R}[2]/\!\!/W\right)\) as \(\mathbf{E}_{n-2}\)-\(R\)-algebras. In particular, since \(C^{*}(BG;R)\) is formal as an \(\mathbf{E}_{2}\)-\(R\)-algebra, we see that \(\operatorname{H}_{G}^{*}(\Omega G;R)\) is equivalent to \(\pi_{*}\left(\int_{S^{2}}\check{\mathfrak{t}}_{R}[2]/\!\!/W\right)\) as unital \(R\)-modules. If \(C^{*}(BG;R)\) is formal as an \(\mathbf{E}_{3}\)-\(R\)-algebra, then we can also identify \(\operatorname{H}_{G}^{*}(\Omega G;R)\) as an \(R\)_-algebra_.
In any case, since \(R\) is not necessarily a \(\mathbf{Q}\)-algebra, the Hochschild-Kostant-Rosenberg theorem need not give an isomorphism between \(\pi_{*}\left(\int_{S^{2}}\check{\mathfrak{t}}_{R}[2]/\!\!/W\right)\) and \(\operatorname{Sym}_{\check{\mathfrak{t}}_{R}[2]/\!\!/W}(\Omega^{1}_{\check{\mathfrak{t}}_{R}[2]/\!\!/W}[2])\); rather, there will always be an "HKR" filtration on \(\pi_{*}\left(\int_{S^{2}}\check{\mathfrak{t}}_{R}[2]/\!\!/W\right)\) whose associated graded is given by \(\operatorname{Sym}_{\check{\mathfrak{t}}_{R}[2]/\!\!/W}(\Omega^{1}_{\check{\mathfrak{t}}_{R}[2]/\!\!/W}[2])\). If this filtration splits, we conclude that the cohomology ring \(\operatorname{H}_{G}^{*}(\Omega G;R)\) will admit divided powers on the \(\mathscr{O}_{\check{\mathfrak{t}}_{R}[2]/\!\!/W}\)-algebra generators \(\Omega^{1}_{\check{\mathfrak{t}}_{R}[2]/\!\!/W}\). The assumption that the HKR filtration splits seems likely to hold if some primes are assumed to be units in \(R\) (e.g., if \(\dim(\mathfrak{t})!\in R^{\times}\)). Note that by virtue of the argument establishing (27), the divided power structure on \(\operatorname{H}_{G}^{*}(\Omega G;R)\) is closely related to the \(\mathbf{E}_{3}\)-algebra structure on the derived Satake category.
The preceding discussion is directly connected with a question asked by Bezrukavnikov about divided powers in the cohomology of the affine Grassmannian (see [10]). It would be interesting to determine the exact conditions under which the above assumptions on \(R\) hold true (namely, \(C^{*}(BG;R)\) being formal as an \(\mathbf{E}_{3}\)-\(R\)-algebra, (27), and the splitting of the HKR filtration for \(\int_{S^{2}}\check{\mathfrak{t}}_{R}[2]/\!\!/W\)). The formality of \(C^{*}(BG;R)\) seems to be the thorniest of these conditions, but we nevertheless hope that (27) could be useful in approaching Bezrukavnikov's question.
**Example A.10**.: Recall that there is an equivalence
\[\operatorname{Loc}_{G}^{\operatorname{gr}}(G;\operatorname{KU})\otimes \mathbf{Q}\simeq\operatorname{QCoh}(\check{T}/\!\!/W\times_{\check{G}/\check{ G}}\check{T}/\!\!/W),\]
where all objects on the coherent side are defined over \(\mathbf{Q}\). Since \(\check{T}/\!\!/W\times_{\check{G}/\check{G}}\check{T}/\!\!/W\cong(T\times\check{T})^{\operatorname{bl}}/\!\!/W\) is isomorphic to the group scheme of regular centralizers in \(\check{G}\), we will write \(\check{J}_{\mathbf{G}_{m}}\) to denote \(\check{T}/\!\!/W\times_{\check{G}/\check{G}}\check{T}/\!\!/W\). The above equivalence therefore states that
\[\operatorname{Loc}_{G}^{\operatorname{gr}}(G;\operatorname{KU})\otimes \mathbf{Q}\simeq\operatorname{QCoh}(\check{J}_{\mathbf{G}_{m}}). \tag{28}\]
There is a multiplicative analogue of [17, Theorem 3.4.2], which says that when \(\check{G}\) is adjoint, the Lie algebra of \(\check{J}_{\mathbf{G}_{m}}\) over \(\check{T}/\!\!/W\) is isomorphic to \(T^{*}(\check{T}/\!\!/W)\). Therefore, Proposition A.2 and the Hochschild-Kostant-Rosenberg theorem give an isomorphism
\[\operatorname{KU}_{G}^{*}(G)\otimes\mathbf{Q}\cong\pi_{*}\mathrm{HH}(\check{T}/\!\!/W/\mathbf{Z})\otimes_{\mathbf{Z}}\mathbf{Q}[\beta^{\pm 1}]\cong\mathscr{O}_{T[-1](\check{T}/\!\!/W)}\otimes_{\mathbf{Z}}\mathbf{Q}[\beta^{\pm 1}].\]
In particular, there is an equivalence
\[\operatorname{Mod}_{\operatorname{KU}^{0}_{G}(G)}\otimes\mathbf{Q}\simeq\operatorname{QCoh}(T[-1](\check{T}/\!\!/W))\otimes_{\mathbf{Z}}\mathbf{Q}. \tag{29}\]
By Koszul duality, the right-hand side is equivalent to the \(2\)-periodification of the \(\infty\)-category of ind-coherent sheaves over the formal completion of \(\check{J}_{\mathbf{G}_{m}}\) at the zero section. One can view the resulting description of \(\operatorname{Mod}_{\operatorname{KU}^{0}_{G}(G)}\otimes\mathbf{Q}\) as an infinitesimal version of the equivalence (28). By construction, the equivalence (29) is just a restatement of the Brylinski-Zhang isomorphism \(\operatorname{KU}^{0}_{G}(G)\otimes\mathbf{Q}\cong\Omega^{*}_{K_{0}(\operatorname{Rep}(G))/\mathbf{Z}}\otimes_{\mathbf{Z}}\mathbf{Q}\).
**Remark A.11**.: Just as in Example A.8, we can also specialize Remark A.5 to the case of K-theory. Then, we have
\[\operatorname{KU}^{*}_{G}(\Omega G)\cong\pi_{*}\left(\int_{S^{2}}\mathscr{M} _{G}\right). \tag{30}\]
Here, \(\mathscr{M}_{G}=\operatorname{Spec}\operatorname{KU}_{G}\) as a KU-scheme, and the factorization homology is taken over KU.
In general, \(\operatorname{Mod}_{\pi_{0}C^{*}_{G}(G;A)}\otimes\mathbf{Q}\) is an "infinitesimal analogue" of \(\operatorname{Loc}^{\operatorname{gr}}_{G}(G;A)\). The equivalence of Proposition A.2 can therefore be viewed as a infinitesimal version of the analogue of the equivalence of Theorem 1.1.10 for \(\operatorname{Loc}^{\operatorname{gr}}_{G}(G;A)\otimes\mathbf{Q}\).
## Appendix B Coulomb branches of pure supersymmetric gauge theories
In this brief appendix, we explain some motivation for the results of this article from the perspective of Coulomb branches of 4d \(\mathscr{N}=2\) and 5d \(\mathscr{N}=1\) gauge theories with a generic choice of complex structure. Our goal here is not to be precise, but rather to indicate where the ideas in this article come from. While reading this appendix, the reader should keep in mind that I know very little physics!
**Recollection B.1**.: In [1, 16] (see also [17]), it is argued that the Coulomb branch of 3d \(\mathscr{N}=4\) pure gauge theory on \(\mathbf{R}^{3}\) can be modeled by the algebraic symplectic variety \(\mathscr{M}_{C}:=\operatorname{Spec}\mathrm{H}_{*}^{G}(\mathrm{Gr}_{G}(\mathbf{C});\mathbf{C})\) over \(\mathbf{C}\). The calculations of [1] say that \(\mathscr{M}_{C}\) is isomorphic to \((T^{*}\check{T})^{\mathrm{bl}}/\!\!/W\). This is in turn isomorphic by [10, Theorem 3] to the phase space of the Toda lattice for \(\check{G}\), as well as to the moduli space of solutions of Nahm's equations on \([-1,1]\) for a compact form of \(\check{G}\) by [1, Theorem A.1] with an appropriate boundary condition. The _quantized_ Coulomb branch of 3d \(\mathscr{N}=4\) pure gauge theory on \(\mathbf{R}^{3}\) is then modeled by \(\mathscr{A}_{\epsilon}:=\mathrm{H}_{*}^{G\times S^{1}_{\mathrm{rot}}}(\mathrm{Gr}_{G}(\mathbf{C});\mathbf{C})\). In [1], \(\mathscr{A}_{\epsilon}\) was identified with the algebra of operators of the quantized Toda lattice for \(\check{G}\).
**Remark B.2**.: The physical reason for the definition of \(\mathscr{A}_{\epsilon}\) is the "\(\Omega\)-background" (introduced in [17]); we refer the reader to [1, 20] for helpful expositions on this topic. The essential idea is as follows: \(C^{G}_{*}(\mathrm{Gr}_{G}(\mathbf{C});\mathbf{C})\) admits the structure of an \(\mathbf{E}_{3}^{\mathrm{fr}}\)-algebra. In particular, the \(\mathbf{E}_{3}\)-algebra structure on \(C^{G}_{*}(\mathrm{Gr}_{G}(\mathbf{C});\mathbf{C})\) is equivariant for the action of \(S^{1}\) on \(C^{G}_{*}(\mathrm{Gr}_{G}(\mathbf{C});\mathbf{C})\) via loop rotation, and the action of \(S^{1}\) on \(\mathbf{E}_{3}\) via rotation about a line \(\ell\subseteq\mathbf{R}^{3}\). Using the fact that the fixed points of the \(S^{1}\)-action on \(\mathbf{R}^{3}\) are given by the line \(\ell\), it is argued in [1] that the homotopy fixed points of \(C^{G}_{*}(\mathrm{Gr}_{G}(\mathbf{C});\mathbf{C})\) admits the structure of an \(\mathbf{E}_{1}\)-\(C^{*}_{S^{1}}(*;\mathbf{C})\)-algebra. Furthermore, the associative multiplication on \(C^{G\times S^{1}_{\mathrm{rot}}}_{*}(\mathrm{Gr}_{G}(\mathbf{C});\mathbf{C})\) is argued to degenerate to the 2-shifted Poisson bracket on \(\mathrm{H}_{*}^{G}(\mathrm{Gr}_{G}(\mathbf{C});\mathbf{C})\) obtained from the \(\mathbf{E}_{3}\)-algebra structure. The "\(\Omega\)-background" is supposed to refer to the compatibility of the \(S^{1}\)-action on \(C^{G}_{*}(\mathrm{Gr}_{G}(\mathbf{C});\mathbf{C})\) with the \(S^{1}\)-action on the \(\mathbf{E}_{3}\)-operad.
From the mathematical perspective, the idea that \(S^{1}\)-actions can be viewed as deformation quantizations has been made precise by [10, 11], and more recently in [1, 20] (at least in characteristic zero). Although often not said explicitly, the idea has been a cornerstone of Hochschild homology. (The reader can skip the following discussion, since it will not be necessary in the remainder of this section; we only include it for completeness.)
Consider a smooth \(\mathbf{C}\)-scheme \(X\), so that the HKR theorem gives an isomorphism \(\mathrm{HH}(X/\mathbf{C})\simeq\operatorname{Sym}(\Omega^{1}_{X/\mathbf{C}}[1])\). There is an isomorphism \(\operatorname{Sym}(\Omega^{1}_{X/\mathbf{C}}[1])\simeq\bigoplus_{n\geq 0}(\wedge^{n}\Omega^{1}_{X/\mathbf{C}})[n]\), so \(\operatorname{Sym}(\Omega^{1}_{X/\mathbf{C}}[1])\) can be understood as a shearing of the algebra \(\Omega^{*}_{X/\mathbf{C}}=\bigoplus_{n\geq 0}(\wedge^{n}\Omega^{1}_{X/\mathbf{C}})[-n]\) of differential forms. The HKR theorem further states that the \(S^{1}\)-action on \(\mathrm{HH}(X/\mathbf{C})\) is a shearing of the de Rham differential on \(\Omega^{*}_{X/\mathbf{C}}\). The Koszul dual of the algebra \(\mathrm{HH}(X/\mathbf{C})\simeq\operatorname{Sym}(\Omega^{1}_{X/\mathbf{C}}[1])\) is \(\operatorname{Sym}(T_{X/\mathbf{C}}[-2])\simeq\mathscr{O}_{T^{*}[2]X}\); in the same way, the sheaf of differential operators on \(X\) is Koszul dual to the de Rham complex of \(X\). This can be drawn pictorially as follows:
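(The drawing itself is not reproduced in this version; the following schematic, filled in from the preceding two sentences, indicates the intended picture: Koszul duality runs horizontally, while the vertical arrows are the \(S^{1}\)-action, i.e., the sheared de Rham differential, on the left and deformation quantization on the right.)
\[\begin{array}{ccc}\mathrm{HH}(X/\mathbf{C})\simeq\operatorname{Sym}(\Omega^{1}_{X/\mathbf{C}}[1]) & \stackrel{\text{Koszul dual}}{\longleftrightarrow} & \operatorname{Sym}(T_{X/\mathbf{C}}[-2])\simeq\mathscr{O}_{T^{*}[2]X}\\ \downarrow\,{\scriptstyle S^{1}\text{-action (de Rham)}} & & \downarrow\,{\scriptstyle\text{quantization}}\\ \Omega^{*}_{X/\mathbf{C}}\ \text{(sheared)} & \stackrel{\text{Koszul dual}}{\longleftrightarrow} & \mathscr{D}^{\hbar}_{X}\end{array}\]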
Since the algebra \(\mathscr{D}_{X}^{\hbar}\) of differential operators is a quantization of \(T^{*}[2]X\), this drawing illustrates that the \(S^{1}\)-action on Hochschild homology plays the role of a Koszul dual to deformation quantization.
**Example B.3**.: We will keep \(G=\mathrm{SL}_{2}\) as a running example in discussing Coulomb branches (see also [11, Section 2]), so that \(\check{G}=\mathrm{PGL}_{2}\). In this case, \(\mathscr{M}_{C}\cong\mathrm{Spec}\,\mathbf{C}[x,t^{\pm 1},\frac{t-1}{x}]^{\mathbf{Z}/ 2}\cong\mathrm{Spec}\,\mathbf{C}[x^{2},t+t^{-1},\frac{t-t^{-1}}{x}]\) by Theorem 3.2.12 (and [1]), where \(\mathbf{Z}/2\) acts on \(\mathbf{C}[x,t^{\pm 1},\frac{t-1}{x}]\) by \(x\mapsto-x\) and \(t\mapsto t^{-1}\). Let us denote \(\Phi=x^{2}\), \(U=t+t^{-1}\), and \(V=\frac{t-t^{-1}}{x}\). Then
\[U^{2}-\Phi V^{2}=(t+t^{-1})^{2}-(t-t^{-1})^{2}=4,\]
so \(\mathscr{M}_{C}\) is isomorphic to the subvariety of \(\mathbf{A}_{\mathbf{C}}^{3}\) cut out by the equation
\[U^{2}-\Phi V^{2}=4.\]
Alternatively, and perhaps more suggestively:
\[(U+2)(U-2)=\Phi V^{2}.\]
This is known as the _Atiyah-Hitchin manifold_, and was studied in great detail in [1] (see [1, Page 20] for the definition). In [1, Theorem A.1], it was shown that the Atiyah-Hitchin manifold is isomorphic to the moduli space of solutions of Nahm's equations on \([-1,1]\) for \(\mathrm{PSU}(2)\) with an appropriate boundary condition. Since a normal vector to the defining equation of \(\mathscr{M}_{C}\) is \(2U\partial_{U}-V^{2}\partial_{\phi}-2V\Phi\partial_{V}\), the standard holomorphic 3-form \(dU\wedge d\Phi\wedge dV\) on \(\mathbf{A}_{\mathbf{C}}^{3}\) induces a holomorphic symplectic form \(\frac{d\Phi\wedge dV}{2U}\) on \(\mathscr{M}_{C}\). (This can also be written as \(\frac{dU\wedge dV}{V^{2}}\) or as \(\frac{d\Phi\wedge dU}{2\Phi V}\).) The associated Poisson bracket on \(\mathscr{O}_{\mathscr{M}_{C}}\cong\mathrm{H}_{*}^{G}(\mathrm{Gr}_{G}( \mathbf{C});\mathbf{C})\) agrees with the 2-shifted Poisson bracket arising from the \(\mathbf{E}_{3}\)-structure on \(C_{*}^{G}(\mathrm{Gr}_{G}(\mathbf{C});\mathbf{C})\).
The quantized algebra \(\mathscr{A}_{\epsilon}\) was described explicitly in [1]. Let us write \(\theta=\frac{1}{x}(s-1)\), where \(s\) is the simple reflection generating the Weyl group of \(\mathrm{SL}_{2}\). Then \(\mathscr{A}_{\epsilon}\) is generated as an algebra over \(\mathbf{C}[\hbar]\) by \(\mathbf{Z}/2\)-invariant polynomials in \(x\), \(t^{\pm 1}\), and \(\theta\), where \(x\) is to be viewed as \(t\partial_{t}\). Moreover, under the isomorphism \(\mathscr{A}_{\epsilon}/\hbar\cong\mathscr{O}_{\mathscr{M}_{C}}\), the class \(x\) is sent to \(x\), and \(\theta\) is sent to \(\frac{t-1}{x}\). We then have the commutation relation \([x,t^{\pm 1}]=\pm\hbar t^{\pm 1}\), induced by \([\partial_{t},t]=\hbar\); see Example 3.3.5. This implies that \([x^{2},t^{\pm 1}]=\hbar^{2}t^{\pm 1}\pm 2\hbar t^{\pm 1}x\), which in turn implies that \(\mathscr{A}_{\epsilon}\) is the quotient of the free associative \(\mathbf{C}[\hbar]\)-algebra on \(\Phi\), \(U\), and \(V=\frac{1}{x}(t-t^{-1})\) subject to the relations
\[[\Phi,V] =2\hbar U-\hbar^{2}V,\] \[[\Phi,U] =2\hbar\Phi V-\hbar^{2}U,\] \[[U,V] =\hbar V^{2},\] \[(U+2)(U-2) =\Phi V^{2}-\hbar UV.\]
Note that the commutation relations for \([\Phi,U]\) and \([U,V]\) in [19, Equation B.3] have typos, but they are stated correctly in [17, Equation 5.51]. The above is an explicit description of the nil-Hecke algebra \(e\mathscr{H}(\tilde{\mathfrak{t}},W^{\mathrm{aff}})e\) for \(\mathrm{PGL}_{2}\).
**Heuristic B.4**.: An unpublished conjecture of Gaiotto (which I learned about from Nakajima) says that the Coulomb branch of \(4\mathrm{d}\)\(\mathscr{N}=2\) pure gauge theory over \(\mathbf{R}^{3}\times S^{1}\) with a generic choice of complex structure can be modeled by \(\mathscr{M}_{C}^{\mathrm{4d}}:=\mathrm{Spec}\,\mathrm{KU}_{0}^{G}(\mathrm{Gr} _{G}(\mathbf{C}))\otimes\mathbf{C}\). Although I do not know Gaiotto's motivation for this conjecture (it is probably inspired by [18]), my attempt at heuristically justifying it goes as follows. Recall that \(\mathrm{Gr}_{G}(\mathbf{C})/G(\mathbf{C}[\![t]\!])\) can be viewed as \(\mathrm{Bun}_{G}(S^{2})\). It is reasonable to view \(\mathrm{KU}_{0}(\mathrm{Bun}_{G}(S^{2}))\otimes\mathbf{C}\) as closely related to \(\mathrm{H}_{*}(\mathscr{L}\mathrm{Bun}_{G}(S^{2});\mathbf{C})\), where \(\mathscr{L}\mathrm{Bun}_{G}(S^{2})\) denotes the free loop space. Since \(\mathscr{L}BG\simeq B\mathscr{L}G\), we have \(\mathscr{L}\mathrm{Bun}_{G}(S^{2})\simeq\mathrm{Bun}_{\mathscr{L}G}(S^{2})\), so one might view \(\mathrm{H}_{*}(\mathscr{L}\mathrm{Bun}_{G}(S^{2});\mathbf{C})\) as the ring of functions on the "Coulomb branch of \(3\mathrm{d}\)\(\mathscr{N}=4\) pure gauge theory on \(\mathbf{R}^{3}\) with gauge group \(\mathscr{L}G\)".
Making precise sense of this phrase seems difficult, but one possible workaround could be the following. It is often useful to view gauge theory with gauge group \(\mathscr{L}G\) as "finite temperature" gauge theory with gauge group \(G\). Recall that Wick rotation relates \((3+1)\)-dimensional quantum field theory at a finite temperature \(T\) to statistical mechanics over \(\mathbf{R}^{3}\times S^{1}\) where the circle has radius \(\frac{1}{2\pi T}\). This suggests that \(\mathrm{H}_{*}(\mathscr{L}\mathrm{Bun}_{G}(S^{2});\mathbf{C})\) (which is more precisely to be understood as \(\mathrm{KU}_{0}^{G}(\mathrm{Gr}_{G}(\mathbf{C}))\otimes\mathbf{C}\)) can be viewed as the ring of functions on the "Coulomb branch of \(4\mathrm{d}\)\(\mathscr{N}=2\) pure gauge theory on \(\mathbf{R}^{3}\times S^{1}\) with gauge group \(G\)". See [1, Remark 3.14]. In [1], \(\mathrm{Spec}\,\mathrm{KU}_{0}^{G}(\mathrm{Gr}_{G}(\mathbf{C}))\otimes \mathbf{C}\) was identified with the phase space of the relativistic Toda lattice for \(\check{G}\).
One can also define a quantization of \(\mathscr{M}_{C}^{\mathrm{4d}}\) via \(\mathscr{A}_{\epsilon}^{\mathrm{4d}}:=\mathrm{KU}_{0}^{G\times S_{\mathrm{rot}}^{1}}(\mathrm{Gr}_{G}(\mathbf{C}))\otimes\mathbf{C}\); this can be viewed as a model for the quantized Coulomb branch of \(4\mathrm{d}\)\(\mathscr{N}=2\) pure gauge theory on \(\mathbf{R}^{3}\times S^{1}\). In [1], \(\mathscr{A}_{\epsilon}^{\mathrm{4d}}\) was identified with the algebra of operators of the quantized relativistic Toda lattice for \(\check{G}\).
**Example B.5**.: When \(G=\mathrm{SL}_{2}\), the calculations of Theorem 3.2.12 and [1] tell us that \(\mathscr{M}_{C}^{\mathrm{4d}}\cong\mathrm{Spec}\,\mathbf{C}[x^{\pm 1},t^{\pm 1}, \frac{t-1}{x-1}]^{\mathbf{Z}/2}\cong\mathrm{Spec}\,\mathbf{C}[x+x^{-1},t+t^{-1},\frac{t-t^{-1}}{x-x^{-1}}]\), where \(\mathbf{Z}/2\) acts on \(\mathbf{C}[x^{\pm 1},t^{\pm 1},\frac{t-1}{x-1}]\) by \(x\mapsto x^{-1}\) and \(t\mapsto t^{-1}\). Let us write \(\Psi=x+x^{-1}\), \(W=t+t^{-1}\), and \(Z=\frac{t-t^{-1}}{x-x^{-1}}\). Then, one easily verifies that \(\mathscr{M}_{C}^{\mathrm{4d}}\) is the subvariety of \(\mathbf{A}_{\mathbf{C}}^{3}\) cut out by the equation
\[W^{2}-(\Psi^{2}-4)Z^{2}=4.\]
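For the reader's convenience, the verification parallels the computation displayed in Example B.3: using \(\Psi^{2}-4=(x+x^{-1})^{2}-4=(x-x^{-1})^{2}\), one finds
\[W^{2}-(\Psi^{2}-4)Z^{2}=(t+t^{-1})^{2}-(x-x^{-1})^{2}\cdot\frac{(t-t^{-1})^{2}}{(x-x^{-1})^{2}}=(t+t^{-1})^{2}-(t-t^{-1})^{2}=4.\]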
Alternatively, and perhaps more suggestively:
\[(W+2)(W-2)=(\Psi+2)(\Psi-2)Z^{2}.\]
This may be regarded as a multiplicative analogue of the Atiyah-Hitchin manifold. It would be very interesting to understand a relationship between this manifold and the moduli space of solutions to some analogue of Nahm's equations for \(\mathrm{PSU}(2)\) with an appropriate boundary condition. The complex manifold \(\mathscr{M}_{C}^{\mathrm{4d}}\) has a holomorphic symplectic form given by \(\frac{d\Psi\wedge dZ}{W}\), which can also be written as \(\frac{d\Psi\wedge dW}{(\Psi^{2}-4)Z}\) or as \(\frac{dZ\wedge dW}{\Psi Z^{2}}\).
We can also describe the quantized algebra \(\mathscr{A}_{\epsilon}^{\mathrm{4d}}\) explicitly. In this case, instead of the relation \([\partial_{t},t]=\hbar\) which appeared in Example B.3, we have the relation \(xt=qtx\) (i.e., \(xtx^{-1}t^{-1}=q\)); see Example 3.3.6. In particular, \(xt^{-1}=q^{-1}t^{-1}x\),
\(x^{-1}t=q^{-1}tx^{-1}\), and \(x^{-1}t^{-1}=qt^{-1}x^{-1}\). It follows after some tedious calculation that \(\mathscr{A}_{\epsilon}^{\text{4d}}\) is the quotient of the free associative \(\mathbf{C}[\![q-1]\!]\)-algebra (in fact, \(\mathbf{C}[q^{\pm 1}]\)-algebra) on \(\Psi\), \(W\), and \(Z=\frac{1}{x-x^{-1}}(t-t^{-1})\) subject to the relations
\[[\Psi,W] =(q-1)(\Psi^{2}-4)Z-\frac{(q-1)^{2}}{2q}((\Psi^{2}-4)Z+\Psi W),\] \[[\Psi,Z] =(q-1)W-\frac{(q-1)^{2}}{2q}(\Psi Z+W),\] \[[Z,W] =(q-1)\Psi Z^{2}-\frac{(q-1)^{2}}{2q}(\Psi Z+W)Z,\] \[(W+2)(W-2) =(\Psi+2)(\Psi-2)Z^{2}-\frac{(q-1)^{2}}{2q}(\Psi^{2}-4)Z^{2}+\frac{q^{2}-1}{2q}\Psi WZ.\]
This algebra is an explicit description of the multiplicative nil-Hecke algebra \(e\mathscr{H}(\widetilde{T},W^{\text{aff}})e\) from Conjecture 4.2.9 for \(\operatorname{PGL}_{2}\).
Now consider an elliptic curve \(E(\mathbf{C})\) over \(\mathbf{C}\). Motivated by Heuristic B.4 and [15], one might expect that the Coulomb branch of 5d \(\mathscr{N}=1\) pure gauge theory over \(\mathbf{R}^{3}\times E(\mathbf{C})\) (with some specific complex structure) can be modeled by the complexification of the \(G\)-equivariant \(A\)-homology of \(\operatorname{Gr}_{G}(\mathbf{C})\), where \(A\) is an elliptic cohomology theory associated to a putative integral lift of \(E\). A classical result of Tate says that there are no smooth elliptic curves over \(\mathbf{Z}\), so \(E(\mathbf{C})\) cannot literally lift to \(\mathbf{Z}\) (i.e., \(\pi_{0}(A)\) cannot be \(\mathbf{Z}\)). As a fix, one can more generally simultaneously consider all possible "Coulomb branches" \(\mathscr{M}_{C}^{\text{5d}}:=\operatorname{Spec}A_{0}^{G}(\operatorname{Gr}_{G }(\mathbf{C}))\otimes\mathbf{C}\) associated to every complex-oriented even-periodic \(\mathbf{E}_{\infty}\)-ring \(A\) equipped with an oriented elliptic curve (this is almost equivalent to considering the universal example \(\operatorname{Spec}\operatorname{tmf}_{0}^{G}(\operatorname{Gr}_{G}(\mathbf{C }))\otimes\mathbf{C}\)). We have described \(\operatorname{Spec}A_{0}^{T}(\operatorname{Gr}_{G}(\mathbf{C}))\otimes \mathbf{C}\) in Theorem 3.2.12, from which one can calculate \(\mathscr{M}_{C}^{\text{5d}}\). Similarly, one can even use Corollary 3.2.3 to calculate \(A_{0}^{T\times S_{\text{rot}}^{1}}(\operatorname{Gr}_{G}(\mathbf{C}))\otimes \mathbf{C}\) and \(\mathscr{A}_{\epsilon}^{\text{5d}}:=A_{0}^{G\times S_{\text{rot}}^{1}}( \operatorname{Gr}_{G}(\mathbf{C}))\otimes\mathbf{C}\), but this is already incredibly complicated for \(G=\operatorname{SL}_{2}\).
**Example B.6**.: Let \(A\) be a complex-oriented even-periodic \(\mathbf{E}_{\infty}\)-ring equipped with an oriented elliptic curve \(\widetilde{E}\), and let \(E\) denote the associated elliptic curve over \(\pi_{0}(A)\otimes\mathbf{C}\). Let \((\mathbf{G}_{m}\times E)^{\text{bl}}\) denote the complement of the proper preimage of the zero section of \(E\) inside the blowup of \(\mathbf{G}_{m}\times E\) at the locus cut out by the zero sections of \(\mathbf{G}_{m}\) and \(E\). There is an action of \(\mathbf{Z}/2\) on \((\mathbf{G}_{m}\times E)^{\text{bl}}\), induced by the inversion on the group structures on \(\mathbf{G}_{m}\) and \(E\). If \(G=\operatorname{SL}_{2}\), then Theorem 3.2.12 can be used to show that \(\mathscr{M}_{C}^{\text{5d}}=\operatorname{Spec}A_{0}^{G}(\operatorname{Gr}_{G }(\mathbf{C}))\otimes\mathbf{C}\) is isomorphic to \((\mathbf{G}_{m}\times E)^{\text{bl}}/(\mathbf{Z}/2)\); this can be viewed as an elliptic analogue of the Atiyah-Hitchin manifold. We do not have a simple description for \(\mathscr{A}_{\epsilon}^{\text{5d}}\) analogous to Example B.3 and Example B.5.
It would be very interesting to give a physical interpretation to \(A_{0}^{G}(\operatorname{Gr}_{G}(\mathbf{C}))\otimes\mathbf{C}\) and \(A_{0}^{G\times S_{\text{rot}}^{1}}(\operatorname{Gr}_{G}(\mathbf{C}))\otimes \mathbf{C}\) for other even-periodic \(\mathbf{E}_{\infty}\)-rings \(A\), although we expect this to be very difficult (since most other chromatically interesting generalized cohomology theories only exist after profinite or \(p\)-adic completion, and do not admit transcendental analogues). It would also be very interesting to describe the analogue of our calculations for the ind-schemes \(\mathscr{R}_{G,\mathbf{N}}\) introduced in [1]. By adapting the methods of [1, Section 4], this is approachable when \(G\) is a torus. We expect it to lead to interesting geometry for nonabelian \(G\). |
2301.11404 | A Quantum Monte Carlo study of the structural, energetic, and magnetic
properties of two-dimensional (2D) H and T phase VSe$_2$ | Previous works have controversially claimed near-room temperature
ferromagnetism in two-dimensional (2D) VSe$_2$, with conflicting results
throughout the literature. These discrepancies in magnetic properties between
both phases (T and H phase) of 2D VSe$_2$ are most likely due to the structural
parameters being coupled to the magnetic properties. Specifically, both phases
have a close lattice match and similar total energies, which makes it difficult
to determine which phase is being observed experimentally. In this study, we
used a combination of density functional theory (DFT), highly accurate
diffusion Monte Carlo (DMC) and a surrogate Hessian line-search optimization
technique to resolve the previously reported discrepancy in structural
parameters and relative phase stability. With DMC accuracy, we determined the
freestanding geometry of both phases and constructed a phase diagram. Our
findings demonstrate the successes of the DMC method coupled with the surrogate
Hessian structural optimization technique when applied to a 2D magnetic system. | Daniel Wines, Juha Tiihonen, Kayahan Saritas, Jaron Krogel, Can Ataca | 2023-01-26T20:31:56Z | http://arxiv.org/abs/2301.11404v1 | A Quantum Monte Carlo study of the structural, energetic, and magnetic properties of two-dimensional (2D) H and T phase VSe\({}_{2}\)
###### Abstract
Previous works have controversially claimed near-room temperature ferromagnetism in two-dimensional (2D) VSe\({}_{2}\), with conflicting results throughout the literature. These discrepancies in magnetic properties between both phases (T and H phase) of 2D VSe\({}_{2}\) are most likely due to the structural parameters being coupled to the magnetic properties. Specifically, both phases have a close lattice match and similar total energies, which makes it difficult to determine which phase is being observed experimentally. In this study, we used a combination of density functional theory (DFT), highly accurate diffusion Monte Carlo (DMC) and a surrogate Hessian line-search optimization technique to resolve the previously reported discrepancy in structural parameters and relative phase stability. With DMC accuracy, we determined the freestanding geometry of both phases and constructed a phase diagram. Our findings demonstrate the successes of the DMC method coupled with the surrogate Hessian structural optimization technique when applied to a 2D magnetic system.
+
Footnote †: preprint: AIP/123-QED
## I Introduction
One of the most promising two-dimensional (2D) magnetic materials that has been extensively studied experimentally and theoretically is 2D VSe\({}_{2}\). Similar to other 2D transition metal dichalcogenides (such as MoS\({}_{2}\)) [1], VSe\({}_{2}\) exists in two phases, the T (octahedral phase (1T)-centered honey-combs) phase which is metallic and the H (the trigonal prismatic phase (2H)-hexagonal honeycombs, see Fig. 1) phase which is semiconducting. Several experimental and theoretical studies have controversially claimed near-room temperature ferromagnetism in VSe\({}_{2}\), with conflicting results throughout the literature. Density functional theory (DFT) along with classical Monte Carlo simulations have been used to obtain an estimate of the Curie temperature of H-VSe\({}_{2}\) (291 K) [2], but the model Ising Hamiltonian used did not take into account the magnetic anisotropy energies, which are essential for an accurate estimation of the Curie temperature of a 2D lattice. The Curie temperature of multilayered 2D H-VSe\({}_{2}\) has been experimentally measured to be 425 K, with the ferromagnetism softening as the thickness of the sample increases [3]. Additionally, the experimental Curie temperature for monolayer T-VSe\({}_{2}\) has ranged from 300 K to 470 K [4; 5] depending on which substrate is used (MoS\({}_{2}\), graphite, SiO\({}_{2}\)-coated silicon). The experimental magnetization of T-VSe\({}_{2}\) has also been met with controversy, with values of 15 \(\mu_{B}\) and 5 \(\mu_{B}\) (per formula unit) being reported from two separate studies [6; 4]. Insight has also been reported with regards to how the ferromagnetism is enhanced with defects, molecular adsorption and the choice of substrate for VSe\({}_{2}\)[4; 5; 7]. A wide range of values have also been reported for the charge density wave (CDW) transition temperature for T-VSe\({}_{2}\), ranging from 120 K to 350 K [6; 3; 8; 9; 10].
These discrepancies in the electronic and magnetic properties of either phase of 2D VSe\({}_{2}\) arise from the structural parameters of each phase being coupled closely to the magnetic and electronic properties and the external factors (substrates, defects) of the individual samples. One example of this has been a reported discrepancy on which phase (T or H) is energetically more favorable. Both the T and H phases have a close lattice match and similar total energies, which makes it difficult to determine which phase is being observed experimentally. Recently, it has been reported experimentally that the T phase is favored for bulk VSe2, but with dimensionality decrease, the H phase is favored [11; 3]. It has also been reported that a T-to-H phase transition can be realized by thermal annealing [11]. This same structural phase transition has even been reported by applying a biaxial strain of \(\approx\) 3 % (from calculated results) [7; 11; 12]. Researchers have proposed that this lattice strain can be induced by the mismatch that occurs from
Figure 1: Top and side view of the atomic structure of monolayer VSe\({}_{2}\) in the a) 1T and b) 2H phase.
putting 2D VSe\({}_{2}\) on a substrate [7; 12].
From a computational perspective, results for VSe\({}_{2}\) depend heavily on which methodology is employed. In most cases, DFT with an empirical Hubbard correction (+\(U\)) for correlated electrons is used [13]. For example, if the \(U\) correction is applied for T and H-VSe\({}_{2}\), the T phase is more energetically favorable, while if no \(U\) correction is applied, the H phase is more favorable [14]. In addition to the discrepancies in results calculated with DFT+\(U\), results between van der Waals (vdW) corrected functionals and hybrid functionals are also inconclusive [14] in terms of predicting the relative phase stability. In order to alleviate the uncertainty in DFT methods, more sophisticated methods can be used such as Diffusion Monte Carlo (DMC) [15]. DMC is a correlated, many-body electronic structure method that has demonstrated success for the electronic and magnetic properties of a variety of bulk and 2D systems [16; 17; 18; 19; 20; 21; 22; 23; 24]. This method has a weaker dependence on the starting density functional and \(U\) parameter and can successfully achieve results with an accuracy beyond the DFT+\(U\)[15].
Due to the fact that T and H-VSe\({}_{2}\) have structural parameters that are coupled to their electronic and magnetic properties, it makes it difficult to produce conclusive results that rely solely on DFT or DFT+\(U\). For this reason, we employed our recently developed energy-based surrogate Hessian method for structural optimization with stochastic electronic structure theories (such as DMC) [22] to obtain the geometry of T and H-VSe\({}_{2}\) with DMC accuracy, resulting in high-accuracy bond lengths that resolve previous functional dependent structural discrepancies. After obtaining an accurate geometry for both structures, we constructed a phase diagram between T and H-VSe\({}_{2}\) using DMC calculated energies and obtained accurate magnetic properties of each structure. The accurate estimates for lattice geometry, relative phase energy and the DMC phase diagram assist in clarifying previously inconclusive theoretical and experimental results regarding T and H phase VSe\({}_{2}\). For full details of the computational methods used, see the Supporting Information (SI).
As an initial starting point for our study, we performed benchmarking DFT and DFT+\(U\) calculations using a variety of density functionals (local density approximation (LDA) [25], Perdew-Burke-Ernzerhof (PBE) [26], and strongly constrained and appropriately normed (SCAN) [27] meta-GGA functionals, see SI for more details) and the Vienna Ab initio Simulation Package (VASP) code for monolayer T-VSe\({}_{2}\) and H-VSe\({}_{2}\). The goal of these simulations was to assess how sensitive the relative energy between the T and H phase is with respect to functional and material geometry. Another goal of these simulations was to benchmark the structural parameters of each material with respect to several density functionals. It is advantageous to perform these reference calculations with VASP and PAW pseudopotentials as a precursor to the more expensive DMC calculations due to the fact that they require a much smaller cutoff energy and are more cost effective for a large number of simulations. It is important to note that for all DFT and DMC simulations, we assumed a ferromagnetic ground state for both T and H-VSe\({}_{2}\). Although recent reports have suggested that T-VSe\({}_{2}\) could be experimentally paramagnetic [3], we infer that this paramagnetism can be induced by magnetic anisotropy. In addition, the modeling of paramagnetism with computational methods imposes a great challenge, which is why we focus on the freestanding ferromagnetic ground states of both phases. A more robust treatment of the magnetic structure can be explored in future work, but is beyond the scope of this work which primarily focuses on determining the geometric structure and phase stability of 2D T and H-VSe\({}_{2}\).
In Fig. 2 we present a comprehensive look at the difference in total energy between T-VSe\({}_{2}\) and H-VSe\({}_{2}\), using several DFT functionals under different geometric constraints. We performed these calculations for a variety of \(U\) values in three different ways: fully relaxing the structure at each value of \(U\) (Fig. 2 a)), fixing the lattice and atomic positions to the \(U\) = 0 eV relaxed geometry of that particular functional and calculating the static energy at each value of \(U\) (Fig. 2 b)), and fixing the lattice to the \(U\) = 0 eV relaxed geometry of that particular functional and relaxing just the atomic positions at each value of \(U\) (Fig. 2 c)). The results in Fig. 2 indicate that there is a significant disagreement between DFT functionals, \(U\) value used, and material geometries, with all three factors playing a significant role in the energy difference between T and H phase. Specifically, regardless of relaxation method, all bare (no \(U\) correction) SCAN, PBE, and PBEsol functionals predict H favorable, while bare LDA predicts T favorable. For all functionals, there is a critical value of \(U\) that reverses the relative phase stability, which is dependent on functional and relaxation method. The SCAN functional with a \(U\) correction predicts T phase favorable, with larger energy differences. As seen in Fig. 2, the trends in the relative phase stability between Fig. 2 b) and c) are nearly identical, but significantly vary from Fig. 2 a). This implies that the density functional is strongly coupled to material geometry, but the lattice constant change has more of an effect on phase stability than atomic positions and bond distances. This is most prevalent for higher \(U\) values (\(>\) 2 eV), where the relaxed geometry changes more drastically with \(U\). The interrelated nature of the material's geometry, density functional, and value of \(U\) is a reason to seek out higher levels of theory beyond DFT/DFT+\(U\), such as DMC, to accurately determine the optimal geometry and relative energy between the phases of 2D VSe\({}_{2}\).
The relaxed lattice constants, V-Se distances, and T - H energies from Fig. 2 a) are presented in Table 1 and Fig. 3, along with additional VASP reference calculations performed with the vdW corrected functionals (PBE-D2 [28], PBE-D3 [29], SCAN+rvv10 [30]). The DMC computed parameters are also given for comparison in Table 1 and Fig. 3 (more discussion to follow). We observe a \(\approx\) 7 % variability in lattice constant across the different methods. Between both phases, we observe a \(\approx\) 3 % variability in V-Se distance (\(d^{\rm V-Se}\)). Most strikingly, the energy difference between the T and H phases (E\({}^{\rm T-H}\)) drastically varies depending on the material geometry and computational methodology, ranging from -0.2 eV/f.u. to 0.06 eV/f.u. Due to the fact that a strain-induced phase transition has been reported between T- and H-VSe\({}_{2}\)[7; 11; 12], we decided to perform additional VASP benchmarking calculations that involved the application of tensile and compressive strain for each monolayer. We performed these calculations for PBE, SCAN, and LDA (with \(U\) = 0 eV and \(U\) = 2 eV), starting from the \(U\) = 0 eV geometry for each functional. The resulting equations of state are depicted in Fig. S3. As seen in the figure, the equation of state and resulting strain-induced phase transition are entirely dependent on the functional and \(U\) value, with no consistent trend.
The strong sensitivity of each monolayer with respect to geometry and functional is grounds for using a higher-order method such as DMC to obtain a statistically accurate estimate of the lattice parameters and relative energy between phases. Prior to performing the DMC/line-search calculations, we optimized our nodal surface (orbitals selected for DFT wavefunction generation). Since DMC has the zero-variance property, as the trial wave function approaches the
\begin{table}
\begin{tabular}{l|l l|l l|l} & T-VSe\({}_{2}\) & \multicolumn{2}{|l|}{H-VSe\({}_{2}\)} & \\ \hline \hline Method & \(a\) (Å) & \(d^{\rm V-Se}\) (Å) & \(a\) (Å) & \(d^{\rm V-Se}\) (Å) & E\({}^{\rm T-H}\) (eV/f.u.) \\ \hline \hline PBE & 3.336 & 2.489 & 3.333 & 2.502 & 0.045 \\ \hline PBE+\(U\)=2 & 3.435 & 2.526 & 3.364 & 2.520 & -0.008 \\ \hline LDA & 3.228 & 2.438 & 3.229 & 2.445 & -0.026 \\ \hline LDA+\(U\)=2 & 3.277 & 2.455 & 3.266 & 2.464 & 0.045 \\ \hline SCAN & 3.387 & 2.486 & 3.329 & 2.486 & 0.045 \\ \hline SCAN+\(U\)=2 & 3.462 & 2.524 & 3.353 & 2.502 & -0.202 \\ \hline PBEsol & 3.262 & 2.458 & 3.272 & 2.471 & 0.013 \\ \hline PBEsol+\(U\)=2 & 3.323 & 2.483 & 3.301 & 2.487 & 0.025 \\ \hline PBE-D2 & 3.323 & 2.484 & 3.318 & 2.496 & 0.010 \\ \hline PBE-D3 & 3.315 & 2.485 & 3.319 & 2.497 & 0.042 \\ \hline SCAN+rvv10 & 3.379 & 2.481 & 3.319 & 2.482 & 0.051 \\ \hline DMC & 3.414(12) & 2.505(7) & 3.335(8) & 2.503(5) & 0.06(2) \\ \hline \end{tabular}
\end{table}
Table 1: Tabulated results for lattice constant, V-Se distance, and relative energy (T - H) for both T and H phase 2D VSe\({}_{2}\) for several computational methods. DMC error bars (standard error about the mean) are included in parenthesis.
Figure 2: Relative (T - H) energy between T and H phase 2D VSe\({}_{2}\) as a function of \(U\) parameter for several density functionals and methods of atomic relaxation: a) fully relaxing the structure, b) fixing the lattice and atomic positions to the \(U\) = 0 eV relaxed geometry of that particular functional and calculating the static energy, c) fixing the lattice to the \(U\) = 0 eV relaxed geometry of that particular functional and relaxing just the atomic positions. The dotted line indicates 0 eV.
exact ground state, the statistical fluctuations in the energy reduce to zero [15]. Although there have been instances where various sophisticated methods have been used to optimize the nodal surface [31; 32; 33; 34], we employed the PBE+\(U\) approach, where the Hubbard (\(U\)) value was used as a variational parameter to optimize the nodal surface using DMC (similar to other successful DMC studies of magnetic materials [35; 36; 37; 16; 20; 12; 36]). We performed these calculations for both T and H-VSe\({}_{2}\) (24 atom supercells), where we tuned the \(U\) value from (1 to 4) eV while creating the trial wavefunction and computed the DMC energy. The results of these calculations are depicted in Fig. S4, where we observe that \(U\) = 2 eV yields the lowest energy for both phases. It is important to note that for the H phase, the DMC energies for \(U\) = 1 and \(U\) = 2 eV are statistically identical. Based on this, we created the trial wavefunction using PBE+\(U\) (\(U\) = 2 eV) for all subsequent DMC calculations within the surrogate Hessian line-search for both phases (all 52 DMC energy evaluations). Since we obtained an optimal \(U\) value of 2 eV for both materials, we focused our DFT+U benchmarking efforts more on \(U\) = 2 eV (Fig. 3, Fig 5, Table 1, Fig. 2, Fig. S3).
Based on the DMC line-search results, we determined accurate bounds on the lattice parameter (\(a\)) and off-plane displacement of Se (\(z\)), within an error tolerance of 0.018 A or lower for both parameters. This translates to within \(\approx\) 0.5% accuracy in a parameter set of \(a\) and \(d^{\rm V-Se}\) with 95% confidence. Convergence (absence of significant displacements outside of the error tolerance) was achieved after two parallel line-search iterations for both phases. This convergence is illustrated in Fig. S5, where the convergence of the parameter offsets of \(a\) and \(z\) and the convergence of the total energy per f.u. are depicted for both T and H phase 2D VSe\({}_{2}\) for the initial DFT relaxed structure (1) and both subsequent iterations of DMC (2 - 3). In addition, the final energy of both of the fitted structures (square points) are given.
The final geometric parameters and relative phase energies determined with DMC are given in Table 1 and Fig. 3. For T-VSe\({}_{2}\), we determined a lattice constant of 3.414(12) A and a V-Se distance of 2.505(7) A. For H-VSe\({}_{2}\), we determined a lattice constant of 3.335(8) A and a V-Se distance of 2.503(5) A. The DMC finite-size extrapolated energy difference (T - H) between the two phases was determined to be 0.06(2) eV/f.u., indicating that in freestanding form at the equilibrium geometry, H-VSe\({}_{2}\) is favored over T-VSe\({}_{2}\). When comparing these DMC results to the other DFT functionals in Table 1 and Fig. 3, it is clear that very few DFT functionals can reproduce the DMC results for lattice constant, V-Se distance and relative energy difference. The SCAN functional comes the closest to reproducing all three simultaneous DMC values, but still falls slightly short for the V-Se distances of both phases and the lattice constant of T-VSe\({}_{2}\). The fact that SCAN+U successfully predicts the structural properties (for H-VSe\({}_{2}\)) and the fact that SCAN+rv10 produces an energy difference closest to the average DMC energy difference for both phases loosely implies that a simultaneous description of correlated magnetism and vdW interactions are both needed to correctly represent the physics of VSe\({}_{2}\). Experimental measurements of
Figure 3: A summary of the deviation of the geometric properties relative to the DMC calculated geometric properties for a) T-VSe\({}_{2}\) and b) H-VSe\({}_{2}\) and c) the deviation of T - H energy relative to the DMC calculated T - H energy for a variety of DFT functionals (\(U\) = 2 eV), where the DMC error bar (standard error about the mean) is represented by the red bars.
Figure 4: (Top) The phase diagram of 2D VSe\({}_{2}\) in terms of \(a\) and \(d^{\rm V-Se}\). The phase boundary (solid line, black) is estimated from bicubic fits. To assure quality of the fits, the estimated \(\pm\)0.01 eV error contours (dotted line) and the minima from the fits (’x’) and the line-search (’o’) are all well separated. (Bottom) Slices of the PES at \(d^{\rm V-Se}\) = 2.505 Å.
the lattice constant and V-Se distance of freestanding monolayer VSe\({}_{2}\) are scarce and often times dependent on external factors such as the substrate (more discussion to follow) and sample preparation technique [4; 38; 39; 5]. However, Chen et al. [38] have recently reported a lattice constant of 3.4 A for thin films of T-VSe\({}_{2}\) and Liu et al. [39] have recently reported a lattice constant of 3.3 A for epitaxially grown monolayer H-VSe\({}_{2}\). Both of these measured values are in excellent agreement with our DMC computed lattice constants. Additionally, we determined the near-equilibrium PES of both T and H 2D VSe\({}_{2}\) with DMC accuracy, which are both depicted in Fig. S6.
The phase diagram presented in Fig. 4 is based on similar fits to data, where the \(z\) displacement has been remapped to \(d^{\rm V-Se}\). This DMC phase diagram can directly be compared to the energy vs. strain DFT benchmarking calculations in Fig. S3, which emphasizes the need for an accurate representation of the phase boundary between the two phases. The freestanding geometries of both T and H lie in the energetic H phase, but a slice of the phase diagram along \(d^{\rm V-Se}=2.505\) A indicates that the T phase becomes favorable over H at biaxial strain of \(a\gtrsim 3.5\) A. This implies that in freestanding form, once T-VSe\({}_{2}\) is positively strained at least \(\approx 2.5\) %, T phase is favored over H. Alternatively, if freestanding H-VSe\({}_{2}\) is positively strained at least \(\approx 5\) %, T phase is also favored over H. This strain can easily be accomplished by placing monolayer VSe\({}_{2}\) on a substrate with significant lattice mismatch. In fact, this type of mismatch has been reported to alter the material properties [4; 5; 40; 41], significantly contributing to the controversies of T and H-VSe\({}_{2}\) (for energetic favorability, magnetic properties). Whether or not the changes in energetic favorability or magnetic properties with respect to the substrate are due to lattice mismatch or more complicated interactions between the substrate and the monolayer remains to be answered and is beyond the scope of this work, which has focused solely on the freestanding forms of T and H-VSe\({}_{2}\). However, such calculations can be employed for future work using higher order methods such as DMC. The proximity of the phase boundary between the T and H phases (Fig. 4) is emphasized by the small energy difference between the two curves (0.06(2) eV/f.u. at the equilibrium geometry). Since this energy difference is so close to room temperature (\(\approx 0.024\) eV), this implies that a process such as thermal annealing can easily induce a phase transition. In fact, recently it was demonstrated that a structural phase transition of multilayer VSe\({}_{2}\) from T to H occurs through annealing at 650 K, along with a metal-insulator transition [11].
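To make the construction of such a boundary concrete, the following is a minimal Python sketch of the kind of fit described above: bicubic polynomials are fit by least squares to energy surfaces \(E(a,d^{\rm V-Se})\) of each phase, and the sign of their difference locates the boundary. The energy grids below are synthetic placeholders, not the DMC data of Fig. 4.

```python
import numpy as np

def bicubic_fit(a, d, E):
    """Least-squares fit of E(a, d) to a bicubic polynomial sum_{i,j<=3} c_ij a^i d^j."""
    A = np.stack([a**i * d**j for i in range(4) for j in range(4)], axis=1)
    coeff, *_ = np.linalg.lstsq(A, E, rcond=None)
    return coeff

def bicubic_eval(coeff, a, d):
    A = np.stack([a**i * d**j for i in range(4) for j in range(4)], axis=1)
    return A @ coeff

# Synthetic placeholder energy surfaces (eV/f.u.) for the two phases on a 9 x 9 grid
a_grid, d_grid = np.meshgrid(np.linspace(3.25, 3.50, 9), np.linspace(2.45, 2.56, 9))
a_flat, d_flat = a_grid.ravel(), d_grid.ravel()
E_T = 0.8 * (a_flat - 3.41)**2 + 1.5 * (d_flat - 2.505)**2          # toy T-phase surface
E_H = 3.0 * (a_flat - 3.34)**2 + 1.5 * (d_flat - 2.503)**2 - 0.06   # toy H-phase surface

# Coefficients of the fitted energy difference E_T - E_H (the least-squares fit is linear in E)
dE_coeff = bicubic_fit(a_flat, d_flat, E_T) - bicubic_fit(a_flat, d_flat, E_H)

# Sign of the fitted difference on a fine grid; the sign change traces the phase boundary
af, df = np.meshgrid(np.linspace(3.25, 3.50, 200), np.linspace(2.45, 2.56, 200))
diff = bicubic_eval(dE_coeff, af.ravel(), df.ravel()).reshape(af.shape)
print("fraction of the (a, d) window where the T phase is lower:", float(np.mean(diff < 0)))
```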
To gain a deeper understanding of the magnetic properties of 2D T and H-VSe\({}_{2}\), we extracted the spin densities (using a trial wavefunction at \(U\) = 2 eV and 24 atom supercell at the final equilibrium geometry predicted by DMC/line-search). The spin density isosurfaces of each phase (\(\rho_{\rm up}\) - \(\rho_{\rm down}\)) are depicted in the insets of Fig. 5 a) and c) for T-VSe\({}_{2}\) and H-VSe\({}_{2}\) respectively. For both phases, we observe the V atoms are highly spin-polarized, while the Se atoms are slightly antiparallel with respect to the V atoms. For more calculation details regarding spin density, see SI.
We went on to plot the radial averaged spin densities as a function of distance, separately for V and Se for T and H-VSe\({}_{2}\) (depicted in Fig. 5 a) - d)). This allows us to view the spatial variations in spin density. Additionally, we benchmarked these V and Se radially averaged densities with PBE+\(U\) (\(U\) = 2 eV) using NC pseudopotentials at the equilibrium geometry (the calculation required to create the trial WF for the subsequent DMC runs). As seen in Fig. 5 a) and c), there is a substantial difference in the V spin density between DMC and PBE+\(U\) (\(U\) = 2 eV) for both T and H phase. This same substantial difference between DMC and PBE+\(U\) also occurs for the total charge density. This discrepancy is most prevalent near the radial density peak (peak of \(d\) orbital) and can be attributed to the fact that DFT functionals (even with the added Hubbard correction) tend to delocalize and unsuccessfully capture 3\(d\) orbitals. This large discrepancy in the spin densities highlights the need for more accurate, many-body computational methodologies for correlated materials such as VSe\({}_{2}\), where DFT fails. In contrast, there is closer agreement between the DMC and PBE+\(U\) spin densities for Se in T and H-VSe\({}_{2}\) (see Fig. 5 b) and d).
Finally, we estimated the site-averaged atomic magnetic moments per V and Se for both T and H phase by integrating the DMC and PBE+\(U\) spin densities depicted in Fig. 5. At the DMC level, we estimated a magnetic moment of 1.06(2) \(\mu_{\rm B}\) for V and -0.09(2) \(\mu_{\rm B}\) for Se in T-VSe\({}_{2}\) and a magnetic moment of 1.02(1) \(\mu_{\rm B}\) for V and -0.14(1) \(\mu_{\rm B}\) for Se in H-VSe\({}_{2}\). At the PBE+\(U\) (\(U\) = 2 eV) level, we estimated a magnetic moment of 1.30 \(\mu_{\rm B}\) for V and -0.12 \(\mu_{\rm B}\) for Se in T-VSe\({}_{2}\) and a magnetic moment of 1.40 \(\mu_{\rm B}\) for V and -0.15 \(\mu_{\rm B}\) for Se in H-VSe\({}_{2}\). Consistent with the radial spin density results in Fig. 5, we find that the DMC and PBE+\(U\) magnetic moments for Se are in much closer agreement than for V (for both T and H phase). By analyzing the spin densities and obtaining the on-site magnetic moments, we obtain a clear picture of how the magnetization of each ion depends on the computational method used, serving as a benchmark for the magnetic properties of 2D VSe\({}_{2}\).
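As an illustration of the quadrature involved in such an estimate, the Python sketch below integrates a radially averaged spin density out to a site-centred cutoff radius to obtain a moment in \(\mu_{\rm B}\); the density profile and the cutoff radius are placeholders, not the DMC or PBE+\(U\) data of Fig. 5.

```python
import numpy as np

def site_moment(r, rho_spin, r_cut):
    """m = 4*pi * integral_0^{r_cut} rho_spin(r) r^2 dr for a radially averaged
    spin density rho_spin (mu_B / Angstrom^3) around a given atomic site."""
    mask = r <= r_cut
    rr, ff = r[mask], rho_spin[mask] * r[mask]**2
    return 4 * np.pi * np.sum(0.5 * (ff[1:] + ff[:-1]) * np.diff(rr))  # trapezoid rule

# Placeholder profile (illustrative only): an exponentially localized spin density
r = np.linspace(0.0, 3.0, 601)              # Angstrom
rho = 0.35 * np.exp(-3.0 * r)               # mu_B / Angstrom^3
print("site-averaged moment (mu_B):", site_moment(r, rho, r_cut=1.3))
```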
In this work, we used a combination of DFT, DMC and a recently developed surrogate Hessian line-search optimization technique to resolve the previously reported discrepancy in structural parameters and relative phase stability of monolayer T-VSe\({}_{2}\) and H-VSe\({}_{2}\). Using these methods, we determined the lattice constant and V-Se distance (with DMC accuracy) to be 3.414(12) A and 2.505(7) A respectively for T-VSe\({}_{2}\) and 3.335(8) A and 2.503(5) respectively for H-VSe\({}_{2}\). In addition, we find the relative energy between the phases (T - H) to be 0.06(2) eV/f.u. at the DMC level, indicating that in freestanding form, H-VSe\({}_{2}\) is more energetically favorable than T-VSe\({}_{2}\). We went on to obtain a phase diagram between T and H phase from the PES and determined that a phase transition can be induced by strain or mechanisms such as thermal annealing. Additionally, we benchmarked the magnetic properties such as spin density and on-site magnetic moment for both phases and find substantial differences between DMC and DFT. The results of this study demonstrate the successes of the DMC method coupled with the surrogate Hessian line-search structural optimization technique when applied to a 2D magnetic system. The estimates for lattice constant, bond distance, relative phase energy and the extracted structural
-dependent phase diagram assist in clarifying previously inconclusive theoretical and experimental results regarding T and H phase VSe\({}_{2}\).
## II Code availability statement
Software packages mentioned in the article can be found at [https://github.com/usnistgov/jarvis](https://github.com/usnistgov/jarvis). Please note that the use of commercial software (VASP) does not imply recommendation by the National Institute of Standards and Technology.
## III Competing interests
The authors declare no competing interests.
## IV Acknowledgments
The authors thank the National Institute of Standards and Technology for funding, computational, and data-management resources. The authors thank Dr. Kamal Choudhary and Dr. Francesca Tavazza for fruitful discussions. We acknowledge grants of computer capacity from the Finnish Grid and Cloud Infrastructure (persistent identifier urn:nbn:fi:research-infras-2016072533).
|
2310.08614 | Analysing of 3D MIMO Communication Beamforming in Linear and Planar
Arrays | Massive multiple-input multiple-output (MIMO) systems are expected to play a
crucial role in the 5G wireless communication systems. These advanced systems,
which are being deployed since 2021, offer significant advantages over
conventional communications generations. Unlike previous versions of
communication, MIMO systems can transmit various probing signals through their
antennas, which may or may not be correlated with each other. This waveform
diversity provided by MIMO communication enables enhanced capabilities and
improved performance. Numerous research papers have proposed different
approaches for beamforming in MIMO communication. We anticipate that our
research will provide valuable insights into the performance of different
beamforming techniques for MIMO communication systems with planar arrays. We
will investigate the 3D beam patterns generated by these constellations using
the covariance-based MIMO communication waveform method. MATLAB simulations
will be utilized to analyze and evaluate the performance of these methods. | Amirsadegh Roshanzamir | 2023-10-12T11:52:24Z | http://arxiv.org/abs/2310.08614v1 | # Analysing of 3D MIMO Communication Beamforming in Linear and Planar Arrays
###### Abstract
Massive multiple-input multiple-output (MIMO) systems are expected to play a crucial role in 5G wireless communication systems. These advanced systems, which have been deployed since 2021, offer significant advantages over conventional communication generations. Unlike previous versions of communication, MIMO systems can transmit various probing signals through their antennas, which may or may not be correlated with each other. This waveform diversity provided by MIMO communication enables enhanced capabilities and improved performance.
Numerous research papers have proposed different approaches for beamforming in MIMO communication. We anticipate that our research will provide valuable insights into the performance of different beamforming techniques for MIMO communication systems with planar arrays. We will investigate the 3D beam patterns generated by these constellations using the covariance-based MIMO communication waveform method. MATLAB simulations will be utilized to analyze and evaluate the performance of these methods.
MIMO communication; Beamforming; covariance based; planar array;
## I Introduction
In the realm of wireless communication systems, the ever-increasing demand for higher data rates, improved reliability, and efficient spectrum utilization has necessitated the development of advanced transmission techniques. Multiple-Input Multiple-Output (MIMO) technology has emerged as a promising solution to address these challenges by exploiting the spatial dimension of wireless channels.
MIMO communication systems employ multiple antennas at both the transmitter and receiver ends, enabling the simultaneous transmission and reception of multiple data streams. By leveraging the spatial diversity provided by these antennas, MIMO systems offer significant advantages over traditional single-antenna systems, including increased capacity, improved link reliability, and enhanced spectral efficiency [1, 4, and 5].
A crucial aspect of MIMO communication lies in the design of optimal beampatterns. A beampattern represents the directional sensitivity of a MIMO antenna array, determining how the transmitted or received signals are spatially distributed. Properly designed beampatterns can significantly enhance system performance by focusing the transmitted energy towards desired directions while mitigating interference from unwanted directions [7, 8, 9, and 10].
The design of MIMO beampatterns is a multidimensional optimization problem involving several key factors. These factors include antenna geometry, array configuration, signal processing algorithms, and channel characteristics. Achieving an optimal beampattern requires careful consideration of these factors to maximize the desired signal power while minimizing interference and maintaining compatibility with existing communication standards [11, 12].
This paper aims to explore the various aspects involved in MIMO communication beampattern design. It will delve into the fundamental principles of MIMO systems, highlighting the benefits of exploiting spatial diversity. Furthermore, it will discuss the challenges associated with beampattern design and present state-of-the-art techniques and algorithms employed to optimize these patterns.
By comprehensively examining the intricacies of MIMO communication beampattern design, this research aims to contribute to the advancement of wireless communication systems. The outcomes of this study can potentially pave the way for future developments in MIMO technology, enabling the realization of efficient and reliable wireless networks that cater to the ever-growing demands of modern communication applications.
The study is divided into six sections as follows:
Section I provides a brief overview of MIMO communication.
Section II examines the use of covariance-based beamforming in MIMO communication.
Section III discusses the utilization of a model that concentrates the transmitter power at known desired locations.
Sections IV and V present an analysis of an algorithm for beam pattern design for both linear and planar arrays. These designs are compared with an ideal beamforming approach, and numerical results are provided.
Section VI focuses on the conclusion, and the paper includes references at the end.
## II Covariance Based Method for MIMO Communication Beampattern Design
Let's consider a collection of N transmitter antennas placed at known positions in a spherical coordinate system along the z-axis. These antennas are aligned along the z-axis and are driven by specific signals at a carrier frequency \(f_{c}\), corresponding to a wavelength \(\lambda\). Each antenna generates a signal in the far field at a particular point in space, characterized by its distance \(d\) and direction \(Ar(\theta,\phi)\) from the antenna.
The complex envelope of the total radiated signal at this point and in discrete form is given by Equation (1), where \(EC_{i}(n,d,\theta,\phi)\) represents the signal generated by the i-th antenna.
\[EC_{i}(n,d,\theta,\phi)=\frac{1}{Kd}y_{i}\left(n-\frac{d}{c}\right)e^{i\left(\frac{2\pi}{\lambda}\right)P_{i}^{T}Ar(\theta,\phi)} \tag{1}\]
In this equation, \(K\) is a sphere-surface constant equal to \(\sqrt{4\pi}\), \(y_{i}\) is the complex envelope of the signal transmitted by each antenna, and \(P_{i}\) is the position of the i-th antenna.
In the far field, these signals and their powers combine linearly. The resulting combined signal from all the transmitted signals in the far field can be expressed as Equation (2).
\[EC(n,d,\theta,\phi)=\sum_{i=1}^{N}EC_{i}(n,d,\theta,\phi)=\frac{1}{Kd}\sum_{i=1}^{N}y_{i}\left(n-\frac{d}{c}\right)e^{i\left(\frac{2\pi z_{i}}{\lambda}\right)\sin(\theta)} \tag{2}\]
The resulting power of the combined signals which will be delivered to the users through the communication channel is given by Equation (3), which is a sum of cross-correlations between the transmitted signals [2].
\[P(\theta,\phi)=\frac{1}{K^{2}}\sum_{k=1}^{N}\sum_{l=1}^{N}R_{kl}e^{i\frac{2\pi}{\lambda}(z_{k}-z_{l})\sin(\theta)} \tag{3}\]
The cross-correlation between two signals is defined as \(R_{kl}\) in Equation (4).
\[R_{kl}=<y_{k}(t)y_{l}^{*}(t)> \tag{4}\]
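As a small illustration of how the cross-correlation entries in (4) can be estimated from the transmitted waveforms themselves, the Python sketch below forms a sample estimate of \(R_{kl}\) from waveform sequences; the waveforms are placeholders chosen only to reproduce the two limiting cases of fully correlated and orthogonal signals.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 10, 4096                      # number of antennas, number of samples

def correlation_matrix(Y):
    """Sample estimate of R_kl = <y_k(t) y_l*(t)> from an N x L array of waveforms."""
    return (Y @ Y.conj().T) / Y.shape[1]

base = rng.standard_normal(L) + 1j * rng.standard_normal(L)

# Conventional (phased-array) case: every antenna transmits a scaled copy of one waveform
Y_coherent = np.outer(np.ones(N), base)

# MIMO case: independent waveforms, giving an (approximately) diagonal R as in (10)
Y_mimo = (rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))) / np.sqrt(2)

for name, Y in [("coherent", Y_coherent), ("orthogonal", Y_mimo)]:
    R = correlation_matrix(Y)
    off_diag = np.abs(R - np.diag(np.diag(R))).max()
    print(name, "max |off-diagonal| of R:", round(float(off_diag), 3))
```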
By defining the steering vector \(\mathbf{s}(\theta)\) as in Equation (5), with \(\mathbf{s}^{*}(\theta)\) denoting its conjugate transpose, the power density \(P(\theta,\phi)\) written in (3) can be expressed as Equation (6).
\[\mathbf{s}(\theta)=\left[e^{i\left(\frac{2\pi z_{1}}{\lambda}\right)\sin(\theta)},\ \ldots,\ e^{i\left(\frac{2\pi z_{N}}{\lambda}\right)\sin(\theta)}\right]^{T} \tag{5}\]
\[\text{P}(\theta,\phi)=\frac{1}{4\pi}\mathbf{s}^{*}(\theta)\mathbf{Rs}(\theta) \tag{6}\]
Equation (6) represents the power density \(\text{P}(\theta,\phi)\) for the users in terms of the cross-correlation matrix R represented in (7).
\[\text{R}=\sum_{k=1}^{N}\sum_{l=1}^{N}s_{k}(t)s_{l}^{*}(t) \tag{7}\]
These equations describe the desired beampattern generated by the cross-correlation matrix. In the following examples, we illustrate different beampatterns produced by such a matrix. Figure 1 shows the beampatterns of a 10-element uniform linear array (ULA) with half-wavelength spacing, generated by different signal cross-correlation matrices (8), (9), and (10).
It is important to note that these figures represent the directional characteristics of the array and provide information about the power distribution in different directions.
\[\begin{bmatrix}1&\cdots&1\\ \vdots&\ddots&\vdots\\ 1&\cdots&1\end{bmatrix} \tag{8}\]
\[\begin{bmatrix}0.8^{0}&\cdots&0.8^{9}\\ \vdots&\ddots&\vdots\\ 0.8^{9}&\cdots&0.8^{0}\end{bmatrix} \tag{9}\]
\[\begin{bmatrix}1&\cdots&0\\ \vdots&\vdots&\vdots\\ 0&\cdots&1\end{bmatrix} \tag{10}\]
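As a concrete numerical illustration of equations (5)-(7), the following Python sketch evaluates the beampattern \(\mathbf{s}^{*}(\theta)\mathbf{R}\mathbf{s}(\theta)\) of a 10-element half-wavelength ULA for the cross-correlation matrices of (8)-(10); the angular grid and the printed summary statistic are illustrative choices rather than part of the original simulations.

```python
import numpy as np

N = 10                      # 10-element ULA with half-wavelength spacing (as in Figure 1)
d = 0.5                     # element spacing in wavelengths
z = d * np.arange(N)        # element positions along z, in wavelengths

def steering(theta):
    """Steering vector of equation (5) for elevation angle theta (radians)."""
    return np.exp(1j * 2 * np.pi * z * np.sin(theta))

def beampattern(R, thetas):
    """Power density of equation (6), up to the 1/(4*pi) constant."""
    return np.array([np.real(steering(t).conj() @ R @ steering(t)) for t in thetas])

# Cross-correlation matrices of equations (8), (9) and (10)
R_coherent = np.ones((N, N))                                                 # fully correlated
R_partial  = 0.8 ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))    # 0.8^|k-l|
R_orthog   = np.eye(N)                                                       # orthogonal signals

thetas = np.linspace(-np.pi / 2, np.pi / 2, 721)
for name, R in [("(8)", R_coherent), ("(9)", R_partial), ("(10)", R_orthog)]:
    P = beampattern(R, thetas)
    print(name, "peak-to-mean power ratio:", round(float(P.max() / P.mean()), 2))
```

The fully correlated matrix produces the sharply focused phased-array pattern, while the identity matrix spreads the power uniformly, matching the qualitative behaviour described for Figure 1.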
In MIMO communication, the signal cross-correlation matrix usually consists of complex values, except for the real-valued diagonal elements. However, in the case of conventional communication, all transmitter signals are fully correlated with each
Figure 1: Beampatterns with respect to (7). The blue curve corresponds to the cross-correlation matrix of (8), the black curve to that of (9), and the red curve to that of (10).
other. This means that the absolute value of all elements in the matrix R is equal to 1, as shown by the blue curve in Figure 1.
## III Maximum Power Design for Known User Location
In this new task, we examine a set of antennas arranged in a planar array. Our objective is to assess the effectiveness of the method described in reference [3] for maximizing the total power of the probing signals at the users' locations.
To summarize, we have K users of interest, with their positions denoted as \(\{U_{k}\}_{k=1}^{K}\). The combined power of all the antennas' signals at these user locations is determined by equation (11).
\[\sum_{k=1}^{K}\mathbf{s}^{*}(\theta_{k})\mathbf{Rs}(\theta_{k}) =tr\left(\mathbf{R}\sum_{k=1}^{K}\mathbf{s}\left(\theta_{k}\right)\mathbf{s}^ {*}(\theta_{k})\right)\] \[\triangleq tr(\mathbf{RZ}) \tag{11}\]
Where
\[\mathbf{Z}=\sum_{k=1}^{K}\mathbf{s}(\theta_{k})\mathbf{s}^{*}(\theta_{k}) \tag{12}\]
Now, as in reference [27], one can aim to maximize equation (11) at the desired locations, while adhering to certain constraints. The optimization problem can thus be formulated as maximizing \(tr(\mathbf{RZ})\) subject to a maximum transmitted power, as depicted in equation (13).
\[\max_{\mathbf{R}}\;tr(\mathbf{RZ})\quad\text{subject to}\quad tr(\mathbf{R})=P_{t}\;(\text{maximum transmit power}),\qquad\mathbf{R}\geq 0 \tag{13}\]
Equation (13) is a well-known linear algebra optimization problem with a closed-form solution. As the authors in reference [27] have shown for linear arrays, the optimal value of R is given by equation (14), where \(\mathbf{v}\) is the eigenvector associated with the highest eigenvalue of \(\mathbf{Z}\).
\[\mathbf{R}=\mathbf{vv}^{*} \tag{14}\]
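To make the closed-form design of (11)-(14) concrete, the Python sketch below builds \(\mathbf{Z}\) from the steering vectors of a few assumed user directions, takes the eigenvector of its largest eigenvalue, and scales \(\mathbf{R}=\mathbf{vv}^{*}\) so that \(tr(\mathbf{R})=P_{t}\); the array size, user angles, and power level are placeholders, not the values used in the figures.

```python
import numpy as np

N   = 50                               # linear array along z, half-wavelength spacing
z   = 0.5 * np.arange(N)               # positions in wavelengths
P_t = 1.0                              # total transmit power constraint of (13)

def steering(theta):
    return np.exp(1j * 2 * np.pi * z * np.sin(theta))

# Example user directions (radians); placeholder values, not those of the figures
user_thetas = np.deg2rad([-40.0, -10.0, 25.0])

# Z of equation (12)
Z = sum(np.outer(steering(t), steering(t).conj()) for t in user_thetas)

# Equation (14): R is rank one, built from the principal eigenvector of Z,
# and scaled so that tr(R) = P_t as required by (13)
eigvals, eigvecs = np.linalg.eigh(Z)
v = eigvecs[:, -1]                     # eigenvector of the largest eigenvalue (unit norm)
R = P_t * np.outer(v, v.conj())

# Power delivered to the user locations, tr(RZ) of equation (11)
print("total power at the user locations:", float(np.real(np.trace(R @ Z))))
```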
In this paper, our goal is to explore this problem for planar array transmitter sets and assess the accuracy of this method. The following section will present numerical examples to illustrate this.
## IV Linear Arrays
As previously mentioned, the issue of linear arrays has been explored by authors in [3]. This section aims to examine their findings. For instance, let's consider a linear array with 50 transmitter elements positioned along the z-axis. Figure 2 illustrates a random arrangement of the users of interest, with the objective of concentrating the transmitter power around them along the y and z axes. If we analyze this problem using the approach outlined in section III, the resulting beampatterns will resemble those shown in Figure 3.
It is worth noting that in Figure 3, the ideal beampattern corresponds to the beampattern where the transmitted covariance matrix is equivalent to Z in equation (12). Additionally, since the linear array is aligned with the z-axis, it can only differentiate between users located at different positions along this axis.
From Figure 3, it is evident that the proposed method failed to evenly focus its transmitted power around all users. This represents a drawback of the method.
## V Planar Arrays
In this section, we explore the precision of the proposed beampattern design method for a planar array. To assess its accuracy, we will examine various examples and figures presented in this paper. It should be noted that the transmitter antennas are assumed to be positioned along the y and z axes throughout.
### _Square constellation_
In this subsection, we assume the presence of a flat array with a square pattern and a total of 400 transmitter elements (20\(\times\)20 array) as shown in Figure 4.
From the illustration, it is evident that the transmitter elements of this array are evenly distributed along the y and z axes, with a spacing of half a wavelength between them.
In this scenario, we consider a situation where there are six users of interest located 100 km away from the origin along the x-axis, as depicted in Figure 5.
It is important to note that, going forward, the user placement shown in Figure 5 will be used for simulations involving all planar arrays constellations.
Since all elements of the vector \(\mathbf{s}\) in (5) have an amplitude of 1, equation (7) reveals that the matrix R which maximizes the cost function will take the following form:
\[R_{max}=\sum_{k=1}^{K}\mathbf{s}(\theta_{k})\mathbf{s}^{*}(\theta_{k}) \tag{15}\]
In this equation, as said before, K represents the number of users. While the above expression maximizes power around the users, it may not fulfill the requirement of being a cross-correlation matrix of signals, as stated in equation (13). The cross-correlation matrix of signals must be non-negative definite. We will utilize this matrix for result comparison, and the resulting beampattern from this cross-correlation matrix of transmitted signals will be referred to as the ideal beampattern.
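For the planar case, the text defines the steering vector only for a linear array, so the sketch below assumes the standard far-field phase for an element at position \((y,z)\) observed from direction \((\theta,\phi)\), namely \((2\pi/\lambda)(y\sin\theta\sin\phi+z\cos\theta)\). With that assumption, it forms \(R_{max}\) of (15) for a 20\(\times\)20 half-wavelength square array and six placeholder user directions, and evaluates the resulting ideal beampattern on a coarse grid.

```python
import numpy as np

# 20 x 20 square array in the y-z plane, half-wavelength spacing (as in Figure 4)
n = 20
y, zpos = np.meshgrid(0.5 * np.arange(n), 0.5 * np.arange(n))
y, zpos = y.ravel(), zpos.ravel()      # element coordinates in wavelengths

def steering(theta, phi):
    """Assumed far-field steering vector for elements in the y-z plane."""
    u = np.sin(theta) * np.sin(phi) * y + np.cos(theta) * zpos
    return np.exp(1j * 2 * np.pi * u)

# Six illustrative user directions (theta, phi), placeholders near broadside
users = [(np.deg2rad(t), np.deg2rad(p)) for t, p in
         [(60, 20), (60, -20), (90, 40), (90, -40), (120, 20), (120, -20)]]

# "Ideal" covariance of equation (15): sum of outer products over the users
R_max = sum(np.outer(steering(t, p), steering(t, p).conj()) for t, p in users)

# Evaluate the ideal beampattern of equation (6) on a coarse angular grid
thetas = np.linspace(0, np.pi, 61)
phis   = np.linspace(-np.pi / 2, np.pi / 2, 61)
P = np.array([[np.real(steering(t, p).conj() @ R_max @ steering(t, p))
               for p in phis] for t in thetas])
print("beampattern grid:", P.shape, "peak value:", round(float(P.max()), 1))
```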
Figure 6 and Figure 7 display the 3D and top view of the ideal beampattern in this case, respectively. These figures demonstrate that the ideal beampattern successfully resolves all six users and allocates equal power to each of them.
Now, Figure 8 and Figure 9 present the designed beampatterns based on equation (14). Notably, Figure 8 illustrates the 3D designed beampattern, while Figure 9 shows the same beampattern from a top view.
As observed from this figure, this algorithm successfully identifies three out of the total six users with concentrated power around them. However, three users are missed, and the power allocated to these users is not equal. In other words, this algorithm does not provide the designer with the ability to control the power to focus on one or more desired directions, which may be more important.
### _Circle constellation_
In this subsection, we assume the presence of a planar array with a circular arrangement consisting of a total of 400 transmitter elements, as shown in Figure 10.
From the illustration, it can be observed that the transmitter elements in this array are evenly distributed along the y and z axes, with a spacing of half a wavelength between them.
Figures 11 and 12 display the three-dimensional and top view representations of the ideal beampattern for this case, respectively. These figures reveal that the ideal beampattern successfully resolves all six users and allocates equal power to each of them.
It is worth noting that the ideal beampattern of the circular arrangement, as shown in these figures, bears resemblance to the ideal beampattern of the square arrangement. However, they differ in terms of sidelobe level and half-power beamwidth. Consequently, the ideal beampatterns for each arrangement will be depicted separately.
Moving on to Figures 13 and 14, they illustrate the designed beampatterns obtained from equation (14). Figure 13 showcases the three-dimensional designed beampattern, while Figure 14 provides a top view of the same beampattern.
As evident from these figures, this algorithm successfully identifies four out of the six users with a strong concentration of power around them. However, approximately two users are missed, albeit at a weaker peak level in the transmitted beampattern. Furthermore, the powers assigned to these users are not equal to each other. In other words, this algorithm does not offer the designer control over power allocation to focus it in specific desired directions, which may be more crucial.
An important observation to make here is the disparity between the designed beampatterns of the circular and square constellations. As illustrated in these figures, the proposed algorithm performs better in a circular constellation compared to a square one.
Fig. 8: 3D view of designed beampattern of square constellation related to (14)
Fig. 7: top view of ideal beampattern of square constellation
Fig. 9: top view of designed beampattern of square constellation related to (14)
### _Hexagonal constellation_
In this subsection, we will consider a planar array with a hexagonal arrangement consisting of 400 transmitter elements, as shown in Figure 15. It should be noted that this hexagonal constellation is an approximate representation of a circular constellation, which is sometimes used in practice as an alternative.
Figure 15 reveals that the transmitter elements of this array are evenly distributed along the y and z axes, with a spacing of half a wavelength between them.
Figures 16 and 17 illustrate the 3D and top view of the ideal beampattern for this case, respectively. These figures
Fig. 11: 3D view of ideal beampattern of circle constellation
Fig. 12: top view of ideal beampattern of circle constellation
Fig. 10: Placement of transmitter antenna array
Fig. 13: 3D view of designed beampattern of circle constellation related to (14)
demonstrate that the ideal beampattern successfully separates all six users and allocates equal power to each of them.
It should be noted that the ideal beampattern of the hexagonal constellation, as shown in these figures, resembles the ideal beampatterns of the previous constellations. However, there are differences in terms of sidelobe level and half-power beamwidth. Therefore, the ideal beampatterns of each constellation will be depicted separately.
Now, let's examine Figures 18 and 19, which display the designed beampatterns obtained from equation (14). Figure 18 presents the 3D designed beampattern, while Figure 19 provides a top view of the same beampattern.
As observed in Figure 19, this algorithm successfully identifies four out of the six users with concentrated power around them. However, two users were missed, and the power allocated to each user is not equal. In other words, this algorithm lacks the ability to control the power distribution and focus it on specific desired directions, which may be more important.
Based on these simulations, it is evident that the hexagonal constellation performs better than the square constellation but worse than the circular constellation. However, when considering the results of both the square and hexagonal constellations, it becomes apparent that these constellations complement each other in maximizing power around the users' locations.
### _Spiral constellation_
In this subsection, we assume the presence of a planar array with a spiral pattern and a total of 400 transmitter elements, as shown in Figure 20.
In certain applications and for specific purposes, one may opt to use a spiral pattern, as it is well known in antenna design. The equation for the spiral is given by:
\[r=a\,e^{b\theta} \tag{16}\]
Here, \(r\) represents the radius from the center and \(\theta\), in radians, represents the angle from the y-axis in Figure 20.
It is important to note that, for the simulations in this subsection, we make the following assumptions:
\[a=0.15 \tag{17}\]
\[b=0.1 \tag{18}\]
As depicted in this figure, the transmitter elements of this array are not uniformly distributed along the y and z axes. In other words, their exact locations along the y and z axes can be determined using the following equations:
\[y=r\cos\theta \tag{19}\] \[z=r\sin\theta \tag{20}\]
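A small sketch (ours) of how element positions along this spiral could be generated from equations (16), (19) and (20); the paper does not specify how \(\theta\) is sampled over the 400 elements, so the uniform sampling and number of turns below are assumptions.

```python
import numpy as np

def log_spiral_positions(n_elements=400, a=0.15, b=0.1, turns=6):
    """(y, z) positions of elements placed along r = a*exp(b*theta), using
    equations (16), (19) and (20); theta is sampled uniformly over `turns` turns."""
    theta = np.linspace(0.0, turns * 2 * np.pi, n_elements)
    r = a * np.exp(b * theta)
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

positions = log_spiral_positions()
print(positions.shape)  # (400, 2)
```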
Figure 21 and Figure 22 display the 3D and top views of the ideal beampattern for this case, respectively. These figures demonstrate that the ideal beampattern successfully resolves all six users and allocates equal power to each of them.
It is worth noting that the ideal beampattern of the spiral pattern, as shown in these figures, is similar to the ideal beampatterns of the previous constellations. However, they differ in terms of sidelobe level and half-power beamwidth. For instance, in this case, the sidelobe levels are higher compared to the previous cases. Therefore, we will depict the ideal beampatterns of each constellation separately.
Now, Figure 23 and Figure 24 illustrate the designed beampatterns obtained from equation (14). Figure 23 represents the 3D designed beampattern, while Figure 24 provides a top view of the same beampattern.
As observed in these figures, this algorithm successfully identifies four out of the six users, concentrating power effectively around them. However, two users have been missed, and the power allocated to these users is not equal. In other words, this algorithm does not provide the designer with the ability to control and focus the power towards specific directions that may be more important.
In comparison to previous results, although this constellation also misses some users like the latter results, it exhibits higher sidelobe levels around the missed users, which can aid in detecting and maximizing the transmitted power towards those users.
### _Archimedes spiral constellation_
In this subsection, we assume the presence of a planar array with an Archimedes spiral arrangement consisting of a total of 400 transmitter elements, as depicted in Figure 25 and Figure 26.
In certain specific applications and for particular purposes, it is possible to utilize a spiral arrangement. The equation for the spiral is as follows:
\[r=a\,\theta^{\frac{1}{n}} \tag{21}\]
where \(r\) represents the radius from the center and \(\theta\), in radians, represents the angle from the y-axis in Figure 25 and Figure 26.
Here, we will consider this arrangement for two different values of n, namely n=1 and n=3. Therefore, it is important to note that in all simulations conducted in this section, the first figure corresponds to n=1 while the second figure corresponds to n=3. Additionally, for the sake of simplicity, the following assumptions have been made:
\[a =0.08\;\;\;for\;\;n=1 \tag{22}\] \[a =0.28\;\;\;for\;\;n=3 \tag{23}\]
As observed from these figures, the transmitter elements of this array are not uniformly distributed along the y and z axes. In other words, their precise locations along the y and z axes can be determined using the following equations:
\[y=r\cos\theta\qquad\qquad z=r\sin\theta\]
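Analogously, a sketch (ours) of the Archimedes spiral placement following equations (21)-(23); again, how \(\theta\) is sampled over the 400 elements is an assumption.

```python
import numpy as np

def archimedes_spiral_positions(n_elements=400, n=1, turns=6):
    """(y, z) positions along r = a * theta**(1/n) (equation (21)), with the
    values of a from (22)-(23); the theta sampling is an assumption."""
    a = {1: 0.08, 3: 0.28}[n]
    theta = np.linspace(0.0, turns * 2 * np.pi, n_elements)
    r = a * theta ** (1.0 / n)
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

for n in (1, 3):
    print(n, archimedes_spiral_positions(n=n).shape)  # (400, 2) in both cases
```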
Figure 27, Figure 28, Figure 29, and Figure 30 display the three-dimensional and top views of the ideal beampattern for this case, respectively. As evident from these figures, the ideal beampattern successfully separates all six users and allocates equal power to each of them.
It should be noted that the ideal beampattern of the Archimedes spiral arrangement, as seen in these figures, is similar to that of previous constellations. However, they are not identical and differ in terms of sidelobe level and half power beamwidth. For instance, in this case, the sidelobe levels are higher compared to uniform arrangements. Therefore, the ideal beampatterns of each constellation will be depicted separately.
Now, Figure 31, Figure 32, Figure 33, and Figure 34 demonstrate the designed beampatterns derived from equation (14). It is worth mentioning that Figure 31 and Figure 32 present the three-dimensional designed beampattern, while Figure 33 and Figure 34 provide a top view of the same beampattern.
As evident from these figures, the results obtained from such an arrangement may even surpass those achieved with uniform arrays when employing the proposed algorithm in equation (14): these constellations are able to focus their power toward five of the six users.
Fig. 23: 3D view of designed beampattern of spiral constellation related to (14)
Fig. 22: top view of ideal beampattern of spiral constellation
Fig. 24: top view of designed beampattern of spiral constellation related to (14)
Figure 27: 3D view of ideal beampattern of Archimedes spiral constellation (n=1)
Figure 28: 3D view of ideal beampattern of Archimedes spiral constellation (n=3)
Figure 29: top view of ideal beampattern of Archimedes spiral constellation (n=1)
Figure 26: Placement of transmitter antenna array (Archimedes spiral, n=3)
Figure 30: top view of ideal beampattern of Archimedes spiral constellation (n=3)
Figure 25: Placement of transmitter antenna array (Archimedes spiral, n=1)
## VI Conclusion
In conclusion, the analysis of 3D MIMO communication beamforming in linear and planar arrays has demonstrated its immense potential in improving wireless communication performance. As this technology continues to evolve and mature, it holds great promise for enhancing network capacity, coverage, and overall user experience in diverse environments.
It was shown that an approach which gave good results for a linear array does not necessarily give good results for a planar array, and the array geometry should be chosen based on the geography of the city or the site where the antennas are located. As future work, one can consider other beamforming approaches for planar arrays, and also look for approaches that provide suitable results for planar arrays as well as linear arrays.
|
2310.17734 | Investigating Multilingual Coreference Resolution by Universal
Annotations | Multilingual coreference resolution (MCR) has been a long-standing and
challenging task. With the newly proposed multilingual coreference dataset,
CorefUD (Nedoluzhko et al., 2022), we conduct an investigation into the task by
using its harmonized universal morphosyntactic and coreference annotations.
First, we study coreference by examining the ground truth data at different
linguistic levels, namely mention, entity and document levels, and across
different genres, to gain insights into the characteristics of coreference
across multiple languages. Second, we perform an error analysis of the most
challenging cases that the SotA system fails to resolve in the CRAC 2022 shared
task using the universal annotations. Last, based on this analysis, we extract
features from universal morphosyntactic annotations and integrate these
features into a baseline system to assess their potential benefits for the MCR
task. Our results show that our best configuration of features improves the
baseline by 0.9% F1 score. | Haixia Chai, Michael Strube | 2023-10-26T18:50:04Z | http://arxiv.org/abs/2310.17734v1 | # Investigating Multilingual Coreference Resolution
###### Abstract
Multilingual coreference resolution (MCR) has been a long-standing and challenging task. With the newly proposed multilingual coreference dataset, CorefUD Nedoluzhko et al. (2022), we conduct an investigation into the task by using its harmonized universal morphosyntactic and coreference annotations. First, we study coreference by examining the ground truth data at different linguistic levels, namely mention, entity and document levels, and across different genres, to gain insights into the characteristics of coreference across multiple languages. Second, we perform an error analysis of the most challenging cases that the SotA system fails to resolve in the CRAC 2022 shared task using the universal annotations. Last, based on this analysis, we extract features from universal morphosyntactic annotations and integrate these features into a baseline system to assess their potential benefits for the MCR task. Our results show that our best configuration of features improves the baseline by 0.9% F1 score.1
Footnote 1: Our code and model are publicly available at [https://github.com/HaixiaChai/multi-coref](https://github.com/HaixiaChai/multi-coref).
## 1 Introduction
Coreference resolution is the task of identifying expressions in a given text that refer to the same entity. While considerable progress has been made in coreference resolution for English Lee et al. (2017, 2018); Joshi et al. (2019, 2020); Kirstain et al. (2021); Grenander et al. (2022), extending this task to multiple languages presents significant challenges due to the linguistic diversity and complexity of different languages. The multilingual coreference resolution (MCR) task Recasens et al. (2010); Pradhan et al. (2012) focuses on developing a general and robust system that can effectively handle multiple languages and a wide range of coreference phenomena (e.g., pronoun-drop).
Recently, Nedoluzhko et al. (2022) propose a new set of multilingual coreference datasets, CorefUD, built upon the framework of Universal Dependencies2 de Marneffe et al. (2021), allowing coreference researchers to conduct cross-linguistic studies across 17 datasets for 12 languages. The datasets serve as resource for the CRAC 2022 shared task on multilingual coreference resolution Zabokrtsky and Ogrodniczuk (2022). Given the harmonized universal morphosyntactic and coreference annotations, we raise the question whether there are any universal features that are common to all languages and to what extent they can contribute to the development of an MCR system.
Footnote 2: One of the benefits of Universal Dependencies is that it provides cross-linguistic guidelines for morphosyntactic annotation in a consistent and language-independent manner.
In this work, we conduct an in-depth investigation into the MCR task by using universal annotations in CorefUD. First, we analyze ground truth data from different linguistic levels, including mention, entity and document levels, and across different genres, to gain an understanding of coreference across various languages. Second, we conduct an error analysis of the most challenging cases that MCR systems fail to resolve. Last, based on this analysis, we integrate several features extracted from universal morphosyntactic annotations into a baseline system to examine their potential benefits for the MCR task. To the best of our knowledge, our method represents the first attempt to leverage universal annotations for MCR.
Our findings reveal: (i) There are indeed commonalities across languages. For example, we observe a common pattern where the closest antecedent of an overt pronoun mainly corresponds to the subject or object position. These commonalities are valuable for potential future research, such as linguistic investigations aimed at further comprehending the linguistic phenomenon of coreference. However, it is important to note that exploring universal features is a challenging task due to the inherent variability among languages, e.g., the expression of definiteness. (ii) A common issue encountered in all languages by MCR systems is the difficulty of correctly detecting nominal nouns within some two-mention entities. (iii) Our experimental results show that our best configuration of features improves the baseline by 0.9% F1 score.
## 2 Related Work
Analysis in Multiple Languages.Coreference is a complex linguistic phenomenon that requires linguistic expertise, even more so when studying it in a multilingual context. Oftentimes, researchers primarily focus on investigating coreference within a single target language in which they possess expertise, enabling them to gain valuable insights specific to that language Ogrodniczuk and Nitor (2017); Urbizu et al. (2019); Sundar Ram and Lalitha Devi (2020). However, a few studies have been conducted on coreference across multiple languages by using multilingual coreference datasets Recasens et al. (2010); Pradhan et al. (2012); Nedoluzhko et al. (2022). These studies include statistical analysis of the datasets Nedoluzhko et al. (2022), as well as efforts to improve the performance and generalizability of MCR systems from a technical standpoint Kobdani and Schutze (2010); Bjorkelund and Kuhn (2014); Straka and Strakova (2022). It is apparent that analyzing coreference across multiple languages is a challenging task due to the expertise required of each language. However, CorefUD helps such analyses by providing universal annotations. Our work is the first attempt to analyze cross-linguistic patterns and gain a broader understanding of coreference across different languages and language families in a comprehensive and comparative manner.
In the field of MCR, there has been notable attention directed towards the research of two types of languages. One prominent area of investigation is around pro-drop languages, such as Chinese Kong and Ng (2013); Song et al. (2020); Chen et al. (2021); Zhang et al. (2022), Italian Iida and Poesio (2011) and Arabic Aloraini and Poesio (2020, 2021). Another research direction involves the study of morphologically rich languages, such as German and Arabic Roesiger and Kuhn (2016); Aloraini and Poesio (2021). In contrast to the aforementioned work, which primarily focuses on enhancing the model's capabilities through technical analysis of specific linguistic phenomena, our research delves into gold annotations to explore multilingual coreference including phenomena like zero pronouns from a linguistic perspective, uncovering valuable insights to foster further research.
MCR Systems.In the past decade, numerous MCR approaches have been proposed, including rule-based approaches, various training methodologies such as cross-lingual and joint training, and methods that leverage linguistic information: (i) Rule-based. It requires a complete redefinition of a set of rules to transform a monolingual coreference resolution system into a multilingual one, for example when using Stanford's multi-pass sieve CR system Lee et al. (2011). The adaptation process is time-consuming and requires a language expert to develop the rules. (ii) Translation-based projection. This is a technique that involves the automatic transfer of coreference annotations from a resource-rich language to a low-resource language using parallel corpora Rahman and Ng (2012); Chen and Ng (2014); Martins (2015); Novak et al. (2017); Lapshinova-Koltunski et al. (2019). The primary challenge of this approach is the occurrence of a large number of projected errors, such as a nominal phrase in English is translated as a pronoun in German. (iii) Latent structure learning. Fernandes et al. (2012) and Bjorkelund and Kuhn (2014) use a latent structure perceptron algorithm to predict document trees that are not provided in the training data. These document trees represent clusters using directed trees over mentions. This approach has achieved the best results in the CoNLL-2012 shared task for English, Chinese and Arabic at that time. (iv) Joint training. This is a technique that finetunes multilingual word embeddings on the concatenation of the training data in multiple languages. It allows the model to learn shared representations and help in cases where the target language has limited training data. Straka and Strakova (2022); Prazak et al. (2021); Prazak and Konopik (2022) (v) Methods with linguistic information. Several studies have incorporated syntactic and semantic information into their models Zhou et al. (2011); Jiang and Cohn (2021); Tan et al. (2021). These works either focus on coreference resolution within a single language or employ machine learning approaches to address the MCR task. Different from the above, our work incorporates universal morphosyntactic information into an end-to-end joint training method across multiple languages.
## 3 Linguistic Analyses on CorefUD 1.1
CorefUD 1.13 is the latest version of CorefUD Nedoluzhko et al. (2022) for the CRAC 2023 shared task on multilingual coreference resolution, including 17 datasets for 12 languages.4 In the following subsections, we conduct a linguistic study on it by using the ground truth from the training datasets, examining coreference phenomena from different linguistic levels, namely mention, entity and document perspectives, and across different genres, in multiple languages.
Footnote 3: See Appendix A for the statistics of CorefUD 1.1.
Footnote 4: [https://ufal.mff.cuni.cz/corefud/crac23](https://ufal.mff.cuni.cz/corefud/crac23)
### Mention
A mention is the smallest unit within a coreference relation, comprising one or more words (maybe even less than a word in some cases).
Position of Head.The head of a mention typically represents the entity being referred to. The remaining words in the mention either provide additional information that precedes the head word (pre-modification, e.g., _a highly radioactive element_) or further specify the meaning of the head after it (post-modification, e.g., _a car with leather seats._). Note that the modifying words can be dominant in the mention in some cases, e.g., _the first floor_, making resolution of those mentions harder sometimes.
Table 1 shows that Hungarian, Lithuanian and Turkish all have a high percentage of pre-modified mentions. They are from the Uralic, Baltic and Turkic language families that are considerably different from the other languages.
Mention Types.To gain insight into how mentions represent and refer to entities, we categorize five types of mentions by the universal part-of-speech (UPOS) tags of the head words in gold mentions, namely nominal noun, proper noun, overt pronoun, zero pronoun and others.
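As an illustration of this typing, a minimal sketch (ours; the exact mapping used by the authors is not given, and treating zero pronouns as empty-node PRON heads is our assumption):

```python
def mention_type(head_upos: str, is_empty_node: bool = False) -> str:
    """Classify a gold mention by the UPOS tag of its head word."""
    if is_empty_node and head_upos == "PRON":
        return "zero pronoun"      # elided subject reconstructed as an empty node
    if head_upos == "NOUN":
        return "nominal noun"
    if head_upos == "PROPN":
        return "proper noun"
    if head_upos == "PRON":
        return "overt pronoun"
    return "others"

print(mention_type("PROPN"))                      # proper noun
print(mention_type("PRON", is_empty_node=True))   # zero pronoun
```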
Unsurprisingly, in Figure 1, we observe that nominal noun and proper noun are the two main categories of mentions in most of the datasets. en_parcorfull Lapshinova-Koltunski et al. (2018), de_parcorfull Lapshinova-Koltunski et al. (2018) and fr_democrat Landragin (2021) are the datasets having the most overt pronouns, around 46% of mentions. In contrast, resolving zero pronouns is more crucial in the Czech datasets Nedoluzhko et al. (2016); Hajic et al. (2020), where the number of zero pronouns is higher than that of overt pronouns.
Universal Dependency Categories.By using universal dependency (UD) relations between words in a sentence, we can understand the hierarchical structure of the sentence and identify the potential antecedents of referring expressions. We classify UD relations of heads of gold mentions into 12 categories according to the UD taxonomy5, as illustrated in Table 2.
Footnote 5: [https://universaldependencies.org/u/dep/index.html](https://universaldependencies.org/u/dep/index.html)
Anaphor-Antecedent Relation.Given mention types and UD categories presented above, we have a particular interest in analyzing the UD category of the closest antecedent to an anaphor based on its mention types (e.g., core arguments_subject - overt pronoun). We consider all mentions in an entity as observed anaphors, but exclude the first mention.
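The statistic behind Figure 2 can be computed roughly as below; representing mentions as (entity id, position, mention type, UD category) tuples is our simplification of the CorefUD format.

```python
from collections import Counter, defaultdict

def antecedent_relation_counts(mentions):
    """Count (anaphor mention type, UD category of closest antecedent) pairs.
    `mentions` is an iterable of (entity_id, position, mention_type, ud_category);
    within an entity, the mention at the previous position is the closest antecedent."""
    by_entity = defaultdict(list)
    for ent, pos, mtype, udcat in mentions:
        by_entity[ent].append((pos, mtype, udcat))
    counts = Counter()
    for chain in by_entity.values():
        chain.sort()
        for (_, _, ante_cat), (_, ana_type, _) in zip(chain, chain[1:]):
            counts[(ana_type, ante_cat)] += 1
    return counts

toy = [(1, 0, "proper noun", "S"), (1, 1, "overt pronoun", "S"),
       (1, 2, "overt pronoun", "S"), (2, 0, "nominal noun", "N"),
       (2, 1, "nominal noun", "D")]
print(antecedent_relation_counts(toy))  # overt pronoun after S twice, nominal noun after N once
```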
The results in Figure 2 present the UD relations that are most frequently associated with nominal noun and overt pronoun. We found that non-core dependents (e.g., oblique nominal), nominal dependents (e.g., numeric modifier,
\begin{table}
\begin{tabular}{c c c c c c} ca & cs & en & hu & pl & es \\ \hline
13\% & 22\% & 27\% & **51\%** & 14\% & 14\% \\ lt & fr & de & ru & no & tr \\ \hline
**52\%** & 28\% & 32\% & 24\% & 18\% & **52\%** \\ \end{tabular}
\end{table}
Table 1: Percentage of pre-modified mentions in the respective languages.
Figure 1: Percentage of mention types in datasets.
nominal modifier and appositional modifier), core arguments_subject and core arguments_object are the primary UD relations of antecedents for nominal noun, e.g., _Sam, my brother, John's cousin, arrived._ In contrast, the closest antecedents of overt pronoun mainly correspond to subjects or objects within core arguments.6 It is important that these findings are applicable across all languages, emphasizing their universal relevance in the context of the multilingual coreference resolution task.
Footnote 6: See Appendix A.1 for the details of proper noun and zero pronoun.
### Entity
In a text, an entity can have multiple mentions all referring to the same identifiable object, such as a person or concept. Each gold entity in all datasets of CorefUD 1.1 has 3 to 4 mentions on average without considering singletons.
First Mention.The first mention within a mention chain serves to introduce the entity into a context. Thus, this mention could be seen as the most informative expression in the entity. In ca_ancora(Recasens and Marti, 2010), for example, 97% of first mentions belong to mention types of nominal noun or proper noun, which convey a richer semantic meaning than pronouns. Furthermore, we observe a consistent trend across all languages that the ratio of entities with the first mention being the longest mention in the entity ranges from 70% to 90%.7 The longer a mention is, the more information it represents, e.g., _a person_ vs. _a person that works at Penn_. Overall, the first mention captures semantic meaning of an entity.
Footnote 7: See Appendix A.2 for the details.
Semantic Similarity.In addition to the first mention, an entity can accumulate information with each subsequent mention. The mentions can be identical, slightly different, or completely different when compared to other mentions within the same entity. To examine the semantic similarity of coreferent mentions, we compute the Euclidean distance between the embeddings of each gold mention pair encoded using mBERT (Devlin et al., 2019).
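A rough sketch of this distance computation, assuming each mention string is embedded by mean-pooling the last-layer mBERT subtoken vectors; the pooling choice is ours, since the paper does not state how mention embeddings are obtained.

```python
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")
model.eval()

def mention_embedding(text: str) -> np.ndarray:
    """Mean-pooled last-layer mBERT embedding of a mention string."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state  # (1, num_subtokens, 768)
    return hidden.mean(dim=1).squeeze(0).numpy()

def mention_distance(m1: str, m2: str) -> float:
    """Euclidean distance between the embeddings of two coreferent mentions."""
    return float(np.linalg.norm(mention_embedding(m1) - mention_embedding(m2)))

print(mention_distance("dog owners", "they"))        # different surface forms
print(mention_distance("dog owners", "dog owners"))  # identical mentions -> 0.0
```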
In Figure 3, a greater distance indicates that the mentions have a bigger semantic distance, while still referring to the same entity. Conversely, a smaller distance suggests that the mentions are semantically more similar, if not identical. We speculate that the genres of the datasets have an
\begin{table}
\begin{tabular}{l l} \hline \hline UD Categories & UD Relations \\ \hline core arguments\_subject (S) & nsubj \\ core arguments\_object (O) & obj, iobj \\ non-core dependents\_nominals (D) & obl, vocative, expl, dislocated \\ nominal dependents\_nominals (N) & nmod, appos, nummod \\ clauses (C) & csubj, ccomp, xcomp, advcl, acl \\ modifier words (M) & advmod, discourse, amod \\ function words (F) & aux, cop, mark, det, clf, case \\ coordination (R) & conj, cc \\ MWE (W) & fixed, flat, compound \\ loose (L) & list, parataxis \\ special (P) & orphan, goeswith, reparandum \\ other (T) & punct, root, dep \\ \hline \hline \end{tabular}
\end{table}
Table 2: Universal dependency categories.
Figure 2: Ranking of relations between UD categories of antecedents and mention types of anaphors in each dataset. For instance, figure (a) shows, in each row, UD categories (initials) are ordered according to their frequency of association with nominal noun on a dataset. See Table 2 for the full name of UD categories.
impact on the analysis above. For example, in narrative texts such as EU Bookshop publications in en_parcorfull Lapshinova-Koltunski et al. (2018) and Hungarian Wikipedia in hu_korkor Vadasz (2020); Vadasz (2022), an entity can be realized with different expressions. Thus, the semantic similarity of mentions tends to be greater. Recall that nominal noun and proper noun are the two main categories of mention types. So, it is challenging to resolve mentions that have a bigger semantic distance.
### Document
In a document, there can be multiple entities, with some entities spanning the entire document while others appearing only in very few adjacent sentences. Occasionally, these entities may overlap within certain sections of the document, particularly in areas where complex relationships between entities are discussed. Table 3 shows an example text.
**Competing Antecedents of Pronominal Anaphors.** In a local context, the resolution of pronouns can become difficult due to their ambiguity caused by the presence of multiple potential antecedents from distinct entities or singletons. We focus on those ambiguous cases that have potential antecedents with gender and number agreement. Both the pronouns and their antecedents are located in the same or the immediately preceding sentence.
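The examination described above can be sketched as follows; mentions are assumed to carry a sentence index and gender/number features from the universal morphological annotation, which is a simplification.

```python
def competing_antecedents(pronoun, mentions):
    """Candidate antecedents of an overt pronoun: mentions from other entities
    (or singletons) in the same or immediately preceding sentence that agree
    with the pronoun in gender and number."""
    return [m for m in mentions
            if m["entity"] != pronoun["entity"]
            and pronoun["sent"] - 1 <= m["sent"] <= pronoun["sent"]
            and m["gender"] == pronoun["gender"]
            and m["number"] == pronoun["number"]]

pron = {"entity": 1, "sent": 5, "gender": "Masc", "number": "Sing"}
mentions = [{"entity": 2, "sent": 5, "gender": "Masc", "number": "Sing"},
            {"entity": 3, "sent": 4, "gender": "Masc", "number": "Sing"},
            {"entity": 4, "sent": 4, "gender": "Fem",  "number": "Sing"}]
print(len(competing_antecedents(pron, mentions)))  # 2 competing candidates
```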
Figure 4 shows that in ca_ancora and es_ancora Recasens and Marti (2010), over 70% of overt pronouns satisfy the analysis conditions mentioned in the previous paragraph. This percentage is notably higher compared to the other datasets. Additionally, the average number of competing candidates in these two datasets is around six. This highlights the considerable difficulty in distinguishing the true antecedent(s) of the pronoun among a pool of antecedents. To address such complex scenarios, one heuristic and explainable approach is to leverage centering theory Grosz et al. (1995). It suggests that a pronoun tends to refer to the center or the most prominent entity in the preceding context. Specifically, by tracking the center transitions, we can identify potential antecedents based on salience and continuity of the entity. Centering theory is applicable across all languages, as it is not dependent on any specific language.
Besides analyzing overt pronouns, we also examine the competing antecedents for zero pronouns. In the Czech datasets Nedoluzhko et al. (2016); Hajic et al. (2020), the average number of competing antecedents is less than four, which is lower than that of ca_ancora and es_ancora.8 This implies that identifying the true antecedents of zero
Figure 4: Average number of competing antecedents for pronominal anaphors. The percentages next to the datasets represent the percentage of valid examinations of overt pronouns.
Figure 3: Mean and variance of Euclidean distances between pairs of mentions within the same entities across all datasets.
\begin{table}
\begin{tabular}{|p{227.6pt}|} \hline The study of how **[people]**\({}_{\text{s}}\), as **[fans]**\({}_{\text{s}}\), access and manage information within a transmedia system provides valuable insight that contributes not only to **[practitioners]**\({}_{\text{r}}\) and **[scholars of the media industry]**\({}_{\text{6}}\), but to the wider context of cultural studies, by offering findings on this new model of **[the fan]**\({}_{\text{s}}\) as **[consumer]**\({}_{\text{d}}\) and **[informationuser]**\({}_{\text{s}}\). For **[us]**\({}_{\text{i}}\), as **[digital humanists]**\({}_{\text{i}}\), defining **[the “transmedia fan”]**\({}_{\text{2}}\) is of particular relevance as **[we]**\({}_{\text{i}}\) seek to understand contemporary social and cultural transformations engender by digital technologies.
\begin{table}
\begin{tabular}{|p{227.6pt}|} \hline The study of how **[people]**\({}_{\text{s}}\), as **[fans]**\({}_{\text{s}}\), access and manage information within a transmedia system provides valuable insight that contributes not only to **[practitioners]**\({}_{\text{r}}\) and **[scholars of the media industry]**\({}_{\text{6}}\), but to the wider context of cultural studies, by offering findings on this new model of **[the fan]**\({}_{\text{s}}\) as **[consumer]**\({}_{\text{d}}\) and **[informationuser]**\({}_{\text{3}}\). For **[us]**\({}_{\text{i}}\), as **[digital humanists]**\({}_{\text{1}}\), defining **[the “transmedia fan”]**\({}_{\text{2}}\) is of particular relevance as **[we]**\({}_{\text{1}}\) seek to understand contemporary social and cultural transformations engender by digital technologies.
anphors is not very difficult in the Czech datasets. In pro-drop languages, a more coherent discourse tends to facilitate or encourage the use of zero pronoun especially in dialogue or social media contexts. We found that the nearest antecedents of some zero pronouns can either be overt pronouns or zero pronouns that are less informative. Hence, resolving anaphoric zero pronouns is a difficult subtask that requires contextual information.
### Genre
A document can be different in types of discourse with respect to referring expressions. For example, authors may use diverse expressions (e.g., _dog owners, owners, puppy owners_ and _they_) when referring to the same entity for the physical continuity of the text. In contrast, spoken discourse, especially in conversations, tends to have a higher density of referring expressions, including many pronouns and ellipsis, which contribute to the grammatical coherence within the discourse (e.g., _Sue? Is not here._), and relies mostly on shared situational knowledge between speaker and listener (known as the 'common ground'). [11]
In Figure 5, we present the frequency of personal pronoun usage per eight thousand words in each genre from the English corpus, en_gum Zeldes (2017). The results show that **vlog**, as a type of web discourse, has the highest frequency of pronoun usage. Different from **conversation**, content creators record themselves on video for their audience without engaging in real-time interaction during the recording process. When they share their thoughts or experiences, they tend to use first-person pronouns (e.g., \(I\) and _we_) more frequently compared to other genres. We also observe that the frequency of pronouns in **fiction** is high, surpassing even that of **speech**, indicating a strong continuity in reference, particularly related to the story's characters. This finding is in line with the results of Dorgeloh (2022). As for written non-fiction, particularly in **academic**, **news** and **voyage** (describing a journey or trip), there is a lower use of pronouns, with academic texts showing the lowest frequency.
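The normalised frequency shown in Figure 5 can be computed along these lines; the genre field and the normalisation to 8,000 words follow the description above, while the rest of the interface is assumed.

```python
from collections import Counter

def pronouns_per_8000_words(tokens):
    """tokens: iterable of (genre, upos, feats) triples, where feats is a dict of
    universal morphological features; returns personal-pronoun counts per 8000 words."""
    words, prons = Counter(), Counter()
    for genre, upos, feats in tokens:
        words[genre] += 1
        if upos == "PRON" and feats.get("PronType") == "Prs":
            prons[genre] += 1
    return {g: 8000 * prons[g] / words[g] for g in words}

toy = [("vlog", "PRON", {"PronType": "Prs"}), ("vlog", "VERB", {}),
       ("news", "NOUN", {}), ("news", "ADJ", {})] * 100
print(pronouns_per_8000_words(toy))  # {'vlog': 4000.0, 'news': 0.0}
```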
## 4 Error Analysis of MCR Systems
Apart from studying coreference on gold annotations solely, we also investigate the ground truth that the MCR systems failed to address. Our particular focus is on two-mention entities, which comprise over 80% of the gold entities where the recall is zero.9 Here, we analyze the predictions of two MCR systems: BASELINE [13], an end-to-end based system, and UFAL [26], the winning system in the CRAC 2022 shared task on MCR.10 Figure 6 presents the error analysis in a tree structure conducted on UFAL.
Footnote 9: See Appendix B for the details of the error analysis.
Footnote 10: The two system outputs from the development sets of CorefUD 1.0 are publicly accessible at [https://ufal.mff.cuni.cz/corefud/crac22](https://ufal.mff.cuni.cz/corefud/crac22).
### Undetected Mentions
The primary factor leading to unresolved two-mention entities is the inability to detect one or both of the mentions. UFAL identifies 22% of the mentions, while BASELINE detects 19%. UFAL employs a pipeline approach, treating mention detection as a separate token-level classification task. The proposed tags for tokens can handle embedded and also overlapping mention spans. We speculate that the mention detection module contributes slightly more to the identification of mentions.
We further analyze the mention types and length of the undetected mentions.11**(i)** More than 50% of the undetected mentions on average are nominal nouns, so we try to analyze the types of these noun phrases based on definiteness, such as demonstrative articles (e.g., _that house_) and proper noun-modified noun phrases (e.g., _Barack Obama presidency_). However, due to the highly variable nature of definiteness across languages and the lack of consistent annotations at this level of granularity, we encounter a challenge in implementing this analysis. For example, in Lithuanian, definiteness is encoded within adjectives or nouns, and possessive adjectives in Hungarian can only be inferred from word suffixes. Moreover, some languages, such
Figure 5: Number of personal pronouns per 8000 words in various genres within en_gum Zeldes (2017).
as Slavic ones, do not have grammaticalized definiteness at all. **(ii)** Analyzing mention length, we observe that the majority of mentions in Hungarian (70%) and Lithuanian (80%) consist of only one or two words. One of the reasons is that Hungarian, for example, is an agglutinative language11. When dealing with such languages, it is plausible to include a preprocessing stage to handle word splitting.
Footnote 11: Words are constructed by combining stem forms with multiple affixes to convey diverse grammatical features such as tense and number, for example, _beleselkethem_ (_I look into_) and _Odafigyelhettel volna_ (_You could have paid attention to it_).
### Missing Links
We also explore the relationship between the two mentions in the unresolved entities.9 First, we notice that in BASELINE, more than 45% of the entities have both mentions located in the same sentence. To resolve those entities, syntax information that captures the grammatical relationships and dependencies between words within the sentences is beneficial. One approach is employing binding theory (Chomsky, 1993). On the other hand, in UFAL, 39% of the entities have their two mentions spanning across multiple sentences. To address this issue, an approach is to use knowledge extracted from the discourse structure of the text. Second, for both systems, resolving cases where both mentions are nominal nouns presents difficulties across all languages. Additionally, our analysis in Section 3 demonstrates that there are mention pairs referring to the same entities, but showing lower semantic similarity. These findings suggest that it is important to improve the capability of resolving noun phrases. Lastly, we examine the gold anaphor-antecedent relations between the two mentions of the unresolved entities. We found that the most frequent UD relation associated with the antecedents of nominal nouns are nominal dependents (e.g., nominal modifier and appositional modifier). For antecedents of overt pronouns, the subject in core arguments is the most common UD relation.
## 5 Modeling with Universal Annotations
Based on the findings above, we can gain additional insights and clues regarding MCR. For example, we found that the closest antecedents of overt pronouns are always located in subject position. This pattern is common in nearly all languages as shown in Figure 2 (b). Therefore, we use linguistic information extracted from universal annotations for the purpose of modeling and examine its effectiveness, in the following section.
### Model
Baseline.We adopt the model proposed by Prazak et al. (2021) as our base model, which is an end-to-end neural model inspired by the method introduced by Lee et al. (2017). It serves as the baseline for the CRAC 2022 shared task on multilingual coreference resolution.
Incorporating Linguistic Information.Given an input document consisting of \(n\) tokens, we first generate a contextual embedding for each token using mBERT denoted as **X** = (**x\({}_{1}\)**,..., **x\({}_{n}\)**). The tokenization is based on either word forms (**wf**) or
Figure 6: Error analysis of entities where UFAL fails to resolve, meaning that the recall of these entities is zero. For example, 81% of unresolved entities consist of two mentions. One of the reasons for the failure to resolve two-mention entities is that 78% of the mentions within those entities are not detected. The figures are computed on average across all datasets.
lemmas (**lem**). Then we define the embedding of each candidate span \(c\) as:
\[\mathbf{e}_{c}=[\mathbf{x}_{c_{start}},\mathbf{x}_{c_{end}},\mathbf{\hat{x}}_{c}, \phi(s_{c})]\]
where \(\mathbf{x}_{c_{start}}\) and \(\mathbf{x}_{c_{end}}\) denote the embeddings of the boundary tokens. \(\mathbf{\hat{x}}_{c}\) is the addition of attentionally weighted token representations in the candidate. \(\phi(s_{c})\) is a concatenated feature vector that includes the width, UPOS tags, UD relations, mention types and UD categories of the span. We select the token with the maximum attention weight as the head of the candidate to compute the mention types and UD categories as discussed in Section 3.
We measure how likely a candidate is a mention by using a mention score \(f_{m}(\cdot)\):
\[f_{m}(c)=\mathbf{FFNN}_{m}([\mathbf{e}_{c},\phi(u_{c})])\]
where \(\phi(u_{c})\) encodes the UPOS tag, UD relation, mention type and UD category of the candidate determined by its 'head' word as mentioned above.
After extracting the top \(\lambda n\) mentions based on the mention score, we compute the likelihood of a candidate mention \(c\) being an antecedent of a query mention \(q\) by a scoring function \(f(c,q)\):
\[f(c,q)=\mathbf{FFNN}_{s}([\mathbf{e}_{c},\mathbf{e}_{q},\mathbf{e}_{c}\circ \mathbf{e}_{q},\phi(c,q)])\]
\(\phi(c,q)\) denotes the embeddings of some general features of the document: language and word order of the language12. For each query mention, our model predicts a distribution \(\hat{P}(q)\) over its candidates, \(q\in Y(c)\):
Footnote 12: [https://wals.info/](https://wals.info/)
\[\hat{P}(q)=\frac{\exp(f(c,q))}{\sum_{k\in Y(c)}\exp(f(c,k))}\]
Note that if the query mention is a singleton, we set the scoring function to zero.13
Footnote 13: For more details, please refer to the original papers, Prazák et al. (2021) and Lee et al. (2017).
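A compact PyTorch-style sketch of the scoring components defined above; the hidden sizes, feature dimensions and feed-forward depth are our assumptions, not the exact configuration of Prazak et al. (2021).

```python
import torch
import torch.nn as nn

class PairScorer(nn.Module):
    """Mention score f_m(c) and antecedent distribution over span embeddings."""
    def __init__(self, span_dim=256, feat_dim=20):
        super().__init__()
        self.ffnn_m = nn.Sequential(nn.Linear(span_dim + feat_dim, 150),
                                    nn.ReLU(), nn.Linear(150, 1))
        self.ffnn_s = nn.Sequential(nn.Linear(3 * span_dim + feat_dim, 150),
                                    nn.ReLU(), nn.Linear(150, 1))

    def mention_score(self, e_c, phi_u):
        # f_m(c) = FFNN_m([e_c, phi(u_c)])
        return self.ffnn_m(torch.cat([e_c, phi_u], dim=-1)).squeeze(-1)

    def antecedent_distribution(self, e_cands, e_query, phi_pair):
        # f(c, q) = FFNN_s([e_c, e_q, e_c * e_q, phi(c, q)]), normalized by softmax
        pair = torch.cat([e_cands, e_query, e_cands * e_query, phi_pair], dim=-1)
        scores = self.ffnn_s(pair).squeeze(-1)
        return torch.softmax(scores, dim=-1)  # P_hat over the candidate antecedents

scorer = PairScorer()
e_cands = torch.randn(5, 256)                 # 5 candidate antecedent spans
e_query = torch.randn(1, 256).expand(5, 256)  # query mention, broadcast to candidates
phi_pair = torch.randn(5, 20)                 # pair features (language, word order, ...)
print(scorer.mention_score(e_cands, phi_pair).shape)               # torch.Size([5])
print(scorer.antecedent_distribution(e_cands, e_query, phi_pair))  # sums to 1
```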
Training and Inference.Since UFAL (Straka and Strakova, 2022) demonstrated that a multilingual model based on a multilingual language model outperforms monolingual models on the MCR task, we adopt a similar approach. Our model is jointly trained on a mixture of datasets of 10 languages from CorefUD 1.0 (Nedoluzhko et al., 2022) using mBERT (Devlin et al., 2019) as the pretrained language model. Then we use this trained model to predict mention clusters on the target language-specific datasets.
### Experiments
Settings.We verify the effectiveness of our models on CorefUD 1.0 (Nedoluzhko et al., 2022). Because the test datasets are not publicly available, we partitioned approximately 10% of the training datasets to create our own test datasets. The results are reported using the CoNLL F1 score -- the average of MUC (Vilain et al., 1995), B3 (Bagga and Baldwin, 1998), CEAFe (Luo, 2005). The final ranking score is calculated by macro-averaging the CoNLL F1 scores over all datasets. To ensure a fair comparison, we keep all parameters the same as the baseline (Prazák et al., 2021). All our experiments are performed on a single NVIDIA Tesla V100 32G GPU. We examine two models, namely **ours_wf** and **ours_lem**, as discussed in Section 5.1, in comparison with the baseline model trained on our specific setting.
Results.Table 4 presents our results. Our model **ours_wf** shows a modest improvement over the baseline with a margin of 0.9% F1 score on average across all languages. The model performs best on Germanic datasets, whereas the lt_lcc (Zitkus and Butkiene, 2018) and ru_rucor (Toldova et al., 2014) datasets present the greatest difficulties, indicating that these two Baltic and Slavic languages are particularly difficult to handle. In the ablation study, we observe that including general features like language and word order also yields
\begin{table}
\begin{tabular}{l|c c c c c c c c c c c c c c} \hline \hline Models & Avg & \multicolumn{2}{c}{c} & \multicolumn{2}{c}{cs} & \multicolumn{2}{c}{cs} & \multicolumn{2}{c}{en} & \multirow{2}{*}{hu} & \multirow{2}{*}{pl} & \multirow{2}{*}{es} & \multirow{2}{*}{lt} & \multirow{2}{*}{fr} & \multicolumn{2}{c}{de} & \multicolumn{2}{c}{de} & \multicolumn{2}{c}{en} \\ & & & & & & & & & & & & & & & & \\ & & & PCED & pdt & gum & & & & & & & & & & & \\ \hline BASELINE & 53.7 & 55.2 & 68.4 & 64.3 & 48.8 & 46.4 & 50.2 & 57.6 & **64.2** & 57.0 & 33.7 & 43.3 & 43.0 & **66.9** \\ ours\_lem & 51.7 & 52.8 & 65.0 & 62.7 & 48.1 & 44.0 & 44.3 & 54.6 & 60.2 & 56.7 & 30.6 & 40.8 & 48.4 & 64.5 \\ ours\_wf & **54.6** & **55.7** & **68.5** & **64.9** & **50.1** & **47.1** & **50.4** & **57.7** & 62.1 & **58.6** & **35.1** & **44.9** & **48.5** & 66.5 \\ \(\Box\)ua & -0.51 & -0.03 & -0.23 & 0.00 & -0.75 & -0.21 & -0.72 & +0.34 & +1.57 & -1.78 & +0.81 & -2.90 & -4.04 & +1.36 \\ \(\Box\) lang & -0.87 & -0.45 & -0.11 & -0.65 & -1.29 & -0.71 & -0.19 & -0.14 & +2.09 & -1.62 & -1.44 & -1.61 & -5.58 & +0.40 \\ \hline \hline \end{tabular}
\end{table}
Table 4: F1 scores on the test set in our setting are reported on average across three runs. \(\Box\) rules out the additional features extracted from universal annotations (ua) and languages (lang) from **ours_wf** for the ablation study.
positive effects on performance, in addition to incorporating universal annotations.
In contrast, the performance of **ours_lem** shows a decline compared with BASELINE. The method is specifically designed to address data sparsity and to handle out-of-vocabulary words in morphologically rich languages. However, lemmatization can result in different words being mapped to the same lemma and in the loss of valuable morphological information present in word forms. In order to handle multiple languages together, it is crucial to employ a trade-off strategy or to implement a preprocessing approach.
Error Analyses.We employ the same analysis methodology as presented in Section 4 for the error analysis of our model **ours_wf** and BASELINE _in our setting_. We found that **ours_wf** predicts more clusters correctly than BASELINE, either in full or partially (i.e., the rate of gold entities with a recall of zero is lower on average, 39.19% vs. 39.77%). Two-mention entities are the most difficult cases for the two examined systems. In these unresolved two-mention entities, **ours_wf** has fewer undetected mentions on average, especially in fr_democrat and de_parcorfull, as illustrated in Figure 7. Among those undetected, there are more mentions consisting of more than two words compared with BASELINE. For the missing links, the two systems produce similar results. Both mentions in two-mention entities are primarily nominal nouns. And the most frequent UD relation associated with the antecedents of nominal nouns is still nominal dependents. Overall, our model **ours_wf** can resolve slightly more entities and shows superior performance in mention detection compared with BASELINE. Nevertheless, there is still room for improvement.
## 6 Discussion and Conclusion
It has become apparent that leveraging universal morphosyntactic annotations can be advantageous in various ways, like exploring underlying patterns of coreference, performing in-depth analysis and making a contribution to the development of an MCR system. However, there are still language-specific characteristics that hinder the comprehensive study of multiple languages together, particularly when it involves analyzing intricate aspects of the morphological layer, like definiteness and compound nouns in German. In addition, while multilingual datasets are harmonized to some extent, there are still cases where certain information, such as entity types, is only provided for a limited number of languages. This limitation prevents us from conducting further analyses, such as examining semantic class agreement across languages. We study MCR primarily focusing on identity coreference since it is the most important relation across all datasets. However, it is important to note that there exist various other anaphoric relations, such as bridging and discourse deixis (Yu et al., 2022), that remain unexplored. In this work, we analyze coreference across multiple languages by leveraging the harmonized universal morphosyntactic and coreference annotations in CorefUD. This analysis provides valuable insights into common features and challenges in MCR. We demonstrate the benefits of incorporating linguistic features for enhancing the MCR system performance.
## Limitations
In this work, our analyses are mainly corpus-based studies. The reliance on selected specific corpora may result in a focus on particular genres, domains, or time periods that may not be representative of other contexts. However, with the high number of datasets from diverse genres and domains, we believe the findings still can provide some valuable insights into MCR. The languages examined in our study belong to the European language group. It would be interesting to involve languages from other regions, like Arabic and Chinese.
## Acknowledgements
We thank the anonymous reviewers for their helpful feedback that greatly improved the final version of the paper. We also thank Margareta Kulcsar for her early experiments contributing to this work.
Figure 7: Percentage of undetected mentions in unresolved two-mention entities.
This work has been funded by the Klaus Tschira Foundation, Heidelberg, Germany. The first author has been supported by a HITS Ph.D. scholarship.
|
2305.02831 | Local Computation Algorithms for Hypergraph Coloring -- following Beck's
approach (full version) | We investigate local computation algorithms (LCA) for two-coloring of
$k$-uniform hypergraphs. We focus on hypergraph instances that satisfy
strengthened assumption of the Lov\'{a}sz Local Lemma of the form $2^{1-\alpha
k} (\Delta+1) \mathrm{e} < 1$, where $\Delta$ is the bound on the maximum edge
degree. The main question which arises here is for how large $\alpha$ there
exists an LCA that is able to properly color such hypergraphs in
polylogarithmic time per query. We describe briefly how upgrading the classical
sequential procedure of Beck from 1991 with Moser and Tardos' RESAMPLE yields
polylogarithmic LCA that works for $\alpha$ up to $1/4$. Then, we present an
improved procedure that solves wider range of instances by allowing $\alpha$ up
to $1/3$. | Andrzej Dorobisz, Jakub Kozik | 2023-05-04T13:47:57Z | http://arxiv.org/abs/2305.02831v1 | # Local Computation Algorithms for Hypergraph Coloring - following Beck's approach
###### Abstract
We investigate local computation algorithms (LCA) for two-coloring of \(k\)-uniform hypergraphs. We focus on hypergraph instances that satisfy strengthened assumption of the Lovasz Local Lemma of the form \(2^{1-\alpha k}(\Delta+1)\mathrm{e}<1\), where \(\Delta\) is the bound on the maximum edge degree. The main question which arises here is for how large \(\alpha\) there exists an LCA that is able to properly color such hypergraphs in polylogarithmic time per query. We describe briefly how upgrading the classical sequential procedure of Beck from 1991 with Moser and Tardos' Resample yields polylogarithmic LCA that works for \(\alpha\) up to \(1/4\). Then, we present an improved procedure that solves wider range of instances by allowing \(\alpha\) up to \(1/3\).
Property B, Hypergraph Coloring, Local Computation Algorithms 2016/21/B/ST6/02165
### Local Computation Algorithms
Rubinfeld, Tamir, Vardi and Xie proposed in [20] a general model of sublinear sequential algorithms called Local Computation Algorithms (LCA). The model is intended to capture the situation where some computation has to be performed on a large instance but, at any specific time, only parts of the answer are required. The interaction with a local computation algorithm is organized in the sequence of queries about fragments of a global solution. The algorithm shall answer each consecutive query in sublinear time (wrt the size of the instance), systematically producing a partial answer that is consistent with some global solution. The model allows for randomness, and algorithm may occasionally fail.
For example, for the hypergraph two-coloring problem, the aim of an LCA procedure is to find a proper coloring of a given hypergraph. The algorithm can be queried about any vertex, and in response, it has to assign to the queried vertex one of the two available colors. For any sequence of queries, with high probability, it should be possible to extend the returned partial coloring to a proper one.
Formally, for a fixed problem, a procedure is a _\((t,s,\delta)\)-local computation algorithm_, if for any instance of size \(n\) and any sequence of queries, it can consistently answer each of them in time \(t(n)\) using up to \(s(n)\) space for computation memory. The time \(t(n)\) has to be sublinear in \(n\), but a polylogarithmic dependence is desirable. The value \(\delta(n)\) shall bound the probability of failure for the whole sequence of queries. It is usually demanded to be small. The computation memory, the input, and the source of random bits are all represented as tapes with random access (the last two are not counted in \(s(n)\) limit). The computation memory can be preserved between queries. In particular, it can store some partial answers determined in the previous calls. For the precise general definition of the model consult [20].
A procedure is called _query oblivious_ if the returned solution does not depend on the order of the queries (i.e. it depends only on the input and the random bits). It usually indicates that the algorithm uses computation memory only to answer the current query and that there is no need to preserve information between queries. It is a desirable property, since it allows to run queries to algorithm in parallel. In a follow-up paper [3], Alon, Rubinfeld, Vardi, and Xie presented generic methods of removing query order dependence and reducing necessary number of random bits in LCA procedures. In the same paper, these techniques were applied to the example procedures (including hypergraph coloring) from [20] converting them to query oblivious LCAs. The improved procedures work not only in polylogarithmic time but also in polylogarithmic space. Mansour, Rubinstein, Vardi, and Xie in [15] improved analysis of this approach.
### Constructive Local Lemma and LCA
The Lovasz Local Lemma (LLL) is one of the most important tools in the field of local algorithms. In its basic form, it allows one to non-constructively prove the existence of combinatorial objects omitting a collection of undesirable properties, so-called bad events. A brief introduction to this topic and a summary of various versions of LLL can be found in the recent survey by Farago [10].
For a fixed \(k\)-uniform hypergraph, let \(p=2^{-k}\) denote the probability that, in a uniformly random coloring, a fixed edge is monochromatic in a specific color. A straightforward application of the symmetric version of the Local Lemma (see e.g., [10]) proves that the condition \(2p\ (\Delta+1)\ \mathrm{e}<1\) is sufficient for a hypergraph with maximum edge degree \(\Delta\) to be two-colorable.
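As a worked example, a tiny helper (ours) returning the largest maximum edge degree \(\Delta\) allowed by the condition \(2\,p^{\alpha}\,(\Delta+1)\,\mathrm{e}<1\) with \(p=2^{-k}\); \(\alpha=1\) gives the standard assumption above, while smaller \(\alpha\) corresponds to the strengthened variants discussed below.

```python
import math

def max_edge_degree(k: int, alpha: float = 1.0) -> int:
    """Largest Delta satisfying 2 * (2**-k)**alpha * (Delta + 1) * e < 1;
    may be negative, meaning that no Delta qualifies."""
    bound = 2 ** (alpha * k) / (2 * math.e)   # we need Delta + 1 < bound
    return math.ceil(bound) - 2               # hence Delta <= ceil(bound) - 2

print(max_edge_degree(20))              # standard LLL assumption (alpha = 1)
print(max_edge_degree(20, alpha=1/3))   # strengthened condition with alpha = 1/3
```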
For many years, Local Lemma resisted attempts to make it efficiently algorithmic. The
first breakthrough came in 1991, when Beck [5], working on the example of hypergraph two-coloring, showed a method of converting some of LLL existence proofs into polynomial-time algorithmic procedures. However, in order to achieve that, the assumptions of Local Lemma had to be strengthened and took form
\[2\;p^{\alpha}\;(\Delta+1)\;\mathrm{e}<1. \tag{1}\]
For \(\alpha=1\) the inequality reduces to the standard assumption. The above inequality constraints \(\Delta\), and the constraint becomes more restrictive as \(\alpha\) gets smaller. The original proof of Beck worked for \(\alpha<1/48\). From that time, a lot of effort has been put into studying applications to specific problems and pushing \(\alpha\) forward, as close as possible to standard LLL criterion [2, 16, 7, 21, 18].
The next breakthrough was made by Moser in 2009. In cooperation with Tardos, Moser's ideas were recast in [19] into a general constructive formulation of the lemma. They showed that, assuming the so-called variable setting of LLL, a natural randomized procedure called Resample3 quickly finds an evaluation of the involved random variables for which none of the bad events hold. They also proved that, in typical cases, the expected running time of the procedure is linear in the size of the instance. For the problem of two-coloring of \(k\)-uniform hypergraphs, the total expected number of resamplings is bounded by \(m/\Delta\) (see Theorem 7 in [10]).
Footnote 3: As long as some bad events are violated, the procedure picks any such event and resamples all variables on which that event depends.
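For concreteness, the following is a minimal Python sketch of the Resample loop for hypergraph two-coloring as described above; the input representation (edges as lists of vertex indices) and the choice of always resampling the first violated edge are our own illustrative assumptions, not part of the original formulation.

```python
import random

def resample_two_coloring(n_vertices, edges, rng=random.Random(0)):
    """Moser-Tardos-style Resample for two-coloring a hypergraph.

    edges: list of lists of vertex indices; a 'bad event' is an edge whose
    vertices all received the same color.  Returns a list of 0/1 colors
    under which no edge is monochromatic.
    """
    color = [rng.randrange(2) for _ in range(n_vertices)]

    def monochromatic(e):
        return len({color[v] for v in e}) == 1

    while True:
        bad = [e for e in edges if monochromatic(e)]
        if not bad:
            return color
        # resample all variables (vertex colors) the violated event depends on
        for v in bad[0]:
            color[v] = rng.randrange(2)

# example: resample_two_coloring(6, [[0, 1, 2], [2, 3, 4], [3, 4, 5]])
```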
Adjusting the constructive LLL to the LCA model remains one of the most challenging problems in the area. It turns out, however, that previous results on the algorithmization of the Local Lemma can be adapted in a natural way. In fact, the first LCA algorithm for hypergraph coloring from [20] is built on the variant of Beck's algorithm that is described in the book by Alon and Spencer [4]. That version works for \(\alpha<1/11\) and runs in polylogarithmic time per query. Later refinements focused on optimizing space and time requirements ([3], [15]); however, for polylogarithmic LCAs the bound on \(\alpha\) has not been improved. In a recent work, Achlioptas, Gouleakis, and Iliopoulos [1] showed how to adjust Resample to the LCA model. They did not manage, however, to obtain polylogarithmic time. Their version answers queries in time \(t(n)=n^{\beta(\alpha)}\), and they establish a trade-off between the bound on \(\alpha\) and the time needed to answer a query. In particular, when \(\alpha\) approaches \(1/2\), then \(\beta(\alpha)\) tends to \(1\), which results in a very weak bound on the running time per query.
### Main result
Our research focuses on the following general question in the area of local constructive versions of the Lovasz Local Lemma: up to what value of \(\alpha\) does there exist a polylogarithmic LCA for the problem of two-coloring of \(k\)-uniform hypergraphs satisfying the condition \(2(\Delta+1)\mathrm{e}<2^{\alpha k}\)? We prove the following theorem:
**Theorem 1** (main result). For every \(\alpha<1/3\) and all large enough \(k\), there exists a local computation algorithm that, in polylogarithmic time per query, with probability \(1-O(1/n)\) solves the problem of two-coloring for \(k\)-uniform hypergraphs with maximum edge degree \(\Delta\) that satisfies \(2\mathrm{e}(\Delta+1)<2^{\alpha k}\).
Within the notation of [20], we present a \((\mathrm{polylog}(n),\mathcal{O}(n),\mathcal{O}(1/n))\)-local computation algorithm that properly colors hypergraphs satisfying the above assumption. Our algorithm is not query oblivious. Moreover, typical methods of eliminating the dependence on the order of queried vertices do not seem to be applicable without sacrificing the constant \(\alpha\). A more technical and precise statement of our main result is presented in Appendix C as Theorem 9.
For comparison, Alon et al. [3] after Rubinfeld et al. [20] present a query oblivious \((\mathrm{polylog}(n),\mathrm{polylog}(n),\mathcal{O}(1/n))\)-local computation algorithm working for hypergraphs satisfying
\[16\;\Delta(\Delta-1)^{3}(\Delta+1) <2^{k_{1}}, \tag{2}\] \[16\;\Delta(\Delta-1)^{3}(\Delta+1) <2^{k_{2}},\] \[2e(\Delta+1) <2^{k_{3}},\]
where \(k_{1},k_{2}\) and \(k_{3}\) are positive integers such that \(k=k_{1}+k_{2}+k_{3}\). These assumptions correspond to \(\alpha<1/11\).
The analysis of the LCA procedure from [3] guarantees only that the running time is of the order \(\mathcal{O}\Big{(}\log^{\Delta}(n)\Big{)}\). Mansour et al. [15] focus on improving the time and space bounds within the polylogarithmic class, removing the dependency on the maximal edge degree from the exponent. They obtain an LCA working in \(\mathcal{O}\big{(}\log^{4}(n)\big{)}\) time and space, assuming that \(k\geq 16\log(\Delta)+19\), so it requires an even stronger bound on \(\alpha\).
### LOCAL distributed algorithms
The model of Local Computation Algorithms is related to the classical model of local distributed computations by Linial [13] (called LOCAL). For a comparison of these two models, see the work of Even, Medina, and Ron [9]. Chang and Pettie observed recently in [6] that within the LOCAL model, the general problem of solving Local Lemma instances with a dependency graph of bounded degree is in some sense complete for a large class of problems (these are the problems which can be solved in a sublogarithmic number of rounds). They also conjectured that, for a sufficiently strengthened condition of the Local Lemma (like taking a small enough \(\alpha\) in (1)), there exists a distributed LOCAL algorithm that solves the problem in \(\mathcal{O}(\log\log n)\) rounds. The straightforward simulation of such an algorithm within the LCA framework would yield a procedure that, at least for fixed maximum degree, answers queries in polylogarithmic time.
Recently, progress towards this conjecture has been made by Fischer and Ghaffari [11], who proved that there exists an algorithm for Local Lemma instances that works in \(2^{\mathcal{O}\big{(}\sqrt{\log\log n}\big{)}}\) rounds. The dependence of the running time on the degree of the underlying dependency graph was later improved by Ghaffari, Harris and Kuhn in [12]. In particular, for a sufficiently constrained problem of hypergraph two-coloring, that result allows one to obtain an LCA procedure that answers queries in sublinear time. The time, however, would be superpolylogarithmic. Moreover, the necessary strengthening of the Local Lemma assumptions appears to be much stronger than the one required to apply the result of Rubinfeld et al. [20].
The possibility of simulating LOCAL algorithms within the LCA model implies that, if the conjecture of Chang and Pettie holds, then any problem satisfying sufficiently strengthened LLL conditions can be solved in the LCA model in polylogarithmic time per query. We can therefore formulate a weaker conjecture that, for some \(\alpha\), every such \(\alpha\)-strengthened problem can be solved in LCA in polylogarithmic time per query. For the specific problem of hypergraph coloring, this property is known to hold. We can, however, ask what the maximum such \(\alpha\) is for a fixed problem. That is precisely the general problem stated at the beginning of Section 1.3. It is interesting to note that our algorithms make essential use of the sequential nature of LCA. For that reason, they cannot be translated to \(\mathcal{O}(\log\log n)\) LOCAL algorithms. This also illustrates an important difference between the models.
## 2 Main techniques and ideas of the proof
The algorithmic procedure of Beck [5] is divided into two phases. In the first one, which we call _the shattering phase_, it builds a random partial coloring that guarantees that a fraction of all edges are already properly colored. Moreover, the edges which are not yet taken care of have sufficiently many non-colored vertices to make sure that the partial coloring can be completed to a proper one. They also form connected components of logarithmic sizes which can be colored independently. Then, in the second phase, which we call _the final coloring phase_, an exhaustive search is used to complete the coloring of each component. This results in a sequential procedure with polynomial running time. In order to reduce the running time to almost linear, the shattering phase can be applied twice. Then, the final components w.h.p. are of size \(\mathcal{O}(\log\log(n))\). The polylogarithmic LCA procedure for hypergraph coloring from [20] follows that approach and, when answering a single query, locally simulates two shattering phases and an exhaustive search. The division into these three phases is directly reflected in the conditions (2) required by the procedure.
While it is not known whether it is possible to design an LCA algorithm based solely on Resample, combining it with previous local algorithms brings significant improvements. It turns out that, within polylogarithmic time, after only one shattering phase, the coloring can be completed with the use of Resample. This simple modification, with slightly improved analysis, is sufficient to derive Theorem 1 for \(\alpha\leq 1/4\). This is our first contribution. That procedure provides a reference point for explaining the intuitions and motivations that underlie the further improvements that we derive. In particular, we define a notion of _component-hypergraph_ that allows for a more fine-grained analysis of the components of the residual hypergraph. For that reason, we present our base algorithm in detail in Section 3.
The first modification that we make in order to improve the base algorithm is that within the shattering phase we sample colors for all vertices. Then, for some vertices the color is final, while for others the assigned color may still change in the final coloring phase. Coloring all the vertices during the first phase somewhat blurs the border between the shattering and final coloring phases. Its main purpose is to enable a more refined partition of the residual hypergraph into independent fragments. It also allows us to determine some components of the residual hypergraph for which no recoloring is necessary. This corresponds to a situation in which the first sampled colors in Resample happen to define a proper coloring. Altogether, we manage to significantly reduce the pessimistic size of the independent fragments colored in the final coloring phase, which enables a further relaxation of the necessary condition on \(\alpha\) to \(\alpha<1/3\). The improved procedure is described in Section 4.
In order to analyze the procedures, we employ a common technique of associating tree-like _witness structures_ with components that require recoloring. Every such structure describes a collection of events associated with some edges of the hypergraph. All these events are determined by the colors assigned in the shattering phase. For the base algorithm, these structures are quite typical. However, in order to achieve the better bound on \(\alpha\), we developed more sophisticated structures that are capable of tracking different kinds of events, which can also depend on the colors that are allowed to be recolored. Different kinds of events come with different bounds on probability. An important aspect of the analysis concerns the amortization of different kinds of events within a single structure. The construction of these structures is our main technical contribution. We describe it in detail in Appendix D.
We finally note that, while our methods are not general enough to work for all instances satisfying the strengthened assumptions of LLL, they can be applied to a number of problems similar to hypergraph coloring, like, e.g. \(k\)-SAT.
## 3 Establishing base result
In this section we show how Beck's algorithm can be combined with Resample to construct a local computation algorithm that works in polylogarithmic time per query for \(\alpha\) up to \(1/4\). In other words, we prove Theorem 1 under the stronger assumption that \(\alpha\leq 1/4\). To keep the exposition simple, we first present a global randomized algorithm. Then, we comment on how to adapt this procedure to the LCA model. The analysis of the procedure can be found in Appendix B.
Let \(H=(V,E)\) be a hypergraph that satisfies the assumptions of Theorem 1 for a fixed \(\alpha\leq 1/4\). For technical convenience, we assume that \(\alpha k\) is an integer4. By assigning a random color, we mean choosing uniformly one of the two available colors. For a set of edges \(S\), by \(V(S)\) we mean all vertices covered by the edges from \(S\). For an edge \(f\), \(N(f)\) denotes the set of edges intersecting \(f\). We use a naming convention that is similar to other works on the subject - in particular, our view of Beck's algorithm is influenced by its descriptions by Alon and Spencer [4] and Molloy and Reed [17], as well as LCA realization given in [20].
Footnote 4: In fact, for the given \(k\) it is only reasonable to take \(\alpha\) in the form of \(t/k\), where \(t\) is an integer \(2\leq t\leq k\).
### Global coloring procedure
The algorithm starts with choosing an arbitrary order of vertices. Then, it proceeds in two phases: _the shattering phase_ and _the final coloring phase_. The shattering phase colors some vertices of the input hypergraph and then splits the edges of the hypergraph that are not properly colored yet into _final components_ - subhypergraphs that can be colored independently. The final coloring phase completes the coloring by considering the final components separately, one by one.
#### The shattering phase
The procedure processes vertices sequentially according to the fixed ordering. For every vertex, it either assigns a random color to the vertex or leaves it non-colored in case it belongs to a _bad_ edge. An edge is called _bad_ if it contains \((1-\alpha)k\) colored vertices and is still not colored properly (that is, all these vertices have the same color). Once an edge becomes bad, no more vertices from that edge will be colored - such vertices are called _troubled_. Vertices with assigned colors are called _accepted_.
Upon completion of the shattering phase, there are three types of edges:
* _safe_ edges: properly colored by the accepted vertices,
* _bad_ edges: containing exactly \((1-\alpha)k\) accepted vertices, all of the same color,
* _unsafe_ edges: containing fewer than \((1-\alpha)k\) accepted vertices, all of the same color.
Observe that in the resulting (partial) coloring, every edge that is not colored properly has at least \(\alpha k\) troubled vertices, which will be colored in the next phase. Note also that it might happen that some unsafe edge has no colored vertices at all.
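To make the bookkeeping concrete, here is a minimal Python sketch of this sequential pass and of the resulting edge classification; the data layout (edges given as vertex sets, vertices processed in index order) is an illustrative assumption rather than part of the algorithm's specification.

```python
import random

def shattering_phase(n_vertices, edges, k, alpha, rng=random.Random(0)):
    """edges: list of sets of vertices of a k-uniform hypergraph.
    Returns (color, status, edge_type): a vertex is left non-colored
    ('troubled') as soon as it lies in a bad edge."""
    color, status = {}, {v: "uncolored" for v in range(n_vertices)}
    threshold = round((1 - alpha) * k)   # accepted vertices that make an edge bad

    def accepted_colors(e):
        return [color[v] for v in e if status[v] == "accepted"]

    def is_bad(e):
        cols = accepted_colors(e)
        return len(cols) >= threshold and len(set(cols)) == 1

    for v in range(n_vertices):                    # fixed processing order
        if any(is_bad(e) for e in edges if v in e):
            status[v] = "troubled"
        else:
            color[v] = rng.randrange(2)
            status[v] = "accepted"

    edge_type = {}
    for i, e in enumerate(edges):
        cols = accepted_colors(e)
        if len(set(cols)) == 2:
            edge_type[i] = "safe"      # properly colored by accepted vertices
        elif len(cols) >= threshold:
            edge_type[i] = "bad"       # (1-alpha)k accepted, all of one color
        else:
            edge_type[i] = "unsafe"    # fewer accepted vertices, monochromatic
    return color, status, edge_type
```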
The colors of accepted vertices are not going to be changed, so the safe edges are already taken care of. Therefore, we focus on bad and unsafe edges. Let \(E_{bad}\) denote the set of all
bad edges. Consider the hypergraph \((V(E_{bad}),E_{bad})\). It is naturally decomposed into connected components.
**Definition 2**. Every component of the hypergraph \((V(E_{bad}),E_{bad})\) is called a _bad-component_.
Note that every troubled vertex belongs to some bad-component. On top of them we build an abstract structure to express dependencies between bad-components through unsafe edges.
**Definition 3**. A _component-hypergraph_ is constructed as follows: its vertices are bad-components of \(H\), and for every unsafe edge \(f\) intersecting more than one bad-component, an edge that contains all bad-components intersected by \(f\) is added to it.
For each connected component of the component-hypergraph (that is, a maximal set of bad-components that is connected in the component-hypergraph) we construct a _final component_ by taking the union of those bad-components (hence a final component is a subhypergraph of \(H\)). The shattering phase is _successful_ if each final component contains at most \(2(\Delta+1)\log(m)\) bad edges. If this is not the case, the procedure declares a failure. It turns out that this is very unlikely to happen.
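A minimal Python sketch of this construction, assuming the bad and unsafe edges of the partial coloring have already been identified and are given as sets of vertices (a simple union-find structure stands in for the connected-component computations):

```python
from itertools import combinations

class DSU:
    """Union-find over bad-edge indices."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def final_components(bad_edges, unsafe_edges):
    """bad_edges, unsafe_edges: lists of frozensets of vertices.
    Returns the final components as lists of bad edges."""
    dsu = DSU(len(bad_edges))
    # bad-components: connected components of the hypergraph (V(E_bad), E_bad)
    for i, j in combinations(range(len(bad_edges)), 2):
        if bad_edges[i] & bad_edges[j]:
            dsu.union(i, j)
    # component-hypergraph: every unsafe edge meeting more than one
    # bad-component merges all bad-components it intersects
    for f in unsafe_edges:
        touched = [i for i in range(len(bad_edges)) if bad_edges[i] & f]
        for i in touched[1:]:
            dsu.union(touched[0], i)
    # a final component is the union of the bad edges of one connected
    # component of the component-hypergraph
    comps = {}
    for i, e in enumerate(bad_edges):
        comps.setdefault(dsu.find(i), []).append(e)
    return list(comps.values())
```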
#### The final coloring phase
For each final component \(\mathcal{C}\) determined during the shattering phase, we add to \(\mathcal{C}\) all unsafe edges intersecting it, and then we restrict \(\mathcal{C}\) to the troubled vertices5. We obtain a hypergraph \(\mathcal{C}^{\prime}\) containing at most \(2(\Delta+1)^{2}\log(m)\) edges, and each of them has at least \(\alpha k\) vertices. The maximum edge degree in \(\mathcal{C}^{\prime}\) cannot be larger than \(\Delta\), which is the maximum edge degree in \(H\). Since \(2e(\Delta+1)<2^{\alpha k}\) (by the assumptions of Theorem 1), the Lovasz Local Lemma ensures that \(\mathcal{C}^{\prime}\) is two-colorable. Hence, by the theorem of Moser and Tardos, Resample finds a proper coloring of it using on average \(|E(\mathcal{C}^{\prime})|/\Delta\) resamplings (see Theorem 3.1 in [10]).
Footnote 5: Restriction of \(H=(V,E)\) to \(V^{\prime}\subseteq V\) is defined as \(H^{\prime}=(V^{\prime},\{e\cap V^{\prime}|\ e\in E,e\cap V^{\prime}\neq\emptyset\})\).
When the final coloring phase is over, all final components are properly colored. Since each bad or unsafe edge is dealt with within some final component, and each safe edge was properly colored during the shattering phase, it is now guaranteed that the constructed coloring is proper for the whole \(H\).
### LCA realization
We employ quite standard techniques to obtain an LCA realization of the described algorithm. We articulate them below to provide context for the description of our main algorithm. An important property of the described procedure is that the ordering of vertices does not have to be fixed a priori. In fact, it can even be chosen in an on-line manner by an adversary. Following [20], we are going to exploit this freedom in the choice of ordering. The LCA version of the algorithm simulates the global version run with a specific ordering. That ordering is constructed dynamically during the evaluation and is driven by the queries. Apart from some minor adjustments (resulting from the adaptation to the LCA model), when the algorithm is queried about vertex \(v\), it performs all the work of the standard algorithm needed to assign a final color to \(v\). The LCA version is presented in Listings 1, 2, 3, and 4. All colors assigned during the work of the algorithm are stored in the computation memory (which is preserved between queries). For convenience, we also store there the status of each vertex - _uncolored_, _accepted_ or _troubled_. Initially all vertices are uncolored.
#### 3.2.1 query
When a vertex \(v\) has been already marked as accepted, its color is immediately returned. If it has not been processed before, the algorithm checks whether \(v\) belongs to any bad edge (that requires inspecting the current statuses of all the edges that contain \(v\)). If not, a random color is assigned to \(v\), the vertex is marked as accepted, and the procedure returns the assigned color. On the other hand, when \(v\) belongs to a bad edge, it is marked as troubled. The algorithm then determines the final component containing \(v\) in procedure BUILD_FINAL_COMPONENT. These steps can be viewed as the shattering phase. Afterwards, the final coloring phase is performed for the final component in procedure COLOR_FINAL_COMPONENT.
```
Procedure BUILD_FINAL_COMPONENT(v - troubled vertex):
    B ← ∅                         // initialize set of bad edges of the component
    U ← ∅                         // initialize set of unsafe edges to process
    e ← any bad edge containing v
    mark e as explored and run EXPAND_BAD_COMPONENT(e, B, U)
    // process surrounding unsafe edges
    while U is not empty do
        f ← next edge from U (remove it from U)
        EXPAND_VIA_UNSAFE(f, B, U)
    // return hypergraph built on the set of bad edges
    return C = (V(B), B)
```
**Algorithm 2** Building the final component for \(v\) that belongs to some bad edge
#### 3.2.2 build_final_component
This procedure builds the set \(B\) of bad edges of the final component of \(v\), exploring the line graph of \(H\)6. It uses a temporary flag _explored_ to mark visited edges (this flag is not preserved between queries). The construction starts from a bad edge containing troubled vertex \(v\) and expands it to a bad-component. Then, as long as possible, set \(B\) is extended by edges of neighboring bad-components, which can be reached through unsafe edges adjacent to \(B\). If at
some point the number of bad edges in \(B\) exceeds the prescribed bound \(2(\Delta+1)\log(m)\), then the procedure declares a failure (note that it cannot be restarted since the LCA model does not allow changing colors returned for previous queries). The construction of the final component is done when there are no more bad edges to add. Then, the hypergraph \(\mathcal{C}=(V(B),B)\) built on the collected bad edges is returned.
The expansion of bad-components is done within the subprocedure EXPAND_BAD_COMPONENT. It starts from the given bad edge and explores the line graph by inspecting the adjacent edges. For each adjacent edge, its type (safe, unsafe, or bad) is determined using DETERMINE_EDGE_STATUS. Determining the status of an edge may require processing some uncolored vertices of that edge. For each of them, the procedure checks whether it is troubled. If it is not, a random color is assigned to the vertex and the vertex is marked as accepted.
```
Procedure EXPAND_BAD_COMPONENT(e - bad edge, B - bad edges, U - unsafe edges):
    Q ← {e}                       // initialize set of bad edges to process
    while Q is not empty do
        f ← next edge from Q (remove it from Q)
        add f to B and if |B| > 2(Δ+1)·log(m) then FAIL
        for g ∈ N(f) which are not explored do
            mark g as explored and DETERMINE_EDGE_STATUS(g)
            if g is bad then add g to Q
            if g is unsafe then add g to U

Procedure EXPAND_VIA_UNSAFE(f - unsafe edge, B - bad edges, U - unsafe edges):
    for g ∈ N(f) which are not explored do
        DETERMINE_EDGE_STATUS(g)
        if g is bad then
            mark g as explored and run EXPAND_BAD_COMPONENT(g, B, U)

Procedure DETERMINE_EDGE_STATUS(g - edge):
    foreach w in g that is uncolored, unless g becomes safe, do
        if some edge containing w (including g) is bad then mark w as troubled
        else assign a random color to w and mark it as accepted
    count accepted vertices and check their colors to determine the status of g
```
**Algorithm 3** Subprocedures for the final component construction
During the expansion through unsafe edges we keep a set \(U\) of unprocessed unsafe edges that intersect any edge of \(B\). As long as \(U\) is not empty, we pick any unsafe edge \(f\) from \(U\) and process it by EXPAND_VIA_UNSAFE. Here we determine the statuses of all edges adjacent to \(f\), and if we encounter a bad edge which is not in \(B\), then we add it and expand the bad-component containing it. For technical convenience, during bad-component expansion we collect non-explored adjacent unsafe edges and add them to \(U\).
#### color_final_component
The final component \(\mathcal{C}\) is extended with the unsafe edges that intersect it. Then it is restricted to the set of its troubled vertices. The resulting hypergraph is denoted by \(\mathcal{C}^{\prime}\). The algorithm tries to find a proper coloring of \(\mathcal{C}^{\prime}\) using the Resample procedure. To ensure polylogarithmic time, it is run only for a limited number of resampling steps. To decrease the probability of a failure, the procedure may be restarted a few times. When a proper coloring is found, each vertex of \(\mathcal{C}^{\prime}\) is marked as accepted. From now on, all edges of \(\mathcal{C}\) are treated as safe. However, if all trials were unsuccessful, the procedure declares a failure.
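A hedged Python sketch of this step, assuming the final component is handed over as its set of edges (bad edges plus the intersecting unsafe edges) together with the set of troubled vertices; the step bound and the number of restarts are parameters left to the caller:

```python
import random

def color_final_component(component_edges, troubled, max_steps, retries,
                          rng=random.Random(0)):
    """component_edges: edges (sets of vertices) of the final component,
    already extended with the intersecting unsafe edges; troubled: set of
    troubled vertices.  Returns a dict vertex -> color, or None on failure."""
    # restrict every edge to its troubled vertices (footnote 5)
    restricted = [e & troubled for e in component_edges]
    restricted = [e for e in restricted if e]
    for _ in range(retries):                 # restarts reduce failure probability
        color = {v: rng.randrange(2) for v in troubled}
        for _ in range(max_steps):           # bounded number of resampling steps
            bad = [e for e in restricted if len({color[v] for v in e}) == 1]
            if not bad:
                return color                  # troubled vertices can be accepted
            for v in bad[0]:
                color[v] = rng.randrange(2)
    return None                               # the procedure declares a failure
```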
## 4 Main result - algorithm
We show how to improve the base procedure described in the previous section to obtain an algorithm that can be used to prove Theorem 1, that is, an algorithm that works in polylogarithmic time per query on input hypergraphs that satisfy the strengthened LLL condition (1) for \(\alpha<1/3\). Actually, our procedure can be used to find a proper coloring also for instances that satisfy that condition with any \(\alpha\in(0,1)\), but the running time is not guaranteed for \(\alpha\geq 1/3\). We start by introducing the main ideas behind the improvement of the algorithm and describe its global version. Then, we discuss how to adapt it to the model of local computation algorithms, and finally we present a description of the LCA procedure. The analysis of the algorithm is placed in Appendix C.
### A general idea
It is a common approach in randomized coloring algorithms to start from an initial random coloring and then make some correction to convert it to a proper one (like in Resample [19] or in Alon's parallel algorithm [2]). This is not the case of Beck's procedure, in which a proper coloring is constructed incrementally, but coloring of some vertices (those marked as troubled) is postponed to the later phase. Our approach lies somewhere in between. We generally try to follow the latter one, but we sample colors for the troubled vertices already in the shattering phase. Such colors are considered as _proposed_, and we reserve the possibility of changing them in the final coloring phase. We use the information about the proposed colors to shrink the area that will be processed in the final coloring phase. In particular, if we look at the colors proposed for troubled vertices, then only those final components that contain a monochromatic edge require recoloring. Moreover, if we carefully track dependencies between bad-components (see Definition 2), it is also possible to decrease the sizes of the
final components. We explain this idea in more detail in the following subsections.
#### Activation of bad-components
Imagine that all the vertices were colored in the shattering phase and we want to determine the final components. We look at the component-hypergraph (see Definition 3) and have to decide which of the bad-components should be recolored. We start from bad-components that are intersected by monochromatic edges - we mark them as _initially active_ and treat them as seeds of final components. The remaining ones are currently _inactive_. Our intention is to recolor only active components in the final coloring phase. Note that it might not be sufficient to alter the coloring in a way that makes initially active components properly colored, because after their recoloring, it is possible that some unsafe edge which got both colors in the shattering phase becomes monochromatic. That is why the activation has to be propagated. We use the following rule:
* Let \(A_{t}\) be the set of troubled vertices that are covered by active bad-components, and \(f\) be an unsafe edge that intersects \(A_{t}\). If \(f\setminus A_{t}\) is monochromatic, then all inactive bad-components that intersect \(f\) become active and all bad-components that intersect \(f\) are merged into one (eventually final) component.
The above propagation rule is applied as long as possible. When it stops, it is guaranteed that all monochromatic edges are inside active components and all unsafe and bad edges outside of active components are properly colored by the vertices that are outside of active bad-components. In particular, we can accept all the colors proposed for inactive vertices.
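A minimal Python sketch of this propagation loop (the edge-trimming refinement of the next subsection is omitted, and so is the merging of bad-components into final components); the data layout, with bad-components given as sets of their troubled vertices and a single color map covering accepted and proposed colors, is an illustrative assumption:

```python
def propagate_activation(bad_comps, unsafe_edges, color, initially_active):
    """bad_comps: list of vertex sets (troubled vertices of each bad-component);
    unsafe_edges: list of vertex sets; color: dict vertex -> 0/1 covering every
    vertex of the hypergraph; initially_active: indices of bad-components
    intersected by a monochromatic edge.  Returns the set of active indices."""
    active = set(initially_active)
    changed = True
    while changed:
        changed = False
        # troubled vertices currently covered by active bad-components (A_t)
        a_t = set().union(*(bad_comps[i] for i in active)) if active else set()
        for f in unsafe_edges:
            rest = f - a_t
            if not (f & a_t) or not rest:
                continue
            if len({color[v] for v in rest}) == 1:      # f \ A_t is monochromatic
                hit = {i for i, comp in enumerate(bad_comps) if comp & f}
                if not hit <= active:
                    active |= hit                        # activate (and merge) them
                    changed = True
    return active
```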
#### Edge trimming
We employ an additional technique, which can further reduce the area of the final components. Observe that, in order to guarantee two-colorability of the final components, it is enough to ensure that each edge has at least \(\alpha k\) vertices to recolor inside one final component. It means that if some active component already contains \(\alpha k\) troubled vertices of some edge, then it is not necessary to propagate activation through that edge. Thus, we can improve the propagation rule in the following way. Consider an unsafe edge \(f\) for which \(f\setminus A_{t}\) is monochromatic (recall that \(A_{t}\) denotes the set of currently active troubled vertices). If some active component contains at least \(\alpha k\) troubled vertices of \(f\), then \(f\) is trimmed to that active component. Otherwise, all bad-components intersected by \(f\) are activated and merged into one component (as described in the previous section).
We point out that the direct inspiration for this technique came from the work of Czumaj and Scheideler [7] in which the edge trimming is actively used during the construction of the area to be recolored. One of the consequences of using it is that the shapes of the final components depend on the specific order in which activation is propagated.
### Global coloring procedure
Similarly to the base algorithm from Section 3.1, the improved procedure performs the shattering phase and then the final coloring phase. The former is modified according to the ideas described in the previous subsection. In particular, we use the notions of _proposed_ and _accepted_ colors. Pseudocode of the whole procedure can be found in Listing 5 in Appendix A.
#### The shattering phase
The first part of the shattering phase is almost the same, except that now each vertex is colored. The procedure processes the vertices in a fixed order; each vertex is marked as _accepted_ or _troubled_ and assigned a random color. A vertex \(v\) is accepted if, at the time of processing, \(v\) does not belong to any of the bad edges. Otherwise, it is troubled. An edge becomes _bad_ when its set of accepted vertices reaches size \((1-\alpha)k\) and is still monochromatic. After processing all the vertices, _safe_ and _unsafe_ edges are determined in the same way as in the base algorithm. Additionally, by a _monochromatic_ edge we mean an edge for which all its vertices (accepted and troubled) have the same color. The colors of the accepted vertices are called _accepted colors_. The colors of the troubled vertices are called _proposed colors_. By accepting a color assigned to a vertex, we mean changing its status to accepted.
The next step involves determining the final components. We work with the component-hypergraph. We are going to mark some bad-components and unsafe edges as _active_. By an _active component_, we mean a maximal set of active bad-components which is connected in the component-hypergraph via active unsafe edges. We start with marking as active all monochromatic unsafe edges and all bad-components that are intersected by any (bad or unsafe) monochromatic edge. Let \(A_{t}\) denote the set of troubled vertices that are currently covered by active bad-components. Then, as long as there exists an inactive unsafe edge \(f\) satisfying the following conditions:
* \(f\) is monochromatic outside the active troubled area (i.e., \(f\setminus A_{t}\) is monochromatic), and
* each active component contains less than \(\alpha k\) troubled vertices of \(f\),
we activate \(f\) and activate all bad-components intersected by \(f\). When this propagation rule can no longer be applied, we accept the colors of all the troubled vertices from inactive bad-components. At that time, each active component determines a final component as the union of its bad-components. Just like in the base algorithm, the shattering phase is _successful_ if each final component contains at most \(2(\Delta+1)\log(m)\) bad edges. Otherwise, the procedure declares a failure.
#### The final coloring phase
We implement one modification at the beginning of the final coloring phase. For each final component \(\mathcal{C}\), we add to \(\mathcal{C}\) not all unsafe edges intersecting it, but only those that have at least \(\alpha k\) troubled vertices in \(V(\mathcal{C})\). Then, we proceed exactly as in the base algorithm: we restrict \(\mathcal{C}\) to the troubled vertices and apply Resample.
### Ideas behind LCA realization
In the base case, the conversion of the global algorithm to an LCA is straightforward. In fact, the LCA version determines the same area to recolor (assuming that both versions process the vertices in the same order). For the improved algorithm described in the previous subsection, the conversion to an LCA is more complex and alters the behavior of the algorithm. The main difficulty is that, for a bad-component that is not initially active, it is not easy to quickly decide whether it is going to be activated or not. There might exist a long chain of activations leading to an activation of the considered bad-component, and we do not know in which direction to search for the sources of this eventual activation. Moreover, even if we find out that it will be activated, it is not obvious what the shape of the final component containing it will be, since this requires performing activation propagation and determining the activation statuses of neighboring bad-components as well. To address these problems, when a troubled vertex of some bad-component is queried, we focus on finding an area containing that vertex
that can be recolored independently from the remaining part of the input hypergraph. It means that from the beginning of the procedure the component of that vertex is treated as active and we allow trimming unsafe edges to that component. Moreover, we use additional techniques described below to limit the expansion of the processed area in a single query.
#### Trimming to bad-component
We extend edge trimming to the case when an unsafe edge \(f\) has at least \(\alpha k\) troubled vertices in some bad-component \(S\), and the set of those vertices together with the accepted vertices of \(f\) is not monochromatic. In such a case, \(f\) can be trimmed by removing from it the troubled vertices that do not belong to \(S\). Note that we do not check here whether \(S\) is active or not. The idea behind this step is that from now on \(S\) is responsible for the proper coloring of \(f\). If at some point, the colors of the vertices of \(S\) get accepted without any resamplings, then \(f\) will be obviously colored properly. Otherwise, if \(S\) becomes active, then \(f\) will be trimmed anyway, and \(S\) has enough troubled vertices of \(f\) to not break two-colorability of \(S\).
#### Activation exclusion
The necessary condition for an inactive bad-component \(S\) to be activated is that there is an unsafe edge \(f\) whose accepted vertices and troubled vertices in \(f\cap V(S)\) are of the same color. When there is no such edge or all such edges were trimmed to other components, then \(S\) cannot be activated. Therefore if it is not initially active, it stays inactive. In such a case, we can accept all the proposed colors for the vertices of \(S\). As a result, some unsafe edges become properly colored, and we can treat them as safe. This, in turn, may enable proving that neighboring bad-components will also not be activated. The same reasoning can be applied to a set \(C\) of bad-components. If none of the bad-components in \(C\) is initially active and there are no unsafe edges intersecting some bad-component outside \(C\) that may activate bad-component from \(C\), then we can conclude that all bad-components in \(C\) remain inactive.
#### Conditional expansion
The idea described in the previous subsection can be used, for a given bad-component, to perform a search for a potential cause of activation. If \(S_{1}\) is not initially active, we inspect the unsafe edges that may cause the activation of \(S_{1}\). We can select any such \(f\), and ask whether another bad-component \(S_{2}\) intersected by \(f\) may become active. We can continue that procedure as long as there is a risk of activating any \(S_{i}\) from the group of bad-components visited so far. In the end, we either find some initially active component or we prove that all the considered bad-components cannot be activated. It turns out that, if we do not follow the edges that can be trimmed with the trimming-to-bad-component technique, then the area processed during such a search is unlikely to be large.
The possibility of finding an initially active bad-component can be used in expansion of the component to extend it by a neighboring area. For a selected bad-component adjacent to the currently constructed eventually final component, we launch a search and either we find some monochromatic edge (initially active component) and extend the component with the whole searched area, or convince ourselves that this area cannot be activated. In the latter case we can simply accept the proposed colors in that area. In the former we can perform the expansion because the occurrence of a monochromatic edge, as an unlikely event, in a sense amortizes the expansion of the component. In fact, we can stop the search procedure not only when we find a monochromatic edge but also in a less restrictive case when we find
an unsafe edge intersecting at least two disjoint bad edges outside the search area. This possibility follows from the technical details of the analysis.
### LCA procedure
We describe the improved LCA procedure in reference to the base algorithm presented in Section 3.2. As previously, the ordering of the vertices is constructed dynamically and is driven by the queries and the work of the algorithm. For a set of edges \(S\), by \(V_{t}(S)\) we mean all troubled vertices in \(V(S)\). For an edge \(f\), we denote by \(f|_{t}\) the set of troubled vertices of \(f\), and by \(f|_{a}\) the set of accepted vertices of \(f\).
#### query
The main procedure is almost identical to its counterpart in the base algorithm (Listing 1). The only difference is that when processing a vertex \(v\) of a bad edge, it is not only marked as troubled, but also a random color is assigned to \(v\).
#### build_final_component
This procedure is the heart of the algorithm and is substantially more complex than its analogue in the base version. It is presented in Listings 6 and 7, available in Appendix A. It also makes use of subprocedures defined earlier (see Listing 3), with one modification in DETERMINE_EDGE_STATUS - once a vertex \(w\) is marked as troubled, a random color is also assigned to \(w\). As previously, the procedure works on the line graph of \(H\) and grows a set \(B\) of bad edges that will be converted to a final component at the end of the procedure. It always starts from the bad-component containing the queried vertex \(v\), and expands it by neighboring bad-components via unsafe edges. The main change is that, while in the base algorithm each unsafe edge causes an expansion of the component, here unsafe edges are processed more carefully. Throughout the procedure we make sure that the size of \(B\) does not exceed the \(2(\Delta+1)\log(m)\) bound on the number of edges - if that happens, the procedure stops and declares a failure.
Let \(U\) be the set of unprocessed unsafe edges intersecting \(V(B)\). If some edge can be trimmed to \(V(B)\), it can be safely removed from \(U\). Thus, we may assume that each \(f\) in \(U\) has fewer than \(\alpha k\) troubled vertices in \(V(B)\). Since every unsafe edge has more than \(\alpha k\) troubled vertices, each \(f\) from \(U\) has to intersect at least one bad-component outside \(V(B)\). The procedure applies the following _extension rules_ as long as possible:
* (r1) if there exists \(f\) in \(U\) that intersects at least two disjoint bad edges outside \(B\), or
* (r2) if there exists \(f\) in \(U\) for which all the vertices of \(f\) outside of \(V_{t}(B)\) are monochromatic, then \(B\) is extended with all bad edges from the bad-components intersected by \(f\);
* (r3) if there are no edges in \(U\) that meet the conditions (r1) or (r2), but there exists \(f\) in \(U\) that has fewer than \(\alpha k\) troubled vertices outside \(V(B)\),
then call EXPAND_OR_ACCEPT procedure (described in the following subsection) for \(f\), which implements the conditional expansion technique, and extend \(B\) with the returned set of bad edges (which may happen to be empty).
Note that, when there are no edges that meet conditions (r1) or (r2), then for any remaining \(f\) from \(U\) it is guaranteed that \(f\) intersects exactly one bad-component outside \(V(B)\) and \(f\setminus V_{t}(B)\) is not monochromatic. If such \(f\) does not satisfy condition (r3), it has at least \(\alpha k\) troubled vertices in that external bad-component, so it can be trimmed to it (according to trimming to bad-component technique). Thus, \(f\) can be removed from \(U\).
After each extension rule, the processed edge is removed from \(U\). On the other hand, when \(B\) is extended, new unsafe edges may be added to \(U\), but we remove those that can
now be trimmed to \(V(B)\). Since edges which do not fulfill any of the extension rules are also removed from \(U\), finally \(U\) becomes empty and the procedure stops. At this point, \(B\) is a set of bad edges which are surrounded only by safe and trimmed unsafe edges.
#### expand_or_accept
This procedure is an implementation of the conditional expansion technique through a given unsafe edge \(e\). Similarly to BUILD_FINAL_COMPONENT, it grows a set \(A\) of bad edges, which we call a _search area_, and makes sure that its size does not exceed the \(2(\Delta+1)\log(m)\) bound (if that happens, the whole algorithm stops and declares a failure). Initially, \(A\) is empty. It is then expanded by bad-components which may lead to an initially active bad-component, starting from the unexplored bad-component intersected by \(e\). The expansion naturally stops when there are no more candidate bad-components. The procedure, however, can also stop earlier when a monochromatic edge, or an unsafe edge intersecting two disjoint unexplored bad edges, is found.
Let \(Q\) be the set of unsafe edges to be processed (initially it is empty). Let \(C\) be the set of bad edges of the currently expanded bad-component. Let \(U_{C}\) denote the set of unsafe edges intersecting \(V(C)\) but not adjacent to the edges of \(B\) and \(A\) (these are simply those unsafe edges adjacent to the edges in \(C\) that were not explored before expansion of \(C\)). The procedure extends \(A\) with all edges from \(C\), and then looks for the following _amortizing configuration_:
* (e1) if \(C\) contains monochromatic edge \(f\) then the procedure stops and returns set \(A\);
* (e2) if \(U_{C}\) contains a monochromatic edge \(f\), or
* (e3) if \(U_{C}\) contains an edge \(f\), which intersects at least two disjoint bad edges outside \(C\), then first set \(A\) is extended with all the bad edges of the bad-components intersected by \(f\), and then the procedure stops and returns \(A\).
When no such configuration is found, none of the unsafe edges in \(U_{C}\) is monochromatic and, moreover, each intersects at most one bad-component outside \(A\). We focus on the edges from \(U_{C}\) that can cause an activation of \(C\) - these are the edges whose troubled vertices in \(V(C)\) together with the accepted vertices are monochromatic. Each such edge \(f\) has to intersect exactly one external bad-component, and the troubled vertices of that component together with \(f|_{a}\) ensure a proper coloring of \(f\). If there are at least \(\alpha k\) troubled vertices of \(f\) in that external bad-component, \(f\) can be trimmed to it (according to the technique of trimming to bad-component). That is why we add to \(Q\) only those edges from \(U_{C}\) that may cause an activation of \(C\) and have fewer than \(\alpha k\) troubled vertices outside of \(V(C)\).
When the processing of \(C\) is finished, we pick any edge from \(Q\) (the set of unsafe edges to be processed) and repeat the above steps for the external bad-component intersected by the selected edge. It may happen that this component has already been added to \(A\); in such a case the procedure continues picking edges from \(Q\). When the procedure finishes without encountering an amortizing configuration, there are no monochromatic edges in \(A\) and all unsafe edges intersecting \(V(A)\) are either properly colored by the colors of the accepted vertices and the vertices from \(V_{t}(A)\), or are trimmed to bad-components outside it. Thus, an activation of the whole of \(A\) is excluded. Then we mark all vertices in \(V_{t}(A)\) as accepted and treat edges properly colored by their colors as safe. In that case, the procedure returns the empty set.
Note that during this procedure we do not apply edge trimming to \(V(A)\) when it covers at least \(\alpha k\) troubled vertices of some unsafe edge, since this can result in a false activation (in case the edge is monochromatic inside \(V(A)\)). We also ignore all unsafe edges intersecting \(V(B)\) (they were explored before the call to EXPAND_OR_ACCEPT) since, not satisfying (r1) and (r2), they cannot be used in an amortizing configuration or cause an activation (it is guaranteed that they are not monochromatic outside \(V_{t}(B)\)).
#### color_final_component
The last procedure is almost identical to its counterpart in the base algorithm (Listing 4). Recall that the only change is at the beginning of the procedure. Instead of extending \(\mathcal{C}\) with all unsafe edges intersecting it, only those unsafe edges that have at least \(\alpha k\) troubled vertices in \(V(\mathcal{C})\) are added. Then we proceed as in the base algorithm.
|
2307.12283 | Reconciling experimental and lattice data of $Z_c(3900)$ in a
$J/ψπ$-$D\bar{D}^*$ coupled-channel analysis | We study the $J/\psi \pi$ and $D\bar{D}^*$ coupled-channel system within a
covariant framework. The $J/\psi \pi$ and $D\bar{D}^*$ invariant-mass
distributions measured at 4.23~GeV and 4.26~GeV by BESIII and the finite-volume
energy levels from recent lattice QCD simulations are simultaneously fitted.
Phase shifts and inelasticities of the $J/\psi \pi$ and $D\bar{D}^*$ scattering
are predicted using the resulting amplitudes. Poles corresponding to the
$Z_c(3900)$ state are found in the complex energy plane and their couplings
with $J/\psi \pi$ and $D\bar{D}^*$ are determined. Our results indicate that
the current lattice data do not preclude the existence of a physical
$Z_c(3900)$ state. | Lin-Wan Yan, Zhi-Hui Guo, Feng-Kun Guo, De-Liang Yao, Zhi-Yong Zhou | 2023-07-23T10:25:34Z | http://arxiv.org/abs/2307.12283v1 | Reconciling experimental and lattice data of \(Z_{c}(3900)\) in a \(J/\psi\pi\)-\(D\bar{D}^{*}\) coupled-channel analysis
###### Abstract
We study the \(J/\psi\pi\) and \(D\bar{D}^{*}\) coupled-channel system within a covariant framework. The \(J/\psi\pi\) and \(D\bar{D}^{*}\) invariant-mass distributions measured at 4.23 GeV and 4.26 GeV by BESIII and the finite-volume energy levels from recent lattice QCD simulations are simultaneously fitted. Phase shifts and inelasticities of the \(J/\psi\pi\) and \(D\bar{D}^{*}\) scattering are predicted using the resulting amplitudes. Poles corresponding to the \(Z_{c}(3900)\) state are found in the complex energy plane and their couplings with \(J/\psi\pi\) and \(D\bar{D}^{*}\) are determined. Our results indicate that the current lattice data do not preclude the existence of a physical \(Z_{c}(3900)\) state.
## 1 Introduction
The first undoubted tetraquark candidate \(Z_{c}(3900)\) in the charm sector, which was observed by BESIII and Belle in the \(J/\psi\pi^{\pm}\) distributions from the \(e^{+}e^{-}\to J/\psi\pi^{+}\pi^{-}\) process at
\(\sqrt{s}=4.26\) GeV, has attracted much attention from the experimental, theoretical and lattice QCD communities since its discovery in 2013 [1; 2]. This charged charmoniumlike state was confirmed in an analysis of the CLEO-c data of \(e^{+}e^{-}\to J/\psi\pi^{+}\pi^{-}\) at \(\sqrt{s}=4.17\) GeV [3] and in the semi-inclusive decays of \(b\)-flavored hadrons with \(J/\psi\pi^{+}\pi^{-}\) in the range 4.2-4.7 GeV by D0 Collaboration [4]. The neutral partner of \(Z_{c}(3900)\) was discovered later by BESIII in the \(e^{+}e^{-}\to J/\psi\pi^{0}\pi^{0}\) process, confirming the \(Z_{c}(3900)\) as an isovector state [5]. Its spin and parity were unambiguously determined to be \(J^{P}=1^{+}\) in Ref. [6]. Interestingly similar exotic charmoniumlike \(Z_{c}(3885)\) states, both charged [7; 8] and neutral [9] ones, were observed in the \((D\bar{D}^{*})^{\pm,0}\) distributions in the \(e^{+}e^{-}\to\pi^{\pm}(D\bar{D}^{*})^{\mp}\) and \(\pi^{0}(D\bar{D}^{*})^{0}\) processes. Recent years have witnessed the intensive and interesting studies on the theoretical explanations of the intriguing \(Z_{c}\) states, including the compact tetraquark states [10; 11; 12; 13; 14], the kinematical singularities as either threshold cusp effects [15; 16; 17; 18] or triangle singularities [19; 20] and the hadronic molecules [21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33].
Although it is not definitely clear whether the two resonant structures observed in the \(J/\psi\pi\) and \(D\bar{D}^{*}\) distributions have the same origin, it is natural to assume that \(Z_{c}(3900)\) and \(Z_{c}(3885)\), which will be simply denoted as \(Z_{c}(3900)\) hereafter, correspond to the same state, due to the close values of their masses and widths [6; 7; 8; 9]. In order to concretely verify this presumption, it is necessary to perform a coupled-channel calculation to simultaneously describe the available experimental data on the \(J/\psi\pi\) and \(D\bar{D}^{*}\) distributions. It is advocated in Refs. [18; 34] that the off-diagonal \(J/\psi\pi\)-\(D\bar{D}^{*}\) interaction is mostly responsible for the resonant structure of the \(Z_{c}(3900)\) (the \(\eta_{c}\rho\)-\(D\bar{D}^{*}\) interaction is also suggested to be important in Ref. [18]). The importance of triangle singularities in producing the \(Z_{c}(3900)\) signal has been pointed out in Ref. [21] immediately after the discovery, and confirmed in Ref. [30; 33; 35]. Nevertheless, it is concluded in Refs. [21; 30; 33], which consider both triangle singularities and final state interactions with \(J/\psi\pi\)-\(D\bar{D}^{*}\) coupled-channel amplitudes, that a \(Z_{c}(3900)\) pole is still present in the best fits. It is further shown in Refs. [30; 33] that if only constant contact terms are considered in the perturbative amplitudes for the \(J/\psi\pi\)-\(D\bar{D}^{*}\) coupled channels, one can only get a virtual-state like pole for the \(Z_{c}(3900)\), _i.e._, a pole below the \(D\bar{D}^{*}\) threshold on the unphysical Riemann sheet of the complex energy plane. If a more general form of the perturbative amplitudes by including an energy-dependent term is considered, the best fit to the BESIII data leads to a resonance pole above the \(D\bar{D}^{*}\) threshold [30; 33]. Therefore, it would be important to include general terms in the \(J/\psi\pi\) and \(D\bar{D}^{*}\) coupled-channel scattering amplitudes to check the robustness.
Moreover, important progress on lattice QCD simulations of the \(Z_{c}(3900)\) has been made by several groups in Refs. [18; 36; 37; 38; 39; 40]. A weakly repulsive interaction was revealed for the \(D\bar{D}^{*}\) system in a single-channel lattice simulation [36]. Later on, an improved treatment considering the coupled-channel \(J/\psi\pi\) and \(D\bar{D}^{*}\) scattering was achieved by the CLQCD Collaboration [40]. Several precise finite-volume energy levels in the center-of-mass (CM) frame were obtained. By using the coupled-channel Luscher formalism to fit the energy levels, no resonant peaks were found near the \(D\bar{D}^{*}\) threshold in the \(I^{G}(J^{PC})=1^{+}(1^{+-})\) channel [40]. Similar conclusions were also obtained earlier in Ref. [37] and later by the Hadron Spectrum Collaboration (HSC) [39]. The HALQCD collaboration performed a three-channel \((J/\psi\pi,\,\rho\eta_{c}\) and \(D\bar{D}^{*})\) simulation, and found that the \(Z_{c}(3900)\) could correspond to a threshold cusp [18; 38].1
Footnote 1: A pole was found far away from the \(D\bar{D}^{*}\) threshold [38].
One of the novelties in the present work is to perform a joint analysis of the experimental \(J/\psi\pi\) and \(D\bar{D}^{*}\) distributions and the lattice finite-volume energy levels resulting from
the coupled-channel simulations with \(J/\psi\pi\) and \(D\bar{D}^{*}\) in Refs. [39, 40], within the unitarized partial-wave amplitude approach. In this way we expect to tightly constrain the unknown parameters in the \(J/\psi\pi\) and \(D\bar{D}^{*}\) scattering amplitudes and draw more definite conclusions on the properties of the \(Z_{c}(3900)\).
In this work we use the effective Lagrangian approach to calculate the relevant perturbative amplitudes. Different from the nonrelativistic treatment of the amplitudes [30, 33, 41], we utilize the manifestly relativistic formalism to perform the partial-wave projections of the coupled \(J/\psi\pi\) and \(D\bar{D}^{*}\) scattering amplitudes. By taking into account the unitarity conditions, the partial-wave amplitudes are employed to construct the unitarized \(T\) matrix, which then provides the key input to analyze the experimental \(J/\psi\pi\) and \(D\bar{D}^{*}\) event distributions and the lattice finite-volume energy levels.
This article is organized as follows. In Sec. 2, we introduce the theoretical formalism of the covariant partial-wave amplitude and its unitarization. The inclusion of the finite-volume effects in the unitarized amplitude is also elaborated. In Sec. 3, we perform the fits to the experimental event distributions and the lattice discrete energies. The resulting resonance poles, their couplings and the compositeness are then discussed in detail in Sec. 4. Finally in Sec. 5 we give a short summary and conclusions.
## 2 Covariant partial-wave amplitude and its unitarization
The effective Lagrangian that includes the contact \(D\bar{D}^{*}\to D\bar{D}^{*}\) vertex reads [42]
\[\mathcal{L}_{D\bar{D}^{*}D\bar{D}^{*}}= -C_{0a}(D^{\dagger}D+D_{\mu}^{*\dagger}D^{*\mu})(\bar{D}\bar{D}^{ \dagger}+\bar{D}_{\mu}^{*}\bar{D}^{*\mu\dagger})\] \[+C_{0b}(D^{\dagger}D_{\mu}^{*}+D_{\mu}^{*\dagger}D)(\bar{D}^{* \mu}\bar{D}^{\dagger}+\bar{D}\bar{D}^{*\mu\dagger})\,, \tag{1}\]
with
\[D=\begin{pmatrix}D^{+}\\ D^{0}\end{pmatrix}\,,\quad D^{\dagger}=(D^{+,\dagger},\,D^{0,\dagger})\,,\quad \bar{D}=(\bar{D}^{0},\,D^{-})\,,\quad\bar{D}^{\dagger}=\begin{pmatrix}\bar{D} ^{0,\dagger}\\ D^{-,\dagger}\end{pmatrix}\,. \tag{2}\]
The flavor contents for the vector charmed mesons \(D^{*}\) and \(\bar{D}^{*}\) are the same as those of the \(D\) and \(\bar{D}\), respectively. Since we only focus on the \(Z_{c}(3900)\) channel with definite quantum numbers \(I^{G}(J^{PC})=1^{+}(1^{+-})\), only one linear combination of \(C_{0a}\) and \(C_{0b}\) in Eq. (1) will be relevant [42, 22, 43], which will be simply denoted as \(\hat{\lambda}_{1}\).
We include the following effective operators to describe the interactions between the \(J/\psi\pi\) and \(D\bar{D}^{*}\) states [31]
\[\mathcal{L}_{D\bar{D}^{*}J/\psi\pi}= \hat{\lambda}_{2}\psi_{\mu}(\nabla^{\mu}D^{\dagger}u_{\nu}\bar{D} ^{*\nu\dagger}+\bar{D}^{*\nu}u_{\nu}\nabla^{\mu}D)+\hat{\lambda}_{3}\psi_{\mu }(\nabla^{\nu}D^{\dagger}u^{\mu}\bar{D}^{*\dagger}_{\nu}+\bar{D}^{*}_{\nu}u^{ \mu}\nabla^{\nu}D)\] \[+\hat{\lambda}_{4}\psi_{\mu}(\nabla^{\nu}D^{\dagger}u_{\nu}\bar{ D}^{*\mu\dagger}+\bar{D}^{*\mu}u_{\nu}\nabla^{\nu}D)+\hat{\lambda}_{5}\psi_{\mu }(D^{\dagger}\nabla^{\mu}u^{\nu}\bar{D}^{*\dagger}_{\nu}+\bar{D}^{*}_{\nu} \nabla^{\mu}u^{\nu}D)\,, \tag{3}\]
with
\[\nabla_{\nu}D^{\dagger}=D^{\dagger}(\overset{\leftarrow}{\partial _{\nu}}+\Gamma_{\nu})\,\quad\nabla_{\nu}\bar{D}^{\dagger}=(\partial_{\nu}+\Gamma_{\nu}^{ \dagger})\bar{D}^{\dagger}\,,\quad\Gamma_{\nu}=\frac{1}{2}\bigg{(}u^{\dagger} \partial_{\nu}u+u\partial_{\nu}u^{\dagger}\bigg{)}\,,\] \[u=e^{\frac{i\,\Phi}{\sqrt{2F}}}\,,\quad u_{\nu}=i(u^{\dagger} \partial_{\nu}u\,-\,u\partial_{\nu}u^{\dagger})\,,\quad\Phi=\begin{pmatrix} \frac{1}{\sqrt{2}}\pi^{0}&\pi^{+}\\ \pi^{-}&-\frac{1}{\sqrt{2}}\pi^{0}\end{pmatrix}\,. \tag{4}\]
Next, it is straightforward to calculate the scattering amplitudes in the physical bases using the Lagrangians (1) and (3), and then transform them into the amplitudes with the proper isospin and \(J^{PC}\) quantum numbers that are consistent with the \(Z_{c}(3900)\) state. In practice, since only the \(Z_{c}(3900)\) channel with isospin one and \(J^{PC}=1^{+-}\) is considered in the present work, we will simply write the amplitudes with definite isospin and \(J^{PC}\) in terms of \(\lambda_{i=1,\cdots,5}\), which are proportional, in order, to the parameter \(\hat{\lambda}_{1}\) introduced above and to \(\hat{\lambda}_{i=2,\cdots,5}\) in Eq. (3).
The \(\bar{D}^{*}(a)\,D(b)\to\bar{D}^{*}(c)\,D(d)\) amplitude is given by
\[V_{\bar{D}^{*}D\to\bar{D}^{*}D}=\lambda_{1}\varepsilon_{c}^{\dagger}\cdot \varepsilon_{a}\,. \tag{5}\]
The \(J/\psi(a)\,\pi(b)\to\bar{D}^{*}(c)\,D(d)\) transition amplitude takes the form
\[V_{J/\psi\pi\to\bar{D}^{*}D}=\frac{\sqrt{2}}{F_{\pi}}\bigg{(} \lambda_{2}\,\varepsilon_{a}\cdot p_{d}\,\varepsilon_{c}^{\dagger}\cdot p_{b} +\lambda_{3}\,\varepsilon_{a}\cdot p_{b}\,\varepsilon_{c}^{\dagger}\cdot p_{d }+\lambda_{4}\,\varepsilon_{c}^{\dagger}\cdot\varepsilon_{a}\,p_{b}\cdot p_{d }+\lambda_{5}\,\varepsilon_{a}\cdot p_{b}\,\varepsilon_{c}^{\dagger}\cdot p_{b }\bigg{)}\,. \tag{6}\]
In order to implement the unitarity conditions, it is convenient to work with the partial-wave amplitudes. For the scattering processes involving particles with spins, both the \(\ell S\) and helicity bases can be used to perform the partial-wave projections. Although in general cases the two approaches are equivalent, the \(\ell S\) basis is more suitable for the present study. This is because the \(S\)-wave interaction of the \(D\bar{D}^{*}\) should be the dominant one in the molecular description of the \(Z_{c}(3900)\), while the \(D\)-wave part should be much suppressed, at least in the focused energy region around the \(D\bar{D}^{*}\) threshold. We follow Ref. [44] to perform the partial-wave projections in a covariant manner, which improves the nonrelativistic descriptions adopted in Refs. [30, 33, 41]. The covariant approach of the partial-wave projections will automatically introduce certain energy dependent terms to the scattering amplitudes from the polarization vectors, which are of higher orders in the nonrelativistic expansion, without including additional free parameters. It is pointed out in Ref. [30] that the energy dependence in the interacting kernel is crucial for generating resonance poles of the \(Z_{c}(3900)\) on the Riemann sheet of the complex energy plane close to the physical region. Therefore, we consider that the covariant improvement of the scattering amplitudes should play relevant roles to get further insights into the properties of the \(Z_{c}(3900)\) despite that the energy dependence is not complete at a given order in the nonrelativistic expansion. We give details of the partial-wave projections in Appendix 5. We will focus on the \(S\)-wave scattering and the superscript \(J=0\) will be omitted throughout for simplicity.
The expressions of the \(S\)-wave \(J/\psi\pi\) (labeled as channel 1) and \(\bar{D}^{*}D\) (labeled as channel 2) scattering amplitudes from Eqs. (5) and (6) take the form
\[V_{11}(s)= \,0\,,\] \[V_{12}(s)= \,\frac{\sqrt{2}}{9F_{\pi}M_{D^{*}}M_{J/\psi}}\bigg{\{}\lambda_{2 }\big{[}q_{2}^{2}(2M_{J/\psi}+E_{J/\psi})E_{\pi}+q_{1}^{2}(2M_{D^{*}}+E_{D^{*} })E_{D}\big{]}\] \[-\lambda_{4}\big{[}q_{1}^{2}q_{2}^{2}+(2M_{J/\psi}+E_{J/\psi})(2M _{D^{*}}+E_{D^{*}})E_{\pi}E_{D}\big{]}+\lambda_{5}\big{[}q_{1}^{2}\sqrt{s}(2 M_{D^{*}}+E_{D^{*}})\big{]}\bigg{\}}\,,\] \[V_{22}(s)= -\frac{\lambda_{1}}{9M_{D^{*}}^{2}}(2M_{D^{*}}+E_{D^{*}})^{2}\,, \tag{7}\]
where the explicit forms of the three-momenta \(q_{i}\) and the energies \(E_{i}\) in the CM frame are
\[q_{1}(s) =\frac{\sqrt{[s-(M_{J/\psi}+M_{\pi})^{2}][s-(M_{J/\psi}-M_{\pi})^{2}]}}{2\sqrt{s}}\,,\] \[q_{2}(s) =\frac{\sqrt{[s-(M_{D^{*}}+M_{D})^{2}][s-(M_{D^{*}}-M_{D})^{2}]}}{2\sqrt{s}}\,,\] \[E_{J/\psi}(s) =\frac{s+M_{J/\psi}^{2}-M_{\pi}^{2}}{2\sqrt{s}}\,, E_{\pi}(s) =\frac{s+M_{\pi}^{2}-M_{J/\psi}^{2}}{2\sqrt{s}}\,,\] \[E_{D^{*}}(s) =\frac{s+M_{D^{*}}^{2}-M_{D}^{2}}{2\sqrt{s}}\,, E_{D}(s) =\frac{s+M_{D}^{2}-M_{D^{*}}^{2}}{2\sqrt{s}}\,. \tag{8}\]
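For concreteness, the kinematical quantities of Eq. (8) and the \(S\)-wave kernel of Eq. (7) can be evaluated numerically as in the minimal Python sketch below. The mass values, the pion decay constant, and all function names are illustrative placeholders, not the code used for the actual fits.

```python
import numpy as np

# Placeholder hadron masses and pion decay constant in GeV (replace by the values used in the fits).
M_JPSI, M_PI, M_DSTAR, M_D, F_PI = 3.0969, 0.1396, 2.0103, 1.8648, 0.0924

def kallen(a, b, c):
    """Kallen triangle function lambda(a, b, c)."""
    return a * a + b * b + c * c - 2 * a * b - 2 * b * c - 2 * a * c

def q_sq(s, m1, m2):
    """Squared CM three-momentum q_i(s)^2 of Eq. (8); real for all s."""
    return kallen(s, m1 ** 2, m2 ** 2) / (4.0 * s)

def energy(s, m1, m2):
    """CM energy of the particle with mass m1 in the (m1, m2) channel, Eq. (8)."""
    return (s + m1 ** 2 - m2 ** 2) / (2.0 * np.sqrt(s))

def n_matrix(s, lam1, lam2t, lam4t, lam5t):
    """S-wave kernel N(s) of Eqs. (7) and (10); lam2t, lam4t, lam5t are the
    dimensionless couplings of Eq. (24), i.e. lambda_i = lam_it * M_D."""
    lam2, lam4, lam5 = lam2t * M_D, lam4t * M_D, lam5t * M_D
    q1sq, q2sq = q_sq(s, M_JPSI, M_PI), q_sq(s, M_DSTAR, M_D)
    e_jpsi, e_pi = energy(s, M_JPSI, M_PI), energy(s, M_PI, M_JPSI)
    e_dstar, e_d = energy(s, M_DSTAR, M_D), energy(s, M_D, M_DSTAR)
    v11 = 0.0
    v12 = (np.sqrt(2.0) / (9.0 * F_PI * M_DSTAR * M_JPSI)) * (
        lam2 * (q2sq * (2 * M_JPSI + e_jpsi) * e_pi + q1sq * (2 * M_DSTAR + e_dstar) * e_d)
        - lam4 * (q1sq * q2sq + (2 * M_JPSI + e_jpsi) * (2 * M_DSTAR + e_dstar) * e_pi * e_d)
        + lam5 * q1sq * np.sqrt(s) * (2 * M_DSTAR + e_dstar))
    v22 = -(lam1 / (9.0 * M_DSTAR ** 2)) * (2 * M_DSTAR + e_dstar) ** 2
    return np.array([[v11, v12], [v12, v22]])
```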
Our choice to set \(V_{11}=0\), _i.e._, assuming that the perturbative \(J/\psi\pi\to J/\psi\pi\) transition amplitude is negligibly small, is consistent with the tiny scattering length of the \(J/\psi\pi\) interaction found in Refs. [45, 46, 47]. It is noted that the \(\hat{\lambda}_{3}\) term in Eq. (3) does not contribute to the \(S\)-wave amplitude.
The on-shell unitary partial-wave two-body scattering amplitudes can be written as
\[T(s)=[1-N(s)\cdot G(s)]^{-1}\cdot N(s)\,, \tag{9}\]
where in the coupled-channel scattering case \(T(s)\), \(N(s)\) and \(G(s)\) should be understood as matrices spanned in the scattering-channel space. The matrix \(N(s)\) here takes the form
\[N(s)=\begin{pmatrix}V_{11}(s)&V_{12}(s)\\ V_{12}(s)&V_{22}(s)\end{pmatrix}\,, \tag{10}\]
where the matrix elements are given in Eq. (7). The \(G(s)\) matrix responsible for the right-hand cut is diagonal, \(G(s)=\mathrm{diag}(G_{1}(s),G_{2}(s))\). The \(s\)-channel unitarity determines the imaginary part of \(G_{i}(s)\) as
\[\mathrm{Im}\,G_{i}(s)=\frac{q_{i}(s)}{8\pi\,\sqrt{s}}\,,\qquad(s>s_{\mathrm{ th},i})\,, \tag{11}\]
where \(s_{\mathrm{th},i}\) denotes the threshold of the \(i\)-th channel. Evaluating the \(G_{i}(s)\) function by using the once-subtracted dispersion relation or dimensional regularization, one obtains [48]
\[G_{i}(s) =-\frac{1}{16\pi^{2}}\left[a_{\mathrm{SC},i}(\mu)+\log\frac{m_{2} ^{2}}{\mu^{2}}-x_{+}\log\frac{x_{+}-1}{x_{+}}-x_{-}\log\frac{x_{-}-1}{x_{-}} \right]\,,\] \[x_{\pm} =\frac{s+m_{1}^{2}-m_{2}^{2}}{2s}\pm\frac{q(s)}{\sqrt{s}}\,, \tag{12}\]
where \(m_{1}\) and \(m_{2}\) are the masses of the two particles in the considered channel, \(\mu\) is an energy scale and \(a_{\mathrm{SC}}(\mu)\) is the subtraction constant. The \(G_{i}(s)\) function has only one free parameter--a change of \(\mu\) can be absorbed by a corresponding change of \(a_{\mathrm{SC},i}(\mu)\). To be specific, we shall take \(\mu=770\) MeV throughout.
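A minimal numerical sketch of the loop function of Eq. (12) and of the unitarization of Eq. (9) is given below. The function names are illustrative; for real \(s\) above threshold a small positive imaginary part should be added to \(s\) to select the physical values.

```python
import numpy as np

def g_loop(s, m1, m2, a_sc, mu=0.770):
    """Once-subtracted loop function G_i(s) of Eq. (12) on the first Riemann sheet.
    s may be complex; a_sc is the subtraction constant a_SC,i(mu) with mu = 770 MeV.
    For real s above threshold, evaluate at s + i*epsilon to stay on the physical side."""
    s = complex(s)
    q = np.sqrt((s - (m1 + m2) ** 2) * (s - (m1 - m2) ** 2)) / (2.0 * np.sqrt(s))
    xp = (s + m1 ** 2 - m2 ** 2) / (2.0 * s) + q / np.sqrt(s)
    xm = (s + m1 ** 2 - m2 ** 2) / (2.0 * s) - q / np.sqrt(s)
    return -(a_sc + np.log(m2 ** 2 / mu ** 2)
             - xp * np.log((xp - 1.0) / xp)
             - xm * np.log((xm - 1.0) / xm)) / (16.0 * np.pi ** 2)

def t_matrix(n_of_s, g_diag):
    """Unitarized amplitude T = [1 - N G]^{-1} N of Eq. (9) at a single value of s.
    n_of_s is the 2x2 kernel of Eq. (10); g_diag holds the two G_i(s) values."""
    n = np.asarray(n_of_s, dtype=complex)
    g = np.diag(np.asarray(g_diag, dtype=complex))
    return np.linalg.solve(np.eye(2) - n @ g, n)
```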
To describe the experimental event distributions, one needs to further consider the production amplitudes, which should incorporate the final-state interactions. Consistent with the
recipe of the construction of the unitarized scattering amplitude in Eq. (9), a similar two-body production formula
\[\mathcal{P}(s)=\begin{pmatrix}P_{1}(s)\\ P_{2}(s)\end{pmatrix}=\left[1-N(s)\cdot G(s)\right]^{-1}\cdot\alpha\,, \tag{13}\]
with constant production vertices
\[\alpha=\begin{pmatrix}\alpha_{1}\\ \alpha_{2}\end{pmatrix}\,, \tag{14}\]
has been demonstrated to be able to successfully describe various event distributions [49, 50, 51, 52, 53, 54]. To be more specific, the functions \(P_{1}(s)\) and \(P_{2}(s)\) describe the \(J/\psi\pi\) and \(D\bar{D}^{*}\) production amplitudes, respectively. In this way, once the unknown parameters in the production amplitudes (13) are determined through fitting to data, the unitarized scattering amplitudes of Eq. (9) will be totally fixed. The resonance information of the \(Z_{c}(3900)\), including its pole positions and coupling strengths to the considered channels, can then be extracted from the unitarized scattering amplitudes. In fact, all the unknown parameters describing the discrete lattice energy levels that will be discussed later also appear in the production amplitudes in Eq. (13). Therefore, this enables us to perform a joint fit to both the experimental and lattice data.
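The production vector of Eq. (13) involves the same matrix inversion; a minimal sketch (again in illustrative notation, not the fit code) reads:

```python
import numpy as np

def production_amplitude(n_of_s, g_diag, alpha):
    """P(s) = [1 - N(s) G(s)]^{-1} alpha, Eq. (13), with constant vertices alpha = (alpha_1, alpha_2).
    In the fits described below alpha_1 is set to zero, so alpha = (0.0, alpha_2)."""
    n = np.asarray(n_of_s, dtype=complex)
    g = np.diag(np.asarray(g_diag, dtype=complex))
    return np.linalg.solve(np.eye(2) - n @ g, np.asarray(alpha, dtype=complex))
```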
The experimental event distributions of the \(J/\psi\pi\) and \(D\bar{D}^{*}\) channels are projected from the three-body decays \(Y\to J/\psi\pi\pi\) and \(D\bar{D}^{*}\pi\), respectively. To account for the strong coupled-channel final-state interactions, we use the production amplitudes in Eq. (13) to construct the full decay amplitude for the \(Y\to J/\psi\pi\pi\) process
\[M_{1}(s,t)=\epsilon_{Y}^{\dagger}\cdot\epsilon_{J/\psi}\big{[}P_{1}(s)+P_{1}(t )\big{]}\,, \tag{15}\]
where \(\epsilon_{Y}^{\dagger}\) and \(\epsilon_{J/\psi}\) stand for the polarization vectors, \(s\) and \(t\) correspond to the invariant energy squared of the \(J/\psi\pi\) systems. To reasonably reproduce the experimental \(J/\psi\pi\) line shapes, it is necessary to include the background contributions, such as the effects from the crossed \(\pi\pi\) channels [31, 35, 55]. Following the recipes from Refs. [1, 30], we parameterize the background effects in the \(J/\psi\pi\) event distribution as
\[B_{1}=b_{1}[(\sqrt{s}-M_{J/\psi}-M_{\pi})(M_{Y}-M_{\pi}-\sqrt{s})]^{c_{1}}\,, \tag{16}\]
where the unknown parameters \(b_{1}\) and \(c_{1}\) will be determined by data. For the \(Y\to D\bar{D}^{*}\pi\) process, the experimental double-\(D\) tag data indicate that the background contributions to the \(D\bar{D}^{*}\) invariant-mass distributions are tiny [8]. We therefore simply subtract the background events determined in the BESIII analysis from the \(D\bar{D}^{*}\) event distribution, and then fit the resulting \(D\bar{D}^{*}\) event distribution with a vanishing background term \(B_{2}(s)=0\). The \(Y\to D\bar{D}^{*}\pi\) decay amplitude reads
\[M_{2}(s,t)=\epsilon_{Y}^{\dagger}\cdot\epsilon_{D^{*}}P_{2}(s)\,, \tag{17}\]
with \(s\) and \(t\) the energy squared of the \(D\bar{D}^{*}\) and \(D\pi\) systems, respectively. The experimental event distributions are fitted using
\[\frac{dN_{i}}{d\sqrt{s}}=A_{i}(s)+B_{i}(s)\,, \tag{18}\]
with
\[A_{i}(s)=\int_{t_{i,-}}^{t_{i,+}}\frac{1}{(2\pi)^{3}}\frac{1}{32M_{Y} ^{3}}\,\frac{1}{3}\sum_{spins}\big{|}M_{i}(s,t)\big{|}^{2}dt\,, \tag{19}\]
where \(t_{i,-}\) and \(t_{i,+}\) stand for the kinematic boundaries of the \(i\)-th process, being \(i=1\) for \(Y\to J/\psi\pi\pi\) and \(i=2\) for \(Y\to D\bar{D}^{*}\pi\).
In order to describe the lattice energy levels, we need to put the unitarized amplitudes into a finite volume. We will utilize the method proposed in Refs. [56, 57], which has been successfully applied to fit the lattice energy levels in many processes, such as the \(\pi\eta\)-\(K\bar{K}\)-\(\pi\eta^{\prime}\) coupled-channel scattering process [52] and the \(D\pi\)-\(D\eta\)-\(D_{s}\bar{K}\) coupled-channel scattering [58]. In this formalism, the finite-volume effects are introduced via the \(G(s)\) function, while finite-volume corrections to the tree-level partial-wave amplitudes in Eq. (7), happening at shorter distances, are exponentially suppressed and thus neglected. For a cubic box of length \(L\) with periodical boundary conditions, the finite-volume correction of the \(G(s)\) function in the CM frame is given by [52, 56, 57, 58]
\[\Delta G(s) = \frac{1}{L^{3}}\sum_{\vec{n}}^{|\vec{q}|<q_{\rm max}}I(|\vec{q}|) -\int^{|\vec{q}|<q_{\rm max}}\frac{{\rm d}^{3}\vec{q}}{(2\pi)^{3}}I(|\vec{q}| )\,, \tag{20}\]
with
\[\vec{q}=\frac{2\pi}{L}\vec{n},\,(\vec{n}\in\mathbb{Z}^{3})\,, \quad I(|\vec{q}|)=\frac{\omega_{1}+\omega_{2}}{2\omega_{1}\omega_{2}\left[s -(\omega_{1}+\omega_{2})^{2}\right]}\,,\quad\omega_{i}=\sqrt{|\vec{q}|^{2}+m_ {i}^{2}}\,. \tag{21}\]
We introduce the three-momentum cutoff \(q_{\rm max}\) to make the discrete sum and the continuous integral convergent in Eq. (20). Since the finite-volume corrections are of long distance nature, \(\Delta G(s)\) depends little on the ultraviolet regulator, which is the hard cutoff \(q_{\rm max}\), as has been explicitly demonstrated in, _e.g._, Ref. [52]. Then in the finite-volume study the function \(G(s)\) in Eq. (9) should be replaced by
\[\widetilde{G}(s)=G(s)+\Delta G(s)\,, \tag{22}\]
where \(G(s)\) and \(\Delta G(s)\) are given in Eqs. (12) and (20), respectively. The finite-volume spectra can be obtained by solving
\[\det\left[1-N(s)\cdot\widetilde{G}(s)\right]=0\,. \tag{23}\]
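A minimal sketch of Eqs. (20)-(23) is given below. It is meant only to illustrate the structure of the finite-volume sum; in particular, above the two-body threshold the continuum integral must be treated as a principal value, which this simple transcription does not implement.

```python
import numpy as np

def lattice_momenta(L, q_max):
    """Discrete CM momenta q = (2*pi/L)*n, n in Z^3, with |q| < q_max, cf. Eq. (21)."""
    n_max = int(np.floor(q_max * L / (2.0 * np.pi)))
    rng = np.arange(-n_max, n_max + 1)
    nx, ny, nz = np.meshgrid(rng, rng, rng, indexing="ij")
    q = (2.0 * np.pi / L) * np.sqrt(nx ** 2 + ny ** 2 + nz ** 2).ravel()
    return q[q < q_max]

def delta_g(s, m1, m2, L, q_max, n_int=4000):
    """Finite-volume correction Delta G(s) of Eq. (20): discrete sum minus continuum integral.
    Adequate as written for s below the (m1 + m2) threshold; above threshold the radial
    integral should be taken as a principal value."""
    def integrand(q):
        w1, w2 = np.sqrt(q ** 2 + m1 ** 2), np.sqrt(q ** 2 + m2 ** 2)
        return (w1 + w2) / (2.0 * w1 * w2 * (s - (w1 + w2) ** 2))
    q_disc = lattice_momenta(L, q_max)
    discrete_sum = np.sum(integrand(q_disc)) / L ** 3
    q_cont = np.linspace(1e-6, q_max, n_int)
    dq = q_cont[1] - q_cont[0]
    continuum = np.sum(q_cont ** 2 * integrand(q_cont)) * dq / (2.0 * np.pi ** 2)
    return discrete_sum - continuum

# The finite-volume spectrum then follows from Eq. (23): scan det[1 - N(s)(G(s) + Delta G(s))]
# over real sqrt(s) and locate its zeros, e.g. by a sign-change search followed by bisection.
```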
In this work we only analyze the discrete finite-volume data obtained in the CM frame [39, 40]. In this case it is enough for us to rely on Eq. (23) to fit the lattice data. For the general case in a moving frame, one should accordingly use different formulas by properly including the possible mixing between different partial waves [57, 58, 59]. The dispersion relation between the energy and mass of the charm meson adopted in the lattice QCD simulation of Ref. [37] is different from the relativistic one, \(\omega_{i}=\sqrt{|\vec{q}|^{2}+m_{i}^{2}}\), used in Eqs. (20) and (21). It has been demonstrated in Ref. [60] that the lattice results in Ref. [37] are consistent with the amplitude determined in Ref. [30], which allows for either a virtual state or a resonance \(Z_{c}(3900)\) pole. To be consistent with the relativistic kinematical relations used in our finite-volume setups (21), we will include the lattice results only from Refs. [39, 40] in later fits.
## 3 Global fits to the experimental and lattice data
We determine the unknown parameters through simultaneous fits to experimental and lattice data. The relevant experimental data include the \(J/\psi\pi^{\pm}\) event distributions from the \(e^{+}e^{-}\to J/\psi\pi^{+}\pi^{-}\) process at the \(e^{+}e^{-}\) CM energies 4.23 GeV and 4.26 GeV [6], and the \(D^{0}D^{*-}\) and \(D^{-}D^{*0}\) event distributions from the \(e^{+}e^{-}\to\pi^{\pm}(D\bar{D}^{*})^{\mp}\) processes at the same \(e^{+}e^{-}\) CM energies [8]. For the lattice data, the finite-volume spectra given in Refs. [39, 40], whose simulations are done with unphysically large pion masses but with the charm quark mass close to its physical value, will be analyzed. The masses used in the lattice simulations are \(m_{\pi}=391\) MeV, \(M_{D}=1885\) MeV, \(M_{D^{*}}=2009\) MeV, \(M_{J/\psi}=3045\) MeV in Ref. [39], and \(m_{\pi}=324\) MeV, \(M_{D}=1822\) MeV, \(M_{D^{*}}=2029\) MeV, \(M_{J/\psi}=2969\) MeV in Ref. [40], which will be directly taken in our fits to the lattice finite-volume energy levels.
The roles of the triangle diagrams, with \(D_{1},D^{*}\) and \(D\) running in the loops, are currently under active discussion for the production of the \(Z_{c}(3900)\) peaks in the \(e^{+}e^{-}\to J/\psi\pi\pi\) and \(\pi D\bar{D}^{*}\) processes [21, 30, 33, 35, 61, 62]. It is noted that the triangle singularities are rather sensitive to the specific \(e^{+}e^{-}\) energies [63]. In order to further check this possible energy sensitivity, our first strategy is to fit the \(J/\psi\pi\) and \(D\bar{D}^{*}\) event distributions obtained at the two different \(e^{+}e^{-}\) energies, namely 4.23 GeV and 4.26 GeV, separately. In this way we do not explicitly introduce the triangle diagrams in the production mechanisms, but rather use the same theoretical formalism to fit the data at 4.23 GeV and 4.26 GeV individually. If the resulting parameters from the two fits resemble each other, it indicates that the triangle diagrams are not necessarily the exclusive source for the \(Z_{c}(3900)\). Otherwise, if the resulting parameters from the two fits are rather distinct, it is quite plausible that the triangle singularities can indeed be decisive in the description of the experimental data for the \(Z_{c}(3900)\) peaks.
Regarding the background terms (16) in the \(J/\psi\pi\) event distributions, it is found that a good reproduction of the experimental data can be achieved by letting \(b_{1}\) float and fixing \(c_{1}=1\) in our fits, while in Refs. [1, 30] the parameter \(c_{1}\) is also left free in addition to \(b_{1}\). Due to the obvious differences between the experimental \(J/\psi\pi\) event distributions at 4.23 GeV and 4.26 GeV, two different values of \(b_{1}\) will be fitted separately to the data at the two \(e^{+}e^{-}\) energy points.
The coupling \(\lambda_{1}\) in Eq. (5) is dimensionless. The couplings \(\lambda_{i=2,3,4,5}\) in Eq. (6) have mass dimension, and it is convenient to define dimensionless ones as
\[\tilde{\lambda}_{i=2,3,4,5}=\frac{\lambda_{i=2,3,4,5}}{M_{D}}\,. \tag{24}\]
The \(\lambda_{3}\) term in Eq. (6) will not contribute after the partial-wave projection. The remaining \(\tilde{\lambda}_{2},\tilde{\lambda}_{4}\) and \(\tilde{\lambda}_{5}\) will be fitted. The subtraction constants \(a_{\rm SC}^{J/\psi\pi}\) and \(a_{\rm SC}^{D\bar{D}^{*}}\) introduced through the unitarization procedure are allowed to float in our fits. Since the open-charm \(D\bar{D}^{*}\) channel is produced much more easily than the \(J/\psi\pi\) one (the charm and anti-charm quarks produced in \(e^{+}e^{-}\) collisions need to fly with similar momenta to form a \(J/\psi\), which leaves only a very limited phase space), it is reasonable to set \(\alpha_{1}=0\) for the production vertices (14).2 For \(\alpha_{2}\), one needs to fit its values separately in the \(J/\psi\pi\) and \(D\bar{D}^{*}\) event distributions at 4.23 GeV and 4.26 GeV, in order to account for the various experimental factors of the two different channels at the different \(e^{+}e^{-}\) energy points.
Footnote 2: The cross section of \(e^{+}e^{-}\to\pi^{+}D^{0}D^{*-}\) between 4.2 and 4.3 GeV is about 200–300 pb [64], while that of \(e^{+}e^{-}\to\pi^{+}\pi^{-}J/\psi\) is several times smaller, about 50–80 pb [65].
The resulting parameters from the two separate fits are summarized in the columns labeled Fit-4230 and Fit-4260 in Table 1. It is clear that the parameters that determine the \(J/\psi\pi\) and \(D\bar{D}^{*}\) amplitudes are perfectly consistent between the two fits. We verify that the qualities of the reproduction of the experimental and lattice data from the two fits are also quite similar, which can be inferred from the close values of \(\chi^{2}\) given in Table 1. This important finding indicates that the final-state interactions between the \(J/\psi\pi\) and \(D\bar{D}^{*}\) are able to reasonably describe the \(Z_{c}(3900)\) peaks observed at different \(e^{+}e^{-}\) energies. It also supports the conclusion that triangle singularities do not play an exclusively decisive role in the \(Z_{c}(3900)\) production [21, 30, 33, 62, 66].
Because of the similarity of the resulting scattering-amplitude parameters from the separate fits, it is meaningful to further perform a joint fit that simultaneously includes the two sets of experimental data obtained at 4.23 GeV and 4.26 GeV, together with the lattice discrete spectra. It is expected that the joint fit can provide more reliable and tighter constraints on the coupled-channel scattering amplitudes of \(J/\psi\pi\) and \(D\bar{D}^{*}\). Reasonably good reproductions of the experimental and lattice data from the joint fit are shown in Figs. 1 and 2. The values of the parameters from the joint fit are given in the last column of Table 1. The two subtraction constants, \(\lambda_{1}\), and \(\tilde{\lambda}_{2,4,5}\), which describe the nonperturbative interactions of the \(J/\psi\pi\) and \(D\bar{D}^{*}\), enter their coupled-channel scattering amplitudes, which in turn affect the event distributions at all the energy points and also the lattice finite-volume spectra. In other words, these six parameters simultaneously influence all the considered experimental and lattice data. In contrast, the parameters \(b_{1}\) in the background and the production vertex \(\alpha_{2}\), which acts as a normalization constant, are expected to vary between data sets. It turns out that the parameters from the joint fit are compatible with those from the separate fits within uncertainties. This is a clear sign of the stability of the fits shown in Table 1.
Based on the unitarized decay amplitudes in Eqs. (15) and (17), we estimate the branching ratios
\begin{table}
\begin{tabular}{l c c c} \hline \hline & Fit-4230 & Fit–4260 & Joint Fit \\ \hline \(a_{\rm SC,1}\) & \(-3.76^{+0.38}_{-0.47}\) & \(-4.13^{+0.51}_{-0.71}\) & \(-4.02^{+0.32}_{-0.49}\) \\ \(a_{\rm SC,2}\) & \(-2.82^{+0.04}_{-0.03}\) & \(-2.82^{+0.03}_{-0.04}\) & \(-2.80^{+0.02}_{-0.02}\) \\ \(\lambda_{1}\) & \(-88^{+18}_{-14}\) & \(-63^{+28}_{-21}\) & \(-86^{+12}_{-11}\) \\ \(\tilde{\lambda}_{2}\) & \(1064^{+103}_{-127}\) & \(1056^{+112}_{-139}\) & \(1082^{+\;93}_{-116}\) \\ \(\tilde{\lambda}_{4}\) & \(-39^{+15}_{-11}\) & \(-40^{+16}_{-12}\) & \(-41^{+14}_{-11}\) \\ \(\tilde{\lambda}_{5}\) & \(-725^{+149}_{-114}\) & \(-725^{+164}_{-121}\) & \(-751^{+137}_{-111}\) \\ \(b_{1}({\rm MeV}^{-3})\) & \((8.6^{+0.2}_{-0.2})\times 10^{-4}\) & \((4.7^{+0.2}_{-0.1})\times 10^{-4}\) & \((8.7^{+0.2}_{-0.2})/(4.8^{+0.1}_{-0.2})\times 10^{-4}\) \\ \(|\alpha_{2}^{J/\psi\pi}|^{2}\) & \(17.5^{+1.8}_{-1.1}\) & \(9.1^{+2.3}_{-1.5}\) & \(19.4^{+3.4}_{-2.1}/9.0^{+1.6}_{-1.1}\) \\ \(|\alpha_{2}^{D\bar{D}^{*}}|^{2}\) & \(2.9^{+0.7}_{-0.5}\) & \(1.5^{+0.6}_{-0.4}\) & \(2.7^{+0.6}_{-0.6}/1.3^{+0.3}_{-0.3}\) \\ \(\chi^{2}/{\rm d.o.f}\) & \(1.31\) & \(1.16\) & \(1.18\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Resulting parameters from the fits. For the notations of different fits and parameters, see the text for details. For the joint fit, the left/right numbers for the entries \(b_{1}\) and \(\alpha_{2}\) correspond to the results from the data sets at 4.23 GeV and 4.26 GeV, respectively.
Figure 1: Joint fit to the \(J/\psi\pi\) (upper) and \(D\bar{D}^{*}\) (lower) event distributions from BESIII. The data are taken from Refs. [6] and [8], respectively. For the \(D\bar{D}^{*}\) event distributions, the background events from the experimental analysis [8] are subtracted. The shaded areas correspond to the uncertainties propagated from fitting using the bootstrap method.
of the \(Z_{c}(3900)\) decaying into \(D\bar{D}^{*}\) and \(J/\psi\pi\) channels, by performing the phase space integrals of Eq. (19) in the energy range of \(\sqrt{s}=(3900\pm 35)\) MeV, as proposed in Ref. [30]. When evaluating the ratio of the partial decay widths of \(Z_{c}(3900)\) using Eq. (19), the background contributions are excluded and the \(\alpha_{2}^{J/\psi\pi}\), \(\alpha_{2}^{D\bar{D}^{*}}\) factors accounting for the normalizations of experimental event distributions are set to unity. The resulting branching ratios of \(R=\Gamma_{Z_{c}(3900)\to D\bar{D}^{*}}/\Gamma_{Z_{c}(3900)\to J/\psi\pi}\) from the various fits are
\[R=1.6^{+0.4}_{-0.4}\,\,\,\mbox{(Fit-4230)}\,,\qquad R=1.5^{+1.2}_{-0.8}\,\,\,\mbox{(Fit-4260)}\,,\qquad R=1.9^{+0.9}_{-0.6}\,\,\,\mbox{(Joint Fit)}\,. \tag{25}\]
The central values are smaller than the experimental determination \(R^{\rm Exp}=6.2\pm 2.9\)[7]. Taking into account the large uncertainties, the difference between our joint-fit result and the BESIII determination is only slightly larger than \(1\sigma\). Notice that we do not consider the \(\eta_{c}\rho\) channel, since the statistics of the data in this channel is not high (the statistical significance of the \(Z_{c}(3900)\) in \(\eta_{c}\rho\) is \(3.9\sigma\)[67]), and \(R\) may therefore be underestimated in our calculation.
The theoretical uncertainties, shown as shaded areas in Figs. 1 and 2, are calculated through the bootstrap method. In this procedure, we take the central values and uncertainties of the experimental and lattice data as the means and variances of normal distributions, from which a large number of pseudo-data sets are randomly generated. The pseudo-data sets are then used to repeat the fits. The large samples of parameter configurations from the repeated fits to the random pseudo-data sets are further exploited to obtain the uncertainties of the theoretical quantities. When performing the fits to the \(D\bar{D}^{*}\) event distributions from Ref. [8], the experimental background effects are subtracted. In Fig. 2, we also give the predictions for the lattice discrete spectra over a wide range of volume sizes, which hopefully
Figure 2: Finite volume energy levels from the joint fit. For the CLQCD data from Ref. [40] we have taken the six energy levels below 4.0 GeV in the fit. The HSC data are taken from Ref. [39]. The blue solid lines correspond to our theoretical predictions by continuously varying the box length \(L\), and the gray bands correspond to the uncertainties propagated from the fit. The brown dashed lines stand for the free energy levels. The theoretical results shown as blue circles with error bars are slightly shifted to the right in order to be distinguished from the lattice data shown as red squares.
can provide useful guidelines for future lattice simulations.
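A minimal sketch of the bootstrap procedure just described is given below; `fit_parameters` stands for the actual \(\chi^{2}\) minimization, and the use of the 16th/84th percentiles to quote asymmetric errors is an illustrative choice.

```python
import numpy as np

def bootstrap_errors(y_central, y_err, fit_parameters, n_boot=1000, seed=0):
    """Resample every datum from a normal distribution with the quoted central value and
    uncertainty, refit each pseudo-data set, and summarize the spread of the refitted
    parameters (or of any derived quantity such as a pole position)."""
    rng = np.random.default_rng(seed)
    y_central, y_err = np.asarray(y_central, float), np.asarray(y_err, float)
    samples = np.array([fit_parameters(rng.normal(y_central, y_err)) for _ in range(n_boot)])
    lo, med, hi = np.percentile(samples, [16, 50, 84], axis=0)
    return med, med - lo, hi - med   # central values with lower/upper uncertainties
```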
## 4 Insights into the \(Z_{c}(3900)\) resonance poles
Kinematical singularities, such as two-body cusps and triangle or box singularities, provide one possible source of some structures observed in experiments [63]. Such singularities are highly sensitive to the kinematics of the processes. By contrast, resonance poles in the complex energy plane are universal in all production amplitudes involving the same set of particles. They are expected to show up in all relevant coupled channels, though not necessarily as peaks [68]. In this case the details of the kinematics do not play a crucial role, although they may affect the resonant line shapes. Therefore it is of utmost importance to discern whether there exist relevant resonance poles in the system under study.
The physical scattering amplitudes in Eq. (9) can be extrapolated to the complex energy plane via the unitarity \(G(s)\) functions. The expression in Eq. (12) stands for the \(G(s)\) on the physical/first Riemann sheet (RS) and its corresponding result on the second RS takes the form [48]
\[G(s)^{\rm II}=G(s)-i\frac{q(s)}{4\pi\sqrt{s}}\,, \tag{26}\]
where \(q(s)\) is the magnitude of the CM three-momentum. In our convention, the imaginary part of \(G(s)\) on the first RS is positive in the energy region above the threshold, and the imaginary part of \(G(s)^{\rm II}\) is negative. The scattering amplitudes can be analytically continued to different RSs by taking the proper combinations of \(G(s)\) and \(G(s)^{\rm II}\) for the different channels. For instance, the amplitudes on the second RS can be labeled as \((-,+)\), where the minus/plus sign at each entry corresponds to taking \(G(s)^{\rm II}/G(s)\) for that channel. In this convention, the amplitudes on the first, third and fourth RSs are given by \((+,+)\), \((-,-)\) and \((+,-)\), respectively.
Next we search for resonance poles of the scattering amplitudes in the complex plane. At each pole, the singular term of the Laurent expansion of the scattering amplitude takes the form
\[T_{ij}(s)=-\frac{\gamma_{i}\gamma_{j}}{s-s_{R}}\,, \tag{27}\]
where the indices \(i,j\) stand for the coupled channels of the considered system, and the resonance pole is given by \(\sqrt{s_{R}}=(M_{R}-i\Gamma_{R}/2)\), where \(M_{R}\) and \(\Gamma_{R}\) are the mass and width, respectively. The relevant resonance pole positions, together with their residues \(\gamma_{i}\), are summarized in Table 2. Around the \(Z_{c}(3900)\) region, three types of resonance poles are found when taking into account the uncertainties of the parameters in Table 1. We stress that each parameter configuration in fact gives two poles: one on the third RS and the other located on either the second or the fourth RS, depending on the specific parameter set taken from the large bootstrap samples in the uncertainty analysis. The central values of the parameters in Table 1 lead to the fourth-sheet pole shown in Table 2. In fact the upper half plane of the second RS is continuously connected to the lower half plane of the fourth RS in the energy region above the \(D\bar{D}^{*}\) threshold. Therefore, when the pole positions on the second and fourth sheets are similar, an amplitude with a near-threshold pole with a small imaginary part on the second RS is expected to resemble one with a nearby fourth-sheet pole. Besides, other distant poles, including spurious ones on the first sheet, can also be found in the complex plane on different RSs. For instance, a fourth-sheet pole around \((3800-6i)\) MeV is found in
the amplitude. These remote poles are far from the energy region of interest and do not visibly affect the physical amplitudes, so we do not discuss them any further.
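For illustration, the continuation of Eq. (26) and the pole search can be organized as in the following sketch, which assumes the `g_loop` helper of the earlier sketch and a callable `kernel(s)` returning the matrix of Eq. (10); all names are illustrative.

```python
import numpy as np
from scipy.optimize import fsolve

def g_on_sheet(s, m1, m2, a_sc, second_sheet, mu=0.770):
    """G_i(s) on the first sheet, or continued to the unphysical sheet via Eq. (26)."""
    s = complex(s)
    g = g_loop(s, m1, m2, a_sc, mu)          # first-sheet function from the earlier sketch
    if not second_sheet:
        return g
    q = np.sqrt((s - (m1 + m2) ** 2) * (s - (m1 - m2) ** 2)) / (2.0 * np.sqrt(s))
    return g - 1j * q / (4.0 * np.pi * np.sqrt(s))

def find_pole(sqrt_s_guess, kernel, channels, a_sc, sheet):
    """Zero of det[1 - N(s) G(s)] in the complex sqrt(s) plane, i.e. a pole of T, Eq. (27).
    `channels` holds the two (m1, m2) mass pairs; `sheet` is e.g. (True, True) for RS III
    in the (-,-) convention of the text."""
    def det_fn(xy):
        s = (xy[0] + 1j * xy[1]) ** 2
        g = np.diag([g_on_sheet(s, *channels[i], a_sc[i], sheet[i]) for i in range(2)])
        d = np.linalg.det(np.eye(2) - kernel(s) @ g)
        return [d.real, d.imag]
    x, y = fsolve(det_fn, [sqrt_s_guess.real, sqrt_s_guess.imag])
    return x + 1j * y
```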
In addition to the preferred fit presented in Table 1, we also find another type of joint fit, which reproduces the experimental and lattice data to some extent, with a somewhat larger \(\chi^{2}/\mathrm{d.o.f}\simeq 1.45\), and has only a virtual-state pole around 3.8 GeV in the amplitude. It is interesting to point out that our finding here is similar to that of a recent study, Ref. [33]: the solutions with resonance poles around the \(D\bar{D}^{*}\) threshold reproduce the experimental data better than the ones with only virtual-state poles around 3.8 GeV.
According to Morgan's pole counting criteria [69], an elementary resonance state would correspond to two similar poles near the threshold on different RSs. For a molecular type of resonance state, there would be just one pole near the threshold. The situation for the \(Z_{c}(3900)\) poles in Table 2 is very subtle. Indeed two poles near the \(D\bar{D}^{*}\) threshold are found for each parameter configuration. However, a closer look at their positions on the different RSs reveals that the two poles are not really so close. The imaginary part of the third-sheet pole is one order of magnitude larger than that of the pole on the second or fourth sheet. Besides, the real part of the pole on the third RS is below the \(D\bar{D}^{*}\) threshold, while the pole on the second or fourth RS lies above that threshold. A qualitative conclusion would be that the \(Z_{c}(3900)\) corresponds to a state lying between the elementary and molecular types; in other words, both types of components are likely to be of similar importance in the \(Z_{c}(3900)\) composition. This conclusion can be further quantitatively verified by using the recipe proposed in Ref. [70] to calculate the compositeness coefficients for the resonances, which has been applied in the study of various exotic hadronic states [71, 72, 54, 73]. The partial compositeness coefficient \(X_{k}\), _i.e._, the probability of finding the two-body component of the \(k\)th channel in the considered resonance state, is given by [70]
\[X_{k}=|\gamma_{k}|^{2}\left|\frac{dG_{k}^{(\mathrm{II})}(s_{R})}{ds}\right|\,, \tag{28}\]
where \(\gamma_{k}\) is the residue defined in Eq. (27) at the pole \(s_{R}\). Depending on the RS on which the pole is located, one should take the derivative of the appropriate \(G(s)\) or \(G^{\mathrm{II}}(s)\) with respect to \(s\) in Eq. (28). There is a caveat when applying Eq. (28) to calculate the compositeness \(X\): this recipe cannot be used for an arbitrary resonance pole. The working condition for Eq. (28) is that the resonance pole should lie above the nearby threshold [70]. However, since the resonance pole on the third RS in Table 2 is rather close to the \(D\bar{D}^{*}\) threshold, Eq. (28) is expected to be applicable also to the third-sheet pole. The
\begin{table}
\begin{tabular}{c c c c c} \hline \hline RS & \(M_{R}\) (MeV) & \(\Gamma_{R}/2\) (MeV) & \(|\gamma_{1}|\) (GeV) & \(|\gamma_{2}|\) (GeV) \\ \hline \hline III & \(3874.8^{+3.7}_{-4.0}\) & \(32.7^{+1.6}_{-1.9}\) & \(4.3^{+0.3}_{-0.3}\) & \(8.7^{+0.8}_{-0.7}\) \\ \hline II & \(3902.7^{+1.3}_{-1.3}\) & \(3.0^{+2.4}_{-2.4}\) & \(4.9^{+0.2}_{-0.2}\) & \(8.3^{+0.3}_{-0.3}\) \\ IV & \(3902.4^{+1.2}_{-2.2}\) & \(3.3^{+4.4}_{-2.6}\) & \(4.6^{+0.3}_{-0.4}\) & \(8.6^{+0.4}_{-0.3}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Resonance pole positions and their residues. \(\gamma_{1}\) and \(\gamma_{2}\) correspond to the residues of \(J/\psi\pi\) and \(D\bar{D}^{*}\), respectively. A large number of bootstrap parameter samples are generated in our uncertainty analyses. Each parameter sample gives a pair of poles, one on RS III and the other on either RS II or RS IV.
compositeness coefficients contributed from the \(D\bar{D}^{*}\) channel to the three kinds of resonance poles in Table 2 are found to be3
Footnote 3: Since the \(J/\psi\pi\) channel is rather distant from the \(Z_{c}(3900)\) pole position, \(X_{1}\) computed using Eq. (28) is one order of magnitude smaller than \(X_{2}\).
\[X_{2}=0.38^{+0.03}_{-0.03}\,\left(\text{RS\,II}\right),\quad X_{2}=0.43^{+0.09}_ {-0.06}\,\left(\text{RS\,III}\right),\quad X_{2}=0.42^{+0.04}_{-0.03}\,\left( \text{RS\,IV}\right), \tag{29}\]
which perfectly agree with the determinations from Ref. [72].
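Numerically, the residues of Eq. (27) and the compositeness of Eq. (28) can be estimated as in the following sketch; the function names and finite-difference steps are illustrative.

```python
def residue_product(t_ij, s_pole, eps=1e-4):
    """gamma_i * gamma_j = -lim_{s -> s_R} (s - s_R) T_ij(s), cf. Eq. (27),
    estimated a small distance eps (in GeV^2) away from the pole."""
    return -eps * t_ij(s_pole + eps)

def compositeness(gamma_k, g_k, s_pole, eps=1e-6):
    """Partial compositeness X_k = |gamma_k|^2 |dG_k/ds| at the pole, Eq. (28).
    g_k(s) must return G_k(s) on the Riemann sheet where the pole is located;
    the derivative is taken with a central difference in s."""
    dg = (g_k(s_pole + eps) - g_k(s_pole - eps)) / (2.0 * eps)
    return abs(gamma_k) ** 2 * abs(dg)
```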
Other ways to calculate the compositeness coefficients for resonances have also been proposed based on the scattering length (\(a\)) and effective range (\(r\)) in Refs. [74, 75]. The proposal from Ref. [74] is
\[\bar{X}=\sqrt{\left|\frac{1}{1+2r/a}\right|}\,, \tag{30}\]
and the expression from Ref. [75] reads
\[\hat{X}=\sqrt{\frac{1}{1+|2r/a|}}\,. \tag{31}\]
Taking the joint fit in Table 1, our predictions for the scattering length and effective range of the \(\bar{D}D^{*}\) channel read
\[a_{\bar{D}D^{*}}=\big{(}-0.56^{+0.11}_{-0.13}-i0.28^{+0.04}_{-0.04}\big{)}\, \text{fm}\,,\qquad r_{\bar{D}D^{*}}=\big{(}-2.03^{+0.51}_{-0.68}-i0.77^{+0.24}_ {-0.13}\big{)}\,\text{fm}\,. \tag{32}\]
The two proposals in Eqs. (30) and (31) lead to almost identical results \(\bar{X}_{2}=\hat{X}_{2}=0.36^{+0.06}_{-0.06}\), which are compatible with the values in Eq. (29) within uncertainties.
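As a quick arithmetic cross-check, Eqs. (30) and (31) can be evaluated directly from the central values quoted in Eq. (32):

```python
a = -0.56 - 0.28j   # D Dbar* scattering length, central value of Eq. (32), in fm
r = -2.03 - 0.77j   # D Dbar* effective range,   central value of Eq. (32), in fm
x_bar = abs(1.0 / (1.0 + 2.0 * r / a)) ** 0.5      # Eq. (30)
x_hat = (1.0 / (1.0 + abs(2.0 * r / a))) ** 0.5    # Eq. (31)
print(f"X_bar = {x_bar:.2f}, X_hat = {x_hat:.2f}")  # both come out close to 0.36
```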
The above results of the \(Z_{c}\) compositeness indicate that the energy dependence in the contact amplitudes (7) (and thus interaction of finite range) plays an important role in forming the \(Z_{c}\) pole of this analysis. To probe the origin of the energy dependence, from either other degrees of freedom with higher masses or compact quark state cores, one needs to rely on the specific theoretical models, which is beyond the scope of our present study.
Figure 3: Phase shifts and inelasticities of the \(J/\psi\pi\to J/\psi\pi\) scattering.
The pole singularities on the unphysical RSs in the complex energy plane also manifest themselves in the physical amplitudes, which can be characterized by the phase shifts and inelasticities in the coupled-channel scattering processes. In Figs. 3 and 4, we give our predictions of the phase shifts and inelasticities, together with the theoretical uncertainties, for the \(J/\psi\pi\to J/\psi\pi\) and \(J/\psi\pi\to D\bar{D}^{*}\) processes, respectively. Within the uncertainties there are two branches of phase shifts for the \(J/\psi\pi\to J/\psi\pi\) above the \(D\bar{D}^{*}\) threshold, and we confirm that the lower branch corresponds to the parameter samples with a pole on the fourth RS and the upper one corresponds to the parameter samples with a pole on the second RS. This also tells us that the narrow poles on the second and fourth sheets in Table 2 are the poles most responsible for the \(Z_{c}(3900)\) signals in our amplitudes. The two phase-shift branches in Fig. 3 differ by about 180 degrees above the \(D\bar{D}^{*}\) threshold. As a result, they actually lead to rather similar \(S\)-matrix elements.
## 5 Summary and conclusions
In this work we calculate the covariant partial-wave amplitudes of the coupled-channel \(J/\psi\pi\) and \(D\bar{D}^{*}\) scattering with energy-dependent interaction kernels. The perturbative covariant expressions are then unitarized to include the strong final-state interactions. The experimental event distributions of \(J/\psi\pi\) and \(D\bar{D}^{*}\) measured at 4.23 GeV and 4.26 GeV by the BESIII Collaboration are first fitted separately, and the resulting scattering-amplitude parameters turn out to be quite similar in the two fits. This implies that the nonperturbative strong interactions of the \(J/\psi\pi\) and \(D\bar{D}^{*}\) are the mechanism responsible for the \(Z_{c}(3900)\) peaks in our study, and that the triangle singularities are not necessarily the decisive source of the observed \(Z_{c}(3900)\) peaks. Based on this observation, we then use the same interaction amplitudes to perform global fits to both the event distributions at 4.23 GeV and 4.26 GeV from BESIII and the lattice finite-volume energy levels. Reasonably good reproduction of the experimental and lattice data is achieved.
Relevant resonance poles on different Riemann sheets are found for the \(Z_{c}(3900)\) in our scattering amplitudes. The couplings of the resonance poles to the \(D\bar{D}^{*}\) and \(J/\psi\pi\) channels are calculated, and the magnitudes for the former channel are found to be around two times larger than those for the latter. The compositeness coefficient of the \(D\bar{D}^{*}\), _i.e._, the probability of the \(D\bar{D}^{*}\) component in the \(Z_{c}(3900)\) state, is calculated to be less than 0.5,
Figure 4: Phase shifts and inelasticities of the \(J/\psi\pi\to D\bar{D}^{*}\) scattering.
indicating that other higher-mass hadronic components or compact quark-state cores could also be important in the formation of the \(Z_{c}(3900)\). It is worth noting that the \(D^{*}\bar{D}^{*}\) channel is only 140 MeV higher than the \(D\bar{D}^{*}\) one and can couple to the same quantum numbers as the \(Z_{c}(3900)\). It would be interesting to simultaneously analyze all the data on the \(Z_{c}(3900)\) and \(Z_{c}(4020)\) in order to understand these charged charmoniumlike structures.
## Acknowledgements
We thank Mei-Zhu Yan for an early-stage contribution. This work is funded in part by the National Natural Science Foundation of China (NSFC) under Grants Nos. 11975090, 12150013, 12047503, 11905258, 12275076, 12125507, 11975075, and 11835015; by the Chinese Academy of Sciences under Grant No. XDB34030000; and by the NSFC and the Deutsche Forschungsgemeinschaft (DFG) through the funds provided to the TRR110 "Symmetries and the Emergence of Structure in QCD" (NSFC Grant No. 12070131001, DFG Project-ID 196253076). ZHG appreciates the support of Peng Huan-Wu visiting professorship and the hospitality of Institute of Theoretical Physics at Chinese Academy of Sciences, where part of this work has been done.
## Appendix: Covariant partial-wave projection
One way to describe the angular momenta of a general process \(1+2\to\bar{1}+\bar{2}\) is to include the total angular momentum \(J\) and its third component \(M\), and the orbital angular momentum \(\ell\) and the total spin \(S\), where only \(J\) and \(M\) are the good quantum numbers for relativistic reactions. The general partial-wave projection for the process \(1+2\to\bar{1}+\bar{2}\) in the \(\ell S\) basis is given by [44]
\[V^{J}_{\ell S;\bar{S}}(s)= \frac{Y^{0}_{\ell}(\hat{p}_{z})}{2(2J+1)}\sum_{\sigma_{1},\sigma _{2},\bar{\sigma}_{1},\bar{\sigma}_{2},m}\int d\hat{\bar{p}}\,Y^{m}_{\ell}( \hat{\bar{p}})^{*}\left(\sigma_{1}\sigma_{2}M|s_{1}s_{2}S\right)(mM\bar{M}| \ell SJ)(\bar{\sigma}_{1}\bar{\sigma}_{2}\bar{M}|\bar{s}_{1}\bar{s}_{2}\bar{S})\] \[\left(0\bar{M}\bar{M}|\bar{\ell}\bar{S}J\right)V(p_{1},p_{2},\bar {p}_{1},\bar{p}_{2},\sigma_{1},\sigma_{2},\bar{\sigma}_{1},\bar{\sigma}_{2})\,,\] (A.1)
where the direction of the initial three-momentum \(\vec{p}_{1}\) in the CM frame is defined as the \(z\)-axis, i.e. \(\vec{p}_{1}=-\vec{p}_{2}=|\vec{p}|\hat{e}_{z}\), and the three-momenta of the final-state particles are denoted by \(\vec{\bar{p}}_{1}=-\vec{\bar{p}}_{2}=|\vec{\bar{p}}|\hat{\bar{p}}\). The energy squared \(s\) is given by \(s=(p_{1}+p_{2})^{2}\). \(\sigma_{i}\) and \(\bar{\sigma}_{i}\) correspond to the third components of the \(i\)th particle in the initial and final states, respectively, with \(S=\sigma_{1}+\sigma_{2}\) and \(\bar{S}=\bar{\sigma}_{1}+\bar{\sigma}_{2}\). The Clebsch-Gordan coefficient (\(m_{1}m_{2}m_{3}|j_{1}j_{2}j_{3}\)) refers to the composition of \(\vec{j}_{1}+\vec{j}_{2}=\vec{\bar{j}}_{3}\), with \(m_{i}\) the third component of \(\vec{j}_{i}\).
For the \(S\)-wave scattering of the 1 (vector) + 2 (pseudoscalar) \(\to\bar{1}\) (vector) + \(\bar{2}\) (pseudoscalar) process, the partial-wave projection in Eq. (A.1) can be simplified as
\[V^{J=1}_{01;01}(s)=\frac{1}{2(2J+1)}\sum_{\sigma_{1}=\bar{\sigma}_{1}=0,\pm 1 }\int d\cos\theta\ V(s,t(s,\cos\theta),\sigma_{1},\bar{\sigma}_{1})\,,\] (A.2)
where \(\theta\) denotes the scattering angle defined in the CM frame and the Mandelstam variable \(t\) is given by
\[t=M_{1}^{2}+M_{\bar{1}}^{2}-\frac{1}{2s}\left(s+M_{1}^{2}-M_{2}^{2}\right) \left(s+M_{\bar{1}}^{2}-M_{2}^{2}\right)-\frac{\cos\theta}{2s}\sqrt{\lambda(s, M_{1}^{2},M_{2}^{2})\lambda(s,M_{\bar{1}}^{2},M_{2}^{2})}\,,\] (A.3)
with \(\lambda(a,b,c)=a^{2}+b^{2}+c^{2}-2ab-2bc-2ac\) the Källén kinematical function. The polarization vectors of the vector meson with mass \(m_{V}\) should accordingly be taken as [44]
\[\varepsilon(\vec{k},0)=\begin{pmatrix}\frac{k}{m_{V}}\cos\theta\\ \frac{1}{2}\left(\frac{E_{k}}{m_{V}}-1\right)\sin 2\theta\\ 0\\ \frac{1}{2}\Big{[}(1+\cos 2\theta)\,\frac{E_{k}}{m_{V}}+1-\cos 2\theta\Big{]}\end{pmatrix}\,,\quad \varepsilon(\vec{k},\pm)=\begin{pmatrix}\mp\frac{1}{\sqrt{2}}\frac{k}{m_{V}} \sin\theta\\ \mp\frac{1}{\sqrt{2}}\left(\frac{E_{k}}{m_{V}}\sin^{2}\theta+\cos^{2}\theta \right)\\ -\frac{i}{\sqrt{2}}\\ \mp\frac{1}{2\sqrt{2}}\left(\frac{E_{k}}{m_{V}}-1\right)\sin 2\theta\end{pmatrix}\,,\] (A.4)
in order to be consistent with the partial-wave projection formula in Eq. (A.1). The magnitude of the three-momentum \(k\) and the corresponding energy \(E_{k}\) in Eq. (A.4) are defined as \(k=|\vec{k}|\) and \(E_{k}=\sqrt{k^{2}+m_{V}^{2}}\), respectively. We explicitly verify that the partial-wave amplitudes obtained from the \(\ell S\) basis using Eqs. (A.1) and (A.4) are consistent with the results from the helicity basis [76] by using suitable polarization vectors, such as those provided in Refs. [77, 78]. Since we only focus on the \(S\)-wave scattering in this work, the superscript and subscript partial-wave indices \(J\) and \(\ell S\) always take \(J=1\) and \((\ell=0,S=1)\) and are omitted in the main text for simplicity. The explicit expressions after performing the partial-wave projection of the amplitudes in Eqs. (5) and (6) can be found in Eq. (7).
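For completeness, the angular integration of Eq. (A.2) with the Mandelstam variable of Eq. (A.3) can be carried out numerically as in the sketch below; `amp` is a placeholder for the tree-level amplitude built with the polarization vectors of Eq. (A.4), and all names are illustrative.

```python
import numpy as np

def kallen(a, b, c):
    return a * a + b * b + c * c - 2 * a * b - 2 * b * c - 2 * a * c

def t_of_cos_theta(s, cos_theta, M1, M2, M1bar):
    """Mandelstam t of Eq. (A.3) for 1(M1) + 2(M2) -> 1bar(M1bar) + 2(M2)."""
    return (M1 ** 2 + M1bar ** 2
            - (s + M1 ** 2 - M2 ** 2) * (s + M1bar ** 2 - M2 ** 2) / (2.0 * s)
            - cos_theta * np.sqrt(kallen(s, M1 ** 2, M2 ** 2) * kallen(s, M1bar ** 2, M2 ** 2)) / (2.0 * s))

def s_wave_amplitude(amp, s, J=1, n_nodes=40):
    """Numerical version of Eq. (A.2): sum the amplitude over sigma_1 = sigma_1bar in
    {-1, 0, +1}, integrate over cos(theta) with Gauss-Legendre quadrature, and apply the
    prefactor 1/(2(2J+1)).  `amp(s, cos_theta, sigma)` is a placeholder for the
    tree-level amplitude evaluated with the polarization vectors of Eq. (A.4)."""
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
    total = 0.0
    for sigma in (-1, 0, 1):
        total += np.sum(weights * np.array([amp(s, ct, sigma) for ct in nodes]))
    return total / (2.0 * (2 * J + 1))
```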
|
2308.02615 | An Intrinsic Approach to Scalar-Curvature Estimation for Point Clouds | We introduce an intrinsic estimator for the scalar curvature of a data set
presented as a finite metric space. Our estimator depends only on the metric
structure of the data and not on an embedding in $\mathbb{R}^n$. We show that
the estimator is consistent in the sense that for points sampled from a
probability measure on a compact Riemannian manifold, the estimator converges
to the scalar curvature as the number of points increases. To justify its use
in applications, we show that the estimator is stable with respect to
perturbations of the metric structure, e.g., noise in the sample or error
estimating the intrinsic metric. We validate our estimator experimentally on
synthetic data that is sampled from manifolds with specified curvature. | Abigail Hickok, Andrew J. Blumberg | 2023-08-04T14:29:50Z | http://arxiv.org/abs/2308.02615v1 | # An intrinsic approach to scalar-curvature estimation for point clouds
###### Abstract.
We introduce an intrinsic estimator for the scalar curvature of a data set presented as a finite metric space. Our estimator depends only on the metric structure of the data and not on an embedding in \(\mathbb{R}^{n}\). We show that the estimator is consistent in the sense that for points sampled from a probability measure on a compact Riemannian manifold, the estimator converges to the scalar curvature as the number of points increases. To justify its use in applications, we show that the estimator is stable with respect to perturbations of the metric structure, e.g., noise in the sample or error estimating the intrinsic metric. We validate our estimator experimentally on synthetic data that is sampled from manifolds with specified curvature.
## 1. Introduction
A compact Riemannian manifold is a smooth manifold \(M\) equipped with compatible choices of inner product for each tangent space \(T_{p}M\). The presence of this structure equips \(M\) with a metric (induced by the fact that the inner product gives a definition of the length of a tangent vector) and moreover lets us make sense of various geometric notions on \(M\), in particular the notion of _curvature_.
Curvature, which measures the extent to which a Riemannian manifold deviates from being "flat," is a generalization of the use of the second derivative to measure the extent to which a curve pulls away from the tangent line at a point. There are several different notions of curvature in Riemannian geometry. The focus of this paper is _scalar curvature_, which is a function \(S\colon M\to\mathbb{R}\) that quantifies the curvature at a point \(x\in M\) by a number \(S(x)\). On a surface, scalar curvature is proportional to Gaussian curvature, but in contrast to Gaussian curvature, scalar curvature is defined for higher-dimensional manifolds as well.
The purpose of this paper is to study the problem of estimating the scalar curvature of a manifold given a finite sample \(X\subset M\) regarded as a finite metric space, which we assume consists of independent draws from some (possibly nonuniform) probability density function \(\rho\colon M\to\mathbb{R}_{+}\). Our estimator is based on the fact that scalar curvature at \(x\in M\) characterizes the growth rate of the volume of a geodesic ball \(B^{M}(x,r)\) as \(r\) increases. More precisely, as \(r\to 0\), the scalar curvature \(S(x)\) at \(x\in M\) has the following relationship to geodesic ball volume:
\[\frac{\operatorname{vol}(B^{M}(x,r))}{v_{n}r^{n}}=1-\frac{S(x)}{6(n+2)}r^{2}+ \mathcal{O}(r^{4})\,, \tag{1.1}\]
where \(n\) is the dimension of the manifold, \(v_{n}\) is the volume of a unit Euclidean \(n\)-ball, and \(v_{n}r^{n}\) is the volume of a Euclidean \(n\)-ball of radius \(r\). We proceed by computing maximum-likelihood estimators for the volumes on the left side of equation (1.1) and fitting a quadratic polynomial to the resulting estimates of the ball-volume ratios \(\operatorname{vol}(B^{M}(x,r))/(v_{n}r^{n})\) in order to estimate \(S(x)\).
One application is to representation learning: if the estimated curvature is negative (respectively, positive), it may be more natural to represent the data by embedding the points into hyperbolic space (respectively, a sphere). In recent years, there has been much research on non-Euclidean embeddings, such as hyperbolic embeddings [10, 13, 18].
Another application is the generalization of curvature to metric spaces that do not obviously come from manifolds. A notable example is given by a network (specifically, a weighted graph), with the metric given by the shortest-path distance. Applying the estimator in this setting yields a notion of discrete scalar curvature defined on the vertices of the network.
### Related Work
To the best of our knowledge, there is only one other paper on scalar-curvature estimation for manifolds of any dimension. Sritharan et al. [22] developed a different method to estimate scalar curvature by using the second fundamental form and the Gauss-Codazzi equation. However, their method requires an embedding of the points in Euclidean space. Furthermore, it is particularly sensitive to noise because it involves tangent-space estimation. In one experiment, in which points were sampled from a Klein bottle, their method did not recover the correct sign for the scalar curvature after only a small amount of Gaussian noise was added (standard deviation \(\sigma=.01\)). In contrast, we are able to obtain higher accuracy on noisy data sets.
There are many methods to estimate Gaussian curvature from point clouds that are sampled from surfaces. Guerrero et al. [9] estimated Gaussian curvature by designing a neural network with a PointNet-inspired architecture [17]. Topological data analysis can also be used to detect curvature. Bubenik et al. [5] used persistent homology to classify point clouds by the Gaussian curvature of the constant-curvature surface from which they were sampled. Another Gaussian curvature estimation method is given by Cazals and Pouget [7]. However, none of these methods have a straightforward generalization to scalar curvature estimation.
Bhaskar et al. [3] defined "diffusion curvature," which is a new (unsigned) measure of local curvature for point clouds that are sampled from a manifold (with any dimension). Although diffusion curvature is not the same as Gaussian or scalar curvature, numerical experiments in [3] suggest that it is correlated with Gaussian curvature. However, unlike Gaussian and scalar curvature, diffusion curvature is always positive, so it cannot be used to infer whether scalar curvature is positive or negative. By contrast, our scalar-curvature estimates are signed, so our method can be used to distinguish between regions of positive and negative curvature.
Chazal et al. [8] considered curvature measures (which are distinct from curvature). They showed that curvature measures can be estimated stably. However, as in [22], their method requires an embedding of the points in Euclidean space. Moreover, their method is not feasible for point clouds in high dimensions because it requires computing and storing the boundaries and intersections of a set of balls in the ambient space; Chazal et al. implemented and tested their method only in \(\mathbb{R}^{3}\). By contrast, the accuracy and computational complexity of our method in the present paper depends only on the intrinsic dimension of the manifold; it does not depend on the dimension of the ambient space.
Lastly, we note that there is an important relationship to discrete network curvature [19]. Our scalar-curvature estimator can be applied to networks with the shortest-path metric. There are two other definitions of discrete scalar curvature for networks, both of which are defined as "contractions" of discrete Ricci curvature [20, 21]. More precisely, the discrete scalar curvature at a node is defined to be the sum of its adjacent edges' discrete Ricci curvature. Sandhu et al. [20] defined scalar curvature at a vertex as the contraction of Ollivier-Ricci curvature and Sreejith et al. [21] defined scalar curvature as the contraction of Forman-Ricci curvature. These are justified by the fact that scalar curvature is the trace
of Ricci curvature. However, it has not been proven that either notion of discrete scalar curvature converges to the scalar curvature of the manifold when the network is a geometric network on a manifold.
### Organization
We briefly review the basics of Riemannian geometry and scalar curvature in Section 2. We discuss our method for estimating scalar curvature in Section 3. We prove stability (Theorem 4.1) in Section 4 and convergence (Theorem 5.3) in Section 5. Finally, we discuss our numerical experiments in Section 6.
### Acknowledgements
We thank Yining Liu, Michael Mandell, and Mason Porter for helpful conversations.
## 2. Background
In this section, we briefly review relevant background on Riemannian geometry and scalar curvature. For a detailed treatment, we recommend any standard textbook, e.g. [16].
A _Riemannian manifold_\((M,g)\) is a smooth manifold \(M\) with a Riemannian metric. A _Riemannian metric_\(g\) is an assignment of a positive-definite symmetric bilinear form \(g_{x}:T_{x}M\times T_{x}M\to\mathbb{R}\) for each point \(x\in M\). The Riemannian metric thus defines an inner product on each tangent space \(T_{x}M\). For example, in Euclidean space, the canonical Riemannian metric is the usual Euclidean inner product.
The Riemannian metric induces a metric on the manifold as follows. The _norm_ of a vector \(v\) in the tangent space \(T_{x}M\) at \(x\) is \(\|v\|:=g_{x}(v,v)^{1/2}\). The _length_ of a continuously differentiable path \(\gamma:[a,b]\to M\) is \(L(\gamma):=\int_{a}^{b}\bigl{\|}\gamma^{\prime}(t)\bigr{\|}\,\mathrm{d}t\). The _geodesic distance_ between points \(x\) and \(y\) in the same connected component of \(M\) is
\[d_{M,g}(x,y):=\inf\{L(\gamma)\mid\gamma:[a,b]\to M\text{ is a }C^{1}\text{ path such that }\gamma(a)=x\text{ and }\gamma(b)=y\}.\]
The closed _geodesic ball_ centered at \(x\in M\) with radius \(r\geq 0\) is
\[B^{M}(x,r):=\left\{y\in M\mid d_{M,g}(x,y)\leq r\right\}.\]
Scalar curvature characterizes the rate at which the volume of a geodesic ball \(B^{M}(x,r)\) grows as \(r\) grows. Equation (1.1) gives the relationship between the scalar curvature \(S(x)\) at \(x\in M\) and the geodesic ball volume \(\operatorname{vol}(B^{M}(x,r))\) as \(r\to 0\). For example, if \(S(x)\) is negative (respectively, positive), then the volume of a small geodesic ball that is centered at \(x\) tends to be larger (respectively, smaller) than the volume of an \(n\)-dimensional Euclidean ball of the same radius.
In this paper, we estimate scalar curvature from point clouds that are sampled randomly from \(M\). Probability density functions \(\rho\colon M\to\mathbb{R}\) (for sampling points on \(M\)) are defined as follows. The induced _Riemannian volume form_\(dV\) is given in local coordinates by
\[dV=\sqrt{|g|}dx^{1}\wedge\cdots\wedge dx^{n}\,.\]
Equivalently, the volume form \(dV\) is defined to be the unique \(n\)-form on \(M\) that equals \(1\) on all positively-oriented orthonormal bases. The Riemannian volume form induces the _Riemannian volume measure_\(\mu\) on \(M\); the measure of a Borel subset \(A\) is
\[\mu(A)=\int_{A}dV\,.\]
A random point \(x\) that is sampled from \(M\) has _probability density function_\((\operatorname{pdf})\)\(\rho:M\to\mathbb{R}\) if
\[\mathbb{P}[x\in A]=\int_{A}\rho(y)dV\]
for all Borel subsets \(A\) and \(\mathbb{P}[x\in M]=1\). For example, the uniform pdf on \(M\) is \(\rho(x)\equiv\frac{1}{\operatorname{vol}(M)}\), where \(\operatorname{vol}(M):=\mu(M)\) is the volume of \(M\). For a more thorough introduction to statistics on Riemannian manifolds, see [15].
## 3. Estimating scalar curvature via geodesic ball-volume estimation
Suppose that we are given a _distance matrix_\(d_{X}\), which is an \(N\times N\) matrix whose \((i,j)\)th entry is the distance between \(x_{i}\) and \(x_{j}\) for points \(x_{i},x_{j}\in X\), where \(X\) is a point cloud such that \(|X|=N\). By a slight abuse of notation, we will write \(d_{X}(x,y)\) to denote the distance between points \(x\in X\) and \(y\in X\). We assume that:
1. \((X,d_{X})\) is a metric subspace of an unknown Riemannian manifold \((M,g)\) of unknown dimension \(n\) and
2. \(X\) is sampled randomly from an unknown probability density function \(\rho:M\to\mathbb{R}_{+}\).
Let \(d\) denote the geodesic distance on \(M\). Importantly, we do not assume that we are given coordinates for the point cloud \(X\); we assume only that we have the distance matrix \(d_{X}\). However, it is possible to begin with a point cloud \(X\) (instead of its distance matrix \(d_{X}\)), from which one can estimate geodesic distances using, for example, the graph-approximation technique of Tenenbaum et al. [2, 23].
We summarize our scalar-curvature estimation method in Figure 1. To estimate the scalar curvature \(S(x)\) at a point \(x\in X\), the idea of our approach is to estimate \(\operatorname{vol}(B^{M}(x,r))\) for a sequence of increasing \(r\) and then estimate \(S(x)\) by fitting a quadratic polynomial to the estimated ball-volume ratios \(\operatorname{vol}(B^{M}(x,r))/(v_{n}r^{n})\).
### Maximum-likelihood estimator of ball volume
For a given radius \(r\) and a point \(x\in X\), we estimate \(\operatorname{vol}(B^{M}(x,r))\) as follows. Let \(N\) be the number of points in \(X\), and let \(N[d_{X}](x,r)\) denote the number of points in \(B^{M}(x,r)\cap(X\setminus\{x\})\). That is,
\[N[d_{X}](x,r):=\left|\{y\in X\setminus\{x\}\mid d_{X}(x,y)\leq r\}\right|.\]
When the metric \(d_{X}\) is clear from context, we omit it from the notation and write \(N(x,r)\). Let \(\mu_{\rho}(x,r)\) denote the mean density within \(B^{M}(x,r)\). That is,
\[\mu_{\rho}(x,r):=\frac{1}{\operatorname{vol}(B^{M}(x,r))}\int_{z\in B^{M}(x,r) }\rho(z)dV\,,\]
where \(dV\) is the volume form on \(M\). When \(\rho(x)\equiv\rho\) is constant, \(\mu_{\rho}(x,r)=\rho\). In Section 3.3, we discuss a method to estimate \(\mu_{\rho}(x,r)\) empirically, without prior knowledge of \(\operatorname{vol}(B^{M}(x,r))\).
Our likelihood function for \(\operatorname{vol}(B^{M}(x,r))\) is
\[L(v) =\mathbb{P}[N(x,r)\mid\operatorname{vol}(B^{M}(x,r))=v]\] \[=\binom{N-1}{N(x,r)}\Big{(}\mu_{\rho}(x,r)v\Big{)}^{N(x,r)} \Big{(}1-\mu_{\rho}(x,r)v\Big{)}^{N-1-N(x,r)}\]
because the random variable \(N(x,r)\) is a binomial random variable with \(N-1\) trials and success probability \(\mu_{\rho}(x,r)\cdot\operatorname{vol}(B^{M}(x,r))\). Solving \(0=L^{\prime}(v)\), we find that the maximum-likelihood estimator is
\[v_{*}=\frac{N(x,r)}{(N-1)\mu_{\rho}(x,r)}\,. \tag{3.1}\]
The expectation of \(v_{*}\) is
\[\mathbb{E}[v_{*}]=\frac{\mathbb{E}[N(x,r)]}{(N-1)\mu_{\rho}(x,r)}=\operatorname {vol}(B^{M}(x,r)). \tag{3.2}\]
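To make the estimator concrete, the following minimal Python sketch computes \(N[d_{X}](x_{i},r)\) and the maximum-likelihood volume estimate \(v_{*}\) of equation (3.1) from a precomputed distance matrix. The function names are ours, and the mean density \(\mu_{\rho}(x,r)\) is assumed to be supplied externally; Section 3.3 describes how it is estimated in practice.

```python
import numpy as np

def ball_count(d_X, i, r):
    """N[d_X](x_i, r): the number of points other than x_i within distance r of x_i."""
    return int(np.sum(d_X[i] <= r)) - 1  # the "- 1" removes x_i itself (distance 0)

def ball_volume_mle(d_X, i, r, mean_density):
    """Maximum-likelihood ball-volume estimate v_* = N(x_i, r) / ((N - 1) * mu_rho(x_i, r))."""
    N = d_X.shape[0]
    return ball_count(d_X, i, r) / ((N - 1) * mean_density)
```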
### Dimension estimation
Our scalar-curvature estimation method requires an estimate \(\hat{n}\in\mathbb{N}\) of the manifold dimension \(n\); there are a wide variety of methods to do this -- see [6] for a review. One method to estimate dimension is the maximum-likelihood method of Levina and Bickel [11], which requires only the distance matrix \(d_{X}\) as input. (See Section 6.2 for details.) When the distance matrix \(d_{X}\) is not clear from context, we denote our dimension estimate by \(\hat{n}[d_{X}]\). We assume that \(\hat{n}=n\) in our theoretical results (Sections 4 and 5). In our numerical experiments (Section 6), we use [11] to calculate a dimension estimate \(\hat{n}\).
### Density estimation
In Equation (3.1), the mean density \(\mu_{\rho}(x,r)\) in the ball must be estimated empirically. To do so, we first empirically estimate the density at each point \(z\in X\). One method for doing so is kernel density estimation (KDE) on a manifold [14], which requires only the distance matrix \(d_{X}\) and an estimate \(\hat{n}\) of the manifold dimension as input.

Figure 1. The pipeline for our scalar-curvature estimation method.
**Remark 3.1**.: We denote a choice of density estimator by \(\hat{\rho}\), and we denote our density estimate at \(z\in X\) by \(\hat{\rho}[d_{X},\hat{n}](z)\). For example, in our numerical experiments in Section 6, the density estimator \(\hat{\rho}\) is a kernel density estimator with either a Gaussian or biweight kernel. If \(d_{X}\) and \(\hat{n}\) are clear from context, we omit them and write \(\hat{\rho}(z)\).
After we compute our pointwise-density estimates \(\hat{\rho}(z)\) for all \(z\in X\), we calculate an estimate \(\widehat{\mu_{\rho}}[\hat{\rho}](x,r)\) of the mean density \(\mu_{\rho}(x,r)\) within \(B^{M}(x,r)\). We define
\[\widehat{\mu_{\rho}}[\hat{\rho}](x,r):=\begin{cases}\left(\frac{1}{N(x,r)}\sum _{z\in B^{M}(x,r)\cap(X\setminus\{x\})}1/\hat{\rho}(z)\right)^{-1},&N(x,r)>0 \\ \hat{\rho}(x)\,,&N(x,r)=0\,.\end{cases} \tag{3.3}\]
We write \(\widehat{\mu_{\rho}}[\rho](x,r)\) when \(\hat{\rho}(x)=\rho(x)\) for all \(x\in X\).
Notably, our estimate \(\widehat{\mu_{\rho}}[\hat{\rho}](x,r)\) is not the sample mean of
\[\left\{\hat{\rho}(z)\mid z\in B^{M}(x,r)\cap(X\setminus\{x\})\right\}.\]
The sample mean \(\frac{1}{N(x,r)}\sum_{z}\hat{\rho}(z)\) is an overestimate of \(\mu_{\rho}(x,r)\) because points with high density are overrepresented in the sample \(B^{M}(x,r)\cap(X\setminus\{x\})\).
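The following Python sketch (an illustration of Equation (3.3); the interface is ours) computes \(\widehat{\mu_{\rho}}[\hat{\rho}](x,r)\) as a harmonic mean of the pointwise density estimates within the ball, with the fallback \(\hat{\rho}(x)\) for an empty ball.

```python
import numpy as np

def mean_density_estimate(d_X, rho_hat, i, r):
    """Estimate mu_rho(x_i, r) as in Eq. (3.3): harmonic mean of the pointwise
    density estimates within B(x_i, r), excluding x_i itself."""
    in_ball = d_X[i] <= r
    in_ball[i] = False               # exclude x_i
    if not np.any(in_ball):
        return float(rho_hat[i])     # N(x_i, r) = 0: fall back to rho_hat(x_i)
    return float(1.0 / np.mean(1.0 / rho_hat[in_ball]))
```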
**Remark 3.2**.: When \(r\) is small, we have \(\mu_{\rho}(x,r)\approx\rho(x)\approx\hat{\rho}(x)\). Indeed, one can show that \(\mu_{\rho}(x,r)\to\rho(x)\) as \(r\to 0\). However, in our numerical experiments in Section 6, our scalar-curvature estimation method sometimes requires us to estimate \(\mu_{\rho}(x,r)\) when \(r\) is not small. Informally, what we show in Lemma 3.3 is that \(1/\widehat{\mu_{\rho}}[\hat{\rho}](x,r)\) is a good approximation to \(1/\mu_{\rho}(x,r)\) even for large \(r\). (Estimating the maximum-likelihood estimator of Equation (3.1) requires us to estimate the reciprocal \(1/\mu_{\rho}(x,r)\).) In our experiments (Section 6), we observe significant empirical improvement from using Equation (3.3) to estimate \(\mu_{\rho}(x,r)\) instead of using \(\hat{\rho}(x)\) to estimate \(\mu_{\rho}(x,r)\). This observation holds even when the data is uniformly sampled because \(\widehat{\mu_{\rho}}[\hat{\rho}](x,r)\) averages the empirical densities (which may differ from the ground truth density) within the ball.
**Lemma 3.3**.: If \(X\) is a point cloud sampled from the pdf \(\rho:M\to\mathbb{R}_{+}\), then
\[\mathbb{E}\Big{[}\frac{1}{\widehat{\mu_{\rho}}[\rho](x,r)}\,\Big{|}\,N(x,r)>0 \Big{]}=\frac{1}{\mu_{\rho}(x,r)}\]
for all \(x\in X\) and \(r>0\).
Proof.: If \(r\) is sufficiently large so that \(N(x,r)>0\), then \(\frac{1}{\widehat{\mu_{\rho}}[\rho](x,r)}\) is the sample mean of \(1/\rho(z)\) for \(z\in B^{M}(x,r)\cap(X\setminus\{x\})\). Therefore,
\[\mathbb{E}\Big{[}\frac{1}{\widehat{\mu_{\rho}}[\rho](x,r)}\Big{]}=\mathbb{E} \Big{[}\frac{1}{\rho(z)}\Big{]}\,,\]
where \(z\) is a point that is conditioned to lie in \(B^{M}(x,r)\). The pdf for \(z\) is
\[\psi(z):=\frac{\rho(z)}{\int_{w\in B^{M}(x,r)}\rho(w)dV}\,. \tag{3.4}\]
Therefore,
\[\mathbb{E}\Big{[}\frac{1}{\rho(z)}\Big{]}=\int_{z\in B^{M}(x,r)}\frac{1}{\rho (z)}\psi(z)dV=\frac{\operatorname{vol}(B^{M}(x,r))}{\int_{w\in B^{M}(x,r)} \rho(w)dV}=\frac{1}{\mu_{\rho}(x,r)}\,. \tag{3.5}\]
### Empirical approximation of the maximum-likelihood estimator
For a given \(x\in X\) and radius \(r>0\), we define our estimate of \(\operatorname{vol}(B^{M}(x,r))\) to be
\[\widehat{\operatorname{vol}}[d_{X},\hat{\rho}](x,r):=\frac{N[d_{X}](x,r)}{(N-1) \widehat{\mu_{\rho}}[\hat{\rho}](x,r)}\,, \tag{3.6}\]
where \(\widehat{\mu_{\rho}}[\hat{\rho}](x,r)\) is defined as in Eq. (3.3). We write \(\widehat{\operatorname{vol}}[d_{X},\rho](x,r)\) when \(\hat{\rho}(x)=\rho(x)\) for all \(x\in X\).
The quantity \(\widehat{\operatorname{vol}}[d_{X},\hat{\rho}](x,r)\) is an approximation of the true maximum-likelihood estimator \(v_{*}\) (defined in equation (3.1)). An equivalent formula for \(\widehat{\operatorname{vol}}[d_{X},\hat{\rho}](x,r)\) is
\[\widehat{\operatorname{vol}}[d_{X},\hat{\rho}](x,r)=\frac{\sum_{z\in B^{M}(x,r)\cap(X\setminus\{x\})}\frac{1}{\hat{\rho}(z)}}{(N-1)}\,. \tag{3.7}\]
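In code, equation (3.7) amounts to a single masked sum; the sketch below (with names of our choosing) is equivalent to substituting the harmonic-mean estimate of Equation (3.3) into Equation (3.6).

```python
import numpy as np

def ball_volume_estimate(d_X, rho_hat, i, r):
    """Empirical ball-volume estimate of Eq. (3.7): the sum of 1/rho_hat over the
    points of B(x_i, r) other than x_i, divided by N - 1."""
    N = d_X.shape[0]
    in_ball = d_X[i] <= r
    in_ball[i] = False
    return float(np.sum(1.0 / rho_hat[in_ball]) / (N - 1))
```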
**Lemma 3.4**.: If \(X\) is a finite point cloud that is sampled from the pdf \(\rho:M\to\mathbb{R}_{+}\), then
\[\mathbb{E}\Big{[}\widehat{\operatorname{vol}}[d_{X},\rho](x,r)\Big{]}= \operatorname{vol}(B^{M}(x,r))\]
for all \(x\in X\) and \(r>0\).
Proof.: Let \(N\) be the number of points in \(X\). For all \(k\in\{0,\dots,N-1\}\),
\[\mathbb{E}\Big{[}\widehat{\operatorname{vol}}[d_{X},\rho](x,r)\Big{|}N(x,r)=k \Big{]}=\frac{k}{N-1}\mathbb{E}[1/\rho(z)] \tag{3.8}\]
by equation (3.7), where \(z\) is randomly drawn according to the pdf \(\psi(z)\) defined in equation (3.4). Substituting equation (3.5) into equation (3.8) yields
\[\mathbb{E}\Big{[}\widehat{\operatorname{vol}}[d_{X},\rho](x,r)\Big{|}N(x,r)=k \Big{]}=\frac{k}{(N-1)\mu_{\rho}(x,r)} \tag{3.9}\]
for all \(k\in\{0,\dots,N-1\}\). Therefore,
\[\mathbb{E}\Big{[}\widehat{\operatorname{vol}}[d_{X},\rho](x,r) \Big{]} =\sum_{k=0}^{N-1}\mathbb{E}\Big{[}\widehat{\operatorname{vol}}[d _{X},\rho](x,r)\Big{|}N(x,r)=k\Big{]}\cdot\mathbb{P}[N(x,r)=k]\] \[=\frac{1}{(N-1)\mu_{\rho}(x,r)}\sum_{k=0}^{N-1}k\cdot\mathbb{P}[N (x,r)=k]\] \[=\frac{\mathbb{E}[N(x,r)]}{(N-1)\mu_{\rho}(x,r)}\] \[=\operatorname{vol}(B^{M}(x,r))\,.\]
**Lemma 3.5**.: Let \(X\) be a point cloud that consists of \(N\) points that are sampled from the pdf \(\rho:M\to\mathbb{R}_{+}\). If \(M\) is compact, then there is a constant \(A>0\) that only depends on \(\rho\) and the Riemannian metric of \(M\) and satisfies
\[\operatorname{var}(\widehat{\operatorname{vol}}[d_{X},\rho](x,r))\leq Ar^{n}/N\]
for sufficiently large \(N\), sufficiently small \(r\), and all \(x\in X\).
Proof.: By Lemma A.2,
\[\operatorname{var}(\widehat{\operatorname{vol}}[d_{X},\rho](x,r))=\frac{ \operatorname{var}(1/\rho(z))\cdot\mu_{\rho}(x,r)\cdot\operatorname{vol}(B^{M}(x,r))}{(N-1)}+\frac{\operatorname{var}N(x,r)}{(N-1)^{2}\mu_{\rho}(x,r)^{2}}\,, \tag{3.10}\]
where \(z\in B^{M}(x,r)\) is a point chosen randomly from the pdf \(\psi(z)\) defined as in Eq. (3.4) and
\[\operatorname{var}(N(x,r))=(N-1)\mu_{\rho}(x,r)\operatorname{vol}(B^{M}(x,r))( 1-\mu_{\rho}(x,r)\operatorname{vol}(B^{M}(x,r)))\,.\]
Now we bound \(\operatorname{var}(1/\rho(z))\). Define
\[A(r):=\max_{x\in M,\,z\in B^{M}(x,r)}|\rho(z)-\rho(x)|\,. \tag{3.11}\]
The quantity \(A(r)\) exists because \(M\) is compact and \(\rho\) is continuous. We note that \(A(r)\to 0\) as \(r\to 0\). For the remainder of the proof, we assume that \(r\) is sufficiently small such that \(A(r)\leq\min(\rho)/2\). Because \(h(\rho)=1/\rho^{2}\) is convex and monotonically decreasing for \(\rho>0\), we have
\[\Big{|}\frac{1}{\rho(z)^{2}}-\frac{1}{\rho(x)^{2}}\Big{|}\leq|h^{\prime}(\min\{\rho(z),\rho(x)\})|\cdot A(r)\leq\frac{2A(r)}{(\rho(x)-A(r))^{3}}\leq\frac{16A(r)}{\min(\rho)^{3}}\]
for all \(z\in B^{M}(x,r)\). Therefore,
\[\Big{|}\mathbb{E}[1/\rho(z)^{2}]-1/\rho(x)^{2}\Big{|}\leq\frac{16A(r)}{\min( \rho)^{3}}\,.\]
Similarly,
\[\Big{|}\frac{1}{\mu_{\rho}(x,r)^{2}}-\frac{1}{\rho(x)^{2}}\Big{|}\leq\frac{16A (r)}{\min(\rho)^{3}}\]
because \(|\mu_{\rho}(x,r)-\rho(x)|\leq A(r)\). Therefore,
\[\operatorname{var}(1/\rho(z)) =\Big{|}\mathbb{E}\Big{[}\frac{1}{\rho(z)^{2}}\Big{]}-\frac{1}{ \mu_{\rho}(x,r)^{2}}\Big{|}\] \[\leq\Big{|}\mathbb{E}\Big{[}\frac{1}{\rho(z)^{2}}\Big{]}-\frac{1 }{\rho(x)^{2}}\Big{|}+\Big{|}\frac{1}{\mu_{\rho}(x,r)^{2}}-\frac{1}{\rho(x)^{ 2}}\Big{|} \tag{3.12}\] \[\leq\frac{32A(r)}{\min(\rho)^{3}}\,,\]
which implies that
\[\operatorname{var}(1/\rho(z))\cdot\frac{\mu_{\rho}(x,r)\cdot\operatorname{vol} (B^{M}(x,r))}{(N-1)}\leq\frac{32\cdot A(r)\max(\rho)\cdot\operatorname{vol}(B ^{M}(x,r))}{\min(\rho)^{3}(N-1)}\]
for sufficiently small \(r\) such that \(A(r)<\min(\rho)/2\). By Lemma A.1, there is a constant \(B^{\prime}>0\) such that \(\operatorname{vol}(B^{M}(x,r))\leq B^{\prime}r^{n}\) for all \(x\) and sufficiently small \(r\). Additionally, we have \(A(r)<1\) for sufficiently small \(r\), so
\[\operatorname{var}(1/\rho(z))\cdot\frac{\mu_{\rho}(x,r)\cdot\operatorname{vol }(B^{M}(x,r))}{(N-1)}\leq\frac{32B\cdot\max(\rho)}{\min(\rho)^{3}}\cdot\frac{ r^{n}}{N} \tag{3.13}\]
for sufficiently large \(N\), sufficiently small \(r\), and some constant \(B>0\). Lastly, we bound \(\frac{\mathrm{var}N(x,r)}{(N-1)^{2}\mu_{\rho}(x,r)^{2}}\). We have
\[\frac{\mathrm{var}N(x,r)}{(N-1)^{2}\mu_{\rho}(x,r)^{2}} =\frac{\mathrm{vol}(B^{M}(x,r))(1-\mu_{\rho}(x,r)\mathrm{vol}(B^{M}(x,r)))}{(N-1)\mu_{\rho}(x,r)}\] \[\leq\frac{\mathrm{vol}(B^{M}(x,r))}{(N-1)\mu_{\rho}(x,r)} \tag{3.14}\] \[\leq\frac{B}{\min(\rho)}\cdot\frac{r^{n}}{N}\,,\]
where \(B>0\) is the same constant as earlier in the proof. Substituting equations (3.13) and (3.14) into equation (3.10) completes the proof.
### Fitting a quadratic curve
For radius \(r>0\), let \(y(x,r)\) and \(\hat{y}[d_{X},\hat{\rho},\hat{n}](x,r)\) denote the actual and estimated ball-volume ratios, respectively, for a ball of radius \(r\) that is centered at a fixed \(x\in X\). That is, we define
\[y(x,r):=\frac{\mathrm{vol}(B^{M}(x,r))}{v_{n}r^{n}}\,, \tag{3.15}\] \[\hat{y}[d_{X},\hat{\rho},\hat{n}](x,r):=\frac{\widehat{\mathrm{vol}}[d_{X},\hat{\rho}](x,r)}{v_{n}r^{\hat{n}}}\,, \tag{3.16}\]
where \(\widehat{\mathrm{vol}}[d_{X},\hat{\rho}](x,r)\) is defined as in Eq. (3.6). When \(\hat{\rho}(x)=\rho(x)\) for all \(x\in X\), we write \(\hat{y}[d_{X},\rho,\hat{n}](x,r)\). When \(d_{X}\), \(\hat{\rho}\), or \(\hat{n}\) are clear from context, we omit them from our notation.
Let \(r_{\min}\) and \(r_{\max}\), respectively, be the minimum and maximum ball radius that we consider, where \(0\leq r_{\min}<r_{\max}\). These are hyperparameters that must be set by a user. Let \(r_{0}:=r_{\min}<r_{1}<\cdots<r_{m}:=r_{\max}\) be a monotonically increasing sequence, which is also set by a user. These are the radius values at which we estimate geodesic ball volumes by empirically approximating the maximum-likelihood estimator, as in Section 3.4. We allow any choice of sequence \(\{r_{j}\}_{j=0}^{m}\), although we study only two possible choices in this paper:
1. **Equal spacing:** The sequence is evenly spaced with spacing \(\Delta r\). This is the choice that we make in Theorems 4.1 and 5.3.
2. **Nearest-neighbor distance:** In our numerical experiments, we allow \(\{r_{j}\}\) to depend on \(x\) and set \(r_{j}\) to be equal to the distance from \(x\) to its \(j\)th nearest neighbor.
We define \(C(x)\) to be the coefficient such that \(1+C(x)r^{2}\) is the "best-fit" quadratic curve to the curve \(y(x,r)\) for \(r\in[r_{\min},r_{\max}]\). More precisely, we define
\[C(x):=\operatorname*{arg\,min}_{c\in\mathbb{R}}\bigl{\|}y(x,r)-(1+cr^{2}) \bigr{\|}_{L^{2}([r_{\min},r_{\max}])}\,\,.\]
It is standard that
\[C(x)=\frac{\int_{r_{\min}}^{r_{\max}}r^{2}[y(x,r)-1]dr}{\frac{1}{5}(r_{\max}^{ 5}-r_{\min}^{5})}\,.\]
We define
\[\hat{C}[d_{X},\hat{\rho},\hat{n}](x):=\frac{\sum_{i=1}^{m}r_{i}^{2}(\hat{y}[d _{X},\hat{\rho},\hat{n}](x,r_{i})-1)(r_{i}-r_{i-1})}{\frac{1}{5}(r_{\max}^{5} -r_{\min}^{5})} \tag{3.17}\]
to be an estimate of \(C(x)\). We omit \(d_{X}\), \(\hat{\rho}\), and \(\hat{n}\) from our notation when they are clear from context.
### Our scalar curvature estimate
Putting together the pieces of Sections 3.1-3.5, we now define our scalar-curvature estimate.
**Definition 3.6**.: Let \(d_{X}\) be a distance matrix, let \(\hat{\rho}\) be a density estimator, and let \(\hat{n}\) be a dimension estimate. Given hyperparameters \(r_{\min}\geq 0\) (the minimum ball radius that we consider), \(r_{\max}>r_{\min}\) (the maximum ball radius that we consider), and \(\{r_{j}\}_{j=0}^{m}\) (the sequence of ball radii that we consider, where \(r_{0}=r_{\min}\) and \(r_{m}=r_{\max}\)), our estimate of the scalar curvature at \(x\) is
\[\hat{S}[d_{X},\hat{\rho},\hat{n}](x):=-6(\hat{n}+2)\hat{C}[d_{X},\hat{\rho}, \hat{n}](x)\,,\]
where \(\hat{C}[d_{X},\hat{\rho},\hat{n}](x)\) is defined in equation (3.17).
When the distance matrix \(d_{X}\), density estimator \(\hat{\rho}\), and dimension estimate \(\hat{n}\) are clear from context, we omit them and write \(\hat{S}(x)\).
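A minimal Python sketch of these last two steps is given below; it assumes that the ball-volume ratio estimates \(\hat{y}(x,r_{j})\) have already been computed as in Sections 3.3-3.5, and the function names are illustrative rather than part of the method's specification.

```python
import numpy as np

def quadratic_coefficient_estimate(radii, y_hat):
    """C_hat(x) from Eq. (3.17).

    radii: the full sequence r_0 = r_min < r_1 < ... < r_m = r_max.
    y_hat: estimated ball-volume ratios y_hat(x, r_j) for j = 1, ..., m.
    """
    radii = np.asarray(radii, dtype=float)
    y_hat = np.asarray(y_hat, dtype=float)
    r_min, r_max = radii[0], radii[-1]
    r = radii[1:]            # r_1, ..., r_m
    dr = np.diff(radii)      # r_j - r_{j-1} for j = 1, ..., m
    return float(np.sum(r ** 2 * (y_hat - 1.0) * dr) / ((r_max ** 5 - r_min ** 5) / 5.0))

def scalar_curvature_estimate(radii, y_hat, n_hat):
    """S_hat(x) = -6 (n_hat + 2) C_hat(x), as in Definition 3.6."""
    return -6.0 * (n_hat + 2) * quadratic_coefficient_estimate(radii, y_hat)
```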
### Computational complexity
In our numerical experiments (Section 6), we find that setting \(r_{i}\) equal to the distance to the \(i\)th nearest neighbor results in an estimate that is both accurate and computationally efficient. In this case,
\[\widehat{\operatorname{vol}}[d_{X},\hat{\rho}](x,r_{i})=\frac{1}{N-1}\sum_{j=1}^{i}\frac{1}{\hat{\rho}(z_{j})}\,,\]
where \(z_{j}\in X\) is the \(j\)th nearest neighbor of \(x\). We precompute the pointwise density estimates \(\hat{\rho}(z)\) for all \(z\in X\). For every \(x\in X\), we sort \(\{d(x,z)\mid z\in X\}\) to compute its nearest neighbors \(z_{1},z_{2},\ldots\) and its distance to those neighbors. (For very large data sets approximate nearest-neighbor algorithms could be used.) Given these quantities, the set \(\{\widehat{\operatorname{vol}}[d_{X},\hat{\rho}](x,r_{i})\}_{i=1}^{m}\) can be computed in \(\mathcal{O}(m)\) time for any \(m\) because
\[\widehat{\operatorname{vol}}[d_{X},\hat{\rho}](x,r_{i+1})=\widehat{ \operatorname{vol}}[d_{X},\hat{\rho}](x,r_{i})+\frac{1}{(N-1)\cdot\hat{\rho}(z _{i+1})}\,.\]
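A sketch of this incremental computation in Python (the function name and return convention are ours) is:

```python
import numpy as np

def ball_volume_estimates_nn(d_X, rho_hat, i):
    """All vol_hat(x_i, r_j), where r_j is the distance from x_i to its j-th nearest
    neighbor, computed via a cumulative sum (the O(m) update above)."""
    N = d_X.shape[0]
    order = np.argsort(d_X[i])[1:]       # neighbors sorted by distance; drops x_i itself
    radii = d_X[i][order]                # r_1 <= r_2 <= ... <= r_{N-1}
    vol_hat = np.cumsum(1.0 / rho_hat[order]) / (N - 1)
    return radii, vol_hat
```

The returned radii and volume estimates can then be restricted to \([r_{\min},r_{\max}]\), converted to ratio estimates \(\hat{y}\), and passed to the quadratic fit of Section 3.5.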
## 4. Stability
Most real-world data sets have errors and/or noise, which means that the given distances \(d_{X}\) will differ from the true geodesic distances \(d\). Moreover, when the geodesic distances are estimated from a point cloud, errors are expected even if there is no noise in the data (i.e., the point cloud) itself. Density estimation introduces additional errors. Theorem 4.1 below says that our scalar curvature estimate \(\hat{S}\) is stable with respect to errors in estimates of the metric and the density. This allows us to accurately estimate scalar curvature in real-world data or in synthetic point-cloud data in which distances are estimated.
Throughout this section, we consider a compact \(n\)-dimensional Riemannian manifold \(M\) with geodesic distance \(d\) and a sequence \(\{X_{k}\}_{k=1}^{\infty}\) of point clouds that are sampled randomly from a pdf \(\rho\colon M\to\mathbb{R}_{+}\). We assume that \(|X_{k}|\to\infty\) as \(k\to\infty\). Let \(d_{X_{k}}\) denote the geodesic distance matrix for \(X_{k}\). By a slight abuse of notation, let \(d_{X_{k}}(x,y)\) denote the geodesic distance between points \(x\in X_{k}\) and \(y\in X_{k}\). We also consider sequences \(\{r_{\min,k}\}_{k=1}^{\infty}\), \(\{r_{\max,k}\}_{k=1}^{\infty}\), and \(\{(\Delta r)_{k}\}_{k=1}^{\infty}\) of hyperparameter values. The \(k\)th radius sequence that we consider is \(\{r_{j,k}\}_{j=0}^{m_{k}}\), where \(r_{j,k}:=r_{\min,k}+j(\Delta r)_{k}\). When \(k\) is clear from context, we omit it and write \(r_{j}\) instead of \(r_{j,k}\). We require that
1. \(0<r_{\min,k}<r_{\max,k}\) for all \(k\),
2. the number \(m_{k}:=(r_{\max,k}-r_{\min,k})/(\Delta r)_{k}\) of radial steps is a positive integer for all \(k\), and
3. \(r_{\min,\,k}\to 0\,,\,r_{\max,\,k}\to 0\,,\) and \((\Delta r)_{k}\to 0\) as \(k\to\infty\).
**Theorem 4.1** (Stability).: _For each \(k\), suppose that \(\widehat{d_{X_{k}}}\) is a metric on \(X_{k}\) such that_
\[\delta_{k}:=\max_{x,x^{\prime}\in X_{k}}|\widehat{d_{X_{k}}}(x,x^{\prime})-d(x,x^{\prime})|\to 0\qquad\text{as }k\to\infty\,.\]
_Suppose that \(\hat{\rho}\) is a density estimator such that_
\[\eta_{k}:=\max_{x\in X_{k}}\Big{|}\hat{\rho}[\widehat{d_{X_{k}}}](x)-\rho(x) \Big{|}\to 0\qquad\text{as }k\to\infty\,,\]
_and suppose that \(\hat{n}[\widehat{d_{X_{k}}}]=\hat{n}[d_{X_{k}}]=n\) for sufficiently large \(k\). If the hyperparameter value sequences satisfy_
1. \(\max_{j}\frac{A(2r_{j})}{r_{\min,k}^{n}}\to 0\) _as_ \(k\to\infty\) _(where_ \(A(r)\) _is defined in equation (_3.11_)),_
2. \(\eta_{k}/(r_{\min,\,k}+(\Delta r)_{k})^{n+2/3}\to 0\) _as_ \(k\to\infty\) _,_
3. \(r_{\min,\,k}+(\Delta r)_{k}>\delta_{k}\) _for sufficiently large_ \(k\) _,_
4. \(|X_{k}|(\Delta r)_{k}(r_{\min,\,k}+(\Delta r)_{k}-\delta_{k})^{n}\to\infty\) _as_ \(k\to\infty\) _,_
5. \(r_{\min,\,k}/r_{\max,\,k}^{3}\to 0\) _as_ \(k\to\infty\) _,_
6. \(((\Delta r)_{k}+\delta_{k})/r_{\max,\,k}^{3}\to 0\) _as_ \(k\to\infty\) _, and_
7. \(((\Delta r)_{k}+\delta_{k})/[(r_{\min,\,k}+(\Delta r)_{k}-\delta_{k})^{n+1}r_{ \max,\,k}^{2}]\to 0\) _as_ \(k\to\infty\) _,_
_then \(|\hat{S}[\widehat{d_{X_{k}}},\hat{\rho},\hat{n}](x_{k})-\hat{S}[d_{X_{k}},\rho,\hat{n}](x_{k})|\to 0\) in probability as \(k\to\infty\), where \(\{x_{k}\}\) is any sequence of points such that \(x_{k}\in X_{k}\)._
**Remark 4.2**.: The conditions above on the hyperparameter value sequences are complex. The following is a set of simpler conditions that collectively imply the conditions of Theorem 4.1:
1. \(\frac{A(r)}{r^{n+2}}\to 0\) as \(r\to 0\,,\)
2. \(\eta_{k}/r_{\min,\,k}^{n+2/3}\to 0\) as \(k\to\infty\,,\)
3. \(\delta_{k}=\mathcal{O}(r_{\min,\,k}^{n+2})\) as \(k\to\infty\,,\)
4. \(|X_{k}|(\Delta r)_{k}(r_{\min,\,k}^{n})\to\infty\) as \(k\to\infty\,,\)
5. \((r_{\min,\,k})/r_{\max,\,k}^{3}\to 0\) as \(k\to\infty\,,\) and
6. \((\Delta r)_{k}/r_{\min,\,k}^{n+5/3}\to 0\) as \(k\to\infty\,.\)
Proof of Theorem 4.1.: For any \(x\in X_{k}\), we have
\[\Big{|}\hat{S}[\widehat{d_{X_{k}}},\hat{\rho}](x)-\hat{S}[d_{X_{k}},\rho](x)\Big{|}=6(n+2)\Big{|}\hat{C}[\widehat{d_{X_{k}}},\hat{\rho}](x)-\hat{C}[d_{X_{k}},\rho](x)\Big{|}\,.\]
The theorem follows from Lemma A.5, which shows that
\[|\hat{C}[\widehat{d_{X_{k}}},\hat{\rho}](x_{k})-\hat{C}[d_{X_{k}},\rho](x_{k} )|\to 0\]
in probability as \(k\to\infty\).
## 5. Convergence
In this section, we show that as the number of samples increases, our estimator converges to the underlying scalar curvature of \(M\). Informally, what we show in Theorem 5.3 is that (1) as the number of points increases, (2) as our given metric data becomes more accurate, and (3) as our density estimations become more accurate, our scalar curvature estimate \(\hat{S}(x)\) converges to the true scalar curvature \(S(x)\). Throughout this section, the symbols \(M\), \(d\),
\(n\), \(\{X_{k}\}_{k=1}^{\infty}\), \(d_{X_{k}}\), \(\rho\), \(\{r_{\min,\,k}\}_{k=1}^{\infty}\), \(\{r_{\max,\,k}\}_{k=1}^{\infty}\), \(\{(\Delta r)_{k}\}_{k=1}^{\infty}\), \(m_{k}\) and \(\{r_{j}\}_{j=1}^{m_{k}}\) are defined as in Section 4.
Theorem 5.3 is an immediate consequence of Theorem 4.1 (stability) above and Proposition 5.2 below; the latter states that if we are given perfect metric data and the exact density, then our scalar curvature estimate \(\hat{S}(x)\) converges to \(S(x)\) as the number of points increases. The challenge is that we must take \(r_{\max,\,k}\to 0\) for equation (1.1) to hold, but (as we show in Proposition 5.1 below) the mean squared error of the ball-ratio estimate \(\hat{y}[d_{X},\rho](x,r)\) grows as \(\mathcal{O}(1/(Nr^{n}))\) as \(r\to 0\), where \(N\) is the number of points in the point cloud.
**Proposition 5.1**.: Let \(X\) be a point cloud that consists of \(N\) points that are drawn from the pdf \(\rho\colon M\to\mathbb{R}_{+}\). Then there is a constant \(A>0\) that only depends on \(\rho\) and the Riemannian metric of \(M\) such that
\[\operatorname{MSE}(\hat{y}[d_{X},\rho](x,r))=\operatorname{var}(\hat{y}[d_{X },\rho](x,r))\leq\frac{A}{Nr^{n}}\]
for sufficiently large \(N\), sufficiently small \(r\), and all \(x\in X\).
Proof.: By Lemma 3.4,
\[\operatorname{MSE}(\hat{y}[d_{X},\rho](x,r))=\operatorname{var}(\hat{y}[d_{X },\rho](x,r))\,.\]
By Lemma 3.5, there is a constant \(A^{\prime}>0\) such that
\[\operatorname{var}(\widehat{\operatorname{vol}}[d_{X},\rho](x,r))\leq\frac{A^ {\prime}r^{n}}{N}\]
for sufficiently large \(N\), sufficiently small \(r\), and all \(x\in X\). Therefore,
\[\operatorname{var}(\hat{y}[d_{X},\rho](x,r))=\frac{\operatorname{var}(\widehat{\operatorname{vol}}[d_{X},\rho](x,r))}{v_{n}^{2}r^{2n}}\leq\frac{A^{\prime}}{v_{n}^{2}Nr^{n}}\]
for sufficiently large \(N\), sufficiently small \(r\), and all \(x\in X\).
**Proposition 5.2**.: Suppose that the estimated dimension \(\hat{n}[d_{X_{k}}]=n\) for sufficiently large \(k\). If the hyperparameter value sequences satisfy
1. \((\Delta r)_{k}/r_{\max,\,k}^{3}\to 0\) as \(k\to\infty\),
2. \(|X_{k}|(r_{\min,\,k}+(\Delta r)_{k})^{n}\to\infty\) as \(k\to\infty\), and
3. \(r_{\min,\,k}/r_{\max,\,k}^{3}\to 0\) as \(k\to\infty\),
then \(|\hat{S}[d_{X_{k}},\rho,\hat{n}](x_{k})-S(x_{k})|\to 0\) as \(k\to\infty\), where \(\{x_{k}\}\) is any sequence of points such that \(x_{k}\in X_{k}\).
Proof.: Let \(x\) be any point in \(X_{k}\). By Eq. (1.1), we have
\[C(x)=\frac{\int_{r_{\min,\,k}}^{r_{\max,\,k}}\Big{[}-\frac{S(x)}{6(n+2)}r^{4}+ \mathcal{O}(r^{6})\Big{]}dr}{\frac{1}{5}(r_{\max,\,k}^{5}-r_{\min,\,k}^{5})}=- \frac{S(x)}{6(n+2)}+\mathcal{O}(r_{\max,\,k}^{2})\,.\]
The absolute difference \(|\hat{S}[d_{X_{k}},\rho](x)-S(x)|\) is
\[|\hat{S}[d_{X_{k}},\rho](x)-S(x)|=6(n+2)|\hat{C}[d_{X_{k}},\rho](x)-C(x)|+ \mathcal{O}(r_{\max,\,k}^{2})\,.\]
Applying Lemma A.6, which controls \(|\hat{C}[d_{X_{k}},\rho](x)-C(x)|\), yields the desired result.
Theorem 5.3 now follows from Theorem 4.1 and Proposition 5.2.
**Theorem 5.3**.: _For each \(k\), suppose that \(\widehat{d_{X_{k}}}\) is a metric on \(X_{k}\) such that_
\[\delta_{k}:=\max_{x,x^{\prime}\in X_{k}}|\widehat{d_{X_{k}}}(x,x^{\prime})-d(x,x ^{\prime})|\to 0\qquad\text{as }k\to\infty\,.\]
_Suppose that \(\hat{\rho}\) is a density estimator such that_
\[\eta_{k}:=\max_{x\in X_{k}}\Big{|}\hat{\rho}[\widehat{d_{X_{k}}}](x)-\rho(x)\Big{|}\to 0\qquad\text{as }k\to\infty\,.\]
_Suppose that \(\hat{n}[\widehat{d_{X_{k}}}]=\hat{n}[d_{X_{k}}]=n\) for sufficiently large \(k\). If the hyperparameter value sequences satisfy_
1. \(\max_{j}\frac{A(2r_{j})}{r_{j}^{n}r_{\max,k}^{2}}\to 0\) _as_ \(k\to\infty\) _(where_ \(A(r)\) _is defined in equation (_3.11_)),_
2. \(\eta_{k}/(r_{\min,k}+(\Delta r)_{k})^{n+2/3}\to 0\) _as_ \(k\to\infty\) _,_
3. \(r_{\min,k}+(\Delta r)_{k}>\delta_{k}\) _for sufficiently large_ \(k\) _,_
4. \(|X_{k}|(\Delta r)_{k}(r_{\min,k}+(\Delta r)_{k}-\delta_{k})^{n}\to\infty\) _as_ \(k\to\infty\) _,_
5. \(r_{\min,k}/r_{\max,k}^{3}\to 0\) _as_ \(k\to\infty\) _,_
6. \(((\Delta r)_{k}+\delta_{k})/r_{\max,k}^{3}\to 0\) _as_ \(k\to\infty\) _, and_
7. \(((\Delta r)_{k}+\delta_{k})/[(r_{\min,k}+(\Delta r)_{k}-\delta_{k})^{n+1}r_{\max,k}^{2}]\to 0\) _as_ \(k\to\infty\)_,_
_then \(|\hat{S}[\widehat{d_{X_{k}}},\hat{\rho},\hat{n}](x_{k})-S(x_{k})|\to 0\) in probability as \(k\to\infty\), where \(\{x_{k}\}\) is any sequence of points such that \(x_{k}\in X_{k}\)._
**Remark 5.4**.: The simpler set of conditions from Remark 4.2 collectively implies the conditions of Theorem 5.3.
## 6. Numerical Experiments
### Data sets
We generate synthetic data by sampling uniformly at random from manifolds with known scalar curvature.
First, we sample \(N=10^{4}\) points each from three constant-curvature surfaces:
1. A disk in the Euclidean plane with radius \(2\). The scalar curvature is \(S(x)\equiv 0\).
2. A unit \(2\)-sphere. The scalar curvature is \(S(x)\equiv 2\).
3. A disk in the hyperbolic plane with hyperbolic radius \(2\). The scalar curvature is \(S(x)\equiv-2\).
For the last of these, we use the Poincaré disk model. Notably, the points that we sample from the hyperbolic plane are not embedded in Euclidean space, which means that it is not possible to use the scalar-curvature estimation method of [22]. To avoid boundary effects, we only estimate curvature at points within the unit disk in the Euclidean sample and within hyperbolic radius \(1\) in the hyperbolic sample. Additionally, we sample point clouds from \(S^{2}\) with noise. For \(\sigma\in\{.001,.003,.01,.03\}\), we sample \(N=10^{4}\) points from \(S^{2}\) and add isotropic Gaussian noise with standard deviation \(\sigma\).
Next, we sample point clouds from several other manifolds. We sample \(N=10^{4}\) points each from the higher-dimensional unit spheres \(S^{n}\) for \(n=3,5,\) and \(7\). Lastly, we sample one point cloud each from two surfaces with non-constant scalar curvature:
1. A \(2\)-torus. We sample \(N=10^{4}\) points from a \(2\)-torus with parameters \(r=1\), \(R=2\).
2. A one-sheet hyperboloid. The points \((x,y,z)\in\mathbb{R}^{3}\) are given by the equations \[x =2\sqrt{1+u^{2}}\cos(\theta)\,,\] \[y =2\sqrt{1+u^{2}}\sin(\theta)\,,\] \[z =u\] for \(u\in\mathbb{R}\) and \(\theta\in[0,2\pi)\). We sample points uniformly at random from the subset of the hyperboloid such that \(|z|\leq 2\) until we have \(N=10^{4}\) points within the subset such that \(|z|\leq 1\). To avoid boundary effects, we only estimate curvature at points on the hyperboloid such that \(|z|\leq 1\).
### Dimension estimation
To estimate dimension, we use the maximum-likelihood method of Levina and Bickel [11]. Our estimate of the dimension of a point cloud \(X\) is the nearest integer \(\hat{n}\) to
\[\frac{1}{k_{2}-k_{1}+1}\sum_{k=k_{1}}^{k_{2}}\hat{n}_{k}\,,\]
where \(k_{1}\) and \(k_{2}\) are hyperparameters and
\[\hat{n}_{k} :=\frac{1}{N}\sum_{i=1}^{N}\hat{n}_{k}(x_{i})\,,\] \[\hat{n}_{k}(x_{i}) :=\Big{[}\frac{1}{k-1}\sum_{j=1}^{k-1}\log\Big{(}\frac{T_{k}(x_{i} )}{T_{j}(x_{i})}\Big{)}\Big{]}^{-1}\,,\]
where \(T_{j}(x_{i})\) is the distance from \(x_{i}\) to its \(j\)th nearest neighbor in \(X\). For all data sets, we set \(k_{1}=20\) and calculate \(\hat{n}\) for \(k_{2}\in\{30,\ldots,100\}\). We obtain \(\hat{n}=n\), where \(n\) is the ground-truth dimension, for all data sets and all choices of \(k_{2}\).
We make one modification to [11], which is that instead of using Euclidean distance to measure distances to nearest neighbors, as was done in [11], we use geodesic distance.1 This choice reduces overall computation time because computing geodesic nearest-neighbor distances is also part of our scalar-curvature estimation pipeline. In addition, using geodesic distance improves the accuracy of the approximations that were made in [11] and allows us to estimate the dimension of our Poincare-disk data, which is not embedded in Euclidean space.
Footnote 1: In cases where we can calculate both exact and estimated geodesic distances, we use both; otherwise, we use whichever is available. For \(S^{2}\), \(S^{3}\), \(S^{5}\), \(S^{7}\), and the Euclidean disk, we possess both exact and estimated geodesic distances. For the Poincaré disk, we have only exact geodesic distances. For all other data sets, we have only estimated geodesic distances.
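For reference, a compact Python sketch of this estimator (vectorized over points; the function name is ours) is given below. It takes only a distance matrix as input, so either Euclidean or geodesic distances can be supplied.

```python
import numpy as np

def mle_dimension_estimate(d_X, k1, k2):
    """Levina-Bickel maximum-likelihood dimension estimate, averaged over k = k1, ..., k2."""
    # T[i, j] = distance from x_i to its (j+1)-th nearest neighbor.
    T = np.sort(d_X, axis=1)[:, 1:k2 + 1]
    n_hat_k = []
    for k in range(k1, k2 + 1):
        # n_hat_k(x_i) = [ (1/(k-1)) * sum_{j<k} log(T_k(x_i) / T_j(x_i)) ]^{-1}
        logs = np.log(T[:, k - 1][:, None] / T[:, :k - 1])
        n_hat_k.append(np.mean(1.0 / np.mean(logs, axis=1)))
    return int(round(np.mean(n_hat_k)))
```

Calling it with \(k_{1}=20\) and \(k_{2}\) between \(30\) and \(100\) corresponds to the settings described above.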
### Density estimation
We use kernel density estimation to obtain pointwise estimates of density, using the dimension estimates obtained in Section 6.2. We test two choices of kernel: (1) a Gaussian kernel because it is a very common choice for density estimation and (2) a biweight kernel because it is compactly supported. As input, the kernel function takes geodesic distances (either exact or estimated), rather than Euclidean distances.
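As one possible realization of this step, the sketch below computes leave-one-out Gaussian-kernel density estimates directly from a distance matrix. It is a simplified stand-in rather than the exact manifold KDE of [14]: in particular, the Euclidean normalizing constant \((2\pi h^{2})^{\hat{n}/2}\) and the bandwidth handling are assumptions of this sketch.

```python
import math
import numpy as np

def gaussian_kde_from_distances(d_X, n_hat, bandwidth):
    """Leave-one-out Gaussian kernel density estimates from pairwise distances.

    rho_hat(x_i) ~ (1 / ((N - 1) * (2*pi*h^2)^(n_hat/2))) * sum_{j != i} exp(-d(x_i, x_j)^2 / (2*h^2)).
    """
    N = d_X.shape[0]
    K = np.exp(-np.asarray(d_X, dtype=float) ** 2 / (2.0 * bandwidth ** 2))
    np.fill_diagonal(K, 0.0)  # leave-one-out: exclude the point itself
    norm = (N - 1) * (2.0 * math.pi * bandwidth ** 2) ** (n_hat / 2)
    return K.sum(axis=1) / norm
```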
### Geodesic-distance estimation
On the spheres, the Euclidean disk, the torus, and the hyperboloid, we estimate pairwise geodesic distances using the method of Tenenbaum et al. [2, 23]. For each point cloud, we construct the \(k\)-nearest neighbor graph \(G\) with \(k=20\) for \(n=2\), with \(k=50\) for \(n=3\), with \(k=100\) for \(n=5\), and with \(k=200\) for \(n=7\). Edge weights are Euclidean distances. Our estimation of the geodesic distance between points \(x_{1}\) and \(x_{2}\) is the length of a shortest weighted path in \(G\).
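A minimal sketch of this estimation step, using off-the-shelf routines from scikit-learn and SciPy (the library choice and function name are ours), is:

```python
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

def estimated_geodesic_distances(points, k):
    """Estimate geodesic distances as shortest-path lengths in the k-nearest-neighbor graph.

    points: (N, d) array of Euclidean coordinates; edge weights are Euclidean distances.
    """
    knn = kneighbors_graph(points, n_neighbors=k, mode="distance")
    # directed=False lets a path traverse an edge in either direction, so the
    # (asymmetric) k-NN graph does not need to be symmetrized explicitly.
    return shortest_path(knn, method="D", directed=False)
```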
### Hyperparameter choices
Our method requires a choice of minimum ball radius \(r_{\min}\), maximum ball radius \(r_{\max}\), and radius sequence \(\{r_{j}\}_{j=0}^{m}\) such that \(r_{0}=r_{\min}\) and \(r_{m}=r_{\max}\). For a given point \(x\) in a data set, we set \(r_{i}\) equal to the distance from \(x\) to its \(i\)th nearest neighbor (as measured by the given distance matrix \(d_{X}\)), for the subset of neighbors such that \(r_{\min}\leq r_{i}\leq r_{\max}\). We set \(r_{\min}=0\) for all data sets.
Our choice of \(r_{\max}\) differs across data sets because the scales and sampling densities are different in different data sets. For the spheres (including the point clouds with noise), we set \(r_{\max}=\pi/2\). For the Euclidean and Poincaré disks, we set \(r_{\max}=1\). For the torus, we set \(r_{\max}=\pi\). For the hyperboloid, we set \(r_{\max}=2\). These values were chosen to minimize the amount of noise in our curvature estimation results and to ensure that our geodesic balls \(B^{M}(x,r)\) do not intersect the boundary of the manifold \(M\).
### Results
First, we apply our method to our constant-curvature data sets. For the two surfaces that are embedded into Euclidean space (\(S^{2}\) and the Euclidean disk), we test our method in two different ways. First, we use the exact geodesic distances for our distance matrix. Second, we estimate geodesic distances from the point clouds. In Figure 2, we show our results.
We next test our method on the point clouds that are sampled from higher-dimensional spheres. Again, we test our method in two scenarios: (1) given as input exact geodesic distances and (2) using estimated geodesic distances from the point clouds. In early experiments, we found that on the highest-dimensional spheres (\(n\geq 5\)), using a biweight kernel to estimate density led to significantly better performance than using a Gaussian kernel, so we use a biweight kernel for density estimation. In Figure 3, we show our results. Unexpectedly, we find in Figure 3(A) that scalar curvature is systematically underestimated (although still reasonably accurate) when we start with the exact geodesic distances. In both experiments, the accuracy of our estimates decreases as the dimension \(n\) increases, but the performance is comparable to that in [22]. The main reason that scalar curvature is more difficult for us to estimate in higher dimensions is that the mean squared error in our ball-ratio estimates increases exponentially in \(n\) (see Proposition 5.1). Another reason is that the accuracy of geodesic-distance estimation decreases as \(n\) increases and \(N\) stays constant. Typically, the number of points \(N\) must scale exponentially with \(n\) to maintain the same "resolution" of the manifold, so it is unsurprising that our scalar curvature estimates become less accurate as \(n\) increases for fixed \(N\).
Figure 2. Histograms for our scalar-curvature estimates on three surfaces of constant curvature, given (A–B) exact geodesic distances and (C–D) point clouds, from which geodesic distances were estimated. In (A) and (C), we use a Gaussian kernel to estimate density, and in (B) and (D), we use a biweight kernel to estimate density. The ground-truth scalar curvature values are \(-2\) in the hyperbolic disk, \(0\) in the Euclidean disk, and \(2\) on the sphere. (Note that we only have exact distances on the hyperbolic disk.)

Figure 3. Histograms for our scalar-curvature estimates on \(S^{n}\) (for \(n=2,3,5,7\)) using (A) exact geodesic distances and (B) point clouds, from which geodesic distances were estimated. In (A) and (B), the histograms are plotted on a log-log scale. The ground-truth scalar curvature, which is indicated by the red dashed lines, is \(S(x)\equiv n(n-1)\) for each \(n\) and all \(x\in S^{n}\).

To test our method on manifolds with non-constant scalar curvature, we apply our scalar-curvature estimator to our torus and hyperboloid data sets. On both surfaces, we find that using a Gaussian kernel for density estimation yields more accurate curvature estimates, so we use a Gaussian kernel. We show our results in Figure 4. On the torus, our estimator correctly distinguishes between regions of positive, negative, and zero scalar curvature. The estimates are accurate except near \(\theta=\pi\), where scalar curvature is minimized. On the hyperboloid, our estimator correctly identifies the fact that scalar curvature is minimized (and negative) near \(z=0\) and increases as \(z\) increases. As in the torus, the estimates are accurate except near \(z=0\), where scalar curvature is minimized.

Figure 4. (A) Scalar-curvature estimation on a torus. (B) Scalar curvature on the torus as a function of angle \(\theta\). In red, we show the exact scalar curvature values; in blue, we show the estimated scalar curvature values. (C) Scalar-curvature estimation on a one-sheet hyperboloid. (D) Scalar curvature on the hyperboloid as a function of the \(z\) coordinate. In red, we show the exact scalar curvature values; in blue, we show the estimated scalar curvature values.
We investigate the stability of our estimator by estimating curvature on our noisy-sphere data sets. We show our results in Figure 5. In Figures 5(A) and (B), we show our results when we use Gaussian and biweight kernels, respectively, for density estimation and we input the estimated geodesic distances to the kernel. At the highest noise level (standard deviation \(\sigma=.03\)), our scalar curvature estimates have the wrong sign when we use a biweight kernel, but all other curvature estimates have the correct sign. In Figures 5(C) and (D), we test our estimator by inputting Euclidean distances to the kernel for density estimation. We find that performance is significantly improved, especially at the highest noise level (\(\sigma=.03\)). This suggests that if a point cloud has a high noise level, then one should input the Euclidean distances to the kernel instead of inputting the estimated geodesic distances, which may not be accurate enough.

Figure 5. Scalar-curvature estimation on \(S^{2}\) with isotropic Gaussian noise (standard deviation \(\sigma\)) added to the point cloud. (A) We use a Gaussian kernel for density estimation. The kernel takes the estimated geodesic distances as input. (B) We use a biweight kernel that takes estimated geodesic distances as input. (C) We use a Gaussian kernel that takes Euclidean distances as input. (D) We use a biweight kernel that takes Euclidean distances as input.
## 7. Conclusions
In this paper, we described a new method to estimate scalar curvature in discrete data. The only information that our approach requires is the set of pairwise distances between the points. By contrast, prior methods were limited to surfaces in \(\mathbb{R}^{3}\) or to point clouds embedded in Euclidean space. Because our method depends only on metric data, one can use it to estimate curvature not only in point-cloud data (from which geodesic distances can be estimated using the approach in [2, 23], for example), but also at vertices in a graph that is equipped with the shortest-path metric or at finite samples from an arbitrary metric space (e.g., the Billera-Holmes-Vogtmann space of phylogenetic trees [4]). We proved that under suitable conditions, our estimator is stable (Theorem 4.1) and that it converges to the ground-truth scalar curvature (Theorem 5.3).
We validated our method on several synthetic data sets in Section 6. Notably, our experiments included a data set (a point cloud that is sampled from the Poincaré disk) for which we possessed only the pairwise exact geodesic distances, not an embedding of the points in Euclidean space. Our experiments on point-cloud data embedded in Euclidean space are equivalent to experiments on _geometric graphs_, which are graphs in which vertices are sampled from a manifold, edges connect nearby points, and edge weights are given by distances. This is because we estimated geodesic distance in our point clouds by constructing a nearest-neighbor graph (which is a type of geometric graph) and computing shortest-path lengths. Therefore, our method for scalar-curvature estimation on a point cloud is equivalent to scalar-curvature estimation on the nearest-neighbor graph equipped with the shortest-path metric. Our experiments show that one can achieve reasonable accuracy even without having or using a Euclidean embedding of the data.
The primary limitation of our estimator is that it can be inaccurate on regions with non-constant curvature, especially near points on a manifold where a local extremum in the curvature is attained. (For example, see our experiments on the torus and hyperboloid in Section 6.) The reason is that when the radius \(r\) is small, we cannot reliably estimate the ratio between \(\operatorname{vol}(B^{M}(x,r))\) and the volume of a Euclidean ball of radius \(r\) (see Proposition 5.1). We addressed this by using a relatively high \(r_{\max}\) parameter, which controls the maximum ball radius that we consider. However, requiring \(r\) to be relatively large has the drawback that we are unable to detect local variation in scalar curvature; we are effectively smoothing out the curvature. In future work, we plan to investigate strategies to increase the accuracy of our method on manifolds with non-constant scalar curvature.
We expect that our scalar-curvature estimator will improve with improvements in state-of-the-art methods for density and geodesic-distance estimation on manifolds. Our method involves density estimation on a manifold as an intermediary step, and it also requires geodesic-distance estimation when we are given a point cloud embedded in Euclidean space instead of a distance matrix \(d_{X}\). There are several other methods for geodesic-distance estimation that we did not use in our experiments; see [1, 12], for example. Improvements to the intermediary steps of our pipeline will lead to better performance of our scalar-curvature estimator.
It would also be interesting to incorporate machine learning into our curvature-estimation pipeline. For example, at each point, we estimate a sequence of ball-volume ratios (see Eq. (3.16)); this is a vector that one can feed into a neural network, rather than using the method in Section 3.5 for estimating a quadratic coefficient. One could also use a graph neural network in which the graph is the nearest-neighbor graph for the data set and the initial node features are the vectors of ball-volume ratio estimates. Using machine learning would allow one to sidestep the choices of hyperparameters (the maximum ball radius \(r_{\max}\), the minimum ball radius \(r_{\min}\), and the radius sequence \(\{r_{j}\}\)), although those decisions would be replaced by different hyperparameter choices (e.g., a choice of learning rate). However, our current approach has the advantage that it is highly interpretable. We have designed our method so that, at minimum, one can reliably trust that the scalar curvature sign is accurate--in many cases, the sign of the curvature is the qualitative information that matters most--and that our method will generalize to manifolds that are not present in the training data set.
## Appendix
Here we prove some technical lemmas for proving our stability theorem (Theorem 4.1) and convergence theorem (Theorem 5.3). The notation that we use is the same as in Sections 4 and 5.
**Lemma A.1**.: If \(M\) is compact, then there are positive constants \(B^{(1)}\) and \(B^{(2)}\) such that
(A.1) \[B^{(1)}r^{n}\leq\operatorname{vol}(B^{M}(x,r))\leq B^{(2)}r^{n}\]
for sufficiently small \(r\) and all \(x\) in \(M\).
Proof.: By equation (1.1), there are positive constants \(B_{x}^{(1)}\), \(B_{x}^{(2)}\), and \(r_{x}\) for each \(x\in M\) such that
\[B_{x}^{(1)}r^{n}\leq\operatorname{vol}(B^{M}(x,r))\leq B_{x}^{(2)}r^{n}\qquad \text{for }r<r_{x}\,.\]
Because the Riemannian metric \(g\) is smooth, the quantities \(r_{x}\), \(B_{x}^{(1)}\), and \(B_{x}^{(2)}\) can be chosen for each \(x\in M\) such that the functions \(x\mapsto r_{x}\), \(x\mapsto B_{x}^{(1)}\), and \(x\mapsto B_{x}^{(2)}\) are continuous. If \(M\) is compact, then \(B^{(1)}\mathrel{\mathop{:}}=\min_{x\in M}B_{x}^{(1)}\), \(B^{(2)}\mathrel{\mathop{:}}=\max_{x\in M}B_{x}^{(2)}\), and \(r_{*}\mathrel{\mathop{:}}=\min_{x\in M}r_{x}\) exist, so Eq. (A.1) holds for \(r<r_{*}\) and all \(x\) in \(M\).
**Lemma A.2**.: Let \(X\) be a point cloud that consists of \(N\) points drawn from pdf \(\rho:M\to\mathbb{R}_{+}\). Then
\[\operatorname{var}(\widehat{\operatorname{vol}}[d_{X},\rho](x,r))=\frac{ \operatorname{var}(1/\rho(z))\cdot\mu_{\rho}(x,r)\cdot\operatorname{vol}(B^{M} (x,r))}{N-1}+\frac{\operatorname{var}N(x,r)}{(N-1)^{2}\mu_{\rho}(x,r)^{2}}\,,\]
where \(z\in B^{M}(x,r)\) is a point chosen randomly from the pdf \(\psi(z)\) defined in Eq. (3.4) and
\[\operatorname{var}(N(x,r))=(N-1)\mu_{\rho}(x,r)\operatorname{vol}(B^{M}(x,r))( 1-\mu_{\rho}(x,r)\operatorname{vol}(B^{M}(x,r)))\,.\]
Proof.: By Lemma 3.4,
\[\operatorname{var}(\widehat{\operatorname{vol}}[d_{X},\rho](x,r))=\mathbb{E} \Bigl{[}(\widehat{\operatorname{vol}}[d_{X},\rho](x,r)-\operatorname{vol}(B^{ M}(x,r)))^{2}\Bigr{]}\,.\]
By equation (3.9),
\[\mathbb{E}\Bigl{[}(\widehat{\operatorname{vol}}[d_{X},\rho](x,r) -\operatorname{vol}(B^{M}(x,r)))^{2}\Bigr{|}N(x,r)=k\Bigr{]}\] (A.2) \[\qquad\qquad=\mathbb{E}[\widehat{\operatorname{vol}}[d_{X},\rho] (x,r)^{2}\mid N(x,r)=k]-\frac{2k\operatorname{vol}(B^{M}(x,r))}{(N-1)\mu_{ \rho}(x,r)}+\operatorname{vol}(B^{M}(x,r))^{2}\]
for all \(k\in\{0,\ldots,N-1\}\). By equation (3.7),
(A.3) \[\mathbb{E}[\widehat{\operatorname{vol}}[d_{X},\rho](x,r)^{2}\mid N(x,r)=k]= \frac{k^{2}}{(N-1)^{2}}\cdot\mathbb{E}\Bigl{[}\Bigl{(}\frac{1}{k}\sum_{i=1}^{ k}1/\rho(z_{i})\Bigr{)}^{2}\Bigr{]}\,,\]
where \(\{z_{i}\}_{i=1}^{k}=B^{M}(x,r)\cap(X\setminus\{x\})\). If \(k\geq 1\), the quantity \(\frac{1}{k}\sum_{i=1}^{k}1/\rho(z_{i})\) is a sample mean. Therefore,
(A.4) \[\mathbb{E}\Bigl{[}\Bigl{(}\frac{1}{k}\sum_{i=1}^{k}1/\rho(z_{i})\Bigr{)}^{2} \Bigr{]}=\frac{\operatorname{var}(1/\rho(z))}{k}+\mathbb{E}[1/\rho(z)]^{2}= \frac{\operatorname{var}(1/\rho(z))}{k}+\frac{1}{\mu_{\rho}(x,r)^{2}}\,,\]
where \(z\) is chosen from the pdf \(\psi(z)\) defined in equation (3.4). The last equality follows by equation (3.5). Substituting equation (A.4) into equation (A.3) and equation (A.3) into (A.2), we obtain
\[\mathbb{E}\Bigl{[}(\widehat{\operatorname{vol}}[d_{X},\rho](x,r) -\operatorname{vol}(B^{M}(x,r)))^{2}\Bigr{|}N(x,r)=k\Bigr{]}\] \[=\Biggl{[}\frac{k}{(N-1)^{2}}\cdot\operatorname{var}\Bigl{(}\frac {1}{\rho(z)}\Bigr{)}\] (A.5) \[\qquad+\frac{k^{2}-2k(N-1)\mu_{\rho}(x,r)\operatorname{vol}(B^{M} (x,r))+(N-1)^{2}\mu_{\rho}(x,r)^{2}\operatorname{vol}(B^{M}(x,r))^{2}}{(N-1)^ {2}\mu_{\rho}(x,r)^{2}}\Biggr{]}\]
for \(k\in\{1,\ldots,N-1\}\). When \(k=0\), equation (A.5) holds because the righthand side equals \(\operatorname{vol}(B^{M}(x,r))^{2}\) and \(\mathbb{E}[(\widehat{\operatorname{vol}}[d_{X},\rho](x,r)-\operatorname{vol}(B^{M}(x,r)))^{2}\mid N(x,r)=0]=\operatorname{vol}(B^{M}(x,r))^{2}\).
To simplify the righthand side of equation (A.5), we observe that \(N(x,r)\) is a binomial random variable with \(N-1\) trials and success probability \(\mu_{\rho}(x,r)\operatorname{vol}(B^{M}(x,r))\), so
\[\mathbb{E}[N(x,r)]=(N-1)\mu_{\rho}(x,r)\operatorname{vol}(B^{M}(x,r))\]
and
\[\mathbb{E}[(N(x,r)-\mathbb{E}N(x,r))^{2}\mid N(x,r)=k]\] \[\qquad=k^{2}-2k(N-1)\mu_{\rho}(x,r)\operatorname{vol}(B^{M}(x,r))+(N-1)^{2}\mu_{\rho}(x,r)^{2}\operatorname{vol}(B^{M}(x,r))^{2}\,.\]
Therefore
\[\mathbb{E}\Big{[}(\widehat{\operatorname{vol}}[d_{X},\rho](x,r) -\operatorname{vol}(B^{M}(x,r)))^{2}\Big{|}N(x,r)=k\Big{]}\] \[\qquad=\frac{k}{(N-1)^{2}}\cdot\operatorname{var}(1/\rho(z))+ \frac{1}{(N-1)^{2}\mu_{\rho}(x,r)^{2}}\cdot\mathbb{E}[(N(x,r)-\mathbb{E}N(x,r ))^{2}\mid N(x,r)=k]\,.\]
Putting it all together, we have that \(\operatorname{var}(\widehat{\operatorname{vol}}[d_{X},\rho](x,r))\) equals
\[\mathbb{E}\Big{[}(\widehat{\operatorname{vol}}[d_{X},\rho](x,r)-\operatorname{vol}(B^{M}(x,r)))^{2}\Big{]}\] \[\qquad=\sum_{k=0}^{N-1}\mathbb{E}\Big{[}(\widehat{\operatorname{vol}}[d_{X},\rho](x,r)-\operatorname{vol}(B^{M}(x,r)))^{2}\Big{|}N(x,r)=k\Big{]}\mathbb{P}[N(x,r)=k]\] \[\qquad=\frac{\operatorname{var}(1/\rho(z))}{(N-1)^{2}}\sum_{k=0}^{N-1}k\cdot\mathbb{P}[N(x,r)=k]\] \[\qquad+\frac{1}{(N-1)^{2}\mu_{\rho}(x,r)^{2}}\sum_{k=0}^{N-1}\mathbb{E}[(N(x,r)-\mathbb{E}N(x,r))^{2}\mid N(x,r)=k]\mathbb{P}[N(x,r)=k]\] \[\qquad=\frac{\operatorname{var}(1/\rho(z))}{(N-1)^{2}}\mathbb{E}[N(x,r)]+\frac{\operatorname{var}(N(x,r))}{(N-1)^{2}\mu_{\rho}(x,r)^{2}}\] \[\qquad=\frac{\operatorname{var}(1/\rho(z))\cdot\mu_{\rho}(x,r)\cdot\operatorname{vol}(B^{M}(x,r))}{(N-1)}+\frac{\operatorname{var}(N(x,r))}{(N-1)^{2}\mu_{\rho}(x,r)^{2}}\,,\]
where
\[\operatorname{var}(N(x,r))=(N-1)\mu_{\rho}(x,r)\operatorname{vol}(B^{M}(x,r)) (1-\mu_{\rho}(x,r)\operatorname{vol}(B^{M}(x,r)))\]
because \(N(x,r)\) is a binomial random variable with parameters \(N-1\) and \(\mu_{\rho}(x,r)\operatorname{vol}(B^{M}(x,r))\).
**Lemma A.3**.: Assume that \(\hat{n}[d_{X_{k}}]=n\) for sufficiently large \(k\). Let \(\{a_{k}\}_{k=1}^{\infty}\) and \(\{b_{k}\}_{k=1}^{\infty}\) be sequences of positive real numbers such that
1. \(0<a_{k}<b_{k}\) for all \(k\), and
2. \(a_{k},b_{k}\to 0\) as \(k\to\infty\,.\)
For each \(k\), let \(R_{k}\) be a finite subset of \([a_{k},b_{k}]\) such that \(\frac{|R_{k}|}{|X_{k}|a_{k}^{n}}\to 0\) as \(k\to\infty\). Then
\[\max_{r\in R_{k}}|\hat{y}[d_{X_{k}},\rho](x_{k},r)-1|\to 0\]
in probability as \(k\to\infty\), where \(\{x_{k}\}\) is any sequence of points such that \(x_{k}\in X_{k}\).
Proof.: Let \(\epsilon>0\). To simplify our notation, we denote \(\hat{y}[d_{X_{k}},\rho](x,r)\) by \(\hat{y}(x,r)\). For any \(x\in X_{k}\) and any \(r\in[a_{k},b_{k}]\),
(A.6) \[\mathbb{P}\Big{[}\big{|}\hat{y}(x,r)-1\big{|}>\epsilon\Big{]}\leq\mathbb{P} \Big{[}\big{|}\hat{y}(x,r)-y(x,r)\big{|}+|y(x,r)-1|>\epsilon\Big{]}\,.\]
By equation (1.1), there are constants \(A>0\) and \(r_{1}>0\) such that
\[|y(x,r)-1|\leq Ar^{2}\qquad\text{for $r<r_{1}$ and all $x\in M$}\,.\]
Let \(r_{2}=\min(\sqrt{\epsilon/(2A)},r_{1})\). If \(r<r_{2}\), then \(|y(x,r)-1|<\frac{\epsilon}{2}\). For sufficiently large \(k\), we have \(b_{k}<r_{2}\), so by equation (A.6),
(A.7) \[\mathbb{P}\Big{[}\big{|}\hat{y}(x,r)-1\big{|}>\epsilon\Big{]}\leq\mathbb{P} \Big{[}\big{|}\hat{y}(x,r)-y(x,r)\big{|}>\epsilon/2\Big{]}\]
for any \(r\in[a_{k},b_{k}]\) and for sufficiently large \(k\). By Chebyshev's inequality,
(A.8) \[\mathbb{P}\Big{[}\big{|}\hat{y}(x,r)-y(x,r)\big{|}>\epsilon/2\Big{]}\leq\frac {4\text{var}(\hat{y}(x,r))}{\epsilon^{2}}\,.\]
By Proposition 5.1, there are positive constants \(B\) and \(r_{3}<r_{2}\) such that
\[\text{var}(\hat{y}(x,r))\leq\frac{B}{|X_{k}|r^{n}}\]
for sufficiently large \(k\), all \(r<r_{3}\), and any \(x\in X_{k}\). Substituting into equation (A.8) shows that
\[\mathbb{P}\Big{[}\big{|}\hat{y}(x,r)-y(x,r)\big{|}>\epsilon/2\Big{]}\leq\frac {4B}{\epsilon^{2}|X_{k}|r^{n}}\]
for any \(r<r_{3}\) and any \(x\in X_{k}\). For sufficiently large \(k\), we have \(b_{k}<r_{3}\), so
\[\mathbb{P}\Big{[}\big{|}\hat{y}(x,r)-y(x,r)\big{|}>\epsilon/2\Big{]}\leq\frac {4B}{\epsilon^{2}|X_{k}|a_{k}^{n}}\]
for any \(r\in[a_{k},b_{k}]\) and for sufficiently large \(k\). Therefore,
\[\mathbb{P}\Big{[}\max_{r\in R_{k}}\big{|}\hat{y}(x,r)-y(x,r)\big{|}>\epsilon/ 2\Big{]}\leq\frac{4B|R_{k}|}{\epsilon^{2}|X_{k}|a_{k}^{n}}\]
By hypothesis, the righthand side approaches \(0\) as \(k\to\infty\) because \(\frac{|R_{k}|}{|X_{k}|a_{k}^{n}}\to 0\). Applying equation (A.7) concludes the proof.
**Lemma A.4** (Stability of \(\hat{y}\)).: For each \(k\), suppose that \(\widehat{d_{X_{k}}}\) is a metric on \(X_{k}\) such that
\[\delta_{k}:=\max_{x,x^{\prime}\in X_{k}}|\widehat{d_{X_{k}}}(x,x^{\prime})-d( x,x^{\prime})|\to 0\qquad\text{as $k\to\infty$}\]
and \(\hat{\rho}\) is a density estimator such that
\[\eta_{k}:=\max_{x\in X_{k}}|\hat{\rho}[\widehat{d_{X_{k}}}](x)-\rho(x)|\to 0\qquad\text{as $k\to\infty$}\,.\]
Suppose that \(\hat{n}[\widehat{d_{X_{k}}}]=\hat{n}[d_{X_{k}}]=n\) for sufficiently large \(k\). Additionally, suppose that the hyperparameter value sequences satisfy the conditions:
1. \((r_{\min,\,k}+(\Delta r)_{k})/r_{\max,\,k}^{3}\to 0\) as \(k\to\infty\,.\)
2. \(\eta_{k}/(r_{\min,\,k}+(\Delta r)_{k})^{n+2/3}\to 0\) as \(k\to\infty\,.\)
3. \(\max_{j}\frac{A(r_{j})}{r_{j}^{n}r_{\max,\,k}^{2}}\to 0\) as \(k\to\infty\), where \(A(r)\) is defined as in equation (3.11).
4. \(r_{\min,\,k}+(\Delta r)_{k}-\delta_{k}>0\) for sufficiently large \(k\,.\)
5. \(\frac{\delta_{k}+(\Delta r)_{k}}{(r_{\min,\,k}+(\Delta r)_{k}-\delta_{k})^{n+1}r_{ \max,\,k}^{2}}\to 0\) as \(k\to\infty\,\).
Define \(\ell_{k}:=\min\{\ell\in\mathbb{Z}\mid\ell(\Delta r)_{k}\geq\delta_{k}\}\). Then there is a sequence \(\{\xi_{k}\}\) of nonnegative real numbers satisfying \(\xi_{k}/r_{\max,\,k}^{2}\to 0\) as \(k\to\infty\) such that for any sequence \(\{x_{k}\}_{k=1}^{\infty}\), where \(x_{k}\in X_{k}\) for all \(k\),
\[\hat{y}[d_{X_{k}},\rho](x_{k},r_{j-\ell_{k}})-\xi_{k}\leq\hat{y}[\widehat{d_{X _{k}}},\hat{\rho}](x_{k},r_{j})\leq\hat{y}[d_{X_{k}},\rho](x_{k},r_{j+\ell_{k} })+\xi_{k}\]
for all \(j\geq 2\) and
\[\hat{y}[d_{X_{k}},\rho](x_{k},r_{1}-\delta_{k})-\xi_{k}\leq\hat{y}[\widehat{d _{X_{k}}},\hat{\rho}](x_{k},r_{1})\leq\hat{y}[d_{X_{k}},\rho](x_{k},r_{1+\ell_ {k}})+\xi_{k}\]
for \(j=1\).
Proof.: For convenience, let \(\hat{y}_{k}(x,r)\) denote \(\hat{y}[\widehat{d_{X_{k}}},\hat{\rho}](x,r)\) and let \(\hat{y}(x,r)\) denote \(\hat{y}[d_{X_{k}},\rho](x,r)\). Define
\[\lambda_{j,k}^{+} :=\ell_{k}(\Delta r)_{k}\,,\] \[\lambda_{j,k}^{-} :=\begin{cases}\ell_{k}(\Delta r)_{k}\,,&j\geq 2\\ \delta_{k}\,,&j=1\end{cases}\]
for all \(j\) and \(k\). Our goal is to compare \(\hat{y}_{k}(x,r_{j})\) to both \(\hat{y}(x,r_{j}-\lambda_{j,k}^{-})\) and \(\hat{y}(x,r_{j}+\lambda_{j,k}^{+})\). The "radial shift values" \(\lambda_{j,k}^{\pm}\) are defined so that they satisfy
1. \(r_{j}\pm\lambda_{j,k}^{\pm}>0\) (all shifted radius values are positive) and
2. \(\delta_{k}\leq\lambda_{j,k}^{\pm}\leq\delta_{k}+(\Delta r)_{k}\)
for all \(j\) and sufficiently large \(k\). For the remainder of the proof, we consider only \(k\) sufficiently large such that (1) holds. The key is that because \(\lambda_{j,k}^{\pm}\geq\delta_{k}\) for all \(j\) and \(k\),
(A.9) \[N[d_{X_{k}}](x,r-\lambda_{j,k}^{-})\leq N[\widehat{d_{X_{k}}}](x,r)\leq N[d_{ X_{k}}](x,r+\lambda_{j,k}^{+})\]
for all \(x\in X_{k}\) and \(r\geq 0\). We use Eq. (A.9) to compare \(\widehat{\operatorname{vol}}[\widehat{d_{X_{k}}},\hat{\rho}](x,r)\) and \(\widehat{\operatorname{vol}}[d_{X_{k}},\rho](x,r\pm\lambda_{j,k}^{\pm})\). First we quantify the error introduced by the error in density estimation. Observe that
\[\left|\widehat{\operatorname{vol}}[\widehat{d_{X_{k}}},\hat{\rho}](x,r)-\widehat{\operatorname{vol}}[\widehat{d_{X_{k}}},\rho](x,r)\right| =\left|\frac{N[\widehat{d_{X_{k}}}](x,r)}{(|X_{k}|-1)\widehat{\mu_{\rho}}[\hat{\rho}](x,r)}-\frac{N[\widehat{d_{X_{k}}}](x,r)}{(|X_{k}|-1)\widehat{\mu_{\rho}}[\rho](x,r)}\right|\] \[\leq\left|\frac{1}{\widehat{\mu_{\rho}}[\hat{\rho}](x,r)}-\frac{1}{\widehat{\mu_{\rho}}[\rho](x,r)}\right|\] (A.10) \[\leq\left|\frac{1}{\widehat{\mu_{\rho}}[\hat{\rho}](x,r)}-\frac{1}{\rho(x)}\right|+\left|\frac{1}{\widehat{\mu_{\rho}}[\rho](x,r)}-\frac{1}{\rho(x)}\right|\,.\]
If \(N[\widehat{d_{X_{k}}}](x,r)\geq 1\) and \(d_{X_{k}}(x,z)\leq r\), then
\[\left|\frac{1}{\rho(z)}-\frac{1}{\rho(x)}\right|\leq\frac{A(r)}{\min(\rho)^{2}}\]
and
\[\left|\frac{1}{\hat{\rho}(z)}-\frac{1}{\rho(x)}\right| \leq\left|\frac{1}{\hat{\rho}(z)}-\frac{1}{\rho(z)}\right|+\left| \frac{1}{\rho(z)}-\frac{1}{\rho(x)}\right|\] \[\leq\frac{\eta_{k}}{(\min(\rho)-\eta_{k})^{2}}+\frac{A(r)}{\min( \rho)^{2}}\,.\]
If \(N[\widehat{d_{X_{k}}}](x,r)=0\), then
\[\Big{|}\frac{1}{\widehat{\mu_{\rho}}[\hat{\rho}](x,r)}-\frac{1}{ \rho(x)}\Big{|} =\Big{|}\frac{1}{\hat{\rho}(x)}-\frac{1}{\rho(x)}\Big{|}\leq\frac{ \eta_{k}}{(\min(\rho)-\eta_{k})^{2}}\,,\] \[\Big{|}\frac{1}{\widehat{\mu_{\rho}}[\rho](x,r)}-\frac{1}{\rho(x )}\Big{|} =0\,.\]
Therefore,
\[\Big{|}\frac{1}{\widehat{\mu_{\rho}}[\hat{\rho}](x,r)}-\frac{1}{\rho(x)}\Big{|} +\Big{|}\frac{1}{\widehat{\mu_{\rho}}[\rho](x,r)}-\frac{1}{\rho(x)}\Big{|}\leq \frac{\eta_{k}}{(\min(\rho)-\eta_{k})^{2}}+\frac{2A(r)}{\min(\rho)^{2}}\,.\]
By Eq. (A.10),
\[\widehat{\operatorname{vol}}[\widehat{d_{X_{k}}},\rho](x,r_{j}) -\Big{(}\frac{\eta_{k}}{(\min(\rho)-\eta_{k})^{2}}+\frac{2A(r_{j})}{\min(\rho )^{2}}\Big{)}\leq\widehat{\operatorname{vol}}[\widehat{d_{X_{k}}},\hat{\rho} ](x,r_{j})\] (A.11) \[\leq\widehat{\operatorname{vol}}[\widehat{d_{X_{k}}},\rho](x,r_{ j})+\frac{\eta_{k}}{(\min(\rho)-\eta_{k})^{2}}+\frac{2A(r_{j})}{\min(\rho)^{2}}\,.\]
Together, equations (A.9) and (A.11) show that
\[\frac{N[d_{X_{k}}](x,r_{j}-\lambda_{j,k}^{-})}{(|X_{k}|-1) \widehat{\mu_{\rho}}[\rho](x,r_{j})}-\Big{(}\frac{\eta_{k}}{(\min(\rho)-\eta_{ k})^{2}}+\frac{2A(r_{j})}{\min(\rho)^{2}}\Big{)}\leq\widehat{\operatorname{vol}}[ \widehat{d_{X_{k}}},\hat{\rho}](x,r_{j})\] (A.12) \[\leq\frac{N[d_{X_{k}}](x,r_{j}+\lambda_{j,k}^{+})}{(|X_{k}|-1) \widehat{\mu_{\rho}}[\rho](x,r_{j})}+\Big{(}\frac{\eta_{k}}{(\min(\rho)-\eta_ {k})^{2}}+\frac{2A(r_{j})}{\min(\rho)^{2}}\Big{)}\,.\]
We have
\[\Big{|}\frac{N[d_{X_{k}}](x,r_{j}\pm\lambda_{j,k}^{\pm})}{(|X_{k} |-1)\widehat{\mu_{\rho}}[\rho](x,r_{j})}-\widehat{\operatorname{vol}}[d_{X_{k }},\rho](x,r_{j}\pm\lambda_{j,k}^{\pm})\Big{|}\] \[\qquad\leq\Big{|}\frac{1}{\widehat{\mu_{\rho}}[\rho](x,r_{j})}- \frac{1}{\rho(x)}\Big{|}+\Big{|}\frac{1}{\widehat{\mu_{\rho}}[\rho](x,r_{j} \pm\lambda_{j,k}^{\pm})}-\frac{1}{\rho(x)}\Big{|}\] \[\qquad\leq\frac{A(r_{j})}{\min(\rho)^{2}}+\frac{A(r_{j}+\lambda_ {j,k}^{+})}{\min(\rho)^{2}}\] \[\qquad\leq\frac{2A(r_{j}+\lambda_{j,k}^{+})}{\min(\rho)^{2}}\,.\]
Therefore, by Eq. (A.12),
\[\widehat{\operatorname{vol}}[d_{X_{k}},\rho](x,r_{j}-\lambda_{j, k}^{-})-\Big{(}\frac{\eta_{k}}{(\min(\rho)-\eta_{k})^{2}}+\frac{2A(r_{j})}{\min( \rho)^{2}}+\frac{2A(r_{j}+\lambda_{j,k}^{+})}{\min(\rho)^{2}}\Big{)}\leq \widehat{\operatorname{vol}}[\widehat{d_{X_{k}}},\hat{\rho}](x,r_{j})\] \[\qquad\qquad\leq\widehat{\operatorname{vol}}[d_{X_{k}},\rho](x,r_ {j}+\lambda_{j,k}^{+})+\Big{(}\frac{\eta_{k}}{(\min(\rho)-\eta_{k})^{2}}+\frac{ 2A(r_{j})}{\min(\rho)^{2}}+\frac{2A(r_{j}+\lambda_{j,k}^{+})}{\min(\rho)^{2}} \Big{)}\,.\]
Because \(A(r)\) increases monotonically,
(A.13) \[\widehat{\operatorname{vol}}[d_{X_{k}},\rho](x,r_{j} -\lambda_{j,k}^{-})-\Big{(}\frac{\eta_{k}}{(\min(\rho)-\eta_{k})^{ 2}}+\frac{4A(r_{j}+\lambda_{j,k}^{+})}{\min(\rho)^{2}}\Big{)}\leq\widehat{ \operatorname{vol}}[\widehat{d_{X_{k}}},\hat{\rho}](x,r_{j})\] \[\leq\widehat{\operatorname{vol}}[d_{X_{k}},\rho](x,r_{j}+ \lambda_{j,k}^{+})+\Big{(}\frac{\eta_{k}}{(\min(\rho)-\eta_{k})^{2}}+\frac{4A (r_{j}+\lambda_{j,k}^{+})}{\min(\rho)^{2}}\Big{)}\,.\]
Next, we use equation (A.13) to compare \(\hat{y}_{k}(x,r_{j})\) to \(\hat{y}(x,r_{j}\pm\lambda_{j,k}^{\pm})\) for all \(j\in\{1,\ldots,m_{k}\}\). Dividing equation (A.13) by \(v_{n}r_{j}^{n}\), we obtain
(A.14) \[\frac{\widehat{\operatorname{vol}}[d_{X_{k}},\rho](x,r_{j}- \lambda_{j,k}^{-})}{v_{n}r_{j}^{n}}-\Big{(}\frac{\eta_{k}}{(\min(\rho)-\eta_{k })^{2}v_{n}r_{j}^{n}}+\frac{4A(r_{j}+\lambda_{j,k}^{+})}{v_{n}r_{j}^{n}\min( \rho)^{2}}\Big{)}\leq\hat{y}_{k}(x,r_{j})\] \[\leq\frac{\widehat{\operatorname{vol}}[d_{X_{k}},\rho](x,r_{j}+ \lambda_{j,k}^{+})}{v_{n}r_{j}^{n}}+\Big{(}\frac{\eta_{k}}{(\min(\rho)-\eta_{ k})^{2}v_{n}r_{j}^{n}}+\frac{4A(r_{j}+\lambda_{j,k}^{+})}{v_{n}r_{j}^{n}\min( \rho)^{2}}\Big{)}\]
for all \(j\). We now compare \(\widehat{\operatorname{vol}}[d_{X_{k}},\rho](x,r_{j}\pm\lambda_{j,k}^{\pm})/(v _{n}r_{j}^{n})\) to \(\hat{y}(x,r_{j}\pm\lambda_{j,k}^{\pm})\). We have
\[\Big{|}\hat{y}(x,r_{j}\pm\lambda_{j,k}^{\pm})-\frac{\widehat{\operatorname{vol}}[d_{X_{k}},\rho](x,r_{j}\pm\lambda_{j,k}^{\pm})}{v_{n}r_{j}^{n}}\Big{|} =\frac{\widehat{\operatorname{vol}}[d_{X_{k}},\rho](x,r_{j}\pm\lambda_{j,k}^{\pm})}{v_{n}}\Bigg{|}\frac{1}{(r_{j}\pm\lambda_{j,k}^{\pm})^{n}}-\frac{1}{r_{j}^{n}}\Bigg{|}\] \[=\frac{N[d_{X_{k}}](x,r_{j}\pm\lambda_{j,k}^{\pm})}{(|X_{k}|-1)v_{n}\widehat{\mu}_{\rho}[\rho](x,r)}\Bigg{|}\frac{1}{(r_{j}\pm\lambda_{j,k}^{\pm})^{n}}-\frac{1}{r_{j}^{n}}\Bigg{|}\] \[\leq\frac{1}{v_{n}\widehat{\mu}_{\rho}[\rho](x,r)}\bigg{|}\frac{1}{(r_{j}\pm\lambda_{j,k}^{\pm})^{n}}-\frac{1}{r_{j}^{n}}\bigg{|}\] \[\leq\frac{1}{v_{n}\min(\rho)}\Big{|}\frac{1}{(r_{j}\pm\lambda_{j,k}^{\pm})^{n}}-\frac{1}{r_{j}^{n}}\Big{|}\,.\]
Because \(g(r)=1/r^{n}\) is convex and monotonically decreasing for \(r>0\), we have
\[\Big{|}\frac{1}{(r_{j}+\lambda_{j,k}^{+})^{n}}-\frac{1}{r_{j}^{n}}\Big{|}\leq \lambda_{j,k}^{+}|g^{\prime}(r_{j})|=\frac{n\lambda_{j,k}^{+}}{r_{j}^{n+1}}\]
and
\[\Big{|}\frac{1}{(r_{j}-\lambda_{j,k}^{-})^{n}}-\frac{1}{r_{j}^{n}}\Big{|}\leq \lambda_{j,k}^{-}|g^{\prime}(r_{j}-\lambda_{j,k}^{-})|=\frac{n\lambda_{j,k}^{ -}}{(r_{j}-\lambda_{j,k}^{-})^{n+1}}\,.\]
Therefore,
(A.15) \[\Big{|}\hat{y}(x,r_{j}+\lambda_{j,k}^{+})-\frac{\widehat{\operatorname{vol}}[d _{X_{k}},\rho](x,r_{j}+\lambda_{j,k}^{+})}{v_{n}r_{j}^{n}}\Big{|}\leq\frac{1} {v_{n}\min(\rho)}\frac{n\lambda_{j,k}^{+}}{r_{j}^{n+1}}\]
and
(A.16) \[\Big{|}\hat{y}(x,r_{j}-\lambda_{j,k}^{-})-\frac{\widehat{\operatorname{vol}}[d _{X_{k}},\rho](x,r_{j}-\lambda_{j,k}^{-})}{v_{n}r_{j}^{n}}\Big{|}\leq\frac{1}{v _{n}\min(\rho)}\frac{n\lambda_{j,k}^{-}}{(r_{j}-\lambda_{j,k}^{-})^{n+1}}\,.\]
Together, Equations (A.14), (A.15), and (A.16) show that
\[\hat{y}_{k}(x,r_{j}) \geq\hat{y}(x,r_{j}-\lambda_{j,k}^{-})-\left(\frac{\eta_{k}}{(\min( \rho)-\eta_{k})^{2}v_{n}r_{j}^{n}}+\frac{n\lambda_{j,k}^{-}}{v_{n}\min(\rho)(r_ {j}-\lambda_{j,k}^{-})^{n+1}}+\frac{4A(r_{j}+\lambda_{j,k}^{+})}{v_{n}r_{j}^{n} \min(\rho)^{2}}\right),\] \[\hat{y}_{k}(x,r_{j}) \leq\hat{y}(x,r_{j}+\lambda_{j,k}^{+})+\left(\frac{\eta_{k}}{( \min(\rho)-\eta_{k})^{2}v_{n}r_{j}^{n}}+\frac{n\lambda_{j,k}^{+}}{v_{n}\min( \rho)r_{j}^{n+1}}+\frac{4A(r_{j}+\lambda_{j,k}^{+})}{v_{n}r_{j}^{n}\min(\rho)^ {2}}\right).\]
We define the following error terms:
\[\xi_{j,k}^{+} :=\frac{\eta_{k}}{(\min(\rho)-\eta_{k})^{2}v_{n}r_{j}^{n}}+ \frac{n\lambda_{j,k}^{+}}{v_{n}\min(\rho)r_{j}^{n+1}}+\frac{4A(r_{j}+\lambda_ {j,k}^{+})}{v_{n}r_{j}^{n}\min(\rho)^{2}}\,,\] \[\xi_{j,k}^{-} :=\frac{\eta_{k}}{(\min(\rho)-\eta_{k})^{2}v_{n}r_{j}^{n}}+\frac{ n\lambda_{j,k}^{-}}{v_{n}\min(\rho)(r_{j}-\lambda_{j,k}^{-})^{n+1}}+\frac{4A(r_{j}+ \lambda_{j,k}^{+})}{v_{n}r_{j}^{n}\min(\rho)^{2}}\,.\] \[\xi_{k} :=\max_{j}\{\xi_{j,k}^{+},\xi_{j,k}^{-}\}\,.\]
The error terms \(\xi_{j,k}^{\pm}\) are nonnegative. To complete the proof, it suffices to show that \(\xi_{k}/r_{\max,\,k}^{2}\to 0\) as \(k\to\infty\). For sufficiently large \(k\),
\[\frac{\eta_{k}}{v_{n}r_{j}^{n}(\min(\rho)-\eta_{k})^{2}r_{\max,\,k}^{2}}\leq \frac{\eta_{k}}{v_{n}(r_{\min,\,k}+(\Delta r)_{k})^{n}(\frac{1}{2}\min(\rho)) ^{2}r_{\max,\,k}^{2}}\,.\]
Rearranging the terms on the righthand side, we obtain
\[\frac{\eta_{k}}{v_{n}(r_{\min,\,k}+(\Delta r)_{k})^{n}(\frac{1}{2}\min(\rho))^{2}r_{\max,\,k}^{2}}=\frac{4}{v_{n}\min(\rho)^{2}}\Bigg{(}\frac{\eta_{k}}{(r_{\min,\,k}+(\Delta r)_{k})^{n+2/3}}\Bigg{)}\Bigg{(}\frac{(r_{\min,\,k}+(\Delta r)_{k})^{2/3}}{r_{\max,\,k}^{2}}\Bigg{)}\,.\]
By hypothesis, the quantity above approaches \(0\) as \(k\to\infty\), so \(\max_{j}\frac{\eta_{k}}{v_{n}r_{j}^{n}(\min(\rho)-\eta_{k})^{2}r_{\max,\,k}^{2 }}\to 0\) as \(k\to\infty\). Additionally,
\[\frac{n\lambda_{j,k}^{+}}{v_{n}\min(\rho)r_{j}^{n+1}r_{\max,\,k}^ {2}} \leq\frac{n(\delta_{k}+(\Delta r)_{k})}{v_{n}\min(\rho)(r_{\min,\,k} +(\Delta r)_{k}-\delta_{k})^{n+1}r_{\max,\,k}^{2}}\,,\] \[\frac{n\lambda_{j,k}^{-}}{v_{n}\min(\rho)(r_{j}-\lambda_{j,k}^{-} )^{n+1}r_{\max,\,k}^{2}} \leq\frac{n(\delta_{k}+(\Delta r)_{k})}{v_{n}\min(\rho)(r_{\min,\,k} +(\Delta r)_{k}-\delta_{k})^{n+1}r_{\max,\,k}^{2}}\,.\]
By hypothesis, the righthand sides above approach \(0\) as \(k\to\infty\). Finally, we upper bound \(\frac{A(r_{j}+\lambda_{j,k}^{+})}{r_{j}^{n}r_{\max,\,k}^{2}}\) by recalling that \(A(r)\) increases monotonically and
\[r_{j}+\lambda_{j,k}^{+} =r_{j}+\ell_{k}(\Delta r)_{k}\] \[\leq r_{j}+\delta_{k}+(\Delta r)_{k}\] \[\leq 2r_{j}\]
for sufficiently large \(k\). Therefore, \(\frac{A(r_{j}+\lambda_{j,k}^{+})}{r_{j}^{n}r_{\max,\,k}^{2}}\leq\frac{A(2r_{j })}{r_{\max,\,k}^{2}r_{j}^{n}}\), which approaches \(0\) by hypothesis. This implies that \(\xi_{k}/r_{\max,\,k}^{2}\to 0\) as \(k\to\infty\).
**Lemma A.5** (Stability of \(\hat{C}\)).: For each \(k\), suppose that \(\widehat{d_{X_{k}}}\) is a metric on \(X_{k}\) such that
\[\delta_{k}:=\max_{x,x^{\prime}\in X_{k}}|\widehat{d_{X_{k}}}(x,x^{\prime})-d(x, x^{\prime})|\to 0\qquad\text{as }k\to\infty\,.\]
Suppose that \(\hat{\rho}\) is a density estimator such that
\[\eta_{k}:=\max_{x\in X_{k}}\Big{|}\hat{\rho}[\widehat{d_{X_{k}}}](x)-\rho(x)\Big{|}\to 0\qquad\text{as }k\to\infty\,,\]
and suppose that \(\hat{n}[\widehat{d_{X_{k}}}]=\hat{n}[d_{X_{k}}]=n\) for sufficiently large \(k\). If the hyperparameter value sequences satisfy
1. \(\max_{j}\frac{A(2r_{j})}{r_{j}^{n}r_{\max,\,k}^{2}}\to 0\) as \(k\to\infty\,,\)
2. \(\eta_{k}/(r_{\min,\,k}+(\Delta r)_{k})^{n+2/3}\to 0\) as \(k\to\infty\,,\)
3. \(r_{\min,\,k}+(\Delta r)_{k}>\delta_{k}\) for sufficiently large \(k\,,\)
4. \(|X_{k}|(\Delta r)_{k}(r_{\min,\,k}+(\Delta r)_{k}-\delta_{k})^{n}\to\infty\) as \(k\to\infty\,,\)
5. \(r_{\min,\,k}/r_{\max,\,k}^{3}\to 0\) as \(k\to\infty\,,\)
6. \(((\Delta r)_{k}+\delta_{k})/r_{\max,\,k}^{3}\to 0\) as \(k\to\infty\), and
7. \(((\Delta r)_{k}+\delta_{k})/[(r_{\min,\,k}+(\Delta r)_{k}-\delta_{k})^{n+1}r_{ \max,\,k}^{2}]\to 0\) as \(k\to\infty\)
then \(|\widehat{C}[X_{k},\widehat{d_{X_{k}}},\hat{\rho}](x_{k})-\hat{C}[X_{k},d,\rho ](x_{k})|\to 0\) in probability as \(k\to\infty\), where \(\{x_{k}\}\) is any sequence of points such that \(x_{k}\in X_{k}\).
Proof.: To simplify our notation, we define
\[\hat{C}_{k}(x) :=\hat{C}[\widehat{d_{X_{k}}},\hat{\rho}](x)\,,\] \[\hat{C}(x) :=\hat{C}[d,\rho](x)\,,\] \[\hat{y}_{k}(x,r) :=\hat{y}[\widehat{d_{X_{k}}},\hat{\rho}](x,r)\,,\] \[\hat{y}(x,r) :=\hat{y}[d,\rho](x,r)\]
for all \(x\in X_{k}\). Let \(\ell_{k}=\min\{\ell_{k}^{\prime}\in\mathbb{Z}\mid\ell_{k}^{\prime}(\Delta r)_ {k}\geq\delta_{k}\}\), let \(a_{k}=r_{\min,\,k}+(\Delta r)_{k}-\delta_{k}\), and let \(b_{k}=r_{\max,\,k}+\ell_{k}(\Delta r)_{k}\). By hypothesis and choice of \(\ell_{k}\),
\[a_{k}>0 \text{for all }k\,,\] \[|X_{k}|(a_{k})^{n}\to\infty \text{as }k\to\infty\,,\] \[a_{k}<(r_{\min,\,k}+(\Delta r)_{k})\to 0 \text{as }k\to\infty\,,\] \[b_{k}=r_{\max,\,k}+(\Delta r)_{k}+(\ell_{k}-1)(\Delta r)_{k}<(r_ {\max,\,k}+(\Delta r)_{k}+\delta_{k})\to 0 \text{as }k\to\infty\,.\]
Let \(J:=\{2-\ell_{k},\ldots,m_{k}+\ell_{k}\}\), where \(m_{k}:=\frac{r_{\max,\,k}-r_{\min,\,k}}{(\Delta r)_{k}}\) is the number of radial steps. Let \(R_{k}:=\{r_{j}\mid j\in J\}\cup\{r_{\min,\,k}+(\Delta r)_{k}+\delta_{k}\}\). We have
\[|R_{k}|=m_{k}+2\ell_{k}\leq m_{k}+\frac{\delta_{k}}{(\Delta r)_{k}}+1\leq\frac {2}{(\Delta r)_{k}}+1\,.\]
Because \(|X_{k}|(\Delta r)_{k}(r_{\min,\,k}+(\Delta r)_{k}-\delta_{k})^{n}\to\infty\), we have \(\frac{|R_{k}|}{|X_{k}|a_{k}^{n}}\to 0\) as \(k\to\infty\). Therefore, by Lemma A.3,
(A.17) \[\mathbb{P}\Big{[}\max_{r\in R_{k}}|\hat{y}(x,r)-1|\leq 1\Big{]}\to 1\]
as \(k\to\infty\).
By Lemma A.4, there is a nonnegative sequence \(\{\xi_{k}\}\) such that \(\xi_{k}/r_{\max,\,k}^{2}\to 0\) and
(A.18) \[\hat{y}[d_{X_{k}},\rho](x_{k},r_{j-\ell_{k}})-\xi_{k}\leq\hat{y}[\widehat{d_{X_{k}}},\hat{\rho}](x_{k},r_{j})\leq\hat{y}[d_{X_{k}},\rho](x_{k},r_{j+\ell_{k}})+\xi_{k}\]
for all \(j\geq 2\) and
(A.19) \[\hat{y}[d_{X_{k}},\rho](x_{k},r_{1}-\delta_{k})-\xi_{k}\leq\hat{y}[\widehat{d_{X_{ k}}},\hat{\rho}](x_{k},r_{1})\leq\hat{y}[d_{X_{k}},\rho](x_{k},r_{1+\ell_{k}})+\xi_{k}\]
for sufficiently large \(k\). (The case \(j=1\) is different because it is not necessarily true that \(r_{1-\ell_{k}}\geq 0\).)
Let \(\epsilon>0\). We want to show that \(\mathbb{P}[|\hat{C}_{k}(x)-\hat{C}(x)|<\epsilon]\to 1\) as \(k\to\infty\). By equation (A.17), it suffices to show that \(|\hat{C}_{k}(x)-\hat{C}(x)|<\epsilon\) for sufficiently large \(k\) if
(A.20) \[\max_{r\in R_{k}}|\hat{y}(x,r)-1|\leq 1\,.\]
Therefore, for the remainder of the proof, we assume equation (A.20) holds.
First, we obtain an upper bound on \(\hat{C}_{k}(x)-\hat{C}(x)\). The upper bounds in equations (A.18)-(A.19) imply that
\[\hat{C}_{k}(x) =\frac{5}{r_{\max,\,k}^{5}-r_{\min,\,k}^{5}}\Bigg{(}\sum_{j=1}^{m _{k}}r_{j}^{2}\Big{(}\hat{y}_{k}(x,r_{j})-1\Big{)}(\Delta r)_{k}\Bigg{)}\] \[\leq\frac{5}{r_{\max,\,k}^{5}-r_{\min,\,k}^{5}}\Bigg{(}\sum_{j=1}^ {m_{k}}r_{j}^{2}\Big{(}\hat{y}(x,r_{j+\ell_{k}})-1+\xi_{k}\Big{)}(\Delta r)_{k }\Bigg{)}\,.\]
Substituting \(r_{j}^{2}=r_{j+\ell_{k}}^{2}-\ell_{k}(\Delta r)_{k}(2r_{j}+\ell_{k}(\Delta r) _{k})\), we obtain
\[\hat{C}_{k}(x) \leq\frac{5}{r_{\max,\,k}^{5}-r_{\min,\,k}^{5}}\Bigg{(}\sum_{j=1} ^{m_{k}}\Big{(}r_{j+\ell_{k}}^{2}-\ell_{k}(\Delta r)_{k}(2r_{j}+\ell_{k}( \Delta r)_{k})\Big{)}\Big{(}\hat{y}(x,r_{j+\ell_{k}})-1+\xi_{k}\Big{)}(\Delta r )_{k}\Bigg{)}\] \[=\frac{5}{r_{\max,\,k}^{5}-r_{\min,\,k}^{5}}\Bigg{(}\sum_{j=1+ \ell_{k}}^{m_{k}+\ell_{k}}r_{j}^{2}\Big{(}\hat{y}(x,r_{j})-1+\xi_{k}\Big{)}( \Delta r)_{k}\] \[\qquad-\sum_{j=1}^{m_{k}}\Big{(}\ell_{k}(\Delta r)_{k}(2r_{j}+ \ell_{k}(\Delta r)_{k})\Big{)}\Big{(}\hat{y}(x,r_{j+\ell_{k}})-1+\xi_{k}\Big{)} (\Delta r)_{k}\Bigg{)}\,.\]
Rearranging terms, we obtain
\[\hat{C}_{k}(x)\leq\hat{C}(x)+\frac{5}{r_{\max,\,k}^{5}-r_{\min,\, k}^{5}}\Bigg{(}\sum_{j=1+\ell_{k}}^{m_{k}+\ell_{k}}r_{j}^{2}\xi_{k}(\Delta r)_{k}+ \sum_{j=m_{k}+1}^{m_{k}+\ell_{k}}r_{j}^{2}\Big{(}\hat{y}(x,r_{j})-1\Big{)}( \Delta r)_{k}\] \[\qquad\qquad\qquad\qquad\qquad-\sum_{j=1}^{\ell_{k}}r_{j}^{2} \Big{(}\hat{y}(x,r_{j})-1\Big{)}(\Delta r)_{k}\] \[\qquad\qquad\qquad\qquad\qquad-\sum_{j=1}^{m_{k}}\ell_{k}(\Delta r )_{k}(2r_{j}+\ell_{k}(\Delta r)_{k})\Big{(}\hat{y}(x,r_{j+\ell_{k}})-1+\xi_{k} \Big{)}(\Delta r)_{k}\Bigg{)}\,.\]
By equation (A.20),
\[\hat{C}_{k}(x) \leq\hat{C}(x)+\frac{5\xi_{k}}{r_{\max,\,k}^{5}-r_{\min,\,k}^{5}}\sum _{j=1+\ell_{k}}^{m_{k}+\ell_{k}}r_{j}^{2}(\Delta r)_{k}\] \[+\frac{5(1+\xi_{k})}{r_{\max,\,k}^{5}-r_{\min,\,k}^{5}}\Bigg{(}\sum _{j=m_{k}+1}^{m_{k}+\ell_{k}}r_{j}^{2}(\Delta r)_{k}+\sum_{j=1}^{\ell_{k}}r_{j }^{2}(\Delta r)_{k}+\sum_{j=1}^{m_{k}}\ell_{k}(\Delta r)_{k}(2r_{j}+\ell_{k}( \Delta r)_{k})(\Delta r)_{k}\Bigg{)}\,.\]
By comparing the sum \(\sum_{j=1+\ell_{k}}^{m_{k}+\ell_{k}}r_{j}^{2}(\Delta r)_{k}\) to the integral \(\int_{0}^{r_{\max,\,k}+(\Delta r)_{k}+\delta_{k}}r^{2}dr\), we obtain
\[\hat{C}_{k}(x)\leq\hat{C}(x) +\frac{5\xi_{k}}{r_{\max,\,k}^{5}-r_{\min,\,k}^{5}}\cdot\frac{(r_ {\max,\,k}+(\Delta r)_{k}+\delta_{k})^{3}}{3}\] \[+\frac{5(1+\xi_{k})}{r_{\max,\,k}^{5}-r_{\min,\,k}^{5}}\Bigg{(} \sum_{j=m_{k}+1}^{m_{k}+\ell_{k}}r_{j}^{2}(\Delta r)_{k}+\sum_{j=1}^{\ell_{k}} r_{j}^{2}(\Delta r)_{k}\] \[+\sum_{j=1}^{m_{k}}\ell_{k}(\Delta r)_{k}(2r_{j}+\ell_{k}(\Delta r )_{k})(\Delta r)_{k}\Bigg{)}\,.\]
By hypothesis, \(r_{\min,\,k}\leq Br_{\max,\,k}\) for some \(B<1\) and \((\Delta r)_{k}+\delta_{k}<r_{\max,\,k}\) for sufficiently large \(k\), so
\[\hat{C}_{k}(x)\leq\hat{C}(x) +\frac{40}{3(1-B^{5})}\cdot\frac{\xi_{k}}{r_{\max,\,k}^{2}}\] \[+\frac{5(1+\xi_{k})}{(1-B^{5})r_{\max,\,k}^{5}}\Bigg{(}\sum_{j=m_ {k}+1}^{m_{k}+\ell_{k}}r_{j}^{2}(\Delta r)_{k}+\sum_{j=1}^{\ell_{k}}r_{j}^{2} (\Delta r)_{k}\] \[+\sum_{j=1}^{m_{k}}\ell_{k}(\Delta r)_{k}(2r_{j}+\ell_{k}(\Delta r )_{k})(\Delta r)_{k}\Bigg{)}\,.\]
Because \(r_{j}\) increases monotonically with \(j\),
\[\hat{C}_{k}(x)\leq\hat{C}(x)+\frac{40}{3(1-B^{5})}\cdot\frac{\xi _{k}}{r_{\max,\,k}^{2}}+\frac{5(1+\xi_{k})}{(1-B^{5})r_{\max,\,k}^{5}}\Bigg{(} \ell_{k}(\Delta r)_{k}(r_{\max,\,k}+\ell_{k}(\Delta r)_{k})^{2}\] \[+\ell_{k}(\Delta r)_{k}(r_{\min,\,k}+\ell_{k}(\Delta r)_{k})^{2}+ (r_{\max,\,k}-r_{\min})\ell_{k}(\Delta r)_{k}(2r_{\max,\,k}+\ell_{k}(\Delta r) _{k})\Bigg{)}\,.\]
By choice of \(\ell_{k}\), we have \(\ell_{k}(\Delta r)_{k}<\delta_{k}+(\Delta r)_{k}\), so
\[\hat{C}_{k}(x)\leq\hat{C}(x)+\frac{40}{3(1-B^{5})}\cdot\frac{\xi _{k}}{r_{\max,\,k}^{2}}+\frac{5(1+\xi_{k})(\delta_{k}+(\Delta r)_{k})}{(1-B^{5 })r_{\max,\,k}^{5}}\Bigg{(}(r_{\max,\,k}+(\delta_{k}+(\Delta r)_{k}))^{2}\] \[+(r_{\min,\,k}+(\delta_{k}+(\Delta r)_{k}))^{2}+(r_{\max,\,k}-r_{ \min,\,k})(2r_{\max,\,k}+(\delta_{k}+(\Delta r)_{k}))\Bigg{)}\,.\]
Because \(0\leq r_{\min,\,k}<r_{\max,\,k}\),
\[\hat{C}_{k}(x)\leq\hat{C}(x)+\frac{40}{3(1-B^{5})}\cdot\frac{\xi_{k }}{r_{\max,\,k}^{2}}+\frac{5(1+\xi_{k})(\delta_{k}+(\Delta r)_{k})}{(1-B^{5})r_ {\max,\,k}^{5}}\Bigg{(}2(r_{\max,\,k}+(\delta_{k}+(\Delta r)_{k}))^{2}\\ +r_{\max,\,k}(2r_{\max,\,k}+\delta_{k}+(\Delta r)_{k})\Bigg{)}\,.\]
By hypothesis, \(\delta_{k}+(\Delta r)_{k}<r_{\max,\,k}\) for sufficiently large \(k\), so
\[\hat{C}_{k}(x)-\hat{C}(x)\leq\frac{40}{3(1-B^{5})}\cdot\frac{\xi_{k}}{r_{\max, \,k}^{2}}+\frac{55(1+\xi_{k})}{(1-B^{5})}\cdot\frac{(\delta_{k}+(\Delta r)_{k })}{r_{\max,\,k}^{3}}\]
for sufficiently large \(k\). The righthand side is positive and approaches \(0\) as \(k\to\infty\), so
(A.21) \[\hat{C}_{k}(x)-\hat{C}(x)<\epsilon\]
for sufficiently large \(k\).
Next, we obtain a lower bound on \(\hat{C}_{k}(x)-\hat{C}(x)\). The calculation proceeds almost the same way as our calculation of an upper bound, except that the lower bound in equation (A.19) is of a slightly different form than the upper bound. The lower bounds in equations (A.18)-(A.19) imply that
\[\hat{C}_{k}(x) =\frac{5}{r_{\max,\,k}^{5}-r_{\min,\,k}^{5}}\Bigg{(}\sum_{j=1}^{m _{k}}r_{j}^{2}\Big{(}\hat{y}_{k}(x,r_{j})-1\Big{)}(\Delta r)_{k}\Bigg{)}\] \[\geq\frac{5}{r_{\max,\,k}^{5}-r_{\min,\,k}^{5}}\Bigg{(}r_{1}^{2} \Big{(}\hat{y}_{k}(x,r_{1}-\delta_{k})-1-\xi_{k}\Big{)}(\Delta r)_{k}\] \[\qquad+\sum_{j=2}^{m_{k}}r_{j}^{2}\Big{(}\hat{y}(x,r_{j-\ell_{k}} )-1-\xi_{k}\Big{)}(\Delta r)_{k}\Bigg{)}\,.\]
Substituting \(r_{j}^{2}=r_{j-\ell_{k}}^{2}+\ell_{k}(\Delta r)_{k}(2r_{j}-\ell_{k}(\Delta r) _{k})\), we obtain
\[\hat{C}_{k}(x)\geq\frac{5}{r_{\max,\,k}^{5}-r_{\min,\,k}^{5}} \Bigg{(}r_{1}^{2}\Big{(}\hat{y}(x,r_{1}-\delta_{k})-1-\xi_{k}\Big{)}(\Delta r) _{k}\] \[\qquad+\sum_{j=2-\ell_{k}}^{m_{k}-\ell_{k}}r_{j}^{2}\Big{(}\hat{y }(x,r_{j})-1-\xi_{k}\Big{)}(\Delta r)_{k}\] \[\qquad+\sum_{j=2}^{m_{k}}\ell_{k}(\Delta r)_{k}(2r_{j}-\ell_{k}( \Delta r)_{k})\Big{(}\hat{y}(x,r_{j-\ell_{k}})-1-\xi_{k}\Big{)}(\Delta r)_{k} \Bigg{)}\,.\]
Rearranging terms, we have
\[\hat{C}_{k}(x)\geq\hat{C}(x)+\frac{5}{r_{\max,\,k}^{5}-r_{\min,\,k}^ {5}}\Bigg{(}r_{1}^{2}\Big{(}\hat{y}(x,r_{1}-\delta_{k})-1-\xi_{k}\Big{)}(\Delta r )_{k}\] \[\qquad+\sum_{j=2-\ell_{k}}^{0}r_{j}^{2}\Big{(}\hat{y}(x,r_{j})-1 \Big{)}(\Delta r)_{k}-r_{1}^{2}\Big{(}\hat{y}(x,r_{1})-1\Big{)}(\Delta r)_{k}\] \[\qquad-\sum_{j=m_{k}-\ell_{k}+1}^{m_{k}}r_{j}^{2}\Big{(}\hat{y}(x,r_{j})-1\Big{)}(\Delta r)_{k}-\sum_{j=2-\ell_{k}}^{m_{k}-\ell_{k}}r_{j}^{2}\xi _{k}(\Delta r)_{k}\] \[\qquad+\sum_{j=2}^{m_{k}}\ell_{k}(\Delta r)_{k}(2r_{j}-\ell_{k}( \Delta r)_{k})\Big{(}\hat{y}(x,r_{j-\ell_{k}})-1-\xi_{k}\Big{)}(\Delta r)_{k} \Bigg{)}\,.\]
By equation (A.20),
\[\hat{C}_{k}(x)\geq\hat{C}(x)-\frac{5\xi_{k}}{r_{\max,\,k}^{5}-r_ {\min,\,k}^{5}}\sum_{j=2-\ell_{k}}^{m_{k}-\ell_{k}}r_{j}^{2}(\Delta r)_{k}- \frac{5(1+\xi_{k})}{r_{\max,\,k}^{5}-r_{\min,\,k}^{5}}\Bigg{(}r_{1}^{2}(\Delta r )_{k}+\sum_{j=2-\ell_{k}}^{1}r_{j}^{2}(\Delta r)_{k}\] \[\qquad\qquad+\sum_{j=m_{k}-\ell_{k}+1}^{m_{k}}r_{j}^{2}(\Delta r )_{k}+\sum_{j=2}^{m_{k}}\ell_{k}(\Delta r)_{k}(2r_{j}-\ell_{k}(\Delta r)_{k}) (\Delta r)_{k}\Bigg{)}\,.\]
By comparing the sum \(\sum_{j=2-\ell_{k}}^{m_{k}-\ell_{k}}r_{j}^{2}(\Delta r)_{k}\) to the integral \(\int_{0}^{r_{\max}}r^{2}dr\), we obtain
\[\hat{C}_{k}(x)\geq\hat{C}(x)-\Bigg{(}\frac{5\xi_{k}}{r_{\max,\,k} ^{5}-r_{\min,\,k}^{5}}\cdot\frac{r_{\max,\,k}^{3}}{3}\Bigg{)}-\frac{5(1+\xi_{ k})}{r_{\max,\,k}^{5}-r_{\min,\,k}^{5}}\Bigg{(}r_{1}^{2}(\Delta r)_{k}+\sum_{j=2- \ell_{k}}^{1}r_{j}^{2}(\Delta r)_{k}\] \[\qquad\qquad+\sum_{j=m_{k}-\ell_{k}+1}^{m_{k}}r_{j}^{2}(\Delta r )_{k}+\sum_{j=2}^{m_{k}}\ell_{k}(\Delta r)_{k}(2r_{j}-\ell_{k}(\Delta r)_{k}) (\Delta r)_{k}\Bigg{)}\,.\]
By hypothesis, \(r_{\min,\,k}\leq Br_{\max,\,k}\) for some \(B<1\), so
\[\hat{C}_{k}(x)\geq\hat{C}(x)-\Bigg{(}\frac{5}{3(1-B^{5})}\cdot \frac{\xi_{k}}{r_{\max,\,k}^{2}}\Bigg{)}-\frac{5(1+\xi_{k})}{(1-B^{5})r_{\max, \,k}^{5}}\Bigg{(}r_{1}^{2}(\Delta r)_{k}+\sum_{j=2-\ell_{k}}^{1}r_{j}^{2}( \Delta r)_{k}\] \[\qquad\qquad+\sum_{j=m_{k}-\ell_{k}+1}^{m_{k}}r_{j}^{2}(\Delta r )_{k}+\sum_{j=2}^{m_{k}}\ell_{k}(\Delta r)_{k}(2r_{j}-\ell_{k}(\Delta r)_{k}) (\Delta r)_{k}\Bigg{)}\,.\]
Because \(r_{j}^{2}\) increases monotonically with \(j\in\{2-\ell_{k},\ldots,m_{k}\}\) (and noting that \(r_{2-\ell_{k}}>0\) by hypothesis),
\[\hat{C}_{k}(x) \geq\hat{C}(x)-\left(\frac{5}{3(1-B^{5})}\cdot\frac{\xi_{k}}{r_{ \max,\,k}^{2}}\right)-\frac{5(1+\xi_{k})}{(1-B^{5})r_{\max,\,k}^{5}}\Bigg{(}r_{1 }^{2}(\Delta r)_{k}+\ell_{k}r_{1}^{2}(\Delta r)_{k}\] \[\qquad+\ell_{k}r_{\max,\,k}^{2}(\Delta r)_{k}+(m_{k}-1)(\Delta r) _{k}\ell_{k}(\Delta r)_{k}(2r_{\max,\,k}-\ell_{k}(\Delta r)_{k})\Bigg{)}\,.\] \[\geq\hat{C}(x)-\frac{5}{3(1-B^{5})}\cdot\frac{\xi_{k}}{r_{\max,\, k}^{2}}-\frac{5(1+\xi_{k})}{(1-B^{5})r_{\max,\,k}^{5}}\Bigg{(}(1+\ell_{k})r_{1 }^{2}(\Delta r)_{k}+3r_{\max,\,k}^{2}(\Delta r)_{k}\ell_{k}\Bigg{)}\] \[\geq\hat{C}(x)-\frac{5}{3(1-B^{5})}\cdot\frac{\xi_{k}}{r_{\max,\, k}^{2}}\] \[\qquad-\frac{5(1+\xi_{k})}{(1-B^{5})r_{\max,\,k}^{5}}\Bigg{(}(1+2 \ell_{k})r_{\max,\,k}^{2}(\Delta r)_{k}+3r_{\max,\,k}^{2}(\Delta r)_{k}\ell_{ k}\Bigg{)}\,.\]
By choice of \(\ell_{k}\), we have \(\ell_{k}(\Delta r)_{k}<(\Delta r)_{k}+\delta_{k}\), which implies
\[\hat{C}_{k}(x)-\hat{C}(x) \geq-\Bigg{(}\frac{5}{3(1-B^{5})}\cdot\frac{\xi_{k}}{r_{\max,\,k} ^{2}}\Bigg{)}-\frac{5(1+\xi_{k})}{(1-B^{5})r_{\max,\,k}^{3}}\Big{(}6(\Delta r) _{k}+5\delta_{k}\Big{)}\] \[\geq-\Bigg{(}\frac{5}{3(1-B^{5})}\cdot\frac{\xi_{k}}{r_{\max,\,k} ^{2}}\Bigg{)}-\frac{30(1+\xi_{k})}{(1-B^{5})}\cdot\frac{(\Delta r)_{k}+\delta _{k}}{r_{\max,\,k}^{3}}\,.\]
The righthand side is negative and approaches \(0\) as \(k\to\infty\), so
(A.22) \[\hat{C}_{k}(x)-\hat{C}(x)>-\epsilon\]
for sufficiently large \(k\). Together, equations (A.21) and (A.22) complete the proof.
**Lemma A.6**.: If the hyperparameter value sequences satisfy
1. \((\Delta r)_{k}/r_{\max,\,k}^{3}\to 0\) as \(k\to\infty\,,\)
2. \(|X_{k}|(r_{\min,\,k}+(\Delta r)_{k})^{n}\to\infty\,,\) and
3. \(r_{\min,\,k}/r_{\max,\,k}^{3}\to 0\) as \(k\to\infty\,,\)
then \(|\hat{C}[d_{X_{k}},\rho](x_{k})-C(x_{k})|\to 0\) in probability as \(k\to\infty\), where \(\{x_{k}\}\) is any sequence of points such that \(x_{k}\in X_{k}\).
Proof.: Let \(x\) be any point in \(X_{k}\). For all \(j\in\{1,\ldots,m_{k}\}\), let \(\hat{y}_{j}:=\hat{y}(x,r_{j})\) and let \(y_{j}:=y(x,r_{j})\). The absolute difference \(|\hat{C}(x)-C(x)|\) is bounded above by
\[|\hat{C}(x)-C(x)| \leq\frac{5}{r_{\max,\,k}^{5}-r_{\min,\,k}^{5}}\Bigg{(}\Big{|}\sum _{j=1}^{m_{k}}r_{j}^{2}(\Delta r)_{k}-\int_{r_{\min,\,k}}^{r_{\max,\,k}}r^{2} dr\Big{|}\] \[\qquad+\Big{|}\int_{r_{\min,\,k}}^{r_{\max,\,k}}r^{2}y(x,r)dr- \sum_{j=1}^{m_{k}}r_{j}^{2}y_{j}(\Delta r)_{k}\Big{|}+\Big{|}\sum_{j=1}^{m_{k} }r_{j}^{2}(\hat{y}_{j}-y_{j})(\Delta r)_{k}\Big{|}\Bigg{)}\,.\]
Because \(r_{\min,\,k}/r_{\max,\,k}^{3}\to 0\) (by hypothesis), there is a constant \(B<1\) such that \(r_{\min,\,k}\leq Br_{\max,\,k}\) for all \(k\). Therefore,
\[|\hat{C}(x)-C(x)| \leq\frac{5}{(1-B^{5})r_{\max,\,k}^{5}}\Bigg{(}\Big{|}\sum_{j=1}^ {m_{k}}r_{j}^{2}(\Delta r)_{k}-\int_{r_{\min,\,k}}^{r_{\max,\,k}}r^{2}dr\Big{|}\] (A.23) \[\qquad+\Big{|}\int_{r_{\min,\,k}}^{r_{\max,\,k}}r^{2}y(x,r)dr-\sum _{j=1}^{m_{k}}r_{j}^{2}y_{j}(\Delta r)_{k}\Big{|}+\Big{|}\sum_{j=1}^{m_{k}}r_{ j}^{2}(\hat{y}_{j}-y_{j})(\Delta r)_{k}\Big{|}\Bigg{)}\,.\]
The first term on the righthand side of equation (A.23) is a Riemann sum error. For any function \(f(r)\) that is integrated on \([r_{\min,\,k},r_{\max,\,k}]\), the error in the right Riemann sum is bounded above by \(\max_{r\in[r_{\min,\,k},r_{\max,\,k}]}|f^{\prime}(r)|(\Delta r)_{k}\cdot(r_{ \max,\,k}-r_{\min,\,k})/2\). Therefore,
\[\Big{|}\int_{r_{\min,\,k}}^{r_{\max,\,k}}r^{2}y(x,r)dr-\sum_{j=1}^ {m_{k}}r_{j}^{2}y_{j}(\Delta r)_{k}\Big{|}\] (A.24) \[\qquad\leq(\Delta r)_{k}\Big{(}\max_{r\in[r_{\min,\,k},r_{\max,\, k}]}\Big{|}\frac{d}{dr}r^{2}y(x,r)\Big{|}\Big{)}(r_{\max,\,k}-r_{\min,\,k})/2\] \[\qquad\leq(\Delta r)_{k}\Big{(}\max_{r\in[r_{\min,\,k},r_{\max,\, k}]}\Big{|}\frac{d}{dr}r^{2}y(x,r)\Big{|}\Big{)}r_{\max,\,k}/2.\]
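For completeness, the per-interval estimate behind this right-Riemann-sum bound is the standard calculus fact
\[\Big{|}\int_{r_{j-1}}^{r_{j}}f(r)\,dr-f(r_{j})(\Delta r)_{k}\Big{|}\leq\int_{r_{j-1}}^{r_{j}}\big{|}f(r)-f(r_{j})\big{|}\,dr\leq\max_{r\in[r_{\min,\,k},r_{\max,\,k}]}|f^{\prime}(r)|\,\frac{(\Delta r)_{k}^{2}}{2}\,,\]
and summing over the \(m_{k}=(r_{\max,\,k}-r_{\min,\,k})/(\Delta r)_{k}\) subintervals yields the bound used above.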
We have \(\frac{d}{dr}r^{2}y(x,r)=r^{2}\frac{d}{dr}y(x,r)+2ry(x,r)\). By equation (1.1), we have \(\lim_{r\to 0}y(x,r)=1\) and \(\lim_{r\to 0}\frac{d}{dr}y(x,r)=0\). Therefore, \(|\frac{d}{dr}r^{2}y(x,r)|=\mathcal{O}(r)\) as \(r\to 0\), so there is a constant \(A>1\) such that
\[\max_{r\in[r_{\min,\,k},r_{\max,\,k}]}\big{|}\frac{d}{dr}r^{2}y(x,r)\big{|}\leq 2 Ar_{\max,\,k}\]
for sufficiently small \(r_{\max,\,k}\). Thus for sufficiently large \(k\),
(A.25) \[\Big{(}\max_{r\in[r_{\min,\,k},r_{\max,\,k}]}\Big{|}\frac{d}{dr}r^{2}y(x,r) \Big{|}\Big{)}r_{\max,\,k}/2\leq Ar_{\max,\,k}^{2}\]
because \(r_{\max,\,k}\to 0\) as \(k\to\infty\).
Let \(\epsilon>0\). By hypothesis, \(\frac{r_{\max,\,k}^{3}}{(\Delta r)_{k}}\to\infty\) as \(k\to\infty\), so
(A.26) \[r_{\max,\,k}^{2}\leq\frac{r_{\max,\,k}^{5}(1-B^{5})\epsilon}{15A(\Delta r)_{k}}\]
for sufficiently large \(k\). Substituting equation (A.26) into equation (A.25) and equation (A.25) into equation (A.24) yields
(A.27) \[\Big{|}\int_{r_{\min,\,k}}^{r_{\max,\,k}}r^{2}y(x,r)dr-\sum_{j=1}^{m_{k}}r_{ j}^{2}y_{j}(\Delta r)_{k}\Big{|}\leq\frac{r_{\max,\,k}^{5}(1-B^{5})\epsilon}{15}\,.\]
Next, we bound the second term on the righthand side of equation (A.23), which is also a Riemann sum error. For a monotonic function \(f(r)\) that is integrated on \([r_{\min,\,k},r_{\max,\,k}]\), the error in the right Riemann sum is bounded above by \((\Delta r)_{k}|f(r_{\max,\,k})-f(r_{\min,\,k})|\). Therefore,
\[\Big{|}\sum_{j=1}^{m_{k}}r_{j}^{2}(\Delta r)_{k}-\int_{r_{\min,\,k}}^{r_{\max, \,k}}r^{2}dr\Big{|}\leq(\Delta r)_{k}(r_{\max,\,k}^{2}-r_{\min,\,k}^{2})\leq( \Delta r)_{k}\cdot r_{\max,\,k}^{2}\,.\]
By Eq. (A.26),
(A.28) \[\Big{|}\sum_{j=1}^{m_{k}}r_{j}^{2}(\Delta r)_{k}-\int_{r_{\min,\,k}}^{r_{\max,\,k} }r^{2}dr\Big{|}\leq\frac{r_{\max,\,k}^{5}(1-B^{5})\epsilon}{15A}<\frac{r_{\max, \,k}^{5}(1-B^{5})\epsilon}{15}\]
for sufficiently large \(k\).
Putting the inequalities of Eqs. (A.28) and (A.27) into Eq. (A.23), we obtain
\[|\hat{C}(x)-C(x)|\leq\frac{2}{3}\epsilon+\Big{|}\sum_{j=1}^{m_{k}}r_{j}^{2}( \hat{y}_{j}-y_{j})\Big{|}\frac{5(\Delta r)_{k}}{(1-B^{5})r_{\max,\,k}^{5}}.\]
Therefore,
(A.29) \[\mathbb{P}[|\hat{C}(x)-C(x)|>\epsilon]\leq\mathbb{P}\Bigg{[}\Big{|}\sum_{j=1 }^{m_{k}}r_{j}^{2}(\hat{y}_{j}-y_{j})\Big{|}>\frac{(1-B^{5})r_{\max,\,k}^{5} \epsilon}{15(\Delta r)_{k}}\Bigg{]}\,.\]
We have \(\mathbb{E}\Big{[}\sum_{j=1}^{m_{k}}r_{j}^{2}\hat{y}_{j}\Big{]}=\sum_{j=1}^{m_ {k}}r_{j}^{2}y_{j}\) because \(\mathbb{E}[\hat{y}_{j}]=y_{j}\) (Lemma 3.4). By applying Chebyshev's inequality to the righthand side of Eq. (A.29), we obtain
(A.30) \[\mathbb{P}[|\hat{C}(x)-C(x)|>\epsilon]\leq\Big{(}\frac{15}{(1-B^{5})\epsilon} \Big{)}^{2}\Big{(}\frac{(\Delta r)_{k}}{r_{\max,\,k}^{5}}\Big{)}^{2}\mathrm{ var}\Big{(}\sum_{j=1}^{m_{k}}r_{j}^{2}\hat{y}_{j}\Big{)}.\]
We expand the variance as
\[\mathrm{var}\Big{(}\sum_{j=1}^{m_{k}}r_{j}^{2}\hat{y}_{j}\Big{)}=\sum_{j=1}^{m _{k}}r_{j}^{4}\mathrm{var}(\hat{y}_{j})+\sum_{i\neq j}r_{i}^{2}r_{j}^{2} \mathrm{cov}(\hat{y}_{i},\hat{y}_{j}).\]
For all \(i\neq j\), we have \(\mathrm{cov}(\hat{y}_{i},\hat{y}_{j})^{2}\leq\mathrm{var}(\hat{y}_{i})\mathrm{ var}(\hat{y}_{j})\). Therefore,
\[\mathrm{var}\Big{(}\sum_{j=1}^{m_{k}}r_{j}^{2}\hat{y}_{j}\Big{)}\leq\Big{(} \sum_{j=1}^{m_{k}}r_{j}^{2}\sqrt{\mathrm{var}(\hat{y}_{j})}\Big{)}^{2}.\]
By Proposition 5.1, there is a constant \(A^{\prime}\geq 0\) such that
\[\mathrm{var}(\hat{y}_{j})\leq\frac{A^{\prime}}{|X_{k}|r_{j}^{n}}\]
for all \(j\) and sufficiently large \(k\). Therefore,
(A.31) \[\mathrm{var}\Big{(}\sum_{j=1}^{m_{k}}r_{j}^{2}\hat{y}_{j}\Big{)}\leq\frac{A^ {\prime}}{|X_{k}|}\Big{(}\sum_{j=1}^{m_{k}}r_{j}^{2-n/2}\Big{)}^{2}\]
for sufficiently large \(k\). Below, we use Eq. (A.31) to obtain an upper bound on the righthand side of Eq. (A.30). There are two cases, depending on \(n\).
**Case 1**: \((2\leq n\leq 4)\).
In this case,
(A.32) \[\Big{(}\sum_{j=1}^{m_{k}}r_{j}^{2-n/2}\Big{)}^{2}\leq r_{\max,\,k}^{4-n}\Big{(} \frac{r_{\max,\,k}-r_{\min,\,k}}{(\Delta r)_{k}}\Big{)}^{2}\leq\frac{r_{\max, \,k}^{6-n}}{(\Delta r)_{k}^{2}}\]
because \(r_{j}^{2-n/2}\) is monotonically increasing. Combining Eqs (A.30), (A.31), and (A.32), we obtain
\[\mathbb{P}[|\hat{C}(x)-C(x)|>\epsilon] \leq\Big{(}\frac{15}{(1-B^{5})\epsilon}\Big{)}^{2}\Big{(}\frac{( \Delta r)_{k}}{r_{\max,\,k}^{5}}\Big{)}^{2}\frac{A^{\prime}}{|X_{k}|}\frac{r_{ \max,\,k}^{6-n}}{(\Delta r)_{k}^{2}}\] \[\leq A^{\prime}\Big{(}\frac{15}{(1-B^{5})\epsilon}\Big{)}^{2} \frac{1}{|X_{k}|r_{\max,\,k}^{n+4}}\] \[=A^{\prime}\Big{(}\frac{15}{(1-B^{5})\epsilon}\Big{)}^{2}\frac{1 }{|X_{k}|(r_{\min,\,k}+(\Delta r)_{k})^{n/3+4/3}}\Big{(}\frac{r_{\min,\,k}+( \Delta r)_{k}}{r_{\max,\,k}^{3}}\Big{)}^{n/3+4/3}\,.\]
Because \(n/3+4/3\leq n\) for \(n\geq 2\) and \(r_{\min,\,k}+(\Delta r)_{k}<1\) for sufficiently large \(k\),
\[\mathbb{P}[|\hat{C}(x)-C(x)|>\epsilon]\leq A^{\prime}\Big{(}\frac{15}{(1-B^{5})\epsilon}\Big{)}^{2}\frac{1}{|X_{k}|(r_{\min,\,k}+(\Delta r)_{k})^{n}}\Big{(}\frac{r_{\min,\,k}+(\Delta r)_{k}}{r_{\max,\,k}^{3}}\Big{)}^{n/3+4/3}\]
for sufficiently large \(k\). By hypothesis, \(\Big{(}\frac{r_{\min,\,k}+(\Delta r)_{k}}{r_{\max,\,k}^{3}}\Big{)}\to 0\) and \(\frac{1}{|X_{k}|(r_{\min,\,k}+(\Delta r)_{k})^{n}}\to 0\) as \(k\to\infty\). Therefore, \(\mathbb{P}[|\hat{C}(x_{k})-C(x_{k})|>\epsilon]\to 0\) as \(k\to\infty\).
**Case 2:**\((n>4)\).
In this case,
(A.33) \[\Big{(}\sum_{j=1}^{m_{k}}r_{j}^{2-n/2}\Big{)}^{2}\leq(r_{\min,\,k}+(\Delta r) _{k})^{4-n}\Big{(}\frac{r_{\max,\,k}-r_{\min,\,k}}{(\Delta r)_{k}}\Big{)}^{2} \leq(r_{\min,\,k}+(\Delta r)_{k})^{4-n}\Big{(}\frac{r_{\max,\,k}}{(\Delta r)_ {k}}\Big{)}^{2}\]
because \(r_{j}^{2-n/2}\) is monotonically decreasing. Combining Eqs (A.30), (A.31), and (A.33) yields
\[\mathbb{P}[|\hat{C}(x)-C(x)|>\epsilon] \leq A^{\prime}\Big{(}\frac{15}{(1-B^{5})\epsilon}\Big{)}^{2} \Big{(}\frac{(\Delta r)_{k}}{r_{\max,\,k}^{5}}\Big{)}^{2}\frac{(r_{\min,\,k}+ (\Delta r)_{k})^{4-n}r_{\max,\,k}^{2}}{|X_{k}|(\Delta r)_{k}^{2}}\] \[=A^{\prime}\Big{(}\frac{15}{(1-B^{5})\epsilon}\Big{)}^{2}\Big{(} \frac{r_{\min,\,k}+(\Delta r)_{k}}{r_{\max,\,k}^{2}}\Big{)}^{4}\frac{1}{|X_{k }|(r_{\min,\,k}+(\Delta r)_{k})^{n}}\,.\]
By hypothesis, \(\Big{(}\frac{r_{\min,\,k}+(\Delta r)_{k}}{r_{\max,\,k}^{2}}\Big{)}\to 0\) and \(\frac{1}{|X_{k}|(r_{\min,\,k}+(\Delta r)_{k})^{n}}\to 0\) as \(k\to\infty\). Therefore, \(\mathbb{P}[|\hat{C}(x_{k})-C(x_{k})|>\epsilon]\to 0\) as \(k\to\infty\).
|
2305.07791 | Using Deepfake Technologies for Word Emphasis Detection | In this work, we consider the task of automated emphasis detection for spoken
language. This problem is challenging in that emphasis is affected by the
particularities of speech of the subject, for example the subject's accent,
dialect or voice. To address this task, we propose to utilize deep fake
technology to produce an emphasis devoid speech for this speaker. This requires
extracting the text of the spoken voice, and then using a voice sample from the
same speaker to produce emphasis devoid speech for this task. By comparing the
generated speech with the spoken voice, we are able to isolate patterns of
emphasis which are relatively easy to detect. | Eran Kaufman, Lee-Ad Gottlieb | 2023-05-12T22:50:53Z | http://arxiv.org/abs/2305.07791v1 | # Using Deepfake Technologies for Word Emphasis Detection
###### Abstract
In this work, we consider the task of automated emphasis detection for spoken language. This problem is challenging in that emphasis is affected by the particularities of speech of the subject, for example the subject's accent, dialect or voice.
To address this task, we propose to utilize deep fake technology to produce an emphasis-devoid speech for this speaker. This requires extracting the text of the spoken voice, and then using a voice sample from the same speaker to produce emphasis-devoid speech for this task. By comparing the generated speech with the spoken voice, we are able to isolate patterns of emphasis which are relatively easy to detect.
Eran Kaufman\({}^{1}\), Lee-Ad Gottlieb\({}^{2}\)
\({}^{1}\)Shenkar College, \({}^{2}\)Ariel University
[email protected], [email protected]
**Index Terms**: intonation, word emphasis, speech recognition, human-computer interaction, computational paralinguistics.
## 1 Introduction
We as humans have developed a deep sensitivity to the 'music' of speech, meaning its stress, rhythm and intonation. Intonation in particular may be used to express wonder, cynicism or emphasis, and any one of these may alter (or even completely reverse) the meaning of a sentence.
Let us take for example the simple sentence 'I did not take your bag.' Placing emphasis on different words of the sentence can affect its overall meaning: Emphasizing the subject of the sentence - '_I_ did not take your bag' - implies that the bag may still have been taken, but by someone else. Emphasis on the possessive adjective - 'I did not take _your_ bag' - implies that I did take a bag, only a different one. And emphasis on the object - 'I did not take your _bag_' - implies that I took a different object of yours.
Establishing the correct emphasis in a spoken sentence is therefore central to a correct interpretation of that sentence. Indeed, written language long ago adopted tools to convey emphasis or meaning, such as italicization, punctuation marks, and the more recent use of emoji symbols. Hence, understanding and classifying word emphasis is a potentially important task for fields related to human-machine interaction, for example machine translation, spoken information retrieval, automated question answering, sentiment analysis and speech synthesis.
**Our contribution.** The task of automated emphasis detection is complicated due to the fact that different languages, dialects or accents already feature inherent differences in emphasis. In addition, different voices resonate at different frequencies. Hence, this makes our task speaker specific. We propose to address this problem by employing deepfake techniques: Given a spoken statement upon which emphasis must be determined, we utilize deepfake methods (along with a speech sample from the same speaker) to automatically generate an 'emotionless' version of this query statement, that is a version which mimics the speaker reciting the query statement with no particular emphasis. Then by comparing the original query statement and its synthesized emphasis-devoid version, we can identify emphasized parts of speech in the query statement.
An overview of our computational approach is as follows: Our detector is built by composing several separate modules. A voice encoder processes the speech sample to produce a representative data vector capturing the speaker's voice characteristics. Given the query statement, a speech-to-text (STT) module generates text from the spoken sentence. Then a text-to-speech (TTS) module uses the embedded data vector and the text of the spoken sentence to generate an audio waveform of the same text as if produced by the same speaker, but devoid of any special emphasis. This constitutes the deepfake synthesized version of the speech. Finally, an analyzer will compare the query statement and its deepfake. As these two differ solely in their emphasis, this final step finds the emphasized words.
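The following minimal sketch (in Python) shows how these modules fit together; the encoder, STT, TTS and comparison components are passed in as placeholder callables, and all names and signatures here are illustrative rather than the project's actual code.

```python
# Schematic glue code for the emphasis-detection pipeline described above.
# Every model component is injected as a callable; nothing here is a real
# implementation, the sketch only illustrates the data flow between modules.
from typing import Callable, List, Tuple
import numpy as np

def detect_emphasized_words(
    query_wave: np.ndarray,      # spoken query statement (mono samples)
    voice_sample: np.ndarray,    # enrollment sample from the same speaker
    sr: int,                     # sampling rate in Hz
    encode_speaker: Callable[[np.ndarray, int], np.ndarray],       # -> voice-print embedding
    speech_to_text: Callable[[np.ndarray, int], str],              # -> transcript
    text_to_speech: Callable[[str, np.ndarray, int], np.ndarray],  # -> emphasis-devoid waveform
    split_into_words: Callable[[np.ndarray, np.ndarray, str, int],
                               List[Tuple[str, np.ndarray, np.ndarray]]],
    compare_word: Callable[[np.ndarray, np.ndarray, int], bool],   # True if the word is emphasized
) -> List[Tuple[str, bool]]:
    embedding = encode_speaker(voice_sample, sr)        # speaker embedding ('voice print')
    text = speech_to_text(query_wave, sr)               # transcript of the spoken query
    neutral_wave = text_to_speech(text, embedding, sr)  # emphasis-devoid deepfake of the query
    word_pairs = split_into_words(query_wave, neutral_wave, text, sr)
    return [(word, compare_word(spoken, synth, sr))     # per-word spectral comparison
            for word, spoken, synth in word_pairs]
```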
We remark that our work is in harmony with the theme of inclusivity of INTERSPEECH 2023, and has the potential to facilitate interaction between peoples of different dialect and idiosyncrasies of speech.
## 2 Related Work
Prosody and word emphasis are the subjects of significant research in the field of speech correction, in particular as relates to speech of non-native speakers. It has also received much attention in the field of neural TTS synthesis, where attention to emphasis can yield more expressive speech.
Intonation models, such as the Fujisaki [1], Hirst [2], Rise/Fall/Connection (RFC) [3] and Tilt models [4], aim to provide linguistically meaningful interpretations to an utterance. The Fujisaki model works on the F\({}_{0}\) (fundamental frequencies) contour by applying a pair of filters to generate the phrase and accent components of speech, and then adding these to a base frequency value. By specifying different amplitudes and durations, the model is able to successfully detect different types of accents. In the Hirst model, the F\({}_{0}\) contour is first encoded by a number of target points using a fitting algorithm. It is then classified into different phonological descriptions. Similar to the Hirst model, the RFC model attempts to split the F\({}_{0}\) contour into three different categories, a rise or fall in intonation, or a neutral connection. In the Tilt model, amplitude, duration and tilt are used for describing the intonation shapes of rise or fall, or a rise followed by a fall.
Basic components of an intonational events include pitch accents and edge tones [5]. Pitch accents are associated with syllables and signify emphasis, while edge tones occur at the edges of the phrase and give cues such as continuation, question or statement. Kun et al. [6] used intonation detection in order to detect errors in English speech, and to then provide corrective feedback to speakers of English as a second language. They developed a pitch accent detector based on a Gaussian mixture
model, and used features based on energy, pitch contour and the vowel duration.
There have been several relevant contributions in the field of neural TTS, with the overarching goal of improving generated prosody. Several variational [7, 8] and non-variational [9, 10] models have been suggested for learning latent prosodic representations. One line of work proposes methods for low-level prosody control [11, 12], while another exploits various syntactic and semantic features to generate context-suited prosody [13, 14, 15, 16]. Bai et al. [17] used a separate model to extract semantic features such as questions from text alone, and then fed these new features into a chosen vocoder. Similarly, Mass et al. [18] suggested incorporating a word emphasis predictor based on text alone. Their predictor is based on recurrent neural networks (RNNs), and its output was fed to a TTS module. They found that word emphasis patterns are both speaker specific and difficult to identify using only the speaker's voice. This motivated their use of an independent emphasis predictor. We aim to solve this problem by incorporating a sample of the speaker's own voice into the learning process, and use this to generate an emphasis-devoid baseline waveform for emphasis comparison.
## 3 Our Work
We present a new deepfake-based approach for emphasis detection. Our algorithm is given a speech sample from a target speaker, and uses this to familiarize itself with the ambient properties of this speaker. Then given a spoken query statement, the algorithm extracts the text of the query, and produces a 'vanilla' TTS version of this text (that is, TTS with no specific emphasis). This emphasis-void speech is then compared to the query statement.
**Background.** Recent vanilla neural TTS synthesis technologies have achieved realistic synthetic speech generated from a very small sample of a speaker's voice [19, 20, 21]. These TTS models are based on deep neural networks, and are trained using an encoder-decoder architecture. They map input characters or phonemes to acoustic features (for example, mel-spectrograms) or directly to the waveform. The acoustic features can be converted into waveforms via vocoders [22, 23].
Our work is based primarily on the SV2TTS TTS architecture [21]. This specific architecture is composed of three independently trained neural networks:
* A speaker encoder (based on [24]), which uses a sample of the speaker's voice to compute a fixed size embedding vector.
* A sequence-to-sequence synthesizer (based on [25]), which constructs a mel-spectrogram from a sequence of grapheme or phoneme inputs, conditioned on the embedding vector.
* An autoregressive WaveNet vocoder [26], which converts the mel spectrogram into time-domain waveforms.
**Our construction.** We will utilize all three of these neural networks. The encoder is used to produce an embedding vector representing properties of the speaker's voice, that is a 'voice print.' This will later be used to produce a deepfake of the speaker reciting the query statement. The synthesizer is fed by text sequences concatenated with the speaker's embedding vector to create the log-mel spectrogram. The log-mel spectrogram is fed into the vocoder to output a synthetic waveform. To these we will add a detector to compare the synthetic and the original waveforms and identify pitch or skew accent. Our word emphasis detector is composed of five distinct ordered parts:
**Step 1: Encoder.** The above encoder utilizes a voice sample provided by the speaker to create an embedding vector representing the voice properties of the speaker.
**Step 2: Speech to text.** The speaker's query statement is inputted into an STT module, which extracts the text of this statement.
**Step 3: Text to speech.** The TTS module uses the synthesizer described above. Both the text produced from the STT step and the embedding vector produced by the encoding step are fed to the synthesizer, which then produces a waveform.
This waveform is an emphasis-devoid deepfake of the speaker reciting the query statement. We recall that vanilla neural TTS systems are not capable of synthesizing emphasis due to the loss of sentiment information [27]. This computed waveform serves us as the baseline for the task of emphasis detection.
**Step 4: Waveform comparison.** Having computed the synthesized speech, we can compare it to the spoken query statement, to determine which word or words are emphasized. Our comparison technique is detailed in Section 3.1 below.
### Comparison between waveforms
Our premise is that the synthesizer can produce a reasonable imitation of emphasis-devoid speech of the speaker. The emphasis of a word by the speaker may differ from the synthesizer
Figure 1: Algorithm work flow.
waveform in that the speaker's word is pitched or skewed relative to the normal voice produced by the synthesizer. Hence, a cross correlation test between the respective spectrograms of these two waveforms may allow us to identify the special emphasis made by the speaker.
To effectively compare the two waveforms, we need to first separate both the synthetic and query speech into their distinct words. This is done using a sliding root mean square (RMS) window, while applying a low threshold to distinguish between spoken and silent parts of the speech (see Figure 2)[28]. We then compute the fast Fourier transform (FFT) for each individual word, and compare for each word its two respective spectrograms corresponding to the synthesized and spoken speech. We focus on the two distinct modes of emphasis mentioned above:
* The first is _pitch_, meaning that the speaker's emphasis of a word is accomplished by modulating regular speech into a higher (or sometimes lower) tone. In this case, the general shape of the spectrogram remains the same, but its central frequency shifts. This is identifiable by the peak of the cross correlation of the two spectrograms.
* The second is _skew_, wherein the speaker modulates the voice up and down to emphasize a word. Here the spectral distribution is significantly different from the auto-generated waveform, and its total energy is spread over a wider range of the spectrum. In this case the cross correlation between the two spectrograms is low for all frequency shifts.
Detection of differences due to pitch is illustrated by the comparison of Figures 3 and 4:
Figure 3 illustrates the above comparison for the word 'bag' in the sentence 'I did not _take_ your bag' (i.e., where the word 'take' and not 'bag' is emphasized). The comparison is between the FFTs of the spoken and generated waveforms. One can see that the spectra of the two waveforms are quite similar, and this is due to the fact that the word 'bag' was not emphasized in this query. The figure showing the cross-correlation between the two FFTs shows that the peak lies close to a zero frequency shift, implying a relatively high correlation between the two waveforms.
Figure 4 illustrates the comparison of the word 'take' in the same sentence (where 'take' was indeed emphasised). It is readily seen that the synthesized and spoken spectrum of the waveforms differ significantly. The corresponding cross correlation demonstrates a shift of the peak correlation of the signal by about \(80Hz\).
The general shape of these two spectrograms does not differ significantly, and so by applying a threshold on the frequency shift of the cross correlation it is possible to identify if the word was emphasized. The correlation and FFTs are normalized by the wave's total energy.
Figure 3: Comparison of the FFTs for the word ‘bag.’
Figure 2: RMS sliding window and word separation.
For skew, we applied a low threshold on the cross-correlation peak amplitude: when the correlation remains low for every frequency shift, this is also a signal of word emphasis (see Figure 5).
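A minimal numpy sketch of this word-separation and comparison step is given below; the window length, RMS threshold, frequency-shift threshold and correlation threshold are illustrative values only, not the tuned settings of the actual system.

```python
# Minimal sketch: RMS-based word separation followed by a per-word spectral
# comparison that flags pitch-type (shifted correlation peak) or skew-type
# (low correlation at every shift) emphasis. All thresholds are illustrative.
import numpy as np

def split_words(wave, sr, win=0.02, rms_thresh=0.02):
    """Split a waveform into voiced segments using a sliding RMS window."""
    hop = int(win * sr)
    n_frames = len(wave) // hop
    rms = np.array([np.sqrt(np.mean(wave[i * hop:(i + 1) * hop] ** 2))
                    for i in range(n_frames)])
    voiced = rms > rms_thresh
    segments, start = [], None
    for i, v in enumerate(voiced):
        if v and start is None:
            start = i
        elif not v and start is not None:
            segments.append(wave[start * hop:i * hop])
            start = None
    if start is not None:
        segments.append(wave[start * hop:])
    return segments

def word_spectrum(segment, n_fft):
    """Energy-normalized magnitude spectrum of one word."""
    spec = np.abs(np.fft.rfft(segment, n=n_fft))
    return spec / (np.linalg.norm(spec) + 1e-12)

def is_emphasized(spoken_seg, synth_seg, sr, shift_hz=50.0, corr_thresh=0.6):
    """Compare one spoken word against its synthesized (emphasis-devoid) counterpart."""
    n_fft = max(len(spoken_seg), len(synth_seg))    # zero-pad the shorter word
    a = word_spectrum(spoken_seg, n_fft)
    b = word_spectrum(synth_seg, n_fft)
    corr = np.correlate(a, b, mode="full")          # cross-correlation over frequency lags
    lag = int(np.argmax(corr)) - (len(b) - 1)       # lag of the correlation peak, in bins
    freq_shift = abs(lag) * sr / n_fft              # convert bins to Hz
    pitch_emphasis = freq_shift > shift_hz          # peak shifted away from zero lag
    skew_emphasis = corr.max() < corr_thresh        # spectra barely correlate at any lag
    return pitch_emphasis or skew_emphasis
```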
Figure 1 illustrates the workflow of the algorithm. The recorded waveform is inputted into the STT module. The extracted text and the embedded vector of the speaker are fed into the synthesizer, which then creates the synthetic mel-spectrogram which turns into a waveform by the vocoder. Both the original and synthesized wave forms are fed into a decomposition module which separates the speech into its individual words. Corresponding words are compared using the FFT cross correlation module, and this determined whether or not the word was emphasized.
## 4 Implementation and experiments
As already mentioned in Section 3 above, our encoder and decoder are adapted from the SV2TTS architecture [21], which was, in turn, based on the recurrent sequence-to-sequence Tacotron2 network [29], extended with an attention network to support multiple speakers, similar to the scheme suggested for Deep Voice2 [30].
We used the sample-by-sample autoregressive WaveNet [26] as a vocoder to invert synthesized mel-spectrograms emitted by the synthesis network into time-domain waveforms. This architecture is composed of \(30\) dilated convolution layers, similar to what was described in [25]. The network is not directly conditioned on the output of the speaker encoder. The mel-spectrogram predicted by the synthesizer network captures the information needed to produce a multi-speaker vocoder. To train the speech synthesis and vocoder neural networks, we used the VCTK [31] dataset, which contains 44 hours of speech from 109 speakers. We downsampled the audio files to 16 kHz, and trimmed leading and trailing silence sequences.
Our word emphasis predictor, described in Section 3 above, computes the word-by-word cross-correlation between the generated and the original FFTs. Since the generated and original words do not contain the same number of samples, a simple linear interpolator is applied in the frequency domain.
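A small sketch of that length-equalization step, using numpy's linear interpolation to bring both word spectra onto a common frequency grid (the choice of grid here is an illustrative assumption):

```python
# Resample the synthesized word's magnitude spectrum onto the spoken word's
# frequency grid with linear interpolation, then energy-normalize both, so the
# two spectra can be cross-correlated bin by bin despite unequal word lengths.
import numpy as np

def match_spectra(spoken_seg, synth_seg, sr):
    spec_a = np.abs(np.fft.rfft(spoken_seg))
    spec_b = np.abs(np.fft.rfft(synth_seg))
    freqs_a = np.fft.rfftfreq(len(spoken_seg), d=1.0 / sr)
    freqs_b = np.fft.rfftfreq(len(synth_seg), d=1.0 / sr)
    spec_b_on_a = np.interp(freqs_a, freqs_b, spec_b)     # linear interpolation in frequency
    spec_a = spec_a / (np.linalg.norm(spec_a) + 1e-12)    # normalize by total energy
    spec_b_on_a = spec_b_on_a / (np.linalg.norm(spec_b_on_a) + 1e-12)
    return spec_a, spec_b_on_a
```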
For our experiments, we constructed a dataset of \(100\) different voice samples: Five different speakers recited five different sentences, each sentence with word emphasis on one of four different words. The five sentences are:
1. "I did not take your bag."
2. "Hello, this is our intonation project."
3. "There are very few black rhinos left in Africa."
4. "I saw her face under the hood."
5. "Why did you give Sarah the sandwich with mustard."
The above underlined words were the ones given emphasis. We obtained an accuracy, precision, recall, and F1 score of \(92\), \(89.14\), \(89.33\), and \(89.23\), respectively.
The project's open-source code can be found online.1 It runs as a Python application with three distinct parts: (i) configuration of a user using live or recorded voice; (ii) recording a sentence from that same user to create a synthetic voice; (iii) word emphasis, where the recording is converted into text and a word separation is applied. Each word is placed in a different box, with emphasized word boxes highlighted. Pressing a box opens the spectrum analysis of the original and synthesized words, alongside the cross correlation between them (see Figure 3). The control panel is demonstrated in Figure 6. The original speech in the time domain is represented in blue, the mel-spectrogram of the synthesized speech as outputted from the decoder is found above, and the embedding vector of this specific user is shown alongside. The output highlights the word 'take', which was emphasized in this specific example. Videos demonstrating the use of the application can be found online.2
Footnote 1: [https://anonymous.4open.science/r/Intonation-Project-215B](https://anonymous.4open.science/r/Intonation-Project-215B)
Footnote 2: [https://www.youtube.com/@intonationdetection-kl77np](https://www.youtube.com/@intonationdetection-kl77np)
## 5 Conclusions and Future Work
In this paper, we presented the layout and empirical results for our word emphasis detector. As we have described above, this problem is especially challenging in that emphasis is affected by dialect and accent, and also different voices may differ significantly in their resonance. For this problem we developed a novel approach using deep fake technology to produce an emphasis-devoid speech for this speaker. We used a double conversion from speech to text and back to speech again. By comparing the generated and spoken voice, we are able to isolate patterns of emphasis which are relatively easy to detect.
For future work, we intend to use our technique not only to detect emphasis, but also to cluster and classify different emotions for the purpose of sentiment analysis.
Figure 4: Comparison of the FFTs for the word ‘take.’
Figure 5: RMS sliding window and word separation.
Figure 6: RMS sliding window and word separation. |
2303.14305 | Singular examples of the Matrix Bochner Problem | The Matrix Bochner Problem aims to classify which weight matrices have their
sequence of orthogonal polynomials as eigenfunctions of a second-order
differential operator. Casper and Yakimov, in [2], demonstrated that, under
certain hypotheses, all solutions to the Matrix Bochner Problem are
noncommutative bispectral Darboux transformations of a direct sum of classical
scalar weights. This paper aims to provide the first proof that there are
solutions to the Matrix Bochner Problem that do not arise through a
noncommutative bispectral Darboux transformation of any direct sum of classical
scalar weights. This initial example could contribute to a more comprehensive
understanding of the general solution to the Matrix Bochner Problem. | Ignacio Bono Parisi, Inés Pacharoni | 2023-03-25T00:08:37Z | http://arxiv.org/abs/2303.14305v2 | # Singular examples of the matrix Bochner problem
###### Abstract.
The aim of this paper is to exhibit and study explicit weight matrices \(W(x)\) which are solutions of the matrix Bochner problem and which can not be obtained as a bispectral Darboux transformation of classical scalar weights.
## 1. Introduction
Back in 1929, Bochner [1] posed the problem of determining all scalar-valued orthogonal polynomials that are eigenfunctions of some arbitrary, but fixed, second-order differential operator. Bochner proved that, up to an affine change of coordinates, the only weights satisfying these properties are the classical weights \(e^{-x^{2}}\), \(x^{b}e^{-x}\) and \((1-x)^{\alpha}(1+x)^{\beta}\) of Hermite, Laguerre, and Jacobi respectively.
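For orientation, the corresponding second-order operators in these three scalar cases are the classical ones (recalled here for reference, with \(y=p_{n}\) the \(n\)-th orthogonal polynomial):
\[y^{\prime\prime}-2x\,y^{\prime}=-2n\,y,\qquad x\,y^{\prime\prime}+(b+1-x)\,y^{\prime}=-n\,y,\qquad(1-x^{2})\,y^{\prime\prime}+\big{(}\beta-\alpha-(\alpha+\beta+2)x\big{)}\,y^{\prime}=-n(n+\alpha+\beta+1)\,y,\]
for the Hermite, Laguerre and Jacobi weights, respectively.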
Orthogonal matrix polynomials are sequences of matrix-valued polynomials which are pairwise orthogonal with respect to a matrix-valued inner product defined by an \(N\times N\) weight matrix \(W(x)\). The theory of these matrix valued orthogonal polynomials, without any consideration of differential equations, goes back to [17] and [18]. In [4], the study of the matrix valued orthogonal polynomials that are eigenfunctions of certain second order symmetric differential operators was started.
Nowadays, the problem of finding weight matrices \(W(x)\) of size \(N\times N\) such that the associated sequence of orthogonal matrix polynomials are eigenfunctions of a second order matrix differential operator, is known as the matrix Bochner Problem.
In [8] and [9], Grunbaum, Pacharoni and Tirao found the first nontrivial solutions of the problem, by using Harish-Chandra modules for real simple groups and the associated matrix spherical functions. In the past twenty years a number of other examples have been found, not necessarily associated with Lie theory. See [10], [11], [5], [12], [3], [22], [6],[20], [21], [23], [14], [15], [16].
After the appearance of the first examples, some works focused on the study of the algebra \(\mathcal{D}(W)\) of all differential operators that have a sequence of matrix valued orthogonal polynomials with respect to \(W\) as eigenfunctions.
In the classical cases of Hermite, Laguerre and Jacobi weights, the structure of this algebra is well understood: it is a polynomial algebra in a second order differential operator, see [19]. In the matrix case, the first attempt to go beyond the existence of an element of order two in \(\mathcal{D}(W)\) and to study the full algebra, is undertaken in [3] with the assistance of symbolic computation, for a few weights \(W\). The first deep study of the algebra \(\mathcal{D}(W)\) in a specific case can be found in [24], where the author worked out one of the examples introduced in [3]. Also in [25] the author studied the structure of the algebra \(\mathcal{D}(W)\) for an example of Gegenbauer weight matrix arising from matrix valued spherical functions. The basic definitions and main results concerning the algebra \(\mathcal{D}(W)\), including the definition of an adjoint in \(\mathcal{D}(W)\), are given in [13].
In [2], Casper and Yakimov developed a general framework for the study of the structure of the algebra \(\mathcal{D}(W)\) by using techniques from noncommutative algebra, achieving an important breakthrough in this area. They define the notion of the _rank_ of the algebra \(\mathcal{D}(W)\) as the maximal number of generalized orthogonal idempotents of \(\mathcal{D}(W)\) which add to a central element. Using representation theory, they proved that the algebraic structure of the algebra \(\mathcal{D}(W)\) has a profound influence on the shape of the weight matrix \(W(x)\) itself. Specifically, they proved that when the algebra \(\mathcal{D}(W)\) is full, in the sense that the rank equals \(N\) (the size of the weight matrix), the weight \(W\) is a noncommutative bispectral Darboux transformation of a direct sum of classical scalar weights.
## 2. Background
### Matrix valued orthogonal polynomials and the algebra \(\mathcal{D}(W)\)
Let \(W=W(x)\) be a weight matrix of size \(N\) on the real line, that is, a complex \(N\times N\) matrix valued integrable function on the interval \((x_{0},x_{1})\) such that \(W(x)\) is positive definite almost everywhere and with finite moments of all orders. Let \(\operatorname{Mat}_{N}(\mathbb{C})\) be the algebra of all \(N\times N\) complex matrices and let \(\operatorname{Mat}_{N}(\mathbb{C})[x]\) be the algebra of polynomials in the indeterminate \(x\) with coefficients in \(\operatorname{Mat}_{N}(\mathbb{C})\). We consider the following Hermitian sesquilinear form in the linear space \(\operatorname{Mat}_{N}(\mathbb{C})[x]\)
\[\langle P,Q\rangle=\langle P,Q\rangle_{W}=\int_{x_{0}}^{x_{1}}P(x)W(x)Q(x)^{*} \,dx.\]
Given a weight matrix \(W\) one can construct sequences \(\{Q_{n}\}_{n\in\mathbb{N}_{0}}\) of matrix valued orthogonal polynomials, i.e. the \(Q_{n}\) are polynomials of degree \(n\) with nonsingular leading coefficient and \(\langle Q_{n},Q_{m}\rangle=0\) for \(n\neq m\). We observe that there exists a unique sequence of monic orthogonal polynomials \(\{P_{n}\}_{n\in\mathbb{N}_{0}}\) in \(\operatorname{Mat}_{N}(\mathbb{C})[x]\). By following a standard argument (see [17] or [18]) one shows that the monic orthogonal polynomials \(\{P_{n}\}_{n\in\mathbb{N}_{0}}\) satisfy a three-term recursion relation
\[xP_{n}(x)=P_{n+1}(x)+B_{n}P_{n}(x)+C_{n}P_{n-1}(x),\qquad n\in\mathbb{N}_{0}, \tag{4}\]
where \(P_{-1}=0\) and \(B_{n},C_{n}\) are matrices depending on \(n\) and not on \(x\).
Along this paper we consider that an arbitrary matrix differential operator
\[D=\sum_{i=0}^{s}\partial^{i}F_{i}(x),\qquad\partial=\frac{d}{dx}, \tag{5}\]
acts on the right on a matrix-valued function \(P\) i.e.
\[(PD)(x)=\sum_{i=0}^{s}\partial^{i}(P)(x)F_{i}(x).\]
We consider the algebra of these operators with polynomial coefficients
\[\operatorname{Mat}_{N}(\Omega[x])=\Big{\{}D=\sum_{j=0}^{n}\partial^{j}F_{j}(x) \,:F_{j}\in\operatorname{Mat}_{N}(\mathbb{C}[x])\Big{\}}.\]
More generally, when necessary, we will also consider \(\operatorname{Mat}_{N}(\Omega[[x]])\), the set of all differential operators with coefficients in \(\operatorname{Mat}_{N}(\mathbb{C}[[x]])\), where \(\mathbb{C}[[x]]\) is the ring of power series with coefficients in \(\mathbb{C}\).
**Proposition 2.1** ([13], Propositions 2.6 and 2.7).: _Let \(W=W(x)\) be a weight matrix of size \(N\times N\) and let \(\{P_{n}\}_{n\geq 0}\) be the sequence of monic orthogonal polynomials in \(\operatorname{Mat}_{N}(\mathbb{C})[x]\). If \(D\) is differential operator of order \(s\), as in (5), such that_
\[P_{n}D=\Lambda_{n}P_{n},\qquad\text{for all $n\in\mathbb{N}_{0}$},\]
_with \(\Lambda_{n}\in\operatorname{Mat}_{N}(\mathbb{C})\), then \(F_{i}=F_{i}(x)=\sum_{j=0}^{i}x^{j}F_{j}^{i}\), \(F_{j}^{i}\in\operatorname{Mat}_{N}(\mathbb{C})\), is a polynomial and \(\deg(F_{i})\leq i\). Moreover \(D\) is determined by the sequence \(\{\Lambda_{n}\}_{n\geq 0}\) and_
\[\Lambda_{n}=\sum_{i=0}^{s}[n]_{i}F_{i}^{i},\qquad\text{for all $n\geq 0$}, \tag{6}\]
_where \([n]_{i}=n(n-1)\cdots(n-i+1)\), \([n]_{0}=1\)._
Given a weight matrix \(W\), the algebra
\[\mathcal{D}(W)=\{D\in\mathcal{D}\,:\,P_{n}D=\Lambda_{n}(D)P_{n},\,\Lambda_{n}( D)\in\operatorname{Mat}_{N}(\mathbb{C}),\text{ for all $n\in\mathbb{N}_{0}$}\} \tag{7}\]
is introduced in [13], where \(\{P_{n}\}_{n\in\mathbb{N}_{0}}\) is any sequence of matrix valued orthogonal polynomials with respect to \(W\).
We observe that the definition of \(\mathcal{D}(W)\) depends only on the weight matrix \(W\) and not on the particular sequence of orthogonal polynomials, since two sequences \(\{P_{n}\}_{n\in\mathbb{N}_{0}}\) and \(\{Q_{n}\}_{n\in\mathbb{N}_{0}}\) of matrix orthogonal polynomials with respect to the weight \(W\) are related by \(P_{n}=M_{n}Q_{n}\), with \(\{M_{n}\}_{n\in\mathbb{N}_{0}}\) invertible matrices (see [13, Corollary 2.5]).
**Proposition 2.2** ([13], Proposition 2.8).: _For each \(n\in\mathbb{N}_{0}\), the mapping \(D\mapsto\Lambda_{n}(D)\) is a representation of \(\mathcal{D}(W)\) in \(\operatorname{Mat}_{N}(\mathbb{C})\). Moreover, the sequence of representations \(\{\Lambda_{n}\}_{n\in\mathbb{N}_{0}}\) separates the elements of \(\mathcal{D}(W)\)._
In [13] it is also proved the existence and the uniqueness of an adjoint in \(\mathcal{D}(W)\). For any \(D\in\mathcal{D}(W)\) there exists a unique differential operator \(\widetilde{D}\in\mathcal{D}(W)\) such that
\[\langle PD,Q\rangle=\langle P,Q\widetilde{D}\rangle,\]
for all \(P,Q\in\operatorname{Mat}_{N}(\mathbb{C})[x]\). See Theorem 4.3 and Corollary 4.5 in [13]. The explicit definition of the coefficients of the differential operator \(\widetilde{D}\) is given in terms of the norm of the sequence of the monic othogonal polynomials.
More recently, in [2], the authors extended this notion to a subalgebra of \(\operatorname{Mat}_{N}(\Omega[x])\) larger than \(\mathcal{D}(W)\). The _formal adjoint_ on \(\operatorname{Mat}_{N}(\Omega([[x]])\), denoted by \({}^{*}\), is the unique involution extending Hermitian conjugate on \(\operatorname{Mat}_{N}(\Omega([x])\) and sending \(\partial I\) to \(-\partial I\). The _formal \(W\)-adjoint_ of \(\mathfrak{D}\in\operatorname{Mat}_{N}(\Omega([x])\), or the formal adjoint of \(\mathfrak{D}\) with respect to \(W(x)\) is the differential operator \(\mathfrak{D}^{\dagger}\in\operatorname{Mat}_{N}(\Omega[[x]])\) defined by
\[\mathfrak{D}^{\dagger}:=W(x)\mathfrak{D}^{*}W(x)^{-1},\]
where \(\mathfrak{D}^{*}\) is the formal adjoint of \(\mathfrak{D}\). An operator \(\mathfrak{D}\in\operatorname{Mat}_{N}(\Omega[x])\) is called \(W\)_-adjointable_ if there exists \(\widetilde{\mathfrak{D}}\in\operatorname{Mat}_{N}(\Omega[x])\), such that
\[\langle P\mathfrak{D},Q\rangle=\langle P,Q\widetilde{\mathfrak{D}}\rangle,\]
for all \(P,Q\in\operatorname{Mat}_{N}(\mathbb{C})[x]\). Then we say that the operator \(\widetilde{\mathfrak{D}}\) is the \(W\)-adjoint of \(\mathfrak{D}\).
**Proposition 2.3** ([2], Proposition 2.23).: _If \(\mathfrak{D}\in\operatorname{Mat}_{N}(\Omega[x])\) is \(W\)-adjointable and \(\mathfrak{D}^{\dagger}\in\operatorname{Mat}_{N}(\Omega[x])\), then \(\mathfrak{D}^{\dagger}\) is the \(W\)-adjoint of \(\mathfrak{D}\), i.e._
\[\langle\,P\mathfrak{D},Q\,\rangle=\langle\,P,\,Q\mathfrak{D}^{\dagger}\rangle,\]
_for all \(P,Q\in\operatorname{Mat}_{N}(\mathbb{C})[x]\)._
For \(\mathfrak{D}=\sum_{j=0}^{n}\partial^{j}F_{j}\in\operatorname{Mat}_{N}(\Omega[ x])\), the formal \(W\)-adjoint of \(\mathfrak{D}\) is given by \(\mathfrak{D}^{\dagger}=\sum_{k=0}^{n}\partial^{k}G_{k}\), with
\[G_{k}=\sum_{j=0}^{n-k}(-1)^{n-j}\binom{n-j}{k}(WF_{n-j}^{*})^{(n-k-j)}W^{-1}, \qquad\text{for $0\leq k\leq n$}. \tag{8}\]
It is a matter of careful integration by parts to see that \(\langle\,P\mathfrak{D},Q\,\rangle=\langle\,P,\,Q\mathfrak{D}^{\dagger}\rangle\) if the following set of "boundary conditions" are satisfied
\[\lim_{x\to x_{i}}\sum_{j=0}^{p-1}(-1)^{n-j+p-1}\binom{n-j}{k}\big{(}F_{n-j}(x )W(x)\big{)}^{(p-1-j)}=0, \tag{9}\]
for \(1\leq p\leq n\) and \(0\leq k\leq n-p\), where \(x_{i}\) are the endpoints of the support of the weight \(W\).
We say that a differential operator \(D\in\mathcal{D}(W)\) is \(W\)_-symmetric_ if \(\langle PD,Q\rangle=\langle P,QD\rangle\), for all \(P,Q\in\operatorname{Mat}_{N}(\mathbb{C})[x]\). An operator \(\mathfrak{D}\in\operatorname{Mat}_{N}(\Omega([x])\) is called _formally_\(W\)-_symmetric_ if \(\mathfrak{D}^{\dagger}=\mathfrak{D}\). In particular if \(\mathfrak{D}\in\mathcal{D}(W)\), then \(\mathfrak{D}\) is \(W\)-symmetric if and only if it is formally \(W\)-symmetric.
It is shown that the set \(\mathcal{S}(W)\) of all symmetric operators in \(\mathcal{D}(W)\) is a real form of the space \(\mathcal{D}(W)\), i.e.
\[\mathcal{D}(W)=\mathcal{S}(W)\oplus i\mathcal{S}(W),\]
as real vector spaces.
The condition of symmetry for a differential operator in the algebra \(\mathcal{D}(W)\) is equivalent to the following set of differential equations involving the weight \(W\) and the coefficients of \(D\).
**Theorem 2.4**.: _Let \(\mathfrak{D}=\sum_{i=0}^{n}\partial^{i}F_{i}(x)\) be a differential operator of order \(n\) in \(\mathcal{D}(W)\). Then \(\mathfrak{D}\) is \(W\)-symmetric if and only if_
\[\sum_{j=0}^{n-k}(-1)^{n-j}\binom{n-j}{k}(F_{n-j}W)^{(n-k-j)}=WF_{k}^{*}\]
_for all \(0\leq k\leq n\)._
Proof.: An operator \(\mathfrak{D}\in\mathcal{D}(W)\) is \(W\)-symmetric if and only if \(\mathfrak{D}=\mathfrak{D}^{\dagger}\). By using the explicit expression of the coefficients of \(\mathfrak{D}^{\dagger}\) given in (8), we complete the proof.
In particular, the coefficients of a differential operator of order two in \(\mathcal{D}(W)\) satisfy the classical symmetry equations obtained in [10].
\[\begin{split} F_{2}W&=WF_{2}^{*}\\ 2(F_{2}W)^{\prime}&-F_{1}W=WF_{1}^{*}\\ (F_{2}W)^{\prime\prime}&-(F_{1}W)^{\prime}+F_{0}W= WF_{0}^{*}\end{split} \tag{10}\]
## 3. The structure of the algebra \(\mathcal{D}(W)\)
For the Hermite-type weight matrix
\[W(x)=e^{-x^{2}}\begin{pmatrix}e^{2bx}+a^{2}x^{2}&\quad ax\\ ax&1\end{pmatrix},\quad a,b\in\mathbb{R},a,b\neq 0,\quad x\in\mathbb{R}\]
we will prove that the algebra \(\mathcal{D}(W)\) is a polynomial algebra on the \(W\)-symmetric differential operator \(D\) given by
\[D=\partial^{2}I+\partial\begin{pmatrix}-2x+2b&-2abx+2a\\ 0&-2x\end{pmatrix}+\begin{pmatrix}-2&0\\ 0&0\end{pmatrix}.\]
We will first prove in Theorem 3.7, that the centralizer of \(D\) in \(\mathcal{D}(W)\),
\[\mathcal{Z}_{\mathcal{D}(W)}(D)=\{\mathfrak{D}\in\mathcal{D}(W):\mathfrak{D}D =D\mathfrak{D}\}\]
is a polynomial algebra in \(D\). Then we will prove that any differential operator in \(\mathcal{D}(W)\) commutes with \(D\). See Theorem 3.10.
A differential operator \(\mathfrak{D}\in\mathcal{D}(W)\) is of the form \(\mathfrak{D}=\sum_{j=0}^{n}\frac{d^{j}}{dx^{j}}\,F_{j}\), where \(F_{j}=F_{j}(x)\) are polynomial matrices of degree at most \(j\). First at all, we shall prove that all these coefficients \(F_{j}\) are upper triangular matrices.
**Proposition 3.1**.: _The coefficients of any operator \(\mathfrak{D}\in\mathcal{D}(W)\) are upper triangular matrices._
Proof.: We can assume that \(\mathfrak{D}\) is a symmetric operator because \(\mathcal{D}(W)=\mathcal{S}(W)\oplus i\mathcal{S}(W)\).
Let \(\mathfrak{D}=\sum_{j=0}^{n}\partial^{j}F_{j}\in\mathcal{S}(W)\) with \(F_{j}=\begin{pmatrix}p_{j}&r_{j}\\ g_{j}&q_{j}\end{pmatrix}\), and \(p_{j},r_{j},g_{j},q_{j}\in\mathbb{C}[x]\), for all \(0\leq j\leq n\). From Theorem 2.4, the coefficients of \(\mathfrak{D}\) satisfy the following set of differential equations, for \(k=0\dots n\)
\[\sum_{j=0}^{n-k}(-1)^{n-j}\binom{n-j}{k}(F_{n-j}W)^{(n-k-j)}=WF_{k}^{*}.\]
We have
\[F_{j}W =e^{-x^{2}}\begin{pmatrix}a^{2}x^{2}p_{j}+axr_{j}&axp_{j}+r_{j}\\ a^{2}x^{2}g_{j}+axq_{j}&axg_{j}+q_{j}\end{pmatrix}+e^{-x^{2}+2bx}\begin{pmatrix} p_{j}&0\\ g_{j}&0\end{pmatrix},\] \[WF_{j}^{*} =e^{-x^{2}}\begin{pmatrix}a^{2}x^{2}\overline{p}_{j}+ax\overline{ r}_{j}&a^{2}x^{2}\overline{g}_{j}+ax\overline{q}_{j}\\ ax\overline{p}_{j}+\overline{r}_{j}&ax\overline{g}_{j}+\overline{q}_{j} \end{pmatrix}+e^{-x^{2}+2bx}\begin{pmatrix}\overline{p}_{j}&\overline{g}_{j}\\ 0&0\end{pmatrix}.\]
For each \(0\leq k\leq n\), the entry \((1,2)\) in (3) gives
\[e^{-x^{2}}(a^{2}x^{2}\overline{g}_{k}+ax\overline{q}_{k})+e^{-x^{2}+2bx} \overline{g}_{k}=\sum_{j=0}^{n-k}(-1)^{n-j}\binom{n-j}{k}\big{(}e^{-x^{2}}(axp _{n-j}+r_{n-j})\big{)}^{(n-k-j)}.\]
Multiplying by \(e^{x^{2}}\) we obtain that \(e^{2bx}\bar{g}_{k}\) is a polynomial function and therefore \(g_{k}\) must be zero, which proves that \(F_{k}\) is an upper triangular matrix.
**Proposition 3.2**.: _A differential operator \(\mathfrak{D}=\sum_{j=0}^{n}\partial^{j}F_{j}\in\mathcal{D}(W)\) is symmetric if and only if its polynomial coefficients \(F_{j}=\begin{pmatrix}p_{j}&r_{j}\\ 0&q_{j}\end{pmatrix}\) satisfy the following set of equations, for each \(0\leq k\leq n\)_
\[\sum_{j=0}^{n-k-1}(-1)^{n-j}\binom{n-j}{k}\left(e^{-x^{2}}q_{n-j} \right)^{(n-k-j)}=e^{-x^{2}}(\overline{q}_{k}+(-1)^{k+1}q_{k}), \tag{12}\] \[\sum_{j=0}^{n-k-1}(-1)^{n-j}\binom{n-j}{k}\left(e^{-x^{2}}(axp_{n -j}+r_{n-j})\right)^{(n-k-j)}=e^{-x^{2}}(ax\overline{q}_{k}+(-1)^{k+1}(axp_{k} +r_{k})),\] (13) \[\sum_{j=0}^{n-k-1}(-1)^{n-j}\binom{n-j}{k}a(n-k-j)\left(e^{-x^{2}} q_{n-j}\right)^{(n-k-j-1)}=e^{-x^{2}}(ax(\overline{p}_{k}-\overline{q}_{k})+ \overline{r}_{k}).\] (14) \[\sum_{j=0}^{n-k-1}(-1)^{n-j}\binom{n-j}{k}\left(e^{-x^{2}}(a^{2}x ^{2}p_{n-j}+axr_{n-j}\right))^{(n-k-j)}\] \[=e^{-x^{2}}a^{2}x^{2}(\overline{p}_{k}+(-1)^{k+1}p_{k})\quad+e^{- x^{2}}ax(\overline{r}_{k}+(-1)^{k+1}r_{k}),\] (15) \[\sum_{j=0}^{n-k-1}(-1)^{n-j}\binom{n-j}{k}\left(e^{-x^{2}+2bx}p_{n -j}\right)^{(n-k-j)}=e^{-x^{2}+2bx}(\overline{p}_{k}+(-1)^{k+1}p_{k}). \tag{11}\]
Proof.: From Theorem 2.4 we have that the coefficients of \(\mathfrak{D}\) satisfy
\[\sum_{j=0}^{n-k-1}(-1)^{n-j}\binom{n-j}{k}(A_{n-j}W)^{(n-k-j)}=WA_{k}^{*}+(-1)^ {k+1}A_{k}W,\qquad\text{ for }k=0\ldots n.\]
The entries (2,2) and (1,2) in the above matrix equation give the equations (11) and (12). The entry (2,1) is
\[\sum_{j=0}^{n-k-1}(-1)^{n-j}\binom{n-j}{k}\left(e^{-x^{2}}axq_{n-j}\right)^{(n -k-j)}=e^{-x^{2}}(ax\overline{p}_{k}+\overline{r}_{k}+(-1)^{k+1}axq_{k}). \tag{16}\]
By using that
\[(e^{-x^{2}}axq_{n-j})^{(n-k-j)}=ax(e^{-x^{2}}q_{n-j})^{(n-k-j)}+a(n-k-j)(e^{-x^ {2}}q_{n-j})^{(n-k-j-1)},\]
and combining with (11) we obtain that (16) is equivalent to (13).
Finally the entry (1,1) is the sum of equations (14) and (15), which splits into two equations due to the factor \(e^{2bx}\). This concludes the proof.
Observe that for a symmetric operator \(\mathfrak{D}\), the identity (13) gives an expression of the polynomial \(r_{k}\) in terms of the coefficients \(p_{j}\) and \(q_{j}\):
\[r_{k}=ax(q_{k}-p_{k})+\sum_{j=0}^{n-k-1}(-1)^{n-j}\binom{n-j}{k}a(n-k-j)(e^{-x^ {2}}\overline{q}_{n-j})^{(n-k-j-1)}e^{x^{2}}, \tag{17}\]
for \(0\leq k\leq n\).
We have the \(W\)-symmetric second-order differential operator \(D\),
\[D=\partial^{2}I+\partial\begin{pmatrix}-2x+2b&-2abx+2a\\ 0&-2x\end{pmatrix}+\begin{pmatrix}-2&0\\ 0&0\end{pmatrix}.\]
We write it as \(D=\partial^{2}I+\partial(Ax+B)+C\), where
\[A=\begin{pmatrix}-2&-2ab\\ 0&-2\end{pmatrix},\quad B=\begin{pmatrix}2b&2a\\ 0&0\end{pmatrix},\text{ and }\quad C=\begin{pmatrix}-2&0\\ 0&0\end{pmatrix}.\]
Let \(\mathfrak{D}=\sum_{j=0}^{n}\partial^{j}F_{j}\in\mathcal{D}(W)\) be a differential operator of order \(n\). We have that
\[\mathfrak{D}D=\sum_{j=0}^{n+2}\partial^{j}\left(F_{j-2}+2F_{j-1}^{\prime}+F_{ j-1}(Ax+B)+F_{j}^{\prime\prime}+F_{j}^{\prime}(Ax+B)+F_{j}C\right),\]
and
\[D\mathfrak{D}=\sum_{j=0}^{n+2}\partial^{j}(F_{j-2}+(Ax+B)F_{j-1}+jAF_{j}+CF_{j }).\]
Therefore the operator \(\mathfrak{D}\) commutes with \(D\) if and only if the matrix coefficients satisfy
\[2F_{k-1}^{\prime}+F_{k-1}(Ax+B)-(Ax+B)F_{k-1}=kAF_{k}+CF_{k}-F_{k}C-F_{k}^{ \prime\prime}-F_{k}^{\prime}(Ax+B), \tag{18}\]
for all \(k\). As usual, we assume that \(F_{j}=0\) for all \(j\notin\{0,1,\cdots,n\}\).
By Proposition 3.1, we can write the coefficients of the differential operator \(\mathfrak{D}\) as
\[F_{j}=\begin{pmatrix}p_{j}&r_{j}\\ 0&q_{j}\end{pmatrix},\]
where \(p,q,r\in\mathbb{C}[x]\) are polynomials with degree \(\leq j\).
The equation (18), for \(k=n+1\), gives
\[2F_{n}^{\prime}+F_{n}(Ax+B)-(Ax+B)F_{n}=0. \tag{19}\]
Thus we get
\[\begin{pmatrix}2p_{n}^{\prime}&2r_{n}^{\prime}\\ 0&2q_{n}^{\prime}\end{pmatrix}=\begin{pmatrix}0&(2abx-2a)(q_{n}-p_{n})+2br_{n} \\ 0&0\end{pmatrix}.\]
From here we obtain that \(p_{n}\) and \(q_{n}\) are constant polynomials, i.e.
\[p_{n}=\alpha,\quad\text{and }q_{n}=\beta,\qquad\text{ for some }\alpha,\beta\in\mathbb{C}. \tag{20}\]
We also obtain that
\[r_{n}=a(\alpha-\beta)x.\]
Again from (18), now with \(k=n\), we have that
\[2F_{n-1}^{\prime}+F_{n-1}(Ax+B)-(Ax+B)F_{n-1}=CF_{n}-F_{n}C+nAF_{n}-F_{n}^{ \prime}(Ax+B)-F_{n}^{\prime\prime}.\]
By looking at the entries \((1,1)\) and \((2,2)\) in the above equation, we see that \(p^{\prime}_{n-1}=-n\alpha\) and \(q^{\prime}_{n-1}=-n\beta\). Then
\[p_{n-1}=-n\alpha x+\alpha_{1},\qquad q_{n-1}=-n\beta x+\beta_{1} \tag{21}\]
with \(\alpha_{1},\beta_{1}\in\mathbb{C}\).
We now introduce the following notation: \(\llbracket n\rrbracket_{k}=n(n-2)\cdots(n-2(k-1))\), \(\llbracket n\rrbracket_{0}=1\).
**Proposition 3.3**.: _Let \(\mathfrak{D}=\sum_{j=0}^{n}\partial^{j}F_{j}\in\mathcal{D}(W)\) a differential operator commuting with \(D\). Then_
* _The leading coefficient of_ \(\mathfrak{D}\) _is_ \(F_{n}=\begin{pmatrix}\alpha&a(\alpha-\beta)x\\ 0&\beta\end{pmatrix}\)_, for some_ \(\alpha,\beta\in\mathbb{C}\)_._
* _For all_ \(1\leq k\leq n\)_, we have_ \[F_{n-k}=\begin{pmatrix}\frac{(-1)^{k}}{k!}\alpha\,\llbracket n\rrbracket_{k}\,x ^{k}+h_{n-k}&r_{n-k}\\ 0&\frac{(-1)^{k}}{k!}\beta\,\llbracket n\rrbracket_{k}\,x^{k}+g_{n-k}\end{pmatrix},\] _where_ \(h_{n-k},g_{n-k}\in\mathbb{C}[x]\) _with_ \(\deg(h_{n-k}),\deg(g_{n-k})\leq k-1\)_._
Proof.: From Proposition 3.1 we have that the coefficients of a differential operator in \(\mathcal{D}(W)\) are upper triangular matrices, that is
\[F_{n-k}=\begin{pmatrix}p_{n-k}&r_{n-k}\\ 0&q_{n-k}\end{pmatrix}.\]
The statement in i) has already been proven in (20). To see ii), we proceed by induction on \(k\geq 1\). For \(k=1\) the statement is true from (21). Assume that the statement of the proposition is true for some \(j\). i.e.
\[p_{n-j}=\frac{(-1)^{j}}{j!}\alpha\,\llbracket n\rrbracket_{j}\,x^{j}+h_{n-j}, \qquad q_{n-j}=\frac{(-1)^{j}}{j!}\beta\,\llbracket n\rrbracket_{j}\,x^{j}+g_ {n-j}. \tag{22}\]
From (18), with \(k=n-j\), we get
\[\begin{split} 2F^{\prime}_{n-j-1}+& F_{n-j-1}(Ax+B)-(Ax+B)F_{n-j-1}\\ &=(n-j)AF_{n-j}+CF_{n-j}-F_{n-j}C-F^{\prime}_{n-j}(Ax+B)-F^{ \prime\prime}_{n-j}.\end{split} \tag{23}\]
The left-hand side of (23) is
\[\begin{pmatrix}2p^{\prime}_{n-j-1}&2r^{\prime}_{n-j-1}\\ 0&q^{\prime}_{n-j-1}\end{pmatrix}+\begin{pmatrix}0&-2br_{n-j-1}-(2abx-2a)(p _{n-j-1}-q_{n-j-1})\\ 0&0\end{pmatrix}, \tag{24}\]
and the right-hand side is
\[\begin{split}-(n-j)&\begin{pmatrix}2p_{n-j}&2abq_{n-j}+2r_{n-j} \\ 0&2q_{n-j}\end{pmatrix}+\begin{pmatrix}0&-2r_{n-j}\\ 0&0\end{pmatrix}-\begin{pmatrix}p^{\prime\prime}_{n-j}&r^{\prime\prime}_{n-j} \\ 0&q^{\prime\prime}_{n-j}\end{pmatrix}\\ &+\begin{pmatrix}(2x-2b)p^{\prime}_{n-j}&2xr^{\prime}_{n-j}+(2abx-2a)p^{ \prime}_{n_{j}}\\ 0&2xq^{\prime}_{n-j}\end{pmatrix}\end{split} \tag{25}\]
By comparing the entries \((1,1)\) of the matrices in (24) and (25), and using the inductive hypothesis, we obtain
\[p^{\prime}_{n-j-1}=-\frac{(-1)^{j}}{j!}(n-2j)\,\llbracket n\rrbracket_{j}\alpha \,x^{j}-(n-j)h_{n-j}+xh^{\prime}_{n-j}-bp^{\prime}_{n-j}-\tfrac{1}{2}p^{\prime \prime}_{n-j}.\]
The right-hand side in the above equation is a polynomial of degree \(\leq j\). Therefore
\[p_{n-j-1}=\frac{(-1)^{j+1}}{(j+1)!}\,\llbracket n\rrbracket_{j+1}\alpha x^{j+ 1}+h_{n-j-1},\]
where \(h_{n-j-1}=\int\big{(}xh^{\prime}_{n-j}-\tfrac{1}{2}p^{\prime\prime}_{n-j}-bp^{ \prime}_{n-j}-(n-j)h_{n-j}\big{)}\,dx\).
By proceeding in the same way with the entry \((2,2)\), we obtain
\[q_{n-j-1}=\frac{(-1)^{j+1}}{(j+1)!}\,[\![n]\!]_{j+1}\beta x^{j+1}+g_{n-j-1},\]
where \(g_{n-j-1}=\int\big{(}-(n-j)g_{n-j}+xg_{n-j}^{\prime}-\frac{q_{n-j}^{\prime}}{2} \big{)}\,dx\)
_Remark 3.4_.: For \(n=2m\) and \(k>m\) the coefficient \([\![n]\!]_{k}=0\) and Proposition 3.3 does not gives any new information because we already know that \(\deg(F_{n-k})\leq n-k<k\).
**Proposition 3.5**.: _There are no odd order operators \(\mathfrak{D}\in\mathcal{D}(W)\) that commute with \(D\)._
Proof.: Suppose that \(\mathfrak{D}=\sum_{j=0}^{n}\partial^{j}F_{j}\in\mathcal{D}(W)\) is of odd order and it commutes with \(D\). Let us say \(n=2m-1\) and \(F_{n}\neq 0\). Recall that for any \(\mathfrak{D}\in\mathcal{D}(W)\) we have \(\deg(F_{j})\leq j\), for all \(0\leq j\leq n\).
From Proposition 3.3 ii) with \(k=m\) we get
\[F_{m-1}=\begin{pmatrix}\frac{(-1)^{m}}{m!}\alpha\,[\![2m-1]\!]_{m}\,x^{m}+h_{m -1}&r_{m-1}\\ 0&\frac{(-1)^{m}}{m!}\beta\,[\![2m-1]\!]_{m}\,x^{m}+g_{m-1}\end{pmatrix},\]
where \(h_{m-1},g_{m-1}\) are polynomials with \(\deg(h_{m-1}),\deg(g_{m-1})\leq m-1\). Since \([\![2m-1]\!]_{m}\neq 0\), we have that \(\alpha=\beta=0\). Therefore from Proposition 3.3 we get that the leading coefficient of \(\mathfrak{D}\) is \(F_{n}=0\), which is a contradiction.
**Proposition 3.6**.: _Let \(\mathfrak{D}=\sum_{j=0}^{n}\partial^{j}F_{j}\) a differential operator in \(\mathcal{D}(W)\). If \(\mathfrak{D}\) commutes with \(D\) then the leading coefficient of \(\mathfrak{D}\) is scalar, i.e. \(F_{n}=\alpha I\) for some \(\alpha\in\mathbb{C}\)._
Proof.: Let \(\mathfrak{D}\in\mathcal{D}(W)\) be a differential operator commuting with \(D\). The differential operator \(\mathfrak{D}^{\dagger}\) also commutes with \(D\) because \(D\) is symmetric, and \({}^{\dagger}\) is an involution in \(\mathcal{D}(W)\). The operators \(\mathfrak{D}_{1}=\mathfrak{D}+\mathfrak{D}^{\dagger}\) and \(\mathfrak{D}_{2}=i\mathfrak{D}-i\mathfrak{D}^{\dagger}\) are symmetric operators commuting with \(D\) and \(\mathfrak{D}=\frac{1}{2}\mathfrak{D}_{1}-i\frac{1}{2}\mathfrak{D}_{2}\). Therefore we can assume that \(\mathfrak{D}\) is a symmetric operator.
We write \(F_{j}=\begin{pmatrix}p_{j}&r_{j}\\ 0&q_{j}\end{pmatrix}\), for some polynomials \(p_{j},r_{j},q_{j}\in\mathbb{C}[x]\) of degree less than or equal to \(j\), and \(n=2m\). From Proposition 3.3, with \(k=m\), we have that
\[F_{m}=\begin{pmatrix}\frac{(-1)^{m}}{m!}[\![n]\!]_{m}\alpha x^{m}+h_{m}&r_{m}\\ 0&\frac{(-1)^{m}}{m!}[\![n]\!]_{m}\beta x^{m}+g_{m}\end{pmatrix}\]
and from (17)
\[r_{m}=ax(q_{m}-p_{m})+a\sum_{j=0}^{m-1}(-1)^{2m-j}\binom{2m-j}{m}(m-j)e^{x^{2 }}\,(e^{-x^{2}}\overline{q}_{2m-j})^{(m-j-1)}.\]
Given a polynomial \(f\), the function \(e^{x^{2}}\,(e^{-x^{2}}f(x))^{(j)}\) is a polynomial with degree equal to \(\deg(f)+j\). From Proposition 3.3 ii) we have that \(\deg(q_{2m-j})=j\), for all \(0\leq j\leq m\). Thus \(\deg\big{(}(e^{-x^{2}}\overline{q}_{2m-j})^{(m-j-1)}e^{x^{2}}\big{)}=m-1\).
If \(\alpha\neq\beta\) then the polynomial \(q_{m}-p_{m}\) has degree \(m\) and \(\deg(r_{m})=m+1\), a contradiction. Therefore \(\alpha=\beta\), \(p_{n}=q_{n}=\alpha\) and \(r_{n}=0\), which concludes the proof of the proposition.
Finally, we obtain that the centralizer \(\mathcal{Z}_{\mathcal{D}(W)}(D)\) of \(D\) in \(\mathcal{D}(W)\), is a polynomial algebra in \(D\).
**Theorem 3.7**.: _Let \(\mathfrak{D}\in\mathcal{D}(W)\) a differential operator commuting with D. Then \(\mathfrak{D}\) is a polynomial in \(D\)._
Proof.: We proceed by induction on \(m\), where \(\operatorname{ord}(\mathfrak{D})=2m\). Let us assume that the proposition is true for differential operators of order \(\leq 2(m-1)\) and let \(\mathfrak{D}=\sum_{j=0}^{2m}\partial^{j}F_{j}\) be an operator of order \(2m\). From Proposition 3.6 we have that \(F_{2m}=\alpha I\) for some \(\alpha\in\mathbb{C}\). The differential operator \(\mathfrak{B}=\mathfrak{D}-\alpha D^{m}\) commutes with \(D\) and it has order less than or equal to \(2(m-1)\) because there are no operators of odd order in the algebra \(\mathcal{D}(W)\). Thus, by inductive hypothesis we have that \(\mathfrak{B}\) is a polynomial in \(D\) and then \(\mathfrak{D}\in\mathbb{C}[D]\).
The aim of the rest of this section will be to prove that the entire algebra \(\mathcal{D}(W)\) is the polynomial algebra in the differential operator \(D\), in other words that the centralizer of \(D\) is the entire algebra \(\mathcal{D}(W)\).
Let \(\mathfrak{D}=\sum_{j=0}^{s}\partial^{j}F_{j}\in\mathcal{D}(W)\) and let \(\{P_{n}(x)\}_{n\in\mathbb{N}_{0}}\) be the sequence of monic orthogonal polynomials associated to \(W\). We have that
\[P_{n}\mathfrak{D}=\Lambda_{n}P_{n},\qquad\text{for all $n\in\mathbb{N}_{0}$.}\]
The eigenvalues \(\Lambda_{n}=\Lambda_{n}(\mathfrak{D})\) are given in term of the leading coefficient of the polynomials \(F_{i}\). Explicitly, if \(F_{i}(x)=\sum_{j=0}^{i}x^{j}F_{j}^{i}\), then the eigenvalues are obtained by
\[\Lambda_{n}(\mathfrak{D})=\sum_{i=0}^{s}[n]_{i}F_{i}^{i},\qquad\text{for all $n\geq 0$,}\]
where \([n]_{i}=n(n-1)\cdots(n-i+1)\), \([n]_{0}=1\). Hence, for any \(\mathfrak{D}\in\mathcal{D}(W)\) the map
\[n\longrightarrow\Lambda_{n}(\mathfrak{D})\]
is a matrix valued polynomial function of degree less or equal to \(\operatorname{ord}(D)\). Moreover, from Proposition 3.1, these eigenvalues are triangular matrices, let say
\[\Lambda_{n}(\mathfrak{D})=\begin{pmatrix}p(n)&r(n)\\ 0&q(n)\end{pmatrix},\]
for some \(p,q,r\in\mathbb{C}[x]\).
**Proposition 3.8**.: _A differential operator \(\mathfrak{D}\in\mathcal{D}(W)\) commutes with \(D\) if and only if_
\[r(n)=a\,b\,n\,\big{(}p(n)-q(n)\big{)},\]
_i.e. the eigenvalues of \(\mathfrak{D}\) are the form_
\[\Lambda_{n}(\mathfrak{D})=\begin{pmatrix}p(n)&abn\,(p(n)-q(n))\\ 0&q(n)\end{pmatrix}.\]
Proof.: The sequence of representations \(\{\Lambda_{n}\}_{n\in\mathbb{N}_{0}}\) separates points of the the algebra \(\mathcal{D}(W)\). Hence \(\mathfrak{D}\) commutes with \(D\) if and only if \(\Lambda_{n}(D)\Lambda_{n}(\mathfrak{D})=\Lambda_{n}(\mathfrak{D})\Lambda_{n}(D)\), for all \(n\geq 0\).
The eigenvalues of \(D\) and \(\mathfrak{D}\) are
\[\Lambda_{n}(D)=\begin{pmatrix}-2n-2&-2abn\\ 0&-2n\end{pmatrix}\quad\text{and}\quad\Lambda_{n}(\mathfrak{D})=\begin{pmatrix} p(n)&r(n)\\ 0&q(n)\end{pmatrix},\]
and then we have that
\[\Lambda_{n}(D)\Lambda_{n}(\mathfrak{D})-\Lambda_{n}(\mathfrak{D})\Lambda_{n}(D )=\begin{pmatrix}0&-2r(n)+2abn\big{(}p(n)-q(n)\big{)}\\ 0&0\end{pmatrix},\]
which completes the proof of the proposition.
**Proposition 3.9**.: _The algebra \(\mathcal{D}(W)\) has non trivial center._
Proof.: From Theorem 4.13 in [2], the algebra \(\mathcal{D}(W)\) is a finitely generated module over its center \(\mathcal{Z}(W)\). If \(\mathcal{Z}(W)=\mathbb{C}I\) and \(\mathfrak{D}_{1},\ldots,\mathfrak{D}_{\ell}\in\mathcal{D}(W)\) are such generators of \(\mathcal{D}(W)\), then any operator in \(\mathcal{D}(W)\) is a linear combination of them But the operator \(\sum_{k=0}^{\ell}w_{k}\mathfrak{D}_{k}\) is of order at most \(M=\max\{\operatorname{ord}(\mathfrak{D}_{1}),\ldots,\operatorname{ord}( \mathfrak{D}_{\ell})\}\), which gives a contradiction.
**Theorem 3.10**.: _The algebra \(\mathcal{D}(W)\) coincides with the centralizer of \(D\) in \(\mathcal{D}(W)\). In particular it is a polynomial algebra in the differential operator \(D\)._
Proof.: Let \(\mathfrak{D}\in\mathcal{D}(W)\). The sequence of eigenvalues of the monic orthogonal polynomials are given by \(\Lambda_{n}(\mathfrak{D})=\begin{pmatrix}p(n)&r(n)\\ 0&q(n)\end{pmatrix}\), for some \(p,q,r\in\mathbb{C}[x]\). From Propositions 3.9 and 3.8 there exists a differential operator \(E\in\mathcal{Z}(W)\) with \(\operatorname{ord}(E)>0\) and
\[\Lambda_{n}(E)=\begin{pmatrix}s(n)&ab\,n\,(s(n)-t(n))\\ 0&t(n)\end{pmatrix},\]
for some \(s,t\) polynomials of degree at most \(\operatorname{ord}(E)\). In particular \(s(n)-t(n)\neq 0\) almost everywhere.
Since \(\Lambda_{n}(E)\Lambda_{n}(\mathfrak{D})=\Lambda_{n}(\mathfrak{D})\Lambda_{n}(E)\), we have that
\[\Big{(}r(n)-abn\,(p(n)-q(n))\Big{)}(s(n)-t(n))=0.\]
Hence \(r(n)=abn\,(p(n)-q(n),\) for all \(n\), and by Proposition 3.8 we obtain that \(\mathfrak{D}\) commutes with \(D\).
The algebra \(\mathcal{D}(W)\) is full if there exist nonzero \(W\)-symmetric operators \(\mathfrak{D}_{1},\ldots,\mathfrak{D}_{N}\) in \(\mathcal{D}(W)\), such that
\[\mathfrak{D}_{i}\mathfrak{D}_{j}=0\text{ for }i\neq j\quad\text{ with }\quad\mathfrak{D}_{1}+\cdots+\mathfrak{D}_{N}\in\mathcal{Z}(W)\]
which is not a zero divisor. As a consequence of Theorem 3.10 we obtain the following result.
**Theorem 3.11**.: _The algebra \(\mathcal{D}(W)\) is not a full algebra._
Proof.: Let \(\mathfrak{D}_{1},\mathfrak{D}_{2}\) be nonzero \(W\)-symmetric operators in \(\in\mathcal{D}(W)\). From Theorem 3.10 we have that \(\mathfrak{D}_{1}=\sum_{j=0}^{n}\alpha_{j}D^{j}\), \(\mathfrak{D}_{2}=\sum_{k=0}^{m}\beta_{k}D^{k}\) are polynomials in the differential operator \(D\). Thus we have that the leading coefficient of \(\mathfrak{D}_{1}\mathfrak{D}_{2}\) is \(\alpha_{n}\beta_{m}I\neq 0\) and therefore \(\mathfrak{D}_{1}\mathfrak{D}_{2}\neq 0\).
## 4. The Fourier Algebras of \(W(x)\) and Darboux transformations
We recall the notion of right and left Fourier algebras associated to a weight matrix \(W(x)\), given in [2]. We consider the space of functions
\[\mathcal{P}=\{P:\mathbb{C}\times\mathbb{N}_{0}\longrightarrow M_{N}(\mathbb{ C})\,:\,P(x,n)\text{ is a rational function of }x,\text{ for each fixed }n\},\]
equivalently \(\mathcal{P}\) is the set of all semi-infinite sequences of matrix-valued rational functions.
On \(\mathcal{P}\) we consider a left action of discrete operators: for \(j\in\mathbb{Z}\), let \(\delta^{j}\) be the discrete operator which acts on a sequence \(a:\mathbb{N}_{0}\longrightarrow\mathbb{C}\) by \((\delta^{j}\cdot a)(n)=a(n+j)\), where we take the value of a sequence at a negative integer to be equal to zero. A discrete operator
\[\mathscr{M}=\sum_{j=-\ell}^{k}A_{j}(n)\,\delta^{j}, \tag{26}\]
acts on \(P\in\mathcal{P}\) by \((\mathscr{M}\cdot P)(x,n)=\sum_{j=-\ell}^{k}A_{j}(n)(\delta^{j}\cdot P)(x,n)= \sum_{j=-\ell}^{k}A_{j}(n)P(x,n+j).\)
We also have the right action on \(\mathcal{P}\) of matrix valued differential operators of the form
\[\mathfrak{D}=\sum_{i=0}^{s}\partial^{i}F_{i}(x),\qquad\text{with $F_{i}$ a polynomial function} \tag{27}\]
We denote by \(\operatorname{Mat}_{N}(\mathcal{S})\) the algebra of all discrete operators of the form (26) and let \(\operatorname{Mat}_{N}(\Omega[x])\) be the algebra of differential operators of the form (27).
We define the **right and left Fourier algebras associated to a weight matrix**\(W\) as the right and left Fourier algebras associated to its (unique) sequence of monic orthogonal polynomials
\[\begin{split}\mathcal{F}_{R}(W)=\mathcal{F}_{R}(P)& =\{\mathfrak{D}\in\operatorname{Mat}_{N}(\Omega[x])\,:\,\exists \,\mathscr{M}\in\operatorname{Mat}_{N}(\mathcal{S})\text{ such that }P\cdot\mathfrak{D}=\mathscr{M}\cdot P\}\\ \mathcal{F}_{L}(W)=\mathcal{F}_{L}(P)&=\{\mathscr{M} \in\operatorname{Mat}_{N}(\mathcal{S})\,:\,\exists\,\mathfrak{D}\in \operatorname{Mat}_{N}(\Omega[x])\text{ such that }\mathscr{M}\cdot P=P\cdot\mathfrak{D}\}.\end{split} \tag{28}\]
Since the left and right annihilators of \(P\) are both trivial, there is a natural isomorphism of algebras \(\psi:\mathcal{F}_{R}(P)\longrightarrow\mathcal{F}_{L}(P)\), called the _generalized Fourier map_, defined by
\[P\cdot\mathfrak{D}=\psi(\mathfrak{D})\cdot P.\]
We also have \(\mathscr{L}\cdot P=P\cdot\psi^{-1}(\mathscr{L}).\) We define the _right and left bispectral algebras_
\[\mathcal{B}_{R}(P)=\{\mathfrak{D}\in\mathcal{F}_{R}(P):\operatorname{order}( \psi(\mathfrak{D}))=0\},\qquad\mathcal{B}_{L}(P)=\{\mathscr{M}\in\mathcal{F}_ {L}(P):\operatorname{order}(\psi^{-1}(\mathscr{M}))=0\}.\]
We observe that the algebra \(\mathcal{D}(W)\) is the right bispectral algebra \(\mathcal{B}_{R}(P)\).
The three-term recursion relation of the monic orthogonal polynomials \(P(x,n)\) given in (4), tell us that there exists a discrete operator \(\mathscr{L}\in\operatorname{Mat}_{N}(\mathcal{S})\) of the form
\[\mathscr{L}=\delta+B(n)+C(n)\delta^{-1}\]
such that \(\mathscr{L}\cdot P(x,n)=P(x,n)\,x.\) Thus we have that \(\mathscr{L}\in\mathcal{B}_{L}(P)\) and \(\psi^{-1}(\mathscr{L})=x.\)
In [2], the authors give an explicit description of the left and right Fourier algebras associated to a matrix weight \(W\). We introduce the following notation for every pair of elements \(a,b\) of an algebra \(\mathcal{A}\)
\[\operatorname{Ad}_{a}(b):=ab-ba\quad\text{ and }\quad\operatorname{Ad}_{a}^{k+1 }(b):=\operatorname{Ad}_{a}^{k}(\operatorname{Ad}_{a}(b)).\]
**Theorem 4.1**.: _(Theorem 3.7 in [2]) Let W(x) be a weight matrix, let \(P(x,n)\) be the associated sequence of monic orthogonal polynomials and let \(\mathscr{L}\in\operatorname{Mat}_{N}(\mathcal{S})\) with \(\mathscr{L}\cdot P(x,n)=P(x,n)x\). Then the Fourier algebras of \(P(x,n)\) are given by_
\[\mathcal{F}_{L}(W)=\{\mathscr{M}\in\operatorname{Mat}_{N}(\mathcal{S}): \operatorname{Ad}_{\mathscr{L}}^{k+1}(\mathscr{M})=0\text{ for some }k\geq 0\},\]
\[\mathcal{F}_{R}(W)=\{\mathfrak{D}\in\operatorname{Mat}_{N}(\Omega[x]): \mathfrak{D}\text{ is }W\text{-adjointable and }\mathfrak{D}^{\dagger}\in \operatorname{Mat}_{N}(\Omega[x])\}.\]
_Remark 4.2_.: In the case of the weight matrices considered in this paper, any operator \(\mathfrak{D}\in\operatorname{Mat}_{N}(\Omega[x])\) such that \(\mathfrak{D}^{\dagger}\in\operatorname{Mat}_{N}(\Omega[x])\) is \(W\)-adjointable because of the exponential decay of the weight matrix at \(\pm\infty\).
**Definition 4.3**.: Let \(W(x)\) and \(\widetilde{W}(x)\) be weight matrices and let \(P(x,n)\) and \(\widetilde{P}(x,n)\) be their associated sequences of monic orthogonal polynomials. We say that \(\widetilde{P}(x,n)\) is a bispectral Darboux transformation of \(P(x,n)\) if there exist differential operators \(\mathfrak{T},\widetilde{\mathfrak{T}}\in\mathcal{F}_{R}(P)\), polynomials \(F(x),\widetilde{F}(x)\), and sequences of matrices \(C(n),\widetilde{C}(n)\) which are nonsingular for almost \(n\) and satisfy
\[C(n)\widetilde{P}(x,n)=P(x,n)\cdot\mathfrak{T}F(x)^{-1}\text{ and }\quad \widetilde{C}(n)P(x,n)=\widetilde{P}(x,n)\cdot\widetilde{F}(x)^{-1}\widetilde{ \mathfrak{T}}.\]
We say that \(\widetilde{W}(x)\) is a **noncommutative bispectral Darboux transformation** of \(W(x)\) if \(\widetilde{P}(x,n)\) is a bispectral Darboux transformation of \(P(x,n)\).
As a direct consequence of Theorems 1.1 and Theorem 3.11 we have the following result.
**Theorem 4.4**.: _The weight matrix_
\[W(x)=e^{-x^{2}}\begin{pmatrix}e^{2bx}+a^{2}x^{2}&ax\\ ax&1\end{pmatrix}\qquad(a,b\neq 0)\]
_is not a bispectral Darboux transformation of any direct sum of classical weights._
Even when the matrix weight \(W\) is not a Darboux transformation of classical weights it is _closely related_ with a _direct sum of scalar Hermite weights_, as we explain below. We introduce the diagonal weight
\[\widetilde{W}(x)=\begin{pmatrix}e^{-x^{2}+2bx}&0\\ 0&e^{-x^{2}}\end{pmatrix} \tag{29}\]
Observe that \(e^{-x^{2}+2bx}\) is an affine transformation of the classical Hermite weight \(w(x)=e^{-x^{2}}\). We get the following factorization
\[W(x)=T(x)\widetilde{W}(x)T(x)^{*}\qquad\text{ with }\quad T(x)=\begin{pmatrix}1&ax \\ 0&1\end{pmatrix} \tag{30}\]
From Theorem 4.1 and Remark 4.2, a differential operator \(\mathfrak{D}\in\operatorname{Mat}_{2}(\Omega[x])\) belongs to \(\mathcal{F}_{R}(W)\) if and only if its \(W\)-formal adjoint \(\mathfrak{D}^{\dagger}=W(x)\mathfrak{D}^{*}W(x)^{-1}\) has polynomial coefficients. Similarly, \(\widetilde{\mathfrak{D}}\in\operatorname{Mat}_{2}(\Omega[x])\) belongs to \(\mathcal{F}_{R}(\widetilde{W})\) if and only if \(\widetilde{\mathfrak{D}}^{\dagger}=\widetilde{W}(x)\widetilde{\mathfrak{D} }^{*}\widetilde{W}(x)^{-1}\in\operatorname{Mat}_{2}(\Omega[x])\).
**Proposition 4.5**.: _We have_
\[\mathcal{F}_{R}(W)=T(x)\mathcal{F}_{R}(\widetilde{W})T^{-1}(x).\]
Proof.: For any \(\mathfrak{D}\in\mathcal{F}_{R}(W)\) the differential operators \(\mathfrak{D}^{\dagger}=W(x)\mathfrak{D}^{*}W^{-1}(x)\) and \(E=T^{-1}(x)\mathfrak{D}T(x)\) have also polynomial coefficients because \(T(x)\) and \(T^{-1}(x)\) are polynomials. We also get
\[T^{-1}(x)\,\mathfrak{D}^{\dagger}\,T(x)=\widetilde{W}(x)E^{*}\,\widetilde{W} ^{-1}(x)=E^{\dagger}.\]
Therefore \(E\in\mathcal{F}_{R}(\widetilde{W})\) and \(\mathfrak{D}\in T(x)\mathcal{F}_{R}(\widetilde{W})T^{-1}(x)\).
_Remark 4.6_.: We get that
\[\mathcal{D}(W)\subsetneq T(x)\mathcal{D}(\widetilde{W})T^{-1}(x).\]
In fact, the generator \(D\) of the algebra \(\mathcal{D}(W)\) can be factorized as \(D=T(x)\widetilde{D}T^{-1}(x)\) with
\[\widetilde{D}=\begin{pmatrix}\partial^{2}+\partial(-2(x-b))-2&0\\ 0&\partial^{2}+\partial(-2x)\end{pmatrix}\in\mathcal{D}(\widetilde{W}).\]
Observe that \(\partial^{2}+\partial(-2x)\) is the classical Hermite operator. Then we have that \(\widetilde{D}\) is a \(\widetilde{W}\)-symmetric differential operator in \(\mathcal{D}(\widetilde{W})\). On the other hand the operator
\[\widetilde{D_{2}}=\begin{pmatrix}\partial^{2}+\partial(-2(x-b))&0\\ 0&\partial^{2}+\partial(-2x)\end{pmatrix}\in\mathcal{D}(\widetilde{W})\]
but
\[T(x)\widetilde{D}_{2}T(x)^{-1}=\partial^{2}I+\partial\begin{pmatrix}-2x+2b&-2 abx+2a\\ 0&-2x\end{pmatrix}+\begin{pmatrix}0&-2ax\\ 0&0\end{pmatrix}\not\in\mathcal{D}(W).\]
We close this section by studying the right Fourier algebras of the weights \(W(x)\) and \(\widetilde{W}(x)\). First at all we will prove that any operator in the algebra \(\mathcal{F}_{R}(\widetilde{W})\) is a diagonal operator.
**Proposition 4.7**.: _The right Fourier algebra of \(\widetilde{W}\) is_
\[\mathcal{F}_{R}(\widetilde{W})=\left\{\sum_{j=0}^{n}\partial^{j}F_{j}(x)\,:\, F_{j}(x)=\begin{pmatrix}p_{j}(x)&0\\ 0&q_{j}(x)\end{pmatrix},p_{j},q_{j}\in\mathbb{C}[x],\,n\geq 0\right\}.\]
Proof.: A differential operator \(\mathfrak{D}=\sum_{j=0}^{n}\partial^{j}F_{j}(x)\) with polynomial coefficients belongs to \(\mathcal{F}_{R}(\widetilde{W})\) if and only if \(\mathfrak{D}^{\dagger}=\widetilde{W}(x)\mathfrak{D}^{*}\widetilde{W}(x)^{-1}\) has also polynomial coefficients. We have that
\[\mathfrak{D}^{\dagger}=\sum_{j=0}^{n}(-1)^{j}\sum_{k=0}^{j}\binom{j}{k}\, \partial^{k}\big{(}\widetilde{W}(x)F_{j}(x)^{*}\big{)}^{(k)}\widetilde{W}(x)^ {-1}=\sum_{k=0}^{n}\partial^{k}G_{k}(x),\]
where
\[G_{k}=\sum_{j=0}^{n-k}(-1)^{n-j}\binom{n-j}{k}(\widetilde{W}(x)F_{n-j}(x)^{*}) ^{(n-k-j)}\widetilde{W}(x)^{-1},\qquad\text{for $0\leq k\leq n$}. \tag{31}\]
If \(F_{j}=\begin{pmatrix}p_{j}(x)&0\\ 0&q_{j}(x)\end{pmatrix}\) is a polynomial then
\[(\widetilde{W}(x)F_{j}(x)^{*})^{(k)}\widetilde{W}(x)^{-1}=\begin{pmatrix}(e^ {-x^{2}+2bx}\overline{p}_{j}(x))^{(k)}e^{x^{2}-2bx}&0\\ 0&(e^{-x^{2}}\overline{q}_{j}(x))^{(k)}e^{x^{2}}\end{pmatrix} \tag{32}\]
is also a polynomial. Thus \(\mathfrak{D}^{\dagger}\in\operatorname{Mat}_{2}(\Omega[x])\) which implies that \(\mathfrak{D}\in\mathcal{F}_{R}(\widetilde{W})\).
On the other hand, for \(\mathfrak{D}=\sum_{j=0}^{n}\partial^{j}F_{j}(x)\in\mathcal{F}_{R}(\widetilde{ W})\) we will see that all \(F_{j}\) are diagonal matrices. We have that \(\mathfrak{D}^{\dagger}=\sum_{j=0}^{n}\partial^{j}G_{j}(x)\) has polynomials coefficients. If \(F_{j}(x)=\begin{pmatrix}p_{j}(x)&r_{j}(x)\\ g_{j}(x)&q_{j}(x)\end{pmatrix}\in\operatorname{Mat}_{2}(\mathbb{C}[x])\), by (31) we get
\[G_{n}(x)=(-1)^{n}\widetilde{W}(x)F_{n}(x)^{*}\widetilde{W}(x)^{-1}=(-1)^{n} \begin{pmatrix}\overline{p}_{n}(x)&e^{2bx}\overline{g}_{n}(x)\\ e^{-2bx}\overline{r}_{n}(x)&\overline{q}_{n}(x)\end{pmatrix}.\]
Hence \(g_{n}(x)=0\) and \(r_{n}(x)=0\). By induction on \(k\) we can assume that \(g_{n-j}(x)=0\) and \(r_{n-j}(x)=0\), i.e. \(F_{n-j}\) are diagonal, for all \(0\leq j\leq k\). By (31) we have that
\[G_{n-k-1}=\sum_{j=0}^{k+1}(-1)^{n-j}\binom{n-j}{n-k-1}(\widetilde{W}(x)F_{n-j} (x)^{*})^{(k+1-j)}\widetilde{W}(x)^{-1}.\]
For \(0\leq j\leq k\), we get
\[\sum_{j=0}^{k}(-1)^{n-j}\binom{n-j}{n-k-1}(\widetilde{W}(x)F_{n-j}(x)^{*})^{(k +1-j)}\widetilde{W}(x)^{-1}=\begin{pmatrix}\alpha(x)&0\\ 0&\beta(x)\end{pmatrix},\]
for some polynomials \(\alpha,\beta\). Thus
\[G_{n-k-1}=\begin{pmatrix}\alpha(x)+(-1)^{n-k-1}\overline{p}_{n-k-1}(x)&(-1)^{ n-k-1}e^{2bx}\overline{g}_{n-k-1}(x)\\ (-1)^{n-k-1}e^{-2bx}\overline{r}_{n-k-1}(x)&\beta(x)+(-1)^{n-k-1}\overline{q}_ {n-k-1}(x)\end{pmatrix}.\]
is a matrix polynomial which implies that \(g_{n-k-1}(x)=0\) and \(r_{n-k-1}(x)=0\).
By using that \(\mathcal{F}_{R}(W)=T(x)\mathcal{F}_{R}(\widetilde{W})T^{-1}(x)\) we obtain
**Corollary 4.8**.: _The right Fourier algebra of \(W\) is_
\[\mathcal{F}_{R}(W)=\left\{\sum_{j=0}^{n}\partial^{j}\begin{pmatrix}p_{j}(x)& ax(q_{j}(x)-p_{j}(x))\\ 0&q_{j}(x)\end{pmatrix}+\partial^{j-1}\begin{pmatrix}0&jaq_{j}(x)\\ 0&0\end{pmatrix}\,:\,p_{j},q_{j}\in\mathbb{C}[x],\,n\geq 0\right\}.\]
**Proposition 4.9**.: _Let \(\widetilde{W}\) be the diagonal weight matrix given in (29). The algebra \(\mathcal{D}(\widetilde{W})\) is a commutative full algebra._
Proof.: The algebra \(\mathcal{D}(\widetilde{W})\) of any diagonal weight matrix \(\widetilde{W}\) is always a full algebra. In fact, \(\mathcal{D}_{1}=\mathrm{diag}(1,0)\) and \(\mathcal{D}_{2}=\mathrm{diag}(0,1)\) are \(\widetilde{W}\)-symmetric operators in \(\mathcal{D}(\widetilde{W})\) satisfying \(\mathcal{D}_{1}\mathcal{D}_{2}=0=\mathcal{D}_{2}\mathcal{D}_{1}\) and \(\mathcal{D}_{1}+\mathcal{D}_{2}\) a central element.
On the other hand \(\mathcal{D}(\widetilde{W})\subset\mathcal{F}_{R}(\widetilde{W})\) which implies that \(\mathcal{D}(\widetilde{W})\) is a diagonal algebra. Thus \(\mathcal{D}(\widetilde{W})=\mathcal{D}(e^{-x^{2}+2bx})\oplus\mathcal{D}(e^{ -x^{2}})\). Therefore \(\mathcal{D}(\widetilde{W})\) is commutative.
## 5. Orthogonal polynomials with respect to \(W\).
In this section we give the explicit expressions of a sequence of orthogonal polynomials with respect to the weight
\[W(x)=e^{-x^{2}}\begin{pmatrix}e^{2bx}+a^{2}x^{2}&ax\\ ax&1\end{pmatrix}\]
and we also give the three-term recursion relation satisfied by them.
We denote \(H_{n}(x)\) the \(n\)-th monic Hermite polynomial, given by \(H_{n}(x)=\frac{(-1)^{n}}{2^{n}}(e^{-x^{2}})^{(n)}e^{x^{2}}.\) We observe that \(H_{n}(x-b)\) are the monic orthogonal polynomials for the weight \(\tilde{w}(x)=e^{-x^{2}+2bx}.\)
**Proposition 5.1**.: _The polynomials_
\[Q(x,n)=\begin{pmatrix}H_{n}(x-b)&aH_{n+1}(x)-axH_{n}(x-b)\\ -anH_{n-1}(x-b)&2e^{b^{2}}H_{n}(x)+a^{2}nxH_{n-1}(x-b)\end{pmatrix}\]
_are orthogonal with respect to \(W(x)\). Moreover, the leading coefficient of \(Q(x,n)\) is given by_
\[M_{n}=\begin{pmatrix}1&nab\\ 0&2e^{b^{2}}+a^{2}n\end{pmatrix}. \tag{33}\]
Proof.: The weight \(W(x)\) can be factorized as \(W(x)=T(x)\widetilde{W}(x)T(x)^{*}\), where
\[T(x)=\begin{pmatrix}1&ax\\ 0&1\end{pmatrix}\quad\text{ and }\quad\widetilde{W}(x)=\begin{pmatrix} \tilde{w}(x)&0\\ 0&w(x)\end{pmatrix},\]
and \(w(x)=e^{-x^{2}}\), \(\tilde{w}(x)=e^{-b^{2}}w(x-b)\). Thus
\[Q(x,n)T(x)=\begin{pmatrix}H_{n}(x-b)&aH_{n+1}(x)\\ -anH_{n-1}(x-b)&2e^{b^{2}}H_{n}(x)\end{pmatrix}.\]
We have that the sequence \(\{Q(x,n)\}_{n}\) is orthogonal with respect to \(W(x)\) if and only if \(\{Q(x,n)T\}_{n}\) is orthogonal with respect to \(\widetilde{W}\).
For \(n\neq m\) we compute \(\langle Q(n,x)T(x),Q(m,x)T(x)\rangle_{\widetilde{W}}\).
\[\int_{-\infty}^{\infty}\begin{pmatrix}H_{n}(x-b)&aH_{n+1}(x)\\ -anH_{n-1}(x-b)&2e^{b^{2}}H_{n}(x)\end{pmatrix}\widetilde{W}(x)\begin{pmatrix} H_{m}(x-b)&-amH_{m-1}(x-b)\\ aH_{m+1}(x)&2e^{b^{2}}H_{m}(x)\end{pmatrix}dx=\begin{pmatrix}k_{11}&k_{12}\\ k_{21}&k_{22}\end{pmatrix}.\]
To see that
\[k_{11}=\langle H_{n}(x-b),H_{m}(x-b)\rangle_{\widetilde{w}}+a^{2}\langle H_{ n+1}(x),H_{m+1}(x)\rangle_{w}=0,\]
and
\[k_{22}=nma^{2}\langle H_{n-1}(x-b),H_{m-1}(x-b)\rangle_{\widetilde{w}}+4e^{2b^ {2}}\langle H_{n}(x),H_{m}(x)\rangle_{w}=0,\]
we use that \(\{H_{n}(x)\}_{n}\) and \(\{H_{n}(x-b)\}_{n}\) are orthogonal with respect to \(w\) and \(\tilde{w}\) respectively.
Now we compute
\[k_{12}=-am\langle H_{n}(x-b),H_{m-1}(x-b)\rangle_{\widetilde{w}}+2ae^{b} \langle H_{n+1}(x),H_{m}(x)\rangle_{w}.\]
If \(m\neq n+1\) then \(k_{12}=0\). If \(m=n+1\), we use that \(\|H_{n+1}\|_{w}^{2}=\frac{(n+1)}{2}\|H_{n}\|_{w}^{2}\) and
\[\langle H_{n}(x-b),H_{n}(x-b)\rangle_{\widetilde{w}}=\int_{-\infty}^{\infty}H_ {n}(x-b)e^{-x^{2}+2bx}H_{n}(x-b)dx=e^{b^{2}}\|H_{n}\|_{w}^{2}\]
to obtain that \(k_{12}=0\). We proceed in a similar way to prove that
\[k_{21}=-an\langle H_{n-1}(x-b),H_{m}(x-b)\rangle_{\widetilde{w}}+2ae^{b} \langle H_{n}(x),H_{m+1}(x)\rangle_{w}=0,\]
which concludes the proof that \(Q(n,x)\) is an orthogonal sequence of polynomials for the weight \(W\). The last assertion in the statement follows easily.
**Proposition 5.2**.: _The matrix orthogonal polynomials \(Q(x,n)\) defined in Proposition 5.1 satisfy the three-term recursion relation_
\[Q(n,x)x=\widetilde{A}_{n}Q(n+1,x)+\widetilde{B}_{n}Q(n,x)+\widetilde{C}_{n}Q (n-1,x)\]
_where_
\[\widetilde{A}_{n}=\begin{pmatrix}1&-\frac{ab}{2e^{b^{2}}+(n+1)a^{2}}\\ 0&\frac{2e^{b^{2}}+na^{2}}{2e^{b^{2}}+(n+1)a^{2}}\end{pmatrix},\,\widetilde{B} _{n}=\begin{pmatrix}\frac{2e^{b^{2}}b}{2e^{b^{2}}+(n+1)a^{2}}&\frac{a}{2(2e^{b^ {2}}+na^{2})}\\ \frac{2e^{b^{2}}a}{2e^{b^{2}}+(n+1)a^{2}}&\frac{na^{2}b}{2e^{b^{2}}+na^{2}} \end{pmatrix},\,\widetilde{C}_{n}=\begin{pmatrix}\frac{n}{2}\frac{(n+1)a^{2}+2 e^{b^{2}}}{2e^{b^{2}}+na^{2}}&0\\ -\frac{2nabe^{b^{2}}}{2e^{b^{2}}+na^{2}}&\frac{n}{2}\end{pmatrix}.\]
Proof.: Let \(T=T(x)=\begin{pmatrix}1&ax\\ 0&1\end{pmatrix}.\) Recall that \(Q(x,n)T=\begin{pmatrix}H_{n}(x-b)&aH_{n+1}(x)\\ -anH_{n-1}(x-b)&2e^{b^{2}}H_{n}(x)\end{pmatrix}\).
We compute
\[\widetilde{A}_{n}Q(n+1,x)T=\begin{pmatrix}H_{n+1}(x-b)+\frac{a^{2}b(n+1)}{a^{2 }(n+1)+2e^{b^{2}}}H_{n}(x-b)&aH_{n+2}(x)-\frac{2abe^{b^{2}}}{a^{2}(n+1)+2e^{b^ {2}}}H_{n+1}(x)\\ -\frac{(2e^{b^{2}}+na^{2})a(n+1)}{(n+1)a^{2}+2e^{b^{2}}}H_{n}(x-b)&\frac{2(2e^ {b^{2}}+na^{2})\pm^{2}}{(n+1)a^{2}+2e^{b^{2}}}H_{n+1}(x)\end{pmatrix},\]
\[\widetilde{B}_{n}Q(n.x)T=\begin{pmatrix}\frac{2e^{b^{2}}b\,H_{n}(x-b)}{(n+1)a^ {2}+2e^{b^{2}}}-\frac{a^{2}nH_{n-1}(x-b)}{2(2e^{b^{2}}+na^{2})}&\frac{2abe^{b^ {2}}H_{n+1}(x)}{(n+1)a^{2}+2e^{b^{2}}}+\frac{ae^{b^{2}}H_{n}(x)}{2e^{b^{2}}+na^ {2}}\\ \frac{2ae^{b^{2}}H_{n}(x-b)}{(n+1)a^{2}+2e^{b^{2}}}-\frac{n^{2}ba^{2}H_{n-1}(x -b)}{2e^{b^{2}}+na^{2}}&\frac{2a^{2}e^{b^{2}}H_{n+1}(x)}{(n+1)a^{2}+2e^{b^{2}} }+\frac{2na^{2}be^{2}H_{n}(x)}{2e^{b^{2}}+na^{2}}\end{pmatrix},\]
\[\widetilde{C}_{n}Q(n-1,x)T=\begin{pmatrix}\frac{n((n+1)a^{2}+2e^{b^{2}})}{2(2 e^{b^{2}}+na^{2})}H_{n-1}(x-b)&\frac{n((n+1)a^{2}+2e^{b^{2}})a}{2(2e^{b^{2}}+na^{2}) }H_{n}(x)\\ -\frac{2anbe^{b^{2}}}{2e^{b^{2}}+na^{2}}H_{n-1}(x-b)-\frac{n(n-1)a}{2}H_{n-2} (x-b)&-\frac{2na^{2}be^{b^{2}}}{2e^{b^{2}}+na^{2}}H_{n}(x)+ne^{b^{2}}H_{n-1}(x )\end{pmatrix}.\]
Let \(M=\widetilde{A}_{n}Q(n+1,x)T+\widetilde{B}_{n}Q(n,x)T+\widetilde{C}_{n}Q(n-1, x)T=\begin{pmatrix}m_{11}&m_{12}\\ m_{21}&m_{22}\end{pmatrix}\). We get
\[m_{11} =H_{n+1}(x-b)+bH_{n}(x-b)+\frac{n}{2}H_{n-1}(x-b),\] \[m_{12} =a\left(\frac{(n+1)}{2}H_{n}(x)+H_{n+2}(x)\right),\] \[m_{21} =-an\left(H_{n}(x-b)+bH_{n-1}(x-b)+\frac{(n-1)}{2}H_{n-2}(x-b) \right),\] \[m_{22} =e^{b^{2}}(2H_{n+1}(x)+nH_{n-1}(x)).\]
By using the three-term recursion relation satisfied by the monic orthogonal Hermite polynomials
\[H_{n}(x)x=H_{n+1}(x)+\frac{n}{2}H_{n-1}(x)\]
we get that \(M=x\,Q(n,x)T\) and multiplying by \(T^{-1}\) we complete the proof.
The monic orthogonal polynomials with respect to \(W(x)\) are \(P(x,n)=M_{n}^{-1}Q(x,n)\). By taking
\[A_{n}=M_{n}^{-1}\widetilde{A}_{n}M_{n+1},\quad B_{n}=M_{n}^{-1}\widetilde{B}_{n} M_{n},\quad C_{n}=M_{n}^{-1}\widetilde{C}_{n}M_{n-1}\]
we get
\[P(x,n)x=A_{n}P(x,n+1)+B_{n}P(x,n)+C_{n}P(x,n-1).\]
Explicitly, we have \(A_{n}=I\),
\[B_{n} =\begin{pmatrix}\frac{4be^{2u^{2}}}{(na^{2}+a^{2}+2e^{b^{2}})(2be^ {2}+na^{2})}&\frac{a^{8}n(n+1)(2b^{2}n-1)+a^{3}2e^{t^{2}}(2b^{2}n^{2}-2n-1)-ae^{ 2b^{2}}(2b^{2}n+1)}{-2(na^{2}+a^{2}+2e^{b^{2}})(2e^{b^{2}}+na^{2})}\\ \frac{2e^{b^{2}}a}{(2e^{b^{2}}+na^{2})(na^{2}+a^{2}+2e^{b^{2}})}&\frac{na^{2}b( na^{2}+a^{2}+4e^{2})}{(na^{2}+a^{2}+2e^{b^{2}})(2e^{b^{2}}+na^{2})}\end{pmatrix},\] \[C_{n} =\begin{pmatrix}\frac{n(a^{4}n(n+1)+a^{2}2e^{2}(2b^{2}n+2n+1)+4e^ {2b^{2}})}{2(2e^{b^{2}}+na^{2})^{2}}&\frac{nab(a^{4}n(n-1)+a^{2}2e^{2}(2b^{2}n^ {2}-2b^{2}n-1)-4e^{2b^{2}})}{2(2e^{b^{2}}+na^{2})^{2}}\\ \frac{-2nabe^{2}}{(2e^{b^{2}}+na^{2})^{2}}&\frac{-n(-a^{4}n(n-1)+a^{2}2e^{2}(2b^ {2}n-2b^{2}-2n+1)-4e^{2b^{2}})}{2(2e^{b^{2}}+na^{2})^{2}}\end{pmatrix}.\]
**Corollary 5.3**.: _With the notation above, the difference operator \(\mathcal{L}=\delta+B_{n}+C_{n}\delta^{-1}\in\mathcal{B}_{L}(P)\), the left-bispectral algebra associated to the weight \(W(x)\)._
## 6. Higher dimensional examples
Examples of weight matrices \(W(x)\) which are solutions of the matrix Bochner problem and which are not obtained as Darboux transformations of diagonal (classical) weights are present in any dimensions.
For \(N=3\), and real parameters \(a_{1},a_{2},b_{1}\neq b_{2}\neq 0\), the second-order differential operator
\[D=\partial^{2}I+\partial\begin{pmatrix}-2x&2a_{1}b_{1}x+2a_{1}&0\\ 0&2b_{1}-2x&0\\ 0&(2a_{2}b_{1}-2a_{2}b_{2})x+2a_{2}&2b_{2}-2x)\end{pmatrix}+\begin{pmatrix}0&2 a_{1}b_{1}&0\\ 0&2&0\\ 0&2a_{2}b_{1}&0\end{pmatrix}.\]
is a \(W\)-symmetric operator with respect to the weight matrix
\[W(x)=e^{-x^{2}}\begin{pmatrix}1+a_{1}^{2}x^{2}e^{2b_{1}x}&a_{1}xe^{2b_{1}x}&a_ {1}a_{2}x^{2}e^{2b_{1}x}\\ a_{1}xe^{2b_{1}x}&e^{2b_{1}x}&a_{2}xe^{2b_{1}x}\\ a_{1}a_{2}x^{2}e^{2b_{1}x}&a_{2}xe^{2b_{1}x}&a_{2}^{2}x^{2}e^{2b_{1}x}+e^{2b_{2 }x}\end{pmatrix}.\]
For \(N=4\), the weight matrix
\[W(x)=e^{-x^{2}}\begin{pmatrix}1+a_{1}^{2}x^{2}e^{2b_{1}x}&a_{1}xe^{2b_{1}x}&a_ {1}a_{2}x^{2}e^{2b_{1}x}&0\\ a_{1}xe^{2b_{1}x}&e^{2b_{1}x}&a_{2}xe^{2b_{1}x}&0\\ a_{1}a_{2}x^{2}e^{2b_{1}x}&a_{2}xe^{2b_{1}x}&a_{2}^{2}x^{2}e^{2b_{1}x}+e^{2b_ {2}x}+a_{3}^{2}x^{2}e^{2b_{3}x}&a_{3}xe^{2b_{3}x}\\ 0&0&a_{3}xe^{2b_{3}x}&e^{2b_{3}x}\end{pmatrix}\]
admits the following symmetric differential operator in the algebra \(\mathcal{D}(W)\).
\[D=\partial^{2}I+ \partial\begin{pmatrix}-2x&2a_{1}b_{1}x+2a_{1}&0&0\\ 0&2b_{1}-2x&0&0\\ 0&2a_{2}(b_{1}-b_{2})x+2a_{2}&2b_{2}-2x&-2a_{3}(b_{2}-b_{3})x+2a_{3}\\ 0&0&0&2b_{3}-2x\end{pmatrix}\] \[+\begin{pmatrix}-2&2a_{1}b_{1}&0&0\\ 0&0&0&0\\ 0&2a_{2}b_{1}&-2&2a_{3}b_{3}\\ 0&0&0&0\end{pmatrix}.\]
In both examples, we can prove that all differential operators in the algebra \(\mathcal{D}(W)\) can be generated by the second-order differential operator \(D\) and the identity \(I\). Therefore \(W\) is not a bispectral Darboux transformation of a direct sum of classical weights.
|
2305.17582 | Electroweak symmetry breaking by gravity | We consider a simple scale-invariant action coupling the Higgs field to the
metric scalar curvature $R$ and containing an $R^2$ term that exhibits
spontaneous breaking of scale invariance and electroweak symmetry. The
coefficient of the $R^2$ term in this case determines the self-coupling of the
Higgs boson in the Einstein frame, and the scalaron becomes a dilaton weakly
coupled to the Higgs boson. Majorana mass terms for right-handed neutrinos can
be generated in a scale-invariant manner by using the Higgs-field invariant; in
this case, the existing experimental limits on the Higgs-boson total width rule
out Majorana mass values in a certain range. The model inherits the naturalness
issues of general relativity connected with the smallness of the gravitational
and cosmological constants. | Yuri Shtanov | 2023-05-27T21:31:15Z | http://arxiv.org/abs/2305.17582v3 | # Electroweak symmetry breaking by gravity
###### Abstract
We show that a simple scale-invariant action coupling the Higgs field to the metric scalar curvature \(R\) and containing an \(R^{2}\) term exhibits dynamical breaking of scale invariance and electroweak symmetry. The coefficient of the \(R^{2}\) term in this case determines the self-coupling of the Higgs boson in the Einstein frame, and the scalaron becomes a dilaton weakly coupled to the Higgs boson. Majorana mass terms for right-handed neutrinos can be generated in a scale-invariant manner by using the Higgs-field invariant; in this case, the existing experimental limits on the Higgs-boson total width rule out Majorana mass values in a certain range. The model inherits the naturalness issues of general relativity connected with the smallness of the gravitational and cosmological constants.
## I Introduction
It has long been suggested in the literature that the global scale-invariance can be an exact symmetry of nature [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. Here, by global scale-invariance, we mean invariance of the Lagrangian density with respect to multiplication of the space-time metric and matter fields by appropriate constant factors. Lagrangians respecting this principle cannot contain dimensionful parameters (such as masses or gravitational and cosmological constants), hence, scale symmetry has to be broken dynamically in order to generate them. Since only the dimensionless ratios of such parameters are measured in experiment, it appears possible, in principle, to generate all dimensionful scales by a unique mechanism. Scale invariance can be maintained on a quantum level, albeit at the expense of renormalisability [5; 6; 7; 8; 9; 10].
Technically, this idea is usually implemented by introducing a special scalar field \(\chi\), sometimes referred to as the "metron" (its logarithm or, often, the field \(\chi\) itself is also called
the dilaton; see [11] for a review), whose expectation value gives birth to all dimensionful parameters. In particular, the gravitational constant is generated by the scale-invariant coupling of the field \(\chi\) to the metric curvature scalar \(R\) via \(\xi\chi^{2}R\) term, following the old idea due to Brans and Dicke [12; 13].
A natural question arises whether the Higgs field of the Standard Model can play the role of such a scalar field breaking the scale-invariance. It turns out that, even in the presence of the scale and electroweak symmetry-breaking potential, such a theory is not realistic: the coupling constant \(\xi\) is required to be extremely large (see below), in which case the Higgs field becomes effectively massless and decouples from the other fields of the Standard Model [14; 15; 16].
In this letter, we point out that such a theory is remedied by adding the \(R^{2}\) term to the gravitational action, thus introducing a new scalar degree of freedom (eventually to become a dilaton). In this case, one can start with a scale-invariant action even without any potential for the Higgs field; both the scale invariance and the electroweak symmetry become naturally broken dynamically. This is most easily seen in the Einstein frame of the theory, in which the usual Standard Model action arises together with the Einstein gravity and a new scalar degree of freedom (the dilaton) coupled to the Higgs field. The scale-invariance of the theory in the original frame becomes the dilaton shift symmetry in the Einstein frame. The dimensionless constant in front of the \(R^{2}\) term regulates the self-coupling of the Higgs field, hence, the mass of the Higgs boson in this frame. On the other hand, the scale-invariant term \(\lambda_{\Phi}\left(\Phi^{\dagger}\Phi\right)^{2}\) in the original action is responsible for the cosmological constant, hence, requires a tiny constant \(\lambda_{\Phi}\). The extreme largeness of \(\xi\) and smallness of \(\lambda_{\Phi}\) represent fine-tuning issues in the original frame, equivalent to the unnaturalness issues of the Planck and cosmological constants in the Einstein frame.
## II The model
Let us consider gravity coupled to the Higgs field of the Standard Model, with a scale-invariant total low-energy effective Lagrangian1
Footnote 1: We use the metric signature \((-,+,+,+)\) and system of units \(\hbar=c=1\).
\[L=\xi_{*}\Phi^{\dagger}\Phi R+\frac{\xi_{*}^{2}}{4\lambda_{*}}R^{2}-\lambda_{ \Phi}\left(\Phi^{\dagger}\Phi\right)^{2}-\left(D_{\mu}\Phi\right)^{\dagger}D^ {\mu}\Phi+L_{\rm m}\,, \tag{1}\]
where \(\xi_{*}\), \(\lambda_{*}\) and \(\lambda_{\Phi}\) are positive dimensionless constants, \(\Phi\) is the Higgs doublet, and \(D_{\mu}\) is the gauge covariant derivative involving the SU(2) and U(1) electroweak gauge fields and acting on the Higgs doublet \(\Phi\). Note that the potential for the Higgs field does not contain the symmetry-breaking parameter, which is forbidden by the invariance of the Lagrangian density with respect to the global scaling \(g_{\mu\nu}\to\Omega^{2}g_{\mu\nu}\), \(\Phi\to\Omega^{-1}\Phi\), etc, with \(\Omega\) being a space-time constant. The part \(L_{\rm m}\) contains all the rest of matter fields, together with their couplings to the Higgs and gauge fields. It is assumed to be Weyl invariant, e.g., invariant with respect to local conformal transformations of the metric accompanied by appropriate local transformations of the matter fields; in particular, all couplings in \(L_{\rm m}\) are dimensionless. (This is the case in the Standard Model without the Majorana mass terms for right-handed neutrinos. We will return to this issue below.)
Introducing an auxiliary scalar field \(\chi_{*}\) with canonical dimension of mass, one can write Lagrangian (1) in the equivalent form
\[L =\xi_{*}\chi_{*}^{2}R-\lambda_{*}\left(\Phi^{\dagger}\Phi-\chi_{ *}^{2}\right)^{2}-\lambda_{\Phi}\left(\Phi^{\dagger}\Phi\right)^{2}-\left(D_{ \mu}\Phi\right)^{\dagger}D^{\mu}\Phi+L_{\rm m} \tag{2}\] \[=\xi\chi^{2}R-\lambda\left(\Phi^{\dagger}\Phi-\chi^{2}\right)^{ 2}-\lambda_{\chi}\chi^{4}-\left(D_{\mu}\Phi\right)^{\dagger}D^{\mu}\Phi+L_{ \rm m}\,. \tag{3}\]
Indeed, finding the extremum of (2) with respect to \(\chi_{*}^{2}\) and substituting it into (2) gives back equation (1). In equation (3), we have made a rescaling of the constants and of the auxiliary field according to
\[\lambda=\lambda_{*}+\lambda_{\Phi}\,,\quad\lambda_{\chi}=\lambda_{\Phi}\left(1 +\frac{\lambda_{\Phi}}{\lambda_{*}}\right)\,,\quad\xi=\xi_{*}\left(1+\frac{ \lambda_{\Phi}}{\lambda_{*}}\right)\,,\quad\chi=\left(1+\frac{\lambda_{\Phi}}{ \lambda_{*}}\right)^{-1/2}\chi_{*}\,. \tag{4}\]
Theory (3) is just the usual dilatonic extension of the Standard Model [1; 2] but without the kinetic term for the field \(\chi\). Note that, when \(\lambda_{\Phi}=0\), the Higgs-field potential is absent, we have also \(\lambda_{\chi}=0\), and the starred constants and scalar field coincide with the unstarred ones.
It is clear from (3) that the scale-invariance and electroweak symmetry is dynamically broken if the field \(\chi\) acquires a nonvanishing space-time value. In this case, the vacuum expectation value of the Higgs invariant \(\Phi^{\dagger}\Phi\) becomes equal to \(\chi^{2}\), while the factor \(\xi\chi^{2}\) of the scalar curvature generates the Planck mass. The value of \(\chi^{2}\) is found from the variation of (3):
\[\chi^{2}\equiv\Phi^{\dagger}\Phi=\frac{\xi}{2\lambda_{\chi}}R\,. \tag{5}\]
A non-vanishing vacuum value of \(\chi^{2}\equiv\Phi^{\dagger}\Phi\) is then conditioned by a positive vacuum value of the scalar curvature \(R\), implying the presence of an effective cosmological constant. Equation (5) can also be obtained from (1) by finding the vacuum value of the Higgs field and by noting that \(\xi_{*}/2\lambda_{\Phi}=\xi/2\lambda_{\chi}\) according to (4). In this sense, electroweak symmetry can be said to be broken by gravity.
At this point, we note that related ideas of breaking scale and electroweak symmetry were realised in [17; 18] (see also [19]) for a Weyl-invariant theory with an additional gauge vector field; in this case, there arises a kinetic term for the field \(\chi\) preserving the exact Weyl invariance of the theory. In our case, Weyl invariance of the whole theory is not assumed; theory (1) is only globally scale-invariant and is much simpler by construction.
To elucidate the field dynamics, We proceed to the Einstein frame in a usual way. Assuming \(\chi^{2}>0\) in space-time, we parametrise it by a new scalar field \(\phi\) and make a Weyl rescaling of the metric:
\[\chi^{2}(\phi)=\frac{M^{2}}{2\xi}\omega^{2}(\phi)\,,\qquad\omega(\phi)=e^{ \phi/\sqrt{6}M}\,,\qquad g_{\mu\nu}\to\omega^{-2}g_{\mu\nu}\,. \tag{6}\]
Here, \(M\) is an arbitrary constant of dimension mass. We also exploit the Weyl invariance of the original action \(S_{\rm m}\) with Lagrangian \(L_{\rm m}\). This allows us to accompany the Weyl rescaling (6) by the corresponding Weyl rescaling of all other fields, including the Higgs field, which is transformed as
\[\Phi\to\omega\Phi\,. \tag{7}\]
Being Weyl invariant, the action \(S_{\rm m}\) retains its original form in terms of new fields. As a result of these transformations, Lagrangian (3) in the Einstein frame becomes
\[L=\frac{M^{2}}{2}\left(R-2\Lambda\right)-\frac{1}{2}\left(\partial \phi\right)^{2}-\frac{1}{\sqrt{6}M}\partial_{\mu}\left(\Phi^{\dagger}\Phi \right)\partial^{\mu}\phi-\frac{1}{6M^{2}}\Phi^{\dagger}\Phi\,\left(\partial \phi\right)^{2}\] \[-\left(D_{\mu}\Phi\right)^{\dagger}D^{\mu}\Phi-\lambda\left(\Phi ^{\dagger}\Phi-\frac{v^{2}}{2}\right)^{2}+L_{\rm m}\,, \tag{8}\]
where
\[v^{2}=\frac{M^{2}}{\xi}\,,\qquad\Lambda=\frac{\lambda_{\chi}M^{2}}{4\xi^{2}}= \frac{\lambda_{\chi}v^{4}}{4M^{2}}\,. \tag{9}\]
We observe that the field \(\phi\) has dropped out from the field potential, having become a dilaton, while the Higgs field has acquired a standard electroweak symmetry-breaking potential. The second line in (8) is just the Lagrangian of the Standard Model, while the first line is the Lagrangian for Einstein gravity with Planck mass \(M\) and cosmological constant \(\Lambda\), and for the dilaton \(\phi\) with its derivative coupling to the Higgs field.
The values of \(M=\sqrt{1/8\pi G}\approx 2.4\times 10^{18}\) GeV and \(v\approx 246\) GeV in the Standard Model imply
\[\xi=\frac{M^{2}}{v^{2}}\approx 10^{32}\,. \tag{10}\]
The constant \(\lambda\) determines the Higgs-boson mass \(m_{h}=\sqrt{2\lambda}v\approx 125\) GeV and self-coupling, and should be set to its established value \(\lambda\approx 0.13\). It can be observed that the metric scalar curvature in Lagrangian (1) or (3) enters in the combination \(\xi R\) with a huge constant \(\xi\) given by (10), whose large value is responsible for the weakness of gravity.
In view of (9), in order to account for the small observable value of \(\Lambda\approx 4\times 10^{-66}\) eV\({}^{2}\), the dimensionless constant \(\lambda_{\chi}\) should be extremely small, \(\lambda_{\chi}\approx 3\times 10^{-56}\). Hence, \(\lambda_{*}\approx\lambda\) and \(\lambda_{\Phi}\approx\lambda_{\chi}\) to a very high precision, which means that the self-coupling \(\lambda\) of the Higgs field in (8) is determined by the coefficient of the \(R^{2}\) term in (1). Thus, all dimensionless constants in (1) have been fixed. Given that the Higgs field interacts with matter fields in the original frame (1), such an extreme smallness of \(\lambda_{\Phi}\) represents a naturalness issue in this frame. In the Einstein frame, it is translated to the naturalness issue for the cosmological constant.
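As a rough numerical cross-check of this estimate (the rounding below is illustrative, using the quoted values \(\Lambda\approx 4\times 10^{-66}\) eV\({}^{2}\), \(M\approx 2.4\times 10^{18}\) GeV and \(v\approx 246\) GeV), inverting (9) gives

\[\lambda_{\chi}=\frac{4\Lambda M^{2}}{v^{4}}\approx\frac{4\times\left(4\times 10^{-66}\,\text{eV}^{2}\right)\times\left(2.4\times 10^{27}\,\text{eV}\right)^{2}}{\left(2.46\times 10^{11}\,\text{eV}\right)^{4}}\approx 3\times 10^{-56}\,,\]

consistent with the value quoted above.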
Lagrangian (8) describes non-renormalisable derivative interactions of the dilaton \(\phi\) with the Higgs field, which are all suppressed by inverse powers of the large Planck mass \(M\). The scale-invariance of the original action has transformed to the invariance of (8) with respect to the shifts \(\phi\to\phi+\text{const}\).
The dilaton is kinetically slightly mixed with the Higgs boson. Choosing the canonical unitary gauge for the Higgs doublet \(\Phi\) and shifting it by its vacuum expectation value \(v\), we write
\[\Phi=\frac{1}{\sqrt{2}}\begin{pmatrix}0\\ v+h\end{pmatrix}\,, \tag{11}\]
where \(h\) is the shifted real-valued Higgs field. The quadratic part of the dilaton-Higgs Lagrangian in (8) is then
\[L_{2}=-\frac{1}{2}\left(\partial h\right)^{2}-\frac{1}{2}\left(1+\frac{v^{2}} {6M^{2}}\right)\left(\partial\phi\right)^{2}-\frac{v}{\sqrt{6}M}\partial h \partial\phi-\lambda v^{2}h^{2}\,. \tag{12}\]
This Lagrangian is diagonalised by a transformation to new fields \(h_{*}\) and \(\phi_{*}\):
\[h=\sqrt{1+\frac{v^{2}}{6M^{2}}}\,h_{*}\,,\qquad\phi=\frac{1}{\sqrt{1+v^{2}/6M^ {2}}}\left(\phi_{*}-\frac{v}{\sqrt{6}M}h_{*}\right)\,, \tag{13}\]
with the result
\[L_{2}=-\frac{1}{2}\left(\partial h_{*}\right)^{2}-\frac{1}{2}\left(\partial\phi_ {*}\right)^{2}-\left(1+\frac{v^{2}}{6M^{2}}\right)\lambda v^{2}h_{*}^{2}\,. \tag{14}\]
This renormalises the Higgs field and its mass by a factor very close to unity.
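For orientation (a rough numerical estimate with the Standard Model values quoted above), this correction factor is extremely close to one,

\[\frac{v^{2}}{6M^{2}}=\frac{(246\,\text{GeV})^{2}}{6\times\left(2.4\times 10^{18}\,\text{GeV}\right)^{2}}\approx 1.8\times 10^{-33}\,,\qquad\sqrt{1+\frac{v^{2}}{6M^{2}}}\approx 1+9\times 10^{-34}\,,\]

so the induced shift of the Higgs mass and field normalisation is far below any observable effect.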
The Dirac masses of fermions in the Standard Model are generated by the usual Yukawa coupling terms, making the Dirac action Weyl invariant. To generate Majorana masses for right-handed neutrinos in a Weyl-invariant way in the original frame (1), we can again exploit the Higgs field.
The action for a Weyl spinor \(\psi^{A}\), in the Penrose spinor-index notation [20], reads
\[S_{\psi}=\sqrt{2}\,\mathrm{i}\int\bar{\psi}^{A^{\prime}}\nabla_{AA^{\prime}} \psi^{A}\sqrt{-g}\,d^{4}x\,, \tag{15}\]
where \(\nabla_{AA^{\prime}}\) is the spinor covariant derivative compatible with the space-time metric. Under the Weyl rescaling (6) of the metric, the spinor covariant derivative transforms as [20]
\[\nabla_{AA^{\prime}}\psi^{B}\to\nabla_{AA^{\prime}}\psi^{B}-\epsilon_{A}{}^{B} \psi^{C}\,\nabla_{CA^{\prime}}\ln\omega\,. \tag{16}\]
Here, \(\epsilon_{AB}\) is the antisymmetric \(\epsilon\)-spinor field [it is transformed as \(\epsilon_{AB}\to\omega^{-1}\epsilon_{AB}\) under (6)]. Action (15) is then Weyl invariant under simultaneous rescaling (6) of the metric and rescaling \(\psi^{A}\to\omega^{2}\psi^{A}\) of the spinor.
To construct a Weyl-invariant action for a Majorana spinor, we need a gauge-invariant scalar field that transforms canonically under (6). With only the Higgs field of the Standard Model at our disposal, it is natural to use \(\left(\Phi^{\dagger}\Phi\right)^{1/2}\) as such a scalar, and write the Lagrangian for the Majorana spinor in the form
\[L_{\psi}=\sqrt{2}\,\mathrm{i}\,\bar{\psi}^{A^{\prime}}\nabla_{AA^{\prime}}\psi ^{A}-\frac{\gamma}{\sqrt{2}}\left(\Phi^{\dagger}\Phi\right)^{1/2}\left(\psi^ {A}\psi_{A}+\bar{\psi}_{A^{\prime}}\bar{\psi}^{A^{\prime}}\right)\,, \tag{17}\]
where \(\gamma\) is the coupling constant. The corresponding action is Weyl invariant and, after the electroweak symmetry breaking, the Majorana spinor acquires the mass \(m_{\psi}=\gamma v\).
Such a mechanism of mass generation implies interaction between the Higgs boson and Majorana fermion:
\[L_{\mathrm{int}}=-\frac{\gamma}{2}h\left(\psi^{A}\psi_{A}+\bar{\psi}_{A^{ \prime}}\bar{\psi}^{A^{\prime}}\right)\,, \tag{18}\]
where the Higgs field \(h\) was defined in (11). Fermions with mass smaller than \(m_{h}/2\) contribute to the width of the Higgs boson:
\[\Gamma_{h\to\psi\psi}=\frac{\gamma^{2}}{16\pi}\sqrt{m_{h}^{2}-\left(2m_{\psi} \right)^{2}}=\frac{m_{\psi}^{2}}{16\pi v^{2}}\sqrt{m_{h}^{2}-\left(2m_{\psi} \right)^{2}}\,. \tag{19}\]
The expected value of the total width of the 125-GeV Higgs boson in the Standard Model is \(\Gamma_{h}=4.1\) MeV [21]. In order that the total width be within the experimental limits \(\Gamma_{h}=4.5^{+3.3}_{-2.5}\) MeV [22], for one such fermion we obtain a constraint \(m_{\psi}\lesssim 4.2\) GeV for the Majorana mass, implying \(\gamma\lesssim 0.017\). Thus, Majorana masses of right-handed (sterile) neutrinos in this model are excluded in the interval \(4.2\) GeV \(\lesssim m_{\psi}\lesssim 62.5\) GeV.
Of course, couplings of the form (17) will also contribute to the naturalness issue for the Higgs-boson mass (see [26] for a review of the naturalness issues of the Standard Model).
## III Discussion
We have shown that a scale-invariant theory with Lagrangian of the form (1) exhibits dynamical breaking of scale invariance, hence, also of electroweak symmetry. This is best seen in terms of variables of (3), in which the Higgs-field potential has an absolute minimum corresponding to \(\Phi^{\dagger}\Phi=\chi^{2}\) while the auxiliary field \(\chi\) can take any non-zero value due to scale invariance. In the original frame (1), it is the vacuum Higgs-field invariant \(\Phi^{\dagger}\Phi\) and scalar curvature that can take any value, related by (5), due to scale invariance of the theory. The term \(\lambda_{\Phi}\left(\Phi^{\dagger}\Phi\right)^{2}\) in (1) or the corresponding term \(\lambda_{\chi}\chi^{4}\) in (3) generates a small cosmological constant fixing the background space-time metric. The naturalness issue related to the smallness of the gravitational and cosmological constants is translated as the issue of largeness of the constant \(\xi_{*}\approx 10^{32}\) and smallness of \(\lambda_{\Phi}\approx 3\times 10^{-56}\) in (1).
Note that the theory does not have a continuous limit to the case of \(\lambda_{*}\rightarrow\infty\), i.e., when the \(R^{2}\) term is absent from (1). Indeed, in this case, the gravitational scalaron degree of freedom disappears, and the theory becomes the one considered in [14; 15; 16]. Conformal rescaling of the metric is then done by using the Higgs-field invariant \(\Phi^{\dagger}\Phi\), and the Higgs boson in this case becomes massless and decouples from the Standard Model. In the presence of \(R^{2}\) term, as \(\lambda_{*}\rightarrow\infty\) (implying \(\lambda\rightarrow\infty\)), according to (8), the Higgs boson becomes infinitely heavy, leading to a massive Yang-Mills theory of vector bosons.
Along with a fourth-order gravity term quadratic in the scalar curvature, one can add to (1) another fourth-order term proportional to \(C_{\alpha\beta\mu\nu}C^{\alpha\beta\mu\nu}\), where \(C_{\alpha\beta\mu\nu}\) is the conformal Weyl tensor. Since this term in the action is Weyl-invariant, it will remain unmodified in the final result (8), leading to a version of Stelle gravity [23; 24]. Such a theory is plagued with ghosts, with a possible resolution of this problem consisting in allowing for curvature
invariants of unlimited differential order in the action [25]. Hopefully, this can be done in a scale-invariant manner by using the Higgs field, without affecting qualitatively the lower-order behaviour considered here.
We treated Lagrangian (1) as an _effective_ Lagrangian arising in a scale-invariant quantum field theory involving gravity. In order to maintain scale-invariance on the quantum level, one should assume one of the scale-invariant prescriptions for regularisation; e.g., employing the \(\chi\)-dependent regularisation parameter [5; 7] when dealing with (3). In the present case, \(\chi\)-dependence from the viewpoint of the original Lagrangian (1) means dependence on the combination
\[\chi_{*}^{2}=\Phi^{\dagger}\Phi+\frac{\xi_{*}}{2\lambda_{*}}R\,, \tag{20}\]
obtained by variation of (2) with respect to \(\chi_{*}^{2}\). To preserve the structure of (2), one will need to ensure that no kinetic term for the field \(\chi\) arises in regularising (3). This regularisation prescription means the usual field-independent regularisation of (8) preserving the relations between the terms which involve the constant \(M\).
The model under consideration does not describe successful inflation. In extending it appropriately, one can preserve scale-invariance, but one also needs to ensure that no overproduction of the dilaton radiation occurs in the reheating process. This does not seem to be a problem given that the dilaton interacts directly -- and very weakly -- only with the Higgs field by means of its coupling to the scalar curvature in the original frame (1).
## Acknowledgements
The author acknowledges support from the Simons Foundation. This work is supported by the National Academy of Sciences of Ukraine under project 0121U109612 and by the Taras Shevchenko National University of Kyiv under project 22BF023-01.
|
2308.06473 | Observation of the X17 anomaly in the decay of the Giant Dipole
Resonance of $^8$Be | Angular correlation spectra of $e^+e^-$ pairs produced in the
$^{7}$Li($p$,$\gamma$)$^{8}$Be nuclear reaction were studied at a proton beam
energy of $E_p$~=~4.0~MeV, which corresponds to the excitation energy of the
Giant Dipole Resonance (GDR) in $^8$Be. The spectra measured show a peak like
anomaly at 120$^\circ$ and a broader anomaly also above 140$^\circ$. Both
anomalies could consistently be described by assuming that the same
hypothetical X17 particle was created both in the ground-state transition and
in the transition going to the broad ($\Gamma$=1.5~MeV), first excited state in
$^8$Be. The invariant mass of the particle, which was derived to be $m_Xc^2 =
16.95 \pm 0.48$(stat.)~MeV, agrees well with our previously published values. | A. J. Krasznahorkay, A. Krasznahorkay, M. Csatlós, L. Csige, J. Timár, M. Begala, A. Krakó, I. Rajta, I. Vajda | 2023-08-12T05:49:05Z | http://arxiv.org/abs/2308.06473v1 | # Observation of the X17 anomaly in the decay of the Giant Dipole Resonance of \({}^{8}\)Be
###### Abstract
Angular correlation spectra of \(e^{+}e^{-}\) pairs produced in the \({}^{7}\)Li(\(p\),\(\gamma\))\({}^{8}\)Be nuclear reaction were studied at a proton beam energy of \(E_{p}=4.0\) MeV, which corresponds to the excitation energy of the Giant Dipole Resonance (GDR) in \({}^{8}\)Be. The spectra measured show a peak like anomaly at 120\({}^{\circ}\) and a broader anomaly also above 140\({}^{\circ}\). Both anomalies could consistently be described by assuming that the same hypothetical X17 particle was created both in the ground-state transition and in the transition going to the broad (\(\Gamma\)=1.5 MeV), first excited state in \({}^{8}\)Be. The invariant mass of the particle, which was derived to be \(m_{X}c^{2}=16.95\pm 0.48\)(stat.) MeV, agrees well with our previously published values.
## I Introduction
We published very challenging experimental results in 2016 [1] indicating the electron-positron (\(e^{+}e^{-}\)) decay of a hypothetical new light particle. The \(e^{+}e^{-}\) angular correlations for the 17.6 MeV and 18.15 MeV transitions in \({}^{8}\)Be were studied and an anomalous angular correlation was observed for the 18.15 MeV transition [1]. This was interpreted as the creation and decay of an intermediate bosonic particle with a mass of \(m_{X}c^{2}\)=16.70\(\pm\)0.35(stat)\(\pm\)0.5(sys) MeV, which is now called X17.
Our data were first explained with a vector gauge boson, X17 by Feng and co-workers [2; 3; 4], which would mediate a fifth fundamental force with some coupling to standard model (SM) particles. The possible relation of the X17 boson to the dark matter problem triggered an enormous interest in the wider physics community [5]. New results will hopefully be published soon on the X17 particle from a few different experiments [6].
We also observed a similar anomaly in \({}^{4}\)He [8]. It could be described by the creation and subsequent decay of a light particle during the proton capture process on \({}^{3}\)H to the ground state of \({}^{4}\)He. The derived mass of the particle (\(m_{X}c^{2}=16.94\pm 0.12\)(stat.)\(\pm\)0.21(syst.) MeV) agreed well with that of the proposed X17 particle.
Recently, we have studied the E1 ground state decay of the 17.2 MeV J\({}^{\pi}=1^{-}\) resonance in \({}^{12}\)C [9]. The angular correlation of the \(e^{+}e^{-}\) pairs produced in the \({}^{11}\)B(p,\(\gamma\))\({}^{12}\)C reaction were studied at five different proton energies around the resonance. The gross features of the angular correlations can be described well by the Internal Pair Creation (IPC) process following the E1 decay of the \(1^{-}\) resonance. However, on top of the smooth, monotonic distribution, we observed significant peak-like anomalous excess around 155-160\({}^{\circ}\) at four different beam energies. The \(e^{+}e^{-}\) excess can be well-described by the creation and subsequent decay of the X17 particle. The invariant mass of the particle was derived to be (\(m_{X}c^{2}=17.03\pm 0.11\)(stat.)\(\pm\)0.20(syst.) MeV), in good agreement with our previously published values.
However, despite the consistency of our observations, more experimental data are needed to understand the nature of this anomaly. For this reason, many experiments all over the world are in progress to look for such a particle in different channels. Many of these experiments have already put constraints on the coupling of this hypothetical particle to ordinary matter. Others are still in the development phase, but hopefully they will soon contribute to a deeper understanding of this phenomenon as concluded by the community report of the Frascati conference [6].
Very recently, Barducci and Toni published an updated view on the ATOMKI nuclear anomalies [10]. They have critically re-examined the possible theoretical interpretation of the observed anomalies in \({}^{8}\)Be, \({}^{4}\)He and \({}^{12}\)C anomalies in terms of a BSM boson X with mass \(\approx\)17 MeV. Their results identify an _axial vector state_ as the most promising candidate to simultaneously explain all three anomalous nuclear decays, while the other spin/parity assignments seems to be disfavored for a combined explanation.
At the same time, the NA62 collaboration was searching for K\({}^{+}\) decays to the \(\pi^{+}e^{+}e^{-}e^{+}e^{-}\) final state and excluded the QCD axion as a possible explanation of the 17 MeV anomaly [11]. Hostelt and Pospelov reanalysed some old pion decay constraints [12], ruled out the vector-boson explanations and set limits on axial-vector ones.
The aim of this paper is to use a simpler geometry of the spectrometer to avoid non-trivial possible artefacts, which may be connected to the spectrometer itself [13].
With such a new spectrometer, we studied the X17 creation and the \(e^{+}e^{-}\) pair emission from the decay of the Giant Dipole Resonance (GDR) [14; 15; 16] excitations of \({}^{8}\)Be.
## II Experimental Methods
The experiments were performed in Debrecen (Hungary) at the 2 MV Tandetron accelerator of ATOMKI,
with a proton beam energy of E\({}_{p}\)= 4.0 MeV.
Owing to the rather large width of the GDR (\(\Gamma\) = 5.3 MeV [14]), a 1 mg/cm\({}^{2}\) thick \({}^{7}\)Li\({}_{2}\)O target was used in order to maximize the yield of the \(e^{+}e^{-}\) pairs. The target was evaporated onto a 10 \(\mu\)m thick Ta foil. The average energy loss of the protons in the target was \(\approx\)100 keV.
\(\gamma\) rays were detected by a 3"x3" LaBr\({}_{3}\) detector, which also monitored any potential target losses. The detector was placed at a distance of 25 cm from the target, at an angle of 90 degrees to the beam direction.
A typical \(\gamma\)-ray energy spectrum is shown in Fig. 1. The figure clearly shows the transitions from the decay of the GDR to the ground and first excited states in \({}^{8}\)Be. The cosmic-ray background is also visible on the high-energy side of the spectrum, but it is reasonably low.
The intensity ratio of the peaks was found to be: I(GDR\(\rightarrow\) g.s.)/I(GDR\(\rightarrow\)\(2_{1}^{+}\))=0.18\(\pm\)0.02 at E\({}_{p}\)= 4.0 MeV bombarding energy.
We used Double-sided Silicon Strip Detectors (DSSD) and plastic scintillators as "particle telescopes" to determine the hit positions and the energy of the electrons and positrons, respectively. In our previous experiments, the spectrometers were built up of five and six particle telescopes, both having different acceptances as a function of the \(e^{+}e^{-}\) correlation angle. A detailed description of the spectrometers can be found in Ref. [8]. However, in the present experiment, only two telescopes were used, placed at an angle of 110\({}^{\circ}\) with respect to each other. The diameter of the carbon fiber tube of the target chamber has been reduced from 70 mm to 48 mm to allow a closer placement of the telescopes to the target. Thus, the two telescopes could cover a much larger solid angle around 110\({}^{\circ}\) than the previous setups. Also, in this setup the efficiency function has only one maximum as a function of the \(e^{+}e^{-}\) opening angle. This angular dependence can be simulated and calibrated more reliably. Another advantage of this setup is that its sensitivity to the cosmic background is significantly less. Since the vertical angles of the telescopes were -35\({}^{\circ}\) and -145\({}^{\circ}\), cosmic rays, which arrive mostly vertically, have a very small chance of firing both telescopes at the same time.
The energy calibration of the telescopes, the energy and position calibrations of the DSSD detectors, the Monte Carlo (MC) simulations as well as the acceptance calibration of the whole \(e^{+}e^{-}\) coincidence pair spectrometer were explained in Ref. [8]. Good agreement was obtained between the experimental acceptance and the results of the MC simulations, as presented in Fig. 2. Due to the very tight geometry, the DSSD position data and therefore the \(e^{+}e^{-}\) angular distribution experience an enhanced dependence on the beam spot size and position. According to previous measurements and MC simulations of the present setup, we could properly take this effect into account.
At the proton energy of E\({}_{p}\)= 4.0 MeV, the (p,n) reaction channel is open (E\({}_{thr}\)= 1.88 MeV) and generates neutrons and low-energy \(\gamma\) rays with a large cross section. (Other reaction channels are also open, but their cross sections are much smaller and their influence on our experiment is much weaker.) The maximum neutron energy is E\({}_{n}\) = 1.6 MeV, which induces only a 300 keV electron-equivalent signal in the plastic scintillator due to the quenching effect. Such a small signal fell well below the CFD thresholds that we used.

Figure 1: Typical \(\gamma\)-ray spectrum measured for the \({}^{7}\)Li(\(p\),\(\gamma\))\({}^{8}\)Be nuclear reaction at \(E_{p}\)= 4.0 MeV.

Figure 2: Experimental acceptance of the spectrometer as a function of the correlation angle (\(\theta\)) for consecutive, uncorrelated e\({}^{+}\)e\({}^{-}\) pairs (red line histogram) compared with the results of the MC simulations (blue line histogram), as explained in the text.
The low-energy neutrons did not produce any measurable signal in the DSSD detectors either since the maximum energy that can be transferred in elastic scattering on Si atoms is only \(\approx\)50 keV, which is below the detection threshold.
A single energy spectrum measured by the scintillators and gated by "multiplicity\(=\)2" events in the DSSD detector, which means that both the electron and positron coming from the internal pair creation are detected in the same telescope, is used for energy calibration. Such a calibration spectrum is shown in Fig. 3 for telescope 1.
As shown, the energy resolution for the ground-state transition is reasonably good (\(\approx\) 14%). The intensity ratio of the GDR to ground state and the GDR to the 2\({}^{+}_{1}\) state is determined to be: I(GDR\(\rightarrow\)g.s.)/I(GDR\(\rightarrow\)2\({}^{+}_{1}\))=0.25\(\pm\)0.03.
## III Experimental Results
Unfortunately, the gain of the PMT connected to the second plastic scintillator was less stable than the first one and its energy resolution was somewhat worse. This is represented by the worse resolution of the energy sum spectrum of the two telescopes as shown in Fig. 4.
The angular correlation spectra of the \(e^{+}e^{-}\) pairs for the different energy sum regions were then obtained for symmetric \(-0.5\leq\epsilon\leq 0.5\) pairs, where the energy asymmetry parameter \(\epsilon\) is defined as \(\epsilon=(E_{1}-E_{2})/(E_{1}+E_{2})\), with \(E_{1}\) and \(E_{2}\) denoting the kinetic energies of the leptons measured in telescope 1 and telescope 2, respectively.
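For illustration only (this is not the collaboration's analysis code, and the energy arrays below are made-up placeholders), the symmetric-pair selection amounts to a simple cut on the asymmetry parameter:

```python
import numpy as np

# Hypothetical lepton kinetic energies (MeV) measured in telescope 1 and 2
E1 = np.array([8.1, 3.2, 9.0, 5.5])
E2 = np.array([7.4, 11.0, 8.3, 6.1])

# Energy asymmetry parameter epsilon = (E1 - E2) / (E1 + E2)
eps = (E1 - E2) / (E1 + E2)

# Keep only "symmetric" pairs with |epsilon| <= 0.5
symmetric = np.abs(eps) <= 0.5
print(eps, symmetric)
```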
The angular correlation gated by the low energy-sum region (below 14 MeV), as marked in Fig. 4, is shown in Fig. 5. The measured counts were corrected for the acceptance obtained from the raw data collected for the whole experiment, in a similar way as described previously [8]. The result is a smooth distribution showing no anomalies. It could be described by assuming E1 + M1 multipolarities for the IPC process and a constant distribution, which may originate from cascade transitions of the statistical \(\gamma\) decay of the GDR appearing in real coincidence. In such a case, the lepton pair may come from different transitions, and thus their angles are uncorrelated. This smooth curve reassured us that we were able to accurately determine the efficiency of the spectrometer.
The angular correlation of the \(e^{+}e^{-}\) pairs gated by the GDR energy region (above 14 MeV), as marked in Fig. 4, is shown in Fig. 6.
The experimental data, corrected for the acceptance of the spectrometer, are shown as red dots with error bars. The simulated angular correlation for the E1 internal pair creation is indicated as a black curve. Significant deviations were observed: first of all, a peak-like deviation at 120\({}^{\circ}\), but also an even stronger deviation at larger angles.
The measured angular correlation was fitted from 70 degrees to 160 degrees with the sum of simulated E1, M1 and X17 contributions calculated for both the GDR to ground state and for the GDR to \(2^{+}_{1}\) state transitions. The simulations concerning the decay of the X17 boson in the transition to the ground state of \({}^{8}\)Be were carried out in the same way as we did before [1; 8; 9] and could describe the anomaly appearing at around 120\({}^{\circ}\).

Figure 3: Total energy spectrum of the \(e^{+}e^{-}\) pairs from the \({}^{7}\)Li(p,\(e^{+}e^{-}\))\({}^{8}\)Be nuclear reaction measured in telescope 1 by requiring multiplicity\(=\)2 in their corresponding DSSD detector.

Figure 4: Total energy spectrum of the \(e^{+}e^{-}\) pairs from the \({}^{7}\)Li(p,\(e^{+}e^{-}\))\({}^{8}\)Be nuclear reaction.
However, based on Figures 3 and 4 and previous measurements [14], the \(\gamma\)-decay of GDR to the first excited state is much stronger than its decay to the ground state. According to that, we assumed that the X17 particle was created also in the decay of GDR to the ground state and to the first excited state. Based on the energy of that transition (17.5 MeV), we would expect a peak around 150 degrees. However, the first excited state is very broad (\(\Gamma\)=1.5 MeV), so the shape of the expected anomaly is significantly distorted. The simulations were then performed as a function of the X17 mass from 10 MeV/c\({}^{2}\) to 18 MeV/c\({}^{2}\) for both transitions.
To derive the invariant mass of the decaying particle, we carried out a fitting procedure for both the mass value and the amplitude of the observed peaks. The fit was performed with RooFit [17] in a similar way as we described before [8; 9].
The experimental \(e^{+}e^{-}\) angular correlation was fitted with the following intensity function (INT):
\[\begin{split} INT(e^{+}e^{-})=\\ N_{E1}*PDF(E1)+N_{M1}*PDF(M1)+\\ N_{Sig}*\alpha_{ground}*PDF(sigground)+\\ N_{Sig}*(1-\alpha_{ground})*PDF(sig2plus)\;,\end{split} \tag{1}\]
where PDF(X) represents the MC-simulated probability density functions. \(PDF(E1),PDF(M1)\) were simulated for Internal Pair Creation of electromagnetic transitions with E1 and M1 multipolarity. \(PDF(sigground),PDF(sig2plus)\) were simulated for the two-body decay of an X17 particle as a function of its mass created in the GDR to the ground state and GDR to \(2^{+}_{1}\) transitions, respectively. \(N_{E1}\), \(N_{M1}\), and \(N_{Sig}\) are the fitted numbers of background and signal events, respectively. \(\alpha_{ground}\) is the fraction of X17 decays detected in the GDR to ground state transition, with respect to the total number of detected X17 decays. We assumed the same mass for the X17 particle created in the two transitions. The result of the fit is shown in Fig. 6 together with the experimental data.
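A simplified, purely illustrative sketch of such a template fit is given below (not the RooFit implementation actually used; all template shapes, bin counts and starting values are placeholders standing in for the MC-simulated PDFs of Eq. (1)):

```python
import numpy as np
from scipy.optimize import curve_fit

# Binned opening-angle axis (degrees) over the fit range 70-160 degrees.
theta = np.linspace(70, 160, 46)

def _norm(t):
    return t / np.trapz(t, theta)

# Placeholder templates standing in for the simulated IPC (E1, M1) and X17 shapes.
pdf_e1 = _norm(np.exp(-(theta - 70) / 25.0))                        # falling background
pdf_m1 = _norm(np.exp(-(theta - 70) / 35.0))
pdf_sig_gs = _norm(np.exp(-0.5 * ((theta - 120) / 6.0) ** 2))       # peak near 120 deg
pdf_sig_2plus = _norm(np.exp(-0.5 * ((theta - 150) / 12.0) ** 2))   # broad bump > 140 deg

def intensity(_, n_e1, n_m1, n_sig, alpha_ground):
    """Eq. (1): weighted sum of background (E1, M1) and signal templates."""
    return (n_e1 * pdf_e1 + n_m1 * pdf_m1
            + n_sig * alpha_ground * pdf_sig_gs
            + n_sig * (1.0 - alpha_ground) * pdf_sig_2plus)

# Toy "measured" histogram built from the same templates plus noise.
rng = np.random.default_rng(0)
counts = intensity(None, 8e3, 2e3, 5e2, 0.1) + rng.normal(0, 5, theta.size)

sigma = np.sqrt(np.maximum(counts, 1.0))        # approximate Poisson errors per bin
popt, _ = curve_fit(intensity, theta, counts, p0=[1e4, 1e3, 1e2, 0.2],
                    sigma=sigma, bounds=([0, 0, 0, 0], [np.inf, np.inf, np.inf, 1.0]))
print(dict(zip(["N_E1", "N_M1", "N_Sig", "alpha_ground"], popt)))
```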
As shown in Fig. 7, the simulation can describe the experimental distributions from \(\Theta=70^{\circ}\) to \(160^{\circ}\) well. The significance of the fit is larger than \(10\sigma\).
The measured invariant mass of the hypothetical X17 particle obtained from the fit is: 16.94 \(\pm\) 0.47 MeV(stat)/c\({}^{2}\) and the intensity ratio of the X17 particle was found to be:
\[\frac{B_{X17}(GDR\to g.s.)}{B_{X17}(GDR\to 2^{+}_{1})}=\frac{\alpha_{ground}}{1- \alpha_{ground}}=0.08\pm 0.19 \tag{2}\]
Although the error bar is very large, it agrees within 1\(\sigma\) with the intensity ratio of the corresponding \(\gamma\)-rays of I\({}_{\gamma}\)(GDR\(\rightarrow\) g.s.)/I\({}_{\gamma}\)(GDR\(\rightarrow\) 2\({}^{+}_{1}\))=0.18\(\pm\)0.02 and also with the intensity ratio of \(e^{+}e^{-}\)-pairs of I\({}_{e^{+}e^{-}}\)(GDR\(\rightarrow\)g.s.)/I\({}_{e^{+}e^{-}}\)(GDR\(\rightarrow\)2\({}^{+}_{1}\))=0.25\(\pm\)0.03.
Figure 6: Experimental angular correlations of the \(e^{+}e^{-}\) pairs measured in the \({}^{7}\)Li(p,\(e^{+}e^{-}\))\({}^{8}\)Be reaction at E\({}_{p}\)=4.0 MeV at the vicinity of the GDR. See explanation in the text.
Figure 5: Experimental angular correlations of the \(e^{+}e^{-}\) pairs measured in the \({}^{7}\)Li(p,\(e^{+}e^{-}\))\({}^{8}\)Be reaction at E\({}_{p}\)=4.0 MeV for low-energy (E\({}^{+}\)+E\({}^{-}\)\(\leq\)14 MeV) transitions.
## IV Summary
We have reported on a new direction of X17 research. For the first time, we have successfully detected this particle in the decay of the Giant Dipole Resonance (GDR). Since this resonance is a general property of all nuclei, the study of the GDR may extend these investigations to the entire nuclear chart.
We have studied the GDR (J\({}^{\pi}\) =1\({}^{-}\)) E1-decay to the ground state (J\({}^{\pi}\) =0\({}^{+}\)) and to the first excited state (J\({}^{\pi}\)=2\({}^{+}_{1}\)) in \({}^{8}\)Be. The energy-sum and the angular correlation of the \(e^{+}e^{-}\) pairs produced in the \({}^{7}\)Li(\(p\),e\({}^{+}\)e\({}^{-}\))\({}^{8}\)Be reaction was measured at a proton energy of E\({}_{p}\)= 4.0 MeV. The gross features of the angular correlation can be described well by the IPC process following the decay of the GDR. However, on top of the smooth, monotonic distribution of the angular correlation of \(e^{+}e^{-}\) pairs, we observed significant anomalous excess at about 120\({}^{\circ}\) and above 140\({}^{\circ}\).
The \(e^{+}e^{-}\) excess can be well-described by the creation and subsequent decay of the X17 particle, which we have recently suggested [1; 8; 9]. The invariant mass of the particle was measured to be (\(m_{\rm X}c^{2}=16.95\pm 0.48\)(stat.) MeV), which agrees well with our previous results.
The present observation of the X17 particle in an E1 transition supports its vector/axial-vector character.
## V Acknowledgements
We wish to thank Z. Pintye for the mechanical and J. Molnar for the electronic design of the experiment. This work has been supported by the GINOP-2.3.3-15-2016-00034 and GINOP-2.3.3-15-2016-00005 grants.
|
2302.02896 | Label Assisted Autoencoder for Anomaly Detection in Power Generation
Plants | One of the critical factors that drive the economic development of a country
and guarantee the sustainability of its industries is the constant availability
of electricity. This is usually provided by the national electric grid.
However, in developing countries where companies are emerging on a constant
basis including telecommunication industries, those are still experiencing a
non-stable electricity supply. Therefore, they have to rely on generators to
guarantee their full functionality. Those generators depend on fuel to function
and the rate of consumption gets usually high, if not monitored properly.
Monitoring operation is usually carried out by a (non-expert) human. In some
cases, this could be a tedious process, as some companies have reported an
exaggerated high consumption rate. This work proposes a label assisted
autoencoder for anomaly detection in the fuel consumed by power generating
plants. In addition to the autoencoder model, we added a labelling assistance
module that checks if an observation is labelled, the label is used to check
the veracity of the corresponding anomaly classification given a threshold. A
consensus is then reached on whether training should stop or whether the
threshold should be updated or the training should continue with the search for
hyper-parameters. Results show that the proposed model is highly efficient for
reading anomalies with a detection accuracy of $97.20\%$ which outperforms the
existing model of $96.1\%$ accuracy trained on the same dataset. In addition,
the proposed model is able to classify the anomalies according to their degree
of severity. | Marcellin Atemkeng, Victor Osanyindoro, Rockefeller Rockefeller, Sisipho Hamlomo, Jecinta Mulongo, Theophilus Ansah-Narh, Franklin Tchakounte, Arnaud Nguembang Fadja | 2023-02-06T16:03:38Z | http://arxiv.org/abs/2302.02896v1 | # Label Assisted Autoencoder for Anomaly Detection in Power Generation Plants
###### Abstract
One of the critical factors that drive the economic development of a country and guarantee the sustainability of its industries is the constant availability of electricity. This is usually provided by the national electric grid. However, in developing countries where companies are emerging on a constant basis including telecommunication industries, those are still experiencing a non-stable electricity supply. Therefore, they have to rely on generators to guarantee their full functionality. Those generators depend on fuel to function and the rate of consumption gets usually high, if not monitored properly. Monitoring operation is usually carried out by a (non-expert) human. In some cases, this could be a tedious process, as some companies have reported an exaggerated high consumption rate. This work proposes a label assisted autoencoder for anomaly detection in the fuel consumed by power generating plants. In addition to the autoencoder model, we added a labelling assistance module that checks if an observation is labelled, the label is used to check the veracity of the corresponding anomaly classification given a threshold. A consensus is then reached on whether training should stop or whether the threshold should be updated or the training should continue with the search for hyper-parameters. Results show that the proposed model is highly efficient for reading anomalies with a detection accuracy of 97.20% which outperforms the existing model of 96.1% accuracy trained on the same dataset. In addition, the proposed model is able to classify the anomalies according to their degree of severity.
keywords: Electric grid, Fuel consumption, Autoencoder, Label assisted autoencoder, Anomaly detection, Power generating plants
## 1 Introduction
About 3% of the world's electrical energy is utilised by information and communication technology companies [1], and the telecommunication industry is one of the fastest growing industrial sectors among Agriculture, Banking, Infrastructure, and Oil and Gas. The number of telecommunication industries and the quest for expansion and growth have led to an increase in base stations across targeted countries to boost their network coverage and enhance the effective flow of communication. With the increase in the number of base stations, the issue of base station management needs to be addressed. Grid energy has been known to be the main source of power in developing countries such as those in Africa, and it is expected that these base stations located across different rural and urban areas will be powered by grid energy. However, electricity is quite unstable in most parts of these developing countries and this has forced base stations to look for a reliable alternative source of energy. These alternatives include photovoltaic panels (PV), wind turbines and diesel generators, but mostly generators due to a lack of space for the installation of PV or wind turbines [2]. The high cost of fuel and its transportation to supply stations located in rural areas has increased the operational cost of these companies. These generators are refilled manually, thus creating room for irregular or unusual fuel consumption, which might be caused by several factors such as fuel theft, fuel leakages or poor maintenance of equipment. A study conducted in a base station in Cameroon has shown that the design of the base station building, room cooling systems such as air conditioners and careless handling of lights increase the rate of fuel consumption in the base station [3]. Espadafor et al. [4] also attested that generator performance can be affected by the age of the generator, the number of loads powered by the generator and improper maintenance.
In Cameroon, TeleInfra LTD is one such company whose objective is to manage base stations in various parts of the country. The services include maintenance of base stations and refuelling of generators. Like any other business relying on grid energy, the unstable power supply has resulted in high operating costs, as companies have to find alternative sources of power supply to sustain the continuous operations of the business. The use of alternative sources such as solar panels, hybrid energy and generators has been implemented by TeleInfra for the sustainability of the business performance. Data on fuel consumed, such as the working hours of the generator, the quantity of fuel
refuelled, the rate of consumption, generator maintenance, and total fuel consumed, are collected from base stations [5].
Many strategies have been proposed to improve energy saving, such as building a well-ventilated base station, the use of air conditioners as a cooling system and heat pipes to remove hot air from the base station [6]. However, detecting irregularities in fuel consumption is challenging, especially when there are numerous base stations functioning at once. Anomaly detection in power generating plants is aimed at detecting such irregularities in the behaviour of the data provided. Although there are different algorithms for detecting anomalies, machine learning algorithms are the most used and popular for anomaly detection due to their ability for automation and their effectiveness in the context of deep learning, especially when involving large datasets. According to Goodfellow [7], deep learning is a variant of applied statistics with an increasing emphasis on using computers in estimating complex functions statistically, but a reduced emphasis on proving confidence intervals around these functions. Machine learning algorithms come in several variants. In supervised learning, models can make predictions on unlabelled data after they have been trained on labelled data, whereas, in unsupervised learning, models can only make predictions on unlabelled data by learning similar features and patterns embedded in the datasets. In reinforcement learning, a goal is given and an agent undergoes training in an environment with the purpose of finding an optimal solution to accomplishing the goal.
Anomalies can be an indicator of areas that require attention, and detecting them has been quite popular among the research community. In the past, the research community has conducted several anomaly detection studies with different machine learning algorithms, ranging from comprehensive surveys to specific application domains. Mulongo et al. [5] worked on a similar problem; four different supervised learning algorithms were used in their work for detecting anomalies in a power generation plant. However, in real-life scenarios, abnormal behavioural patterns are very few compared to normal behaviour. For instance, in the same work of Mulongo et al. [5], labels were created by the authors based on certain criteria and only 35 per cent of the dataset turned out to be anomalous, and the authors had to duplicate the data to balance between the two categories. These are key challenges in recognising anomalies with a supervised learning algorithm since such algorithms greatly rely on labelling and balance between normal and anomalous data patterns. The goal is to learn how a generating plant behaves under normal operation so that observations that do not conform to the norm will be identified, and the required attention can be given. Our work investigates an alternative
approach for anomaly detection in a power generation plant in an unsupervised manner, based on the dataset used in Mulongo et al. [5]. The aim is to compare performance with the results obtained in Mulongo et al. [5]. The proposed unsupervised learning framework is built by modifying and fine-tuning an autoencoder, which is free of the hassle of data labelling and balancing. An additional module is added to the autoencoder; the new module uses some labelled data to check whether each observation is correctly classified, and then one of the three steps below is activated:
* Update the threshold of the autoencoder, which increases the model's overall accuracy to an acceptable level
* Update the interval of variation for the numerical hyper-parameters; the best values of the hyper-parameters are then explored in the new search space
* Provide overall performance score
This paper is organised as follows: Section 2 explains anomaly detection and investigates related works for anomaly detection in power grid plants. Section 3 discusses autoencoders, which replicate the input data through a compressed representation. This section also discusses the different evaluation metrics used in this work. Section 4 proposes the label assisted autoencoder and provides a detailed discussion. The dataset and feature engineering are discussed in Section 5. Section 6 discusses the results and limitations, and Section 7 concludes the work.
## 2 Anomaly detection and related works
Anomalies, also known as outliers, often refer to instances or data samples that are significantly distant from the main body of the examined data [8]. These distant values often indicate a deviation from the established normal pattern, which can sometimes be a measurement error or an indication that the sample comes from a different population [9]. Outlier classification depends on the type and domain of the given data as well as on the data analyst. Since many outliers are linked directly with abnormal behaviour, they are also referred to as deviants, anomalies, or abnormalities in the statistics and data analysis literature [8]. According to [8], interpreting data is directly associated with the detection of anomalous samples. Demestichas et al. [9] suggested that it is essential to achieve the highest possible interpretability level to properly select the best anomaly detection method from the wide range of relevant algorithms. There are two major categories of anomalies
depending on the given dataset: multivariate and univariate [9]. Multivariate anomalies can be spotted in multi-dimensional data, while univariate anomalies are spotted in single-dimensional data. Besides these two categories of anomaly, there are other categories which depend on the distribution of the given data. Data samples that are considered anomalous when viewed against the entire dataset are point anomalies, while data samples that are considered anomalous with respect to meta-information related to the data sample are contextual anomalies [10]. In other words, contextual anomalies are classified based on local neighbourhoods, while point anomalies are classified based on the overall dataset. Collective anomalies denote collections of data samples which together are considered an anomalous pattern.
Fahim and Sillitti [11] distinguish two anomaly detection methods: statistical and machine learning methods. The statistical method uses various algorithms such as density-based, distance-based, parametric and statistical-based ones. However, Trinh et al. [12] noted that one of the major challenges encountered by this approach is the design of a suitable model that can accurately separate normal data from unusual data points. On the other hand, machine learning methods consist of both supervised and unsupervised learning algorithms, in which the dataset can either be labelled for supervised learning or unlabelled for unsupervised learning. Some advantages of this method are an enhancement of detection speed and its ability to handle complexity with less human intervention [13].
Many researchers have worked on different machine learning techniques for anomaly detection, but most of the current works applied artificial neural networks (ANNs) to classification tasks. The labelled data is used during the training stage, and then the learned model is able to correctly classify sample data never used during the training process. This technique is generally classified under supervised machine learning techniques. One such example is presented in Mulongo et al. [5], in which support vector machines (SVM [14]), K-Nearest Neighbours (KNN [15]), Logistic Regression (LR [16]), and MultiLayer Perceptron (MLP [17]) are used for anomaly detection on the fuel consumption dataset from an energy company. However, the energy sector is not the only place where anomaly detection with supervised machine learning has been applied; other examples include credit card fraud detection [18] and anomaly detection in IoT sensors [19]. One of the main advantages of supervised learning techniques is the ability to handle high-dimensional datasets with high performance [20]. However, there is a major problem with this technique. When dealing with real-life data, anomalous samples are usually far fewer than normal ones, which is quite challenging and
can cause the problem of an unbalanced dataset. This is an issue for supervised learning techniques since they greatly rely on labelled and balanced data. However, unsupervised learning techniques can be used to address this problem; for example, the autoencoder is a specific kind of feed-forward neural network that can be applied to outlier-based anomaly detection rather than classification problems. Hawkins et al. [21] proposed an approach that involved an autoencoder for outlier detection; however, many researchers have investigated hybrid methods, e.g. [20] proposed an approach based on a long short-term memory (LSTM) autoencoder and a One-class SVM (OC-SVM). The approach is used to detect anomaly-based attacks in an unbalanced dataset. The idea is to use the LSTM-autoencoder to train a model to learn the pattern in the normal class (dataset without anomaly) so that the model is able to replicate the input data at the output layer with a small reconstruction error. When there are anomalies in the data, the model fails to replicate the anomalous samples, which manifests as a very high reconstruction error.
Another unsupervised learning technique is k-means. Zhang et al. [22] used the transformer model and the k-means clustering method for anomaly detection. K-means was also used in the work of Munz et al. [23] to detect traffic anomalies; the main idea is to train on data containing unlabelled records and to separate them into clusters of normal and anomalous data.
## 3 Autoencoder and performance metrics
### Autoencoder
An autoencoder is a neural network that is trained to attempt to copy its input to its output in an unsupervised manner. It consists of three parts: an encoder, a code and a decoder. The encoder compresses the input data \(\mathbf{X}\), defined in two dimensions as:
\[\mathbf{X}=\begin{pmatrix}\mathbf{X}_{1,1}&\mathbf{X}_{1,2}&\cdots&\mathbf{X}_ {1,M}\\ \mathbf{X}_{2,1}&\mathbf{X}_{2,2}&\cdots&\mathbf{X}_{2,M}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{X}_{N,1}&\mathbf{X}_{N,2}&\cdots&\mathbf{X}_{N,M}\end{pmatrix}\in \mathbb{R}^{N\times M} \tag{1}\]
where \(\mathbf{X}_{:,i}\in\mathbb{R}^{N}\) is an observation with \(N\) entries among the \(M\) observations. The encoder compresses each observation \(\mathbf{X}_{:,i}\) to a lower-dimensional representation \(\mathbf{H}_{:,i}\in\mathbb{R}^{Q}\). The decoder produces a predicted output
\(\hat{\mathbf{X}}\),
\[\hat{\mathbf{X}}=\begin{pmatrix}\hat{\mathbf{X}}_{1,1}&\hat{\mathbf{X}}_{1,2}& \cdots&\hat{\mathbf{X}}_{1,M}\\ \hat{\mathbf{X}}_{2,1}&\hat{\mathbf{X}}_{2,2}&\cdots&\hat{\mathbf{X}}_{2,M}\\ \vdots&\vdots&\ddots&\vdots\\ \hat{\mathbf{X}}_{N,1}&\hat{\mathbf{X}}_{N,2}&\cdots&\hat{\mathbf{X}}_{N,M} \end{pmatrix}\in\mathbb{R}^{N\times M} \tag{2}\]
by reconstructing the original data from the compressed representation. It makes use of multiple layers and uses non-linear activation functions to learn the non-linear relationship embedded in the data. The goal is to make the reconstruction error as minimal as possible, which means finding the parameters that make the reconstruction \(\hat{\mathbf{X}}\) as close as possible to the original input \(\mathbf{X}\). Autoencoder can be applied to various tasks such as anomaly detection [20, 24], generative model [25, 26], clustering [27, 28], classification [29], recommendation systems [30] and dimensionality reduction [31, 32]. Figure 1 depicts a simple architecture of an autoencoder for which the detailed description is as follows:
_Encoding:_ During the encoding process, the input data \(\mathbf{X}_{:,i}\), assumed to be a high-dimensional vector, is mapped to a low-dimensional vector \(\mathbf{H}_{:,i}\) after filtering out insignificant features. This is expressed mathematically as:
\[\mathbf{H}_{:,i}=f(\mathbf{X}_{:,i}), \tag{3}\]
where \(f\) is a neural network that is trained with sets of activation functions, weights and biases. Note that \(f:\mathbb{R}^{N}\rightarrow\mathbb{R}^{Q}\), where \(Q\) is the dimension of the compressed representation.
_Decoding:_ During the decoding process, the compressed representation \(\mathbf{H}_{:,i}\) of \(\mathbf{X}_{:,i}\) is used to generate the output \(\hat{\mathbf{X}}_{:,i}\) that maps back into the reconstruction of \(\mathbf{X}_{:,i}\):
\[\hat{\mathbf{X}}_{:,i} =g(\mathbf{H}_{:,i}) \tag{4}\] \[=g\big{(}f(\mathbf{X}_{:,i})\big{)}, \tag{5}\]
where \(g\) is the decoding neural network with activation functions, weights, and biases that could be completely independent of the corresponding activation functions, weights, and biases of the encoding neural network. As discussed in [33, 34], the autoencoder involves the search for \(f\) and \(g\) which minimizes the average of the loss \(\Delta_{\theta,\phi}\) over all the samples \(\mathbf{X}_{:,i}\), added, for example, to an \(l_{2}\)-weighted
regularization term \(a_{j}\):
\[[\hat{f},\hat{g}] =\operatorname*{arg\,min}_{f,g}\bigg{(}<\Delta_{\theta,\phi}\big{(} \mathbf{X}_{:,i},\hat{\mathbf{X}}_{:,i}\big{)}>+\lambda\sum_{j}a_{j}^{2}\bigg{)}, \tag{6}\] \[=\operatorname*{arg\,min}_{f,g}\bigg{(}<\Delta_{\theta,\phi}\big{(} \mathbf{X}_{:,i},g(f(\mathbf{X}_{:,i}))\big{)}>+\lambda\sum_{j}a_{j}^{2}\bigg{)}, \tag{7}\]
where \(<\cdot>\) is the average operator and \(\lambda\) is the hyper-parameter that weights the regularization term, \(a_{j}\). The higher the value of \(\lambda\) the greater the capacity penalty. Note that \(j\) runs across the hidden layers of the neural networks \(f,g\). Finding the optimal neural networks \(f\) and \(g\) involves updating their respective learning parameters \(\theta,\phi\) in each hidden layer of the neural networks so that the loss \(\Delta_{\theta,\phi}\) is smaller than a given limit. In this work, the mean absolute loss:
\[<\Delta_{\theta,\phi}\big{(}\mathbf{X}_{:,i},\hat{\mathbf{X}}_{:,i}\big{)}>= \frac{1}{M}\sum_{i=1}^{M}\left|\mathbf{X}_{:,i}-\hat{\mathbf{X}}_{:,i}\right| \tag{8}\]
is used as the averaged loss \(<\Delta_{\theta,\phi}\big{(}\mathbf{X}_{:,i},\hat{\mathbf{X}}_{:,i}\big{)}>\).
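To make Eqs. (3)-(8) concrete, a minimal dense autoencoder trained with the mean absolute error can be sketched as follows (a simplified illustration in Keras; the layer sizes, the latent dimension \(Q\) and the toy data are placeholders, not the configuration reported in Appendix 7):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

N, M, Q = 17, 1000, 4                         # features, observations, latent size (placeholders)
X = np.random.rand(M, N).astype("float32")    # toy data; rows are observations

# Encoder f: R^N -> R^Q and decoder g: R^Q -> R^N with non-linear activations.
inputs = layers.Input(shape=(N,))
h = layers.Dense(8, activation="relu")(inputs)
code = layers.Dense(Q, activation="relu")(h)               # compressed representation H
h_dec = layers.Dense(8, activation="relu")(code)
outputs = layers.Dense(N, activation="sigmoid")(h_dec)     # reconstruction X_hat

autoencoder = models.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mae")           # mean absolute error, Eq. (8)
autoencoder.fit(X, X, epochs=20, batch_size=32, verbose=0)  # learn to copy input to output

# Per-observation reconstruction error, later compared against a threshold.
recon_error = np.mean(np.abs(X - autoencoder.predict(X, verbose=0)), axis=1)
```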
### Performance metrics
Performance metrics measure the overall quality of the model. A single performance measure is not enough to validate an autoencoder; therefore, different measures are computed and evaluated. The confusion matrix, as shown in Table 1, generates more meaningful measures to find the detection accuracy, precision, recall, and F1 score. True Normal, \(TN\), represents the number of observations in the normal class that are predicted as normal by the model (i.e. below the threshold). True Anomaly, \(TA\), is the number of observations in the anomaly class that are predicted as an anomaly, i.e. are above the threshold. False Normal, \(FN\), is the number of anomalous observations that are below the threshold (i.e. predicted as normal). False Anomaly, \(FA\), is the number of normal observations that are above the threshold (i.e. predicted as an anomaly). The classification
Figure 1: Simple architecture of an autoencoder (adapted from [35]).
accuracy measures the general performance of the model by producing the ratio of true predictions (true normal and true anomaly) out of the total number of predictions:
\[Accuracy=\frac{TN+TA}{TA+TN+FA+FN}. \tag{9}\]
The precision is the ratio of the true anomalies to the total number of observations above the threshold (i.e. the number of predicted anomalies):
\[Precision=\frac{TA}{TA+FA}. \tag{10}\]
The false positive rate, \(FPR\), refers to the ratio of the normal samples above the threshold (falsely flagged as anomalies) to the actual number of normal samples:
\[FPR=\frac{FA}{FA+TN}. \tag{11}\]
The true positive rate, \(TPR\), also known as sensitivity or recall, is the ratio of the number of anomalous samples above the threshold to the actual number of samples in the anomaly class:
\[TPR=\frac{TA}{TA+FN}. \tag{12}\]
Specificity is a measure obtained from the confusion matrix which gives the ratio of true normal predictions to the total number of samples in the normal (negative) class:
\[\text{Specificity}=\frac{TN}{TN+FA}. \tag{13}\]
The F-measure or \(F_{1}\)-score gives the harmonic mean between the recall and precision of the classifier. A high F-measure indicates better classifier performance with few false alarms:
\[F\text{-measure}=2\left(\frac{\text{Precision}\times\text{Recall}}{\text{ Precision}+\text{Recall}}\right). \tag{14}\]
\begin{table}
\begin{tabular}{|l|l|l|} \hline \hline \multicolumn{3}{|c|}{Classification Class Distribution} \\ \hline \hline & Actual Normal & Actual Anomaly \\ \hline Predicted Normal & True Normal & False Normal \\ \hline Predicted Anomaly & False Anomaly & True Anomaly \\ \hline \end{tabular}
\end{table}
Table 1: Two-class classification confusion matrix representation
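The metrics above follow directly from the four confusion-matrix counts of Table 1; a small helper function (a sketch using the TN/TA/FN/FA naming of Table 1, with made-up counts in the example call) could look like:

```python
def anomaly_metrics(tn, ta, fn, fa):
    """Compute Eqs. (9)-(14) from the confusion-matrix counts of Table 1.

    tn: true normal, ta: true anomaly, fn: false normal, fa: false anomaly.
    """
    accuracy = (tn + ta) / (tn + ta + fn + fa)
    precision = ta / (ta + fa)
    recall = ta / (ta + fn)                      # true positive rate / sensitivity
    fpr = fa / (fa + tn)                         # false positive rate
    specificity = tn / (tn + fa)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "fpr": fpr, "specificity": specificity, "f1": f1}

# Example with made-up counts
print(anomaly_metrics(tn=900, ta=450, fn=50, fa=60))
```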
## 4 Proposed label assisted autoencoder
### The architecture of the proposed model
The encoder takes as input high-dimensional data, here a fixed-size vector, and reaches the latent space by mapping it to a low-dimensional representation vector. The decoder reconstructs the input data from the reduced representation in the latent space. The final reconstruction error is used to set a threshold to detect anomalies. An additional computation block is added to check, from a set of labelled data, whether the threshold is acceptable to satisfy the required precision. This is a kind of validation block that decides whether the threshold should be changed or whether the autoencoder should be trained further to minimize the reconstruction loss. The entire architecture of the proposed label assisted autoencoder is depicted in Figure 2 and a detailed description is provided below.
1. The dataset is made up of observations and features which result in a 2-dimensional array of size \(N\times M\), where \(N\) and \(M\) represent the number of features and the number of observations, respectively. All features of each observation are collected and fed to the input nodes of the algorithm.
2. The input data of size \(N\times M\) is collected and reduced to a latent form of size \(N\times Q\) where \(Q<M\), while the \(N\times Q\) set is fed into the decoder to produce a predicted output of size \(N\times M\). Appendix 7 shows the parameters and the architecture of the deep neural networks trained in the encoder and decoder. The number of filters, size of filters and layers are displayed in the encoding and decoding phases.
3. After the decoding process, a reconstruction error is produced for each data point. The reconstruction error measures how much the reconstructed input deviates from the original input. A threshold is set as a decision point on the acceptable amount of deviation, and an observation whose reconstruction error goes beyond this threshold is classified as an anomaly. An observation that is below this threshold is normal data without anomaly.
4. The label assisting module then takes over to check whether each of the observations is labelled; if so, the label is used to verify the veracity of the corresponding anomaly classification. The labelled observations are then checked and a consensus is reached on whether the threshold yields the desired precision for detecting anomalies; if not, the threshold is updated or the model is further trained to find the best hyper-parameters for the given threshold (a minimal sketch of this decision loop is given below).
Figure 2: Overview of the proposed model. Each observation goes through the autoencoder for training, the reconstruction error is measured and then a threshold is used to make the decision whether the observation is an anomaly or not. The labelling assistance module then takes over to check if each of the observations is labelled, and then the label is used to check the veracity of the corresponding anomaly classification. A consensus is then reached whether training should stop or whether the threshold should be updated or the training should continue with the search for hyper-parameters.
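A minimal sketch of the decision loop in step 4 is given below (a simplified illustration only; the accuracy target, the threshold grid and the toy data are hypothetical placeholders rather than the exact procedure of the implementation):

```python
import numpy as np

def label_assisted_threshold(recon_error, labels, target_accuracy=0.95,
                             thresholds=np.linspace(0.01, 1.0, 100)):
    """Pick a reconstruction-error threshold using the available labels.

    recon_error: per-observation reconstruction error from the autoencoder.
    labels:      0 (normal) / 1 (anomaly) for the labelled observations.
    Returns the best threshold, its accuracy on the labelled set and a flag
    telling whether the target was met; if not, the caller either updates the
    threshold grid or keeps searching the autoencoder hyper-parameters.
    """
    best_thr, best_acc = thresholds[0], -1.0
    for thr in thresholds:
        pred = (recon_error > thr).astype(int)      # above threshold -> anomaly
        acc = np.mean(pred == labels)
        if acc > best_acc:
            best_thr, best_acc = thr, acc
    return best_thr, best_acc, best_acc >= target_accuracy

# Toy example: anomalies tend to have larger reconstruction errors.
rng = np.random.default_rng(0)
err = np.concatenate([rng.uniform(0.0, 0.3, 80), rng.uniform(0.4, 1.0, 20)])
lab = np.concatenate([np.zeros(80, int), np.ones(20, int)])
print(label_assisted_threshold(err, lab))
```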
To illustrate the above steps, an example of a scenario is described as follows. Assume the input data is \(\mathbf{X}_{:,i}\) with \(N\) features and their corresponding predictions given as:
\[\mathbf{X}_{:,i} =\mathbf{X}_{1,i},\mathbf{X}_{2,i},\mathbf{X}_{3,i},\cdots, \mathbf{X}_{N-1,i},\mathbf{X}_{N,i} \tag{15}\] \[=(0.1,0.2,0.3,\cdots,0.4,0.5)\] (16) \[\hat{\mathbf{X}}_{:,i} =\hat{\mathbf{X}}_{1,i},\hat{\mathbf{X}}_{2,i},\hat{\mathbf{X}}_ {3,i},\cdots,\hat{\mathbf{X}}_{N-1,i},\hat{\mathbf{X}}_{N,i}\] (17) \[=(0.6,0.21,0.32,\cdots,0.61,0.53) \tag{18}\]
The value of each of these features indicates the importance of the feature in the anomaly classification; e.g. the running time of a generator is more important than the generator capacity. The reconstruction loss for each feature is then calculated as in Equation 7:
\[L_{:,i} =|\mathbf{X}_{:,i}-\hat{\mathbf{X}}_{:,i}| \tag{19}\] \[=(0.5,0.01,0.02,\cdots,0.21,0.03). \tag{20}\]
Assume the maximum acceptable reconstruction loss is set to 0.2. If the overall (e.g. mean) absolute error is used, the sample is classified under the normal category, whereas if priority is given to the important feature, say \(\mathbf{X}_{N-1,i}\), the sample would be labelled as an anomaly, since the reconstruction loss of \(\mathbf{X}_{N-1,i}\) is \(L_{N-1,i}=0.21\), which is beyond 0.2. Figure 3 illustrates the reconstruction loss scenario adopted in this example.
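A small numerical sketch of this worked example, restricted to the five feature values shown above; aggregating the per-feature losses by their mean is an assumption made for illustration, not a choice fixed by the paper.

```python
import numpy as np

x     = np.array([0.1, 0.2, 0.3, 0.4, 0.5])      # shown entries of X_{:,i}
x_hat = np.array([0.6, 0.21, 0.32, 0.61, 0.53])  # shown entries of the reconstruction
loss  = np.abs(x - x_hat)                        # per-feature reconstruction loss
threshold = 0.2

# decision based on the mean absolute error over the shown features
print("mean loss:", loss.mean(),
      "->", "anomaly" if loss.mean() > threshold else "normal")

# decision when priority is given to the important feature X_{N-1,i}
key = len(loss) - 2
print("key-feature loss:", loss[key],
      "->", "anomaly" if loss[key] > threshold else "normal")
```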
### Training and testing phases
Both the training and testing phases are equally important for the generalisation of the model. The dataset is divided into training and testing sets according to a chosen proportion; in our case, we used a ratio of 3:1 for training and testing respectively. The training phase has two goals. Firstly, the reconstruction error is minimised so that the reconstructed outputs converge to the input samples. Secondly, the reconstruction error is calculated for each data point in order to find an optimal threshold for anomaly detection.
The training phase starts by normalizing the dataset so that all the features are brought to a common scale without distorting the differences in the ranges of their values. In this work, the normalization is given by:
\[\mathbf{X}_{:,\text{iscaled}}=\frac{\mathbf{X}_{:,i}-\mathbf{X}_{:,\min}}{ \mathbf{X}_{:,\max}-\mathbf{X}_{:,\min}}, \tag{21}\]
where \(\mathbf{X}_{:,\min}\) and \(\mathbf{X}_{:,\max}\) represent the minimum and maximum entries of the data respectively. This is followed by separating the anomalous samples from the training set so that the algorithm learns to reconstruct only normal samples. Once the training is completed, the reconstruction loss between the input and output is calculated and backpropagation is applied to adjust the weights and parameters of the model. The testing phase checks the performance of the model on the unseen test data using the threshold obtained from the training phase. The whole process is described in Figure 3.
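A minimal sketch of this preprocessing on toy data: min-max scaling as in Eq. (21), a 3:1 train/test split, and removal of anomalous samples from the training set. The data and labels below are synthetic assumptions, not the TeleInfra dataset.

```python
import numpy as np

def min_max_scale(X):
    """Scale every feature (column) of X to [0, 1] as in Eq. (21)."""
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    return (X - x_min) / (x_max - x_min)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 6))             # toy data: 100 observations, 6 features
y = (rng.random(100) < 0.3).astype(int)   # toy anomaly labels

X_scaled = min_max_scale(X)

# 3:1 split into training and testing sets
n_train = int(0.75 * len(X_scaled))
X_train, y_train = X_scaled[:n_train], y[:n_train]
X_test, y_test = X_scaled[n_train:], y[n_train:]

# the autoencoder is trained only on normal samples, so anomalies are removed
X_train_normal = X_train[y_train == 0]
```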
## 5 Datasets
### Data description
The dataset used in this paper was gathered from a telecom base station management company in Cameroon named TeleInfra and was the subject of a previous study on anomaly detection [5]. The data were collected over a period of one year, from September 2017 to September 2018, and consist of 6010 observations from various base stations in 46 towns and villages (known as clusters) across Cameroon. These stations rely on generators as their main supply of power. The dataset contains 17 variables, defined in both numerical and categorical forms. A detailed description of each variable is shown in Table 2.
Anomalies are observed in different features of the dataset and are identified based on three indicators: (1) for a given time period, the generator running time is zero while the quantity of fuel consumed is not zero; (2) the running time per day is more than 24 hours; and (3) the daily quantity of fuel consumed is more than the maximum a generator can consume. For a data sample to receive the anomaly tag 1, it has to exhibit at least one of the three anomaly indicators listed; otherwise it is given the normal tag 0. A full workflow of the entire labelling process is illustrated in Figure 4. During the labelling process, output variables are assigned the labels 0 and 1, representing the normal and anomaly classes respectively. For a single generator, Figure 5 shows the working hours per day: all data samples above the 24 h threshold indicate anomalies in the running time of that generator and are assigned the label 1, since one day has only 24 hours. The 6010 observations are curated to remove samples with missing values, leaving 5902 observations with complete information. Of these, 3832 samples are labelled as normal and 2073 as abnormal, i.e. 64.8% normal samples and 35.1% abnormalities in the entire dataset. Figure 6 shows all the clusters and their respective total fuel consumption, including the degree of anomalies in the entire dataset.
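The three labelling indicators can be expressed as a short rule. The sketch below uses illustrative column names (`running_time_per_day`, `daily_consumption`) and a generic maximum-consumption parameter; these are assumptions rather than the actual TeleInfra schema.

```python
import pandas as pd

def label_anomalies(df, max_daily_consumption):
    """Tag 1 if any of the three anomaly indicators holds, otherwise 0."""
    indicator_1 = (df["running_time_per_day"] == 0) & (df["daily_consumption"] > 0)
    indicator_2 = df["running_time_per_day"] > 24
    indicator_3 = df["daily_consumption"] > max_daily_consumption
    return (indicator_1 | indicator_2 | indicator_3).astype(int)

# toy usage
data = pd.DataFrame({
    "running_time_per_day": [10, 0, 30, 12],
    "daily_consumption":    [50, 20, 80, 40],
})
data["label"] = label_anomalies(data, max_daily_consumption=70)
print(data)
```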
### Feature importance
Feature selection is performed by fitting the data with a random forest classifier using the 16 features. Note that any other method could be used to find the most important feature: since the relative importance of the most important feature is so high compared to the others (100%, as shown in Figure 7), any algorithm would identify the same feature as the most important. Figure 7 shows that the feature "Running time per day" has the greatest influence on the output and can be regarded as the most important feature in the dataset. Even though it is followed by "Daily consumption within a period", the large gap between "Running time per day" and the remaining features shows that priority should be given to "Running time per day" when considering its reconstruction error for anomaly detection.
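A minimal sketch of this feature-ranking step using scikit-learn's random forest on toy data; the data and hyper-parameters are assumptions, and only the use of `feature_importances_` mirrors the procedure described above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 16))        # 16 candidate features (toy data)
y = (X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.1, size=500) > 0).astype(int)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(forest.feature_importances_)[::-1]
for idx in ranking[:3]:
    print(f"feature {idx}: relative importance {forest.feature_importances_[idx]:.3f}")
```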
\begin{table}
\begin{tabular}{|l l|} \hline \multicolumn{2}{|c|}{Feature Description} \\ \hline \hline CONSUMPTION HIS & The total fuel consumed between a specific period of time \\ & before the next refuelling is done. \\ \hline CONSUMPTION\_RATE & The number of litres the generator consumes per hour. \\ \hline Cluster & The cities where the generator sites are located \\ \hline CURRENT HOUR METER GE1 & The hour meter reading of the generator. \\ \hline Site Name & Name of the site where each generator is located \\ \hline EFFECTIVE\_DATE\_OF\_VISIT & The date of meter reading, refuelling and recording \\ \hline PREVIOUS\_DATE\_OF\_VISIT & The previous date of visit \\ \hline Months & The month when the reading was taken \\ \hline NUMBER\_OF\_DAYS & The number of days before the next refuelling process. \\ \hline GENERATOR\_1\_CAPACITY\_(KVA) & The capacity of the generator \\ \hline POWER TYPE & Type of power used in the power plant \\ \hline PREVIOUS HOUR METER G1 & The previous meter reading of the generator. \\ \hline PREVIOUS\_FUEL\_QTE & The total quantity of fuel left inside the generator tank on \\ & the previous date of the visit. \\ \hline QTE\_FUEL\_FOUND & The quantity of fuel found inside the generator tank before refuelling is done. \\ \hline QTE\_FUEL\_ADDED & The quantity of fuel added to the generator during refuelling process. \\ \hline TOTALE\_QTE\_LEFT & Quantity left in the generator after refuelling. \\ \hline RUNNING\_TIME & The total number of hours the generator worked before the next refuelling is done \\ \hline \end{tabular}
\end{table}
Table 2: Description of the different features in the dataset.
### Correlation
The correlation matrix is used to visualize the linear relationship between two variables. The values of the correlation matrix range from -1 to 1, where 1 indicates a strong positive linear relationship, -1 indicates a strong negative linear relationship and 0 indicates no linear relationship between the variables. Figure 8 shows that the key variable "Running time per day" and the variable "Daily consumption within a period" have a strong positive correlation, which is reasonable since the daily quantity of fuel consumed by a generator is a function of the running time. A strong correlation is also observed among the three features "Total quantity of fuel after refilling", "Quantity of fuel found" and "Previous quantity of fuel recorded". However, the feature "Previous hour meter G1" has no significant correlation with any of the other features.
## 6 Results
The results are assessed using a number of different evaluation metrics.
Figure 4: Flowchart showing how the labels are decided: For a data sample to receive the anomaly tag 1, it has to demonstrate at least one of the three anomaly indicators listed, otherwise it is given the normal tag 0.
### Training
In order to assess the performance of our model during training, we set aside 10% of the training data as a validation set. The training and validation losses are shown in Figure 9, trending downwards over the epochs. We observe that the validation loss is below the training loss; however, since the difference between the two errors is negligible, this does not affect the predictive accuracy obtained with further hyper-parameter tuning. The training loss assesses the error of the model during training. Finding the model's appropriate threshold requires testing the model on the entire test dataset. The confusion matrix is used to obtain the model accuracy and true positive rate (TPR). Both are plotted over a range of thresholds, as seen in Figure 10a, and the level at which they jointly attain their maximum average is the best threshold for the model. As seen in Figure 10a, the best threshold is 0.232 with an accuracy score of 0.962. However, if sensitivity (TPR), i.e. detecting every anomaly, is the priority for the organisation, regardless of the cost incurred while sorting out false alarms (samples predicted as anomalies that are actually normal), the threshold can be reduced to 0.231. This is the point where all anomalies are detected (i.e. the TPR is 100%), at the cost of more false alarms, which lowers the overall model accuracy.
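The threshold search described above can be sketched as a sweep over candidate thresholds, scoring each by accuracy and TPR. The synthetic reconstruction errors and the averaging of the two scores are assumptions made for illustration, not the authors' exact procedure.

```python
import numpy as np

def sweep_thresholds(errors, labels, thresholds):
    """Accuracy and true positive rate for each candidate threshold."""
    results = []
    for t in thresholds:
        pred = (errors > t).astype(int)
        tp = np.sum((pred == 1) & (labels == 1))
        tn = np.sum((pred == 0) & (labels == 0))
        fn = np.sum((pred == 0) & (labels == 1))
        accuracy = (tp + tn) / len(labels)
        tpr = tp / max(tp + fn, 1)
        results.append((t, accuracy, tpr))
    return results

rng = np.random.default_rng(3)
errors = np.concatenate([rng.normal(0.10, 0.05, 300), rng.normal(0.40, 0.10, 150)])
labels = np.concatenate([np.zeros(300, dtype=int), np.ones(150, dtype=int)])

grid = np.linspace(errors.min(), errors.max(), 200)
best = max(sweep_thresholds(errors, labels, grid), key=lambda r: (r[1] + r[2]) / 2)
print(f"best threshold {best[0]:.3f}: accuracy {best[1]:.3f}, TPR {best[2]:.3f}")
```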
Figure 5: Observed anomaly in the number of working hours in a day for a single generator. For example, all data samples above 24 h threshold show anomalies in the running time of the generator and these samples are assigned the label 1, because it is known that one day only has 24 hours.
The threshold is also used to categorise these anomalies from mild to extreme according to their reconstruction error. Figure 10b shows the reconstruction errors. The samples with extreme reconstruction errors are prioritised when one seeks to find the reasons for the presence of anomalies.
### Model performance
The total number of test samples is 1,476, with 1,006 normal samples and the remaining 470 anomalous samples. Figure 11 illustrates the performance of the proposed model at different thresholds based on the confusion matrix. The proposed model detects a total of 455 anomalous samples correctly out of the 470 samples with the anomaly label, which accounts for 96.8% (TPR) of the anomalous samples. The model also detects 979 normal samples correctly out of 1,006. The model incorrectly flags 27 normal samples as anomalies (false alarms) and misses 15 anomalies, classifying them as normal. These results correspond to an accuracy of 97.15%, a precision of 94.40%, a recall of 96.81%, a specificity of 97.31% and an F1-score of 95.59%. Table 3 provides a summary of the performance metrics.
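As a check, the reported metrics can be recomputed directly from the confusion-matrix counts given above:

```python
# Metrics recomputed from the confusion-matrix counts reported above
tp, fn = 455, 15     # anomalies: correctly detected / missed
tn, fp = 979, 27     # normal samples: correctly kept / falsely flagged

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)                 # true positive rate
specificity = tn / (tn + fp)
f1 = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.4f}  precision={precision:.4f}  recall={recall:.4f}  "
      f"specificity={specificity:.4f}  F1={f1:.4f}")
# accuracy=0.9715  precision=0.9440  recall=0.9681  specificity=0.9732  F1=0.9559
```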
Figure 6: Fuel consumed per cluster showing the degree of anomalies in the dataset.
### Comparison with other models using the same dataset
Table 4 shows the performance of the proposed model compared to the models discussed in [5]. We compare our work with the latter paper because both are implemented on the same TeleInfra dataset for anomaly detection. The proposed label assisted autoencoder has the best performance, with an accuracy of 97.2% and a recall of 96.8%. The multi-layer perceptron proposed in [5] shows the most competitive performance, with a higher F1-score, specificity and precision. However, the label assisted autoencoder is flexible: by adjusting the threshold, the recall can be increased at the cost of specificity and overall accuracy.
### Anomaly classification
The reconstruction error differs from one data sample to another (as seen in Figure 10b), which provides an opportunity to classify the predicted anomalies according to their reconstruction error. In this work, 4 classes \(A,B,C\) and \(D\) are considered. Class \(A\) represents anomalies that are slightly above the threshold, class \(B\) anomalies above twice the threshold, class \(C\) anomalies above four times the threshold, and class \(D\) anomalies above eight times the threshold. This implies that each class has twice the threshold of its predecessor. Table 5 shows the classes and their corresponding thresholds: 28.25% of the test dataset belongs to the anomaly category of class \(A\), 2.03% to class \(B\), 0.20% to class \(C\) and 0.34% to class \(D\).

Figure 7: Feature importance for the 16 variables fitted using a Random Forest Classifier. The feature “Running time per day” has the greatest influence on the output and can be regarded as the most important feature in the dataset.
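A minimal sketch of this severity grading, assuming the classes correspond to consecutive ranges whose bounds double (consistent with the thresholds in Table 5); the base threshold 0.232 is taken from above.

```python
import numpy as np

def severity_class(error, base_threshold=0.232):
    """Map a reconstruction error to severity classes A-D; each bound doubles."""
    if error <= base_threshold:
        return "normal"
    for label, factor in zip("ABCD", (2, 4, 8, 16)):
        if error <= factor * base_threshold or label == "D":
            return label

errors = np.array([0.15, 0.30, 0.50, 1.00, 2.50])
print([severity_class(e) for e in errors])   # ['normal', 'A', 'B', 'C', 'D']
```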
### Discussion
Feature importance plays an important role in this model: the feature "Running time per day" has the most significant importance score of 100, with "Daily consumption within a period" in second place with an importance measure of 16. Based on this, we gave priority to the reconstruction error of "Running time per day". To further justify this choice, the correlation matrix in Figure 8 shows a strong positive correlation of 0.74 between the two features. Using the reconstruction error of the key variable, we trained the proposed model and compared it with the work in [5]. A recall score of 96.8% outperforms all the models proposed in [5] in detecting anomalies. Also, as shown in Table 3, our model achieves a recall score of approximately 100%, with the overall accuracy decreasing to 85%, when the threshold is reduced to 0.231.
## 7 Conclusion
The telecommunication industry is one of the dominant information and communication technology industries and relies on a large amount of electric power for its operations, making a reliable power supply indispensable to its daily dealings. However, power availability in underdeveloped countries, particularly in Africa, has been a constant source of contention. Despite the industry's growth through the creation of base stations, operators have had to turn to alternative energy sources such as gasoline or diesel generators and solar power, to name a few.
The TeleInfra telecommunication company, established in Cameroon, is one such company confronted with these challenges due to the state of the power supply in the country. The telecommunication equipment installed in different rural and urban areas of Cameroon requires an uninterrupted supply of electricity to establish strong and seamless communication channels; however, the country's electricity generation is mostly based on hydropower (73%), with perpetual power interruptions, particularly during the dry seasons when water levels are low [36]. The diversification to alternative sources of power, particularly the usage of generators, posed another challenge of irregularities or anomalies in fuel consumption at the base stations, due to the observed high consumption rate of the power generation plants. TeleInfra is faced with the challenge of unaccounted high fuel consumption for its operations at the base stations. Since it depends solely on generating plants as its major source of power supply, it has to continually refill these generators, and this is done manually. Such activities are known to have resulted in possible cases of fuel pilferage, reflected in the observed anomalies in fuel consumption. As a result, it is essential to investigate the likely factors contributing to the anomalies by collecting data on fuel consumption at each of the base stations, with the aim of minimizing the costs of operation.

Figure 8: Correlation matrix of all numerical features. The “Running time per day” and “Daily consumption within a period” have a strong positive correlation, which is reasonable since the daily quantity of fuel consumed by a generator is a function of the running time.

Figure 9: Training and validation loss as a function of the number of iterations. The mean absolute error is used to measure the loss.
We have proposed a label assisted autoencoder-based deep-learning technique for detecting anomalies in the fuel consumption dataset of the base station management company TeleInfra. In the proposed model, an autoencoder is used to generate an encoded representation of the input features and to reconstruct the input features from this encoded representation through the decoder. The maximum reconstruction error obtained on the training set is used as a threshold for detecting anomalies on the test dataset. The anomaly detector identifies a data sample from the test set as an anomaly when its reconstruction error exceeds the assigned threshold. Results show that the proposed model is highly effective at detecting anomalies, with a detection accuracy of 97.20%, and outperforms existing supervised learning models. The proposed model is also flexible: the threshold is adjustable according to the needs of the user and can be used to classify anomalies from severe to mild.
This work opens up future research possibilities, such as using different variations of autoencoders, for example long short-term memory autoencoders and memory-augmented autoencoders, combined with our proposed label assisted unit. The latter variation does not require the feature importance analysis for selecting the best reconstruction error.
\begin{table}
\begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline \hline \multicolumn{6}{|c|}{Model performance} \\ \hline Threshold & Accuracy & F1-Measure & Recall & Precision & Specificity \\ \hline
0.232 & 0.972 & 0.956 & 0.968 & 0.944 & 0.973 \\ \hline
0.231 & 0.850 & 0.810 & 1.00 & 0.680 & 0.780 \\ \hline \end{tabular}
\end{table}
Table 3: Evaluation performance.
Figure 11: Detection results based on the confusion matrix.
Figure 10: Threshold detection (10a) using the reconstruction error (10b). The threshold is used to categorise anomalies from mild to extreme using their reconstruction error.
\begin{table}
\begin{tabular}{l|l|l|l|l|l|l} \hline \multicolumn{7}{|c|}{Model performance} \\ \hline Paper & Techniques & Accuracy & F1-Score & Recall & Precision & Specificity \\ \hline Mulongo et al. [5] & LR & 0.708 & 0.811 & 0.709 & 0.943 & 0.699 \\ \cline{2-7} & SVM & 0.949 & 0.962 & 0.962 & 0.962 & 0.925 \\ \cline{2-7} & KNN & 0.851 & 0.888 & 0.887 & 0.890 & 0.783 \\ \cline{2-7} & MLP & 0.961 & 0.971 & 0.954 & 0.988 & 0.976 \\ \hline Our Model & AE & 0.972 & 0.956 & 0.968 & 0.944 & 0.973 \\ \hline \end{tabular}
\end{table}
Table 4: Comparison to similar models using the same dataset.
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline \multicolumn{4}{|c|}{**Categorizing Anomalies**} \\ \hline \hline Class & Threshold & Predicted Number of Samples & Percentage of Test Data \\ \hline A & 0.232 & 417 & 28.25\% \\ \hline B & 0.464 & 30 & 2.03\% \\ \hline C & 0.928 & 3 & 0.20\% \\ \hline D & 1.856 & 5 & 0.34\% \\ \hline \end{tabular}
\end{table}
Table 5: Four categories of anomalies.
## Disclosure Statement
We hereby state that no known competing financial interests or personal ties could have influenced the research presented in this study.
|
2307.00687 | Estranged facets and $k$-facets of Gaussian random point sets | Gaussian random polytopes have received a lot of attention especially in the
case where the dimension is fixed and the number of points goes to infinity.
Our focus is on the less studied case where the dimension goes to infinity and
the number of points is proportional to the dimension $d$. We study several
natural quantities associated to Gaussian random polytopes in this setting.
First, we show that the expected number of facets is equal to
$C(\alpha)^{d+o(d)}$ where $C(\alpha)$ is some constant which depends on the
constant of proportionality $\alpha$. We also extend this result to the
expected number of $k$-facets. We then consider the more difficult problem of
the asymptotics of the expected number of pairs of $\textit{estranged facets}$
of a Gaussian random polytope. When $n=2d$, we determined the constant $C$ so
that the expected number of pairs of estranged facets is equal to $C^{d+o(d)}$. | Brett Leroux, Luis Rademacher | 2023-07-02T23:48:43Z | http://arxiv.org/abs/2307.00687v1 | Brett Leroux, Luis Rademacher
###### Abstract
Gaussian random polytopes have received a lot of attention especially in the case where the dimension is fixed and the number of points goes to infinity. Our focus is on the less studied case where the dimension goes to infinity and the number of points is proportional to the dimension \(d\). We study several natural quantities associated to Gaussian random polytopes in this setting. First, we show that the expected number of facets is equal to \(C(\alpha)^{d+o(d)}\) where \(C(\alpha)\) is some constant which depends on the constant of proportionality \(\alpha\). We also extend this result to the expected number of \(k\)-facets. We then consider the more difficult problem of the asymptotics of the expected number of pairs of _estranged facets_ of a Gaussian random polytope. When \(n=2d\), we determined the constant \(C\) so that the expected number of pairs of estranged facets is equal to \(C^{d+o(d)}\).
_Keywords:_ Gaussian random point set; convex polytope; estranged facets; \(k\)-facet; inner diagonal.
_2020 Mathematics Subject Classification:_ 52A22; 60D05; 60C05; 62H10; 52B05
## 1 Introduction
A _Gaussian random point set_ is an i.i.d. sequence of standard Gaussian random points in \(\mathbb{R}^{d}\), i.e., each point in the set is distributed according to \(N(0,I_{d})\). The convex hull of a Gaussian random point set \(\{X_{1},\ldots,X_{n}\}\) with \(n\) samples is denoted \([X_{1},\ldots,X_{n}]\) and is called a _Gaussian random polytope_. In the study of random polytopes given as the convex hull of random points, many asymptotic results provide insight in the case where the dimension (\(d\)) is fixed but arbitrary and the number of points (\(n\)) grows. For example, some of the basic results provide asymptotic expansions on the number of \(j\)-dimensional faces of a Gaussian random polytope for fixed \(d\) and as \(n\to\infty\)[1, 4, 13, 21, 22]. For the case where both the dimension and the number of points grow together, there are gaps in our understanding. In this work we study this case. We provide asymptotic expansions of the expectation of several natural quantities associated to Gaussian random polytopes and, more generally, Gaussian random point sets. The quantities we consider are: the number of facets, the number of \(k\)-facets, and
the number of pairs of estranged facets. We now recall the standard definitions of \(k\)-facets and estranged facets.
A \(k\)_-facet_ of a finite set of points \(X\subset\mathbb{R}^{d}\) in general position (namely, any subset of \(d+1\) or less points is affinely independent) is a subset \(\Delta\subset X\) of size \(d\) such that the open halfspace on one side of \(\operatorname{aff}\Delta\) contains exactly \(k\) points from \(X\). We use the notation \(E_{k}(X)\) for the set of \(k\)-facets of \(X\) and we define \(e_{k}(X):=|E_{k}(X)|\). There is a long line of work on the \(k\)_-facet problem_ which asks one to determine the asymptotics of the maximum possible number of \(k\)-facets of a set of \(n\) points in \(\mathbb{R}^{d}\) as a function of \(n\), \(k\) and \(d\). The first papers on the \(k\)-facet problem ([17] and [11]) only considered the case when the dimension is equal to two and even this case is still not well understood. See [26] for a survey on what is known. Although the majority of work on the \(k\)-facet problem is for deterministic points sets, the problem has also previously been studied for random point sets in [2, 10, 16].
Let \(P\) be a full-dimensional polytope. We use the notation \(f_{j}(P)\) for the number of \(j\)-dimensional faces of \(P\). In particular, \(f_{d-1}(P)\) is the number of facets. Note that if \(X\subset\mathbb{R}^{d}\) is a set of \(n\) points in general position, then the \(0\)-facets of \(X\) are precisely the facets of the polytope \(P\) where \(P\) is the convex hull of \(X\) and so \(f_{d-1}(P)=e_{0}(X)\) in this case.
A pair of facets of a polytope is called _estranged_ if they do not share any vertices (i.e., facets \(F\) and \(G\) are estranged if the set of vertices contained in \(F\) is disjoint from the set of vertices contained in \(G\)). In this paper all polytopes we consider are simplicial with probability one. Under the standard polarity operation for polytopes, the polar of a simplicial polytope is a simple polytope and there is a one-to-one correspondence between pairs of estranged facets of the simplicial polytope \(P\) and _inner diagonals_ of the polar dual \(P^{*}\) of \(P\). Here an _inner diagonal_ of a polytope is a line segment which joins two vertices of the polytope and that is contained, except for its endpoints, in the relative interior of the polytope. We clarify the motivation for studying estranged facets and inner diagonals in the next section.
### Previous work and our contributions
Expected number of facets and \(k\)-facets.Let \([X]\) denote the convex hull of \(X\).
As mentioned above, for fixed dimension, an asymptotic formula for the expected number of facets of a Gaussian random polytope as the number of samples \(n\) goes to infinity has been known for some time: It was shown in [21, 22] that for fixed \(d\geq 2\), and a set \(\{X_{1},\ldots,X_{n}\}\) of \(n\) i.i.d. Gaussian random points in \(\mathbb{R}^{d}\),
\[\mathbb{E}f_{d-1}([X_{1},\ldots,X_{n}])=\frac{2^{d}\pi^{\frac{d-1}{2}}}{\sqrt{ d}}(\ln n)^{\frac{d-1}{2}}\big{(}1+o(1)\big{)}\text{ as }n\to\infty.\]
Similar formulae are known for \(\mathbb{E}f_{j}([X_{1},\ldots,X_{n}])\) for \(j=0,\ldots,d\), see [1, 4, 13].
The above mentioned papers only address the case when the dimension is fixed and the number of samples goes to infinity. More recently, progress has
been made by Boroczky, Lugosi and Reitzner in [6] and Fleury in [12] on the question of the asymptotic value of \(\mathbb{E}f_{d-1}([X_{1},\ldots,X_{n}])\) when both \(d\) and \(n\) are allowed to go to infinity. It is shown in [6, Theorem 1.1] that if \(d\geq 78\) and \(n\geq e^{e}d\), then
\[\mathbb{E}f_{d-1}([X_{1},\ldots,X_{n}])=2^{d}\pi^{\frac{d-1}{2}}\sqrt{d}\,e^{\frac{d-1}{2}\ln\ln\frac{n}{d}-\frac{d-1}{4}\frac{\ln\ln\frac{n}{d}}{\ln\frac{n}{d}}+(d-1)\frac{\Theta}{\ln\frac{n}{d}}+O(\sqrt{d}e^{-\frac{d}{10}})} \tag{1}\]

with \(\Theta\in[-34,2]\), where \(\ln\ln\) denotes the iterated logarithm. Also, [6, Theorem 1.3] states that if \(n-d=o(d)\), then
\[\mathbb{E}f_{d-1}([X_{1},\ldots,X_{n}])=\binom{n}{d}\frac{1}{2^{n-d-1}}e^{ \frac{1}{\pi}\frac{(n-d)^{2}}{d}+O\left(\frac{(n-d)^{3}}{d^{2}}\right)+o(1)}. \tag{2}\]
There are two gaps relevant to us in their expressions: (1) They only provide asymptotic expressions for \(n-d=o(d)\) or \(n\geq e^{e}d\). (2) For the case where \(n\) grows proportional to \(d\), they only establish exponential upper and lower bounds (with different bases of the exponential function in each bound). Our Theorem 8 below fills in this missing piece. We show that when \(n/d\to\alpha>1\) and \(k/(n-d)\to r\in[0,1]\) the expected number of \(k\)-facets grows like \(C(\alpha,r)^{d+o(d)}\) where \(C(\alpha,r)\) is a constant depending on \(\alpha\) and \(r\) and we provide a simple way to determine \(C(\alpha,r)\) given \(\alpha\) and \(r\) (Theorem 8). Note that setting \(k=0\) gives the asymptotics of the expected number of facets.
In [5], Bonnet and O'Reilly consider the convex hull of random points from the unit sphere in \(\mathbb{R}^{d}\). They call such polytopes _spherical random polytopes_ and they provide asymptotic expressions for the expected number of facets as \(n\) and \(d\) grow at different rates. In the cases when \(n-d=o(d)\) or \(n/d\to\infty\), they obtain formulae for the expected number of facets of spherical random polytopes which match the corresponding formulae obtained in [6] for the expected number of facets of Gaussian random polytopes, i.e. equations (1) and (2) above. Such a correspondence is not particularly surprising given the fact that Gaussian random points concentrate around a thin spherical shell of radius \(\sqrt{d}\) in high dimension. Our result shows that this correspondence continues for the case when \(n\) is proportional to \(d\): for any \(\alpha>1\), Theorem 8 says that the expected number of facets of a Gaussian random polytope with \(n\sim\alpha d\) vertices is equal to \(C(\alpha)^{d+o(d)}\) for some constant \(C(\alpha)\). For spherical random polytopes, the case when the number of vertices is equal to \(n\sim\alpha d\) for some \(\alpha>1\) is dealt with in [5, Theorem 4.2]. The asymptotic formula given there is also of the form \(C(\alpha)^{d+o(d)}\) for some constant \(C(\alpha)\). Some algebra shows that the constants are the same in both the spherical and Gaussian random cases.
A formula from [13] extended to \(k\)-facets.[13, Theorem 3.2] provides a formula that expresses the probability that a fixed subset of \(d\) out of \(n\) Gaussian random points form a facet of the convex hull of the whole set. The formula turns the original probability involving \(n\) random vectors in \(\mathbb{R}^{d}\) into a simpler probability involving \(n-d+1\) real valued random variables. Their proof is an application of the affine Blaschke-Petkantschin formula.
We extend the formula to the case of \(k\)-facets (Theorem 7). Our proof does not use the Blaschke-Petkantschin formula and is based on a slightly different probabilistic argument.
Expected number of pairs of estranged facets.We show in Theorem 9 that if \(X\) is a set of \(2d\) i.i.d. Gaussian random points in \(\mathbb{R}^{d}\), then the expected number of pairs of estranged facets of \([X]\) is equal to \(C^{d+o(d)}\) where \(C\approx 1.7696\).
The main technique in the proof is the affine Blaschke-Petkantschin formula applied twice on a partition of the \(2d\) points into two \(d\)-subsets to express the probability that they are facets simultaneously. This is combined with known estimates of the expected volume of a random simplex (one of the main terms in the affine Blaschke-Petkantschin formula) and a simple asymptotic expansion of integrals (Proposition 5, see below).
To put this result in context, we recall the following conjecture of von Stengel [25]:
**Conjecture 1** ([25]).: _The maximum number of pairs of estranged facets of any simplicial \(d\)-polytope with \(2d\) vertices is \(2^{d-1}\), which is attained by the \(d\)-dimensional cross polytope._
Although von Stengel's conjecture is still open, a number of similar questions about estranged facets (and their polar equivalent, inner diagonals) were answered by Bremner and Klee [9] who argue that estranged facets are worthy of more study given that they are an intrinsically interesting combinatorial feature of convex polytopes.
Aside from their intrinsic interest, estranged facets are also relevant to the study of Nash equilibria of bimatrix games [25]. Indeed, this was the original context for the above conjecture of von Stengel. Although estranged facets themselves do not directly correspond to any particular quantity of interest in bimatrix games, they have been used by Barany, Vempala and Vetta [3] in the analysis of a Las Vegas algorithm for finding Nash equilibria in bimatrix games. In particular, their analysis required them to determine concentration bounds for the number of Nash equilibria in random games. This in turn required them to prove an upper bound on the expected number of pairs of estranged facets of a random polytope whose vertices are either i.i.d. Gaussian or uniform in the \(d\)-cube [3, Lemma 13]. In contrast to our Theorem 9, [3, Lemma 13] is only meaningful in the case when the dimension \(d\) is fixed and the number of points \(n\) goes to infinity.
Finally, we remark that estranged facets are also relevant to the study of the diameter problem for convex polytopes, i.e., the question of the maximum diameter of the graph of a simple \(d\)-polytope with \(n\) facets. As previously mentioned, estranged facets of a simplicial polytope correspond, via the polar operation, to inner diagonals of a simple polytope. It has been shown that the pair of vertices which attains the maximum distance in the graph of a simple polytope must be the endpoints of some inner diagonal of the polytope [15].
Simple asymptotic expansion of integrals.Our asymptotic expansions of expected values are based on the formula \(\int_{\mathbb{R}^{d}}f(x)^{p}\,\mathrm{d}x=\|f\|_{\infty}^{p+o(p)}\), stated formally as Proposition 5. This is a simple result that provides asymptotic expansions of integrals that follows immediately from the known fact that the \(L^{p}\) norm of a function converges to the \(L^{\infty}\) norm as \(p\to\infty\) under mild assumptions (Proposition 4).
### Outline of the paper.
Section 2 introduces notation and collects some propositions that will be used later including a result about the expected volume of a Gaussian simplex as well as a result about asymptotic expansions of integrals based on \(L^{p}\) norms. In Section 3 we establish our asymptotic formula for the expected number of \(k\)-facets of a Gaussian random polytope. Finally, Section 4 establishes the asymptotic formula for the expected number of estranged facets of a Gaussian random polytope with \(2d\) vertices.
## 2 Preliminaries
Let \(f_{X}\) denote the PDF of random variable \(X\). Let \(\mathbb{E}_{X}\big{(}f(X,Y)\big{)}\) denote the expectation with respect to \(X\) only, and similarly for \(\mathbb{P}_{X}\). Namely, \(\mathbb{E}_{X}\big{(}f(X,Y)\big{)}=\mathbb{E}\big{(}f(X,Y)\big{|}\,Y\big{)}\). For a random vector \(X\), let \(\operatorname{cov}(X)\) denote the covariance matrix of \(X\). Asymptotic notation \(f(d)\sim g(d)\) means \(f(d)/g(d)\to 1\) as \(d\to\infty\). For a set \(A=\{\ldots\}\) in a measurable space, let \(1\,A=1\{\ldots\}\) denote the indicator function of \(A\). For a measurable set \(K\subseteq\mathbb{R}^{d}\), let \(|K|\) denote the volume of \(K\). Let \([X]\) denote the convex hull of \(X\).
**Proposition 2** (Blaschke's formula, [8, Proposition 3.5.5] [20, Lemma 4]).: _Let \(X_{1},\ldots,X_{d+1}\) be i.i.d. \(d\)-dimensional random vectors with finite second moment. Then_
\[\det\operatorname{cov}(X_{1})=\frac{d!}{d+1}\,\mathbb{E}\big{(}\big{|}[X_{1}, \ldots,X_{d+1}]\big{|}^{2}\big{)}.\]
Proof.: [20, Lemma 4] states and proves the claim for the uniform distribution in a convex body. That proof works essentially unchanged for any distribution with finite second moment.
We will need the following well-known result about the expected volume of a Gaussian simplex. See e.g. [19, p. 377].
**Proposition 3**.: _Let \(X_{1},\ldots,X_{d+1}\) be i.i.d. \(d\)-dimensional Gaussian random vectors. Then_
\[\mathbb{E}\big{(}\big{|}[X_{1},\ldots,X_{d+1}]\big{|}\big{)}=\frac{\sqrt{d+1} }{2^{d/2}\Gamma(\tfrac{d}{2}+1)}\sim\frac{1}{\sqrt{\pi}}\,\Big{(}\frac{e}{d} \Big{)}^{d/2}\,.\]
We use the following asymptotic approximation of integrals: \(\int_{\mathbb{R}^{d}}f(x)^{p}\,\mathrm{d}x=\|f\|_{\infty}^{p+o(p)}\) (Proposition 5). It follows easily from the fact that the \(L^{p}\) norm converges to the \(L^{\infty}\) norm as \(p\to\infty\) under mild assumptions (Proposition 4).
**Proposition 4** ([23, p. 71]).: _Let \(1\leq q<\infty\). Let \(f\in L^{\infty}(\mathbb{R}^{d})\cap L^{q}(\mathbb{R}^{d})\). Then \(\left\|f\right\|_{\infty}=\lim_{p\to\infty}\left\|f\right\|_{p}\)._
**Proposition 5**.: _Let \(1\leq q<\infty\). Let \(f\in L^{\infty}(\mathbb{R}^{d})\cap L^{q}(\mathbb{R}^{d})\) and assume that \(f\) is nonnegative and \(C:=\left\|f\right\|_{\infty}\neq 1\). Then, as \(p\to\infty\), \(\int_{\mathbb{R}^{d}}f(x)^{p}\,\mathrm{d}x=C^{p+o(p)}\) (where \(o(p)\) can depend on \(f\))._
Proof.: Let \(a_{p}=\int_{\mathbb{R}^{d}}f(x)^{p}\,\mathrm{d}x\). From Proposition 4 we have \(\lim_{p\to\infty}a_{p}^{1/p}=C\). Write \(a_{p}=C^{p+g(p)}\) for some function \(g\).
To conclude, we will now show that \(g(p)=o(p)\). Note that \(a_{p}^{1/p}=C^{1+\frac{g(p)}{p}}\), so that, applying \(\lim_{p\to\infty}\) to both sides we get \(\lim_{p\to\infty}C^{\frac{g(p)}{p}}=1\), which implies \(\lim_{p\to\infty}\frac{g(p)}{p}=0\).
We need the following known inequality (the constant has not been optimized).
**Lemma 6**.: _If \(X\) is a (real valued) mean zero logconcave random variable then \(\mathbb{E}(|X|)\geq\frac{1}{8}\sqrt{\mathbb{E}(X^{2})}\)._
Proof.: The inequality is invariant under scaling and therefore it is enough to prove it when \(X\) is isotropic (i.e. when \(\mathbb{E}(X^{2})=1\)). It is known [18, Lemma 5.5] that the density of an isotropic logconcave random variable is at most \(1\). Therefore, using Markov's inequality, \(1/2\leq\mathbb{P}(|X|\geq 1/4)\leq 4\,\mathbb{E}(|X|)\). The claim follows.
## 3 Facets and \(k\)-facets
In this section we study the expected number of \(k\)-facets of Gaussian random polytopes. We give an asymptotic formula for the expected number of \(k\)-facets in the case when the dimension \(d\) goes to infinity and the number of samples \(n\) grows linearly with \(d\).
Before establishing our asymptotic formula, we need to establish the following result which reduces the problem of computing \(\mathbb{E}f_{d-1}([X_{1},\ldots,X_{n}])\) from a \(d\)-dimensional problem to a \(1\)-dimensional problem.
**Theorem 7**.: _Let \(X_{1},\ldots,X_{n}\) be \(n\geq d+1\) i.i.d. standard Gaussian random vectors in \(\mathbb{R}^{d}\). Then the expected number of \(k\)-facets of \(\{X_{1},\ldots,X_{n}\}\) is equal to_
\[\binom{n}{d}\mathbb{P}\big{(}Y\in E_{k}(\{Y,Y_{1},\ldots,Y_{n-d}\})\big{)}\]
_where \(Y\) is \(N(0,\frac{1}{d})\), \(Y_{i}\) is \(N(0,1)\) for \(i=1,\ldots,n-d\) and \(Y,Y_{1},\ldots,Y_{n-d}\) are independent._
Proof.: By linearity of expectation and symmetry, it is enough to show that the probability that \(\{X_{1},\ldots,X_{d}\}\) is a \(k\)-facet is \(\mathbb{P}\big{(}Y\in E_{k}(\{Y,Y_{1},\ldots,Y_{n-d}\})\big{)}\).
Let \(V\) be a random unit vector perpendicular to \(\operatorname{aff}\{X_{1},\ldots,X_{d}\}\) but with its orientation (sign) chosen independently at random among the two choices. Define \(Y=V\cdot X_{1}\) and \(Y_{i}=V\cdot X_{i+d}\), \(i=1,\ldots,n-d\). Using that \(V\) is independent of \(X_{d+1},\ldots,X_{n}\), it is clear that the \(Y_{i}\)s are i.i.d. \(N(0,1)\). Moreover, notice that by symmetry, the distribution of \(V\) conditioned on \(Y\) is still uniform on the unit sphere. That is, \(V\) is independent of \(Y\), which implies that \(Y\) is independent of \(Y_{1},\ldots,Y_{n-d}\).
We now determine the distribution of \(Y\). Note that \(Y^{2}\) is the squared distance of \(\operatorname{aff}\{X_{1},\ldots,X_{d}\}\) to the origin, which is given by \(1/\|A^{-1}\mathbf{1}\|^{2}\), where \(A\) is the matrix having \(X_{1},\ldots,X_{d}\) as rows. By the invariance under orthogonal transformations of the distribution of \(A\), the distribution of \(A^{-1}\) is also invariant under orthogonal transformations and the distribution of \(1/\|A^{-1}\mathbf{1}\|^{2}\) is the same as the distribution of \(\frac{1}{d\|A^{-1}e_{1}\|^{2}}\), where \(\frac{1}{\|A^{-1}e_{1}\|^{2}}\) is the squared distance between \(X_{1}\) and \(\operatorname{span}\{X_{2},\ldots,X_{d}\}\). This is distributed as \(\chi_{1}^{2}\) (namely, \(N(0,1)\) squared). Thus, using the random sign of \(V\), the distribution of \(Y\) is \(N(0,1/d)\).
In summary, \(Y\) and \(Y_{i}\)s are distributed as in the statement. Moreover, the event that \(\{X_{1},\ldots,X_{d}\}\) is a \(k\)-facet of \(\{X_{1},\ldots,X_{n}\}\) is the same as the event that \(Y\) is a \(k\)-facet of \(\{Y,Y_{1},\ldots,Y_{n-d}\}\).
We remark that Theorem 7 is heavily inspired by the work of Hug, Munsonius and Reitzner in [13]. In particular, Theorem 7 is a simple generalization of [13, Theorem 3.2] from facets to \(k\)-facets. See [13, Theorem 3.2] for an alternative proof of the above theorem (in the case of facets) using the affine Blaschke-Petkantschin formula.
We are now ready to state our main result on facets/\(k\)-facets of Gaussian random polytopes. We use the notation
\[\Phi(y):=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{y}e^{-s^{2}/2}\,\mathrm{d}s,\ \text{and}\ \phi(y):=\Phi^{\prime}(y)=\frac{1}{\sqrt{2\pi}}e^{-y^{2}/2}\]
for the CDF and PDF of the standard Gaussian distribution.
**Theorem 8**.: _Fix \(\alpha>1\), and \(r\in[0,1]\) and assume that \(n/d\to\alpha\) as \(d\to\infty\) and that \(k/(n-d)\to r\) as \(d\to\infty\). Let \(X\) be a set of \(n\) i.i.d. Gaussian random points in \(\mathbb{R}^{d}\). Then the expected number of \(k\)-facets of \(X\) is equal to_
\[\Big{(}2^{\alpha H(\frac{1}{\alpha})}2^{(\alpha-1)H(r)}\sqrt{2\pi}c_{\alpha,r} \Big{)}^{d+o(d)}\ \text{as}\ d\to\infty,\]
_where_
\[c_{\alpha,r}:=\max_{y\in\mathbb{R}}\{\Phi(y)^{r\alpha}(1-\Phi(y))^{\alpha-1-r \alpha}\phi(y)\}.\]
_and \(H(r)\) is the binary entropy function. The rate of convergence in the above \(o(d)\) is not universal as it depends on \(\alpha\) and \(r\) and on the rate of convergence of \(n/d\) to \(\alpha\) and \(k/(n-d)\) to \(r\)._
Proof.: From Theorem 7, \(\mathbb{E}e_{k}(X)=\binom{n}{d}\mathbb{P}\big{(}Y\in E_{k}(\{Y,Y_{1},\ldots,Y_{n-d }\})\big{)}\) where \(Y\) is \(N(0,\frac{1}{d})\), \(Y_{i}\) is \(N(0,1)\) for \(i=1,\ldots,n-d\) and \(Y,Y_{1},\ldots,Y_{n-d}\) are independent. Notice that if \(k\neq\frac{n-d}{2}\), then
\[\mathbb{P}\big{(}Y\in E_{k}(\{Y,Y_{1},\ldots,Y_{n-d}\})\big{)}=2\binom{n-d}{k} \frac{\sqrt{d}}{\sqrt{2\pi}}\int\limits_{-\infty}^{\infty}\Phi(y)^{k}\big{(}1- \Phi(y)\big{)}^{n-d-k}e^{\frac{-dy^{2}}{2}}\,\mathrm{d}y.\]
If \(k=\frac{n-d}{2}\), the above formula counts each potential \(k\)-facet twice, because in this case each side of the hyperplane represented by \(Y\) could contain exactly \(\frac{n-d}{2}\) points. Therefore, if \(k=\frac{n-d}{2}\), the above formula holds after removing the factor of two on the right-hand side. This factor of two is not important for our result, and we have that
\[\mathbb{E}e_{k}(X) =\Theta(1)\binom{n}{d}\mathbb{P}\big{(}Y\in E_{k}(\{Y,Y_{1}, \ldots,Y_{n-d}\})\big{)}\] \[=\Theta(1)\binom{n}{d}\binom{n-d}{k}\frac{\sqrt{d}}{\sqrt{2\pi}} \int\limits_{-\infty}^{\infty}\Phi(y)^{k}\big{(}1-\Phi(y)\big{)}^{n-d-k}e^{-dy ^{2}/2}\,\mathrm{d}y\] \[=\Theta(1)\binom{n}{d}\binom{n-d}{k}\sqrt{d}(2\pi)^{\frac{d-1}{2} }\int\limits_{-\infty}^{\infty}\Phi(y)^{k}\big{(}1-\Phi(y)\big{)}^{n-d-k}\phi( y)^{d}\,\mathrm{d}y.\]
We will use Proposition 5 to estimate the integral in the above expression. In particular, we will show that the integral is equal to \(c_{\alpha,r}^{d+o(d)}\) where \(c_{\alpha,r}:=\|f\|_{\infty}\) and \(f(y):=\Phi(y)^{r(\alpha-1)}\big{(}1-\Phi(y)\big{)}^{(1-r)(\alpha-1)}\phi(y)\). In order to establish this estimate, we first need to restrict the integral to some finite interval, the length of which does not depend on \(d\) but does depend on \(\alpha,r\). In order to accomplish this, first observe that we can upper bound the terms in front of the integral by \(\binom{n}{d}\binom{n-d}{k}\sqrt{d}(2\pi)^{\frac{d-1}{2}}=O\big{(}2^{n}2^{n}(2 \pi)^{\frac{d-1}{2}}\big{)}=O\big{(}(4^{\alpha}\sqrt{2\pi})^{d}\big{)}\). Now choose \(R(\alpha)\) so that \(\phi\big{(}R(\alpha)\big{)}<\frac{1}{4^{\alpha}\sqrt{2\pi}}\). For technical reasons, we also need to assume that our region of integration is big enough so that it contains some \(y_{0}\in\mathbb{R}\) so that \(c_{\alpha,r}=f(y_{0})\). So choose \(R(\alpha,r)\) so that \(R(\alpha,r)\geq R(\alpha)\) and so that \([-R(\alpha,r),R(\alpha,r)]\) contains some \(y_{0}\) as above. Using the fact that \(\Phi(y)^{k}\big{(}1-\Phi(y)\big{)}^{n-d-k}<1\) and \(\phi\big{(}R(\alpha,r)\big{)}<\frac{1}{4^{\alpha}\sqrt{2\pi}}\), we know that the right tail of the integral is upper bounded by
\[\int\limits_{R(\alpha,r)}^{\infty}\Phi(y)^{k}\big{(}1-\Phi(y) \big{)}^{n-d-k}\phi(y)^{d}\,\mathrm{d}y \leq\int\limits_{R(\alpha,r)}^{\infty}\phi(y)^{d-1}\phi(y)\, \mathrm{d}y\] \[\leq\left(\frac{1}{4^{\alpha}\sqrt{2\pi}}\right)^{d-1}\int\limits _{R(\alpha,r)}^{\infty}\phi(y)\,\mathrm{d}y\] \[=O((4^{\alpha}\sqrt{2\pi})^{-d})\]
and the same estimate holds for the left tail. Therefore,
\[\mathbb{E}e_{k}(X) =\Theta(1)\binom{n}{d}\binom{n-d}{k}\sqrt{d}(2\pi)^{\frac{d-1}{2}} \int\limits_{-\infty}^{\infty}\Phi(y)^{k}\big{(}1-\Phi(y)\big{)}^{n-d-k}\phi(y)^ {d}\,\mathrm{d}y\] \[=\Theta(1)\binom{n}{d}\binom{n-d}{k}\sqrt{d}(2\pi)^{\frac{d-1}{2}} \int\limits_{-R(\alpha,r)}^{R(\alpha,r)}\Phi(y)^{k}\big{(}1-\Phi(y)\big{)}^{n-d -k}\phi(y)^{d}\,\mathrm{d}y+O(1)\] \[=\Theta(1)\binom{n}{d}\binom{n-d}{k}\sqrt{d}(2\pi)^{\frac{d-1}{2}} \int\limits_{-R(\alpha,r)}^{R(\alpha,r)}\Phi(y)^{k}\big{(}1-\Phi(y)\big{)}^{n-d -k}\phi(y)^{d}\,\mathrm{d}y\]
where the last equality uses the fact that \(\mathbb{E}e_{k}(X)\geq 1\) so that the \(O(1)\) term can be absorbed into the \(\Theta(1)\) factor in front.
Now for \(y\in[-R(\alpha,r),R(\alpha,r)]\), \(\Phi(y)\) and \(1-\Phi(y)\) both take values in some fixed interval, i.e. \(\Phi(y)=\Theta(1)\) and \(1-\Phi(y)=\Theta(1)\). Recall that we are assuming that \(n/d\to\alpha\) and \(k/(n-d)\to r\) as \(d\to\infty\) which means that \(n=\alpha d+o(d)\) and \(k=r(\alpha-1)d+o(d)\) and therefore that \(n-d-k=(\alpha-1)d-r(\alpha-1)d+o(d)\). This means that \(\Phi(y)^{k}=\Phi(y)^{r(\alpha-1)d}\Theta(1)^{o(d)}=e^{o(d)}\Phi(y)^{r(\alpha- 1)d}\) and that \(\big{(}1-\Phi(y)\big{)}^{n-d-k}=\big{(}1-\Phi(y)\big{)}^{(\alpha-1)d-r(\alpha- 1)d}\Theta(1)^{o(d)}=e^{o(d)}\big{(}1-\Phi(y)\big{)}^{(\alpha-1)d-r(\alpha-1)d}\) for \(y\in[-R(\alpha,r),R(\alpha,r)]\). Therefore we have shown that
\[\int\limits_{-R(\alpha,r)}^{R(\alpha,r)}\Phi(y)^{k}\big{(}1-\Phi( y)\big{)}^{n-d-k}\phi(y)^{d}\,\mathrm{d}y\] \[\qquad\qquad=e^{o(d)}\int\limits_{-R(\alpha,r)}^{R(\alpha,r)} \Phi(y)^{r(\alpha-1)d}\big{(}1-\Phi(y)\big{)}^{(1-r)(\alpha-1)d}\phi(y)^{d}\, \mathrm{d}y.\]
Let \(\hat{f}:=f\cdot 1\{-R(\alpha,r)<y<R(\alpha,r)\}\). Define \(\hat{c}_{\alpha,r}:=\|\hat{f}\|_{\infty}\). Recall that we are assuming that \(f\) attains its maximum somewhere in the interval \([-R(\alpha,r),R(\alpha,r)]\) so we have that \(\hat{c}_{\alpha,r}=c_{\alpha,r}\).
By Proposition 5, we have that
\[\int\limits_{-R(\alpha,r)}^{R(\alpha,r)}\Phi(y)^{r(\alpha-1)d}\big{(}1-\Phi( y)\big{)}^{(1-r)(\alpha-1)d}\phi(y)^{d}\,\mathrm{d}y=(\hat{c}_{\alpha,r})^{d+o(d)}=(c_{ \alpha,r})^{d+o(d)}.\]
Combining everything,
\[\mathbb{E}e_{k}(X) =\Theta(1)\binom{n}{d}\binom{n-d}{k}\sqrt{d}(2\pi)^{\frac{d-1}{2}} \int\limits_{-R(\alpha,r)}^{R(\alpha,r)}\Phi(y)^{k}\big{(}1-\Phi(y)\big{)}^{n-d-k }\phi(y)^{d}\,\mathrm{d}y\] \[=\Theta(1)\binom{n}{d}\binom{n-d}{k}\sqrt{d}(2\pi)^{\frac{d-1}{2} }e^{o(d)}(c_{\alpha,r})^{d+o(d)}\] \[=\left(2^{\alpha H(\frac{1}{\alpha})}2^{(\alpha-1)H(r)}\sqrt{2\pi }c_{\alpha,r}\right)^{d+o(d)}.\]
For the last step one can use the Stirling approximation of the Gamma function and the fact that Gamma is continuous on \(\mathbb{R}_{+}\) to obtain the asymptotic estimates of the binomial coefficients.
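As an illustration (not part of the paper), the base of the exponential growth in Theorem 8 can be evaluated numerically. The sketch below performs a simple grid search over \(y\) using the expression for \(c_{\alpha,r}\) as stated in the theorem; the grid range and resolution are assumptions.

```python
import numpy as np
from scipy.stats import norm

def growth_constant(alpha, r, num=200001):
    """Numerically evaluate the base of the C^{d+o(d)} growth in Theorem 8."""
    grid = np.linspace(-10.0, 10.0, num)
    phi, Phi = norm.pdf(grid), norm.cdf(grid)
    c = np.max(Phi**(r * alpha) * (1 - Phi)**(alpha - 1 - r * alpha) * phi)

    def H(p):  # binary entropy (base 2), with H(0) = H(1) = 0
        return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

    return 2**(alpha * H(1 / alpha)) * 2**((alpha - 1) * H(r)) * np.sqrt(2 * np.pi) * c

# base of the expected number of facets (k = 0) when n is roughly 2d
print(growth_constant(alpha=2.0, r=0.0))
```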
## 4 Estranged facets
We say that two facets of a polytope are _estranged_ if they do not share any vertices. The main result of this section is Theorem 9, which gives an asymptotic estimate of the expected number of estranged facets of the convex hull of \(2d\) Gaussian random points in \(\mathbb{R}^{d}\).
**Theorem 9**.: _Let \(X\) be a set of \(2d\) i.i.d. Gaussian random points in \(\mathbb{R}^{d}\). Let \(N\) be the number of (unordered) pairs of estranged facets in \([X]\). Then_
\[\mathbb{E}(N)=(4C_{11})^{d+o(d)},\]
_where \(C_{11}\in(0,1/2)\) is the universal constant from Lemma 11._
Our proof uses the affine Blaschke-Petkantschin formula [24, Theorem 7.2.7], a change of variable formula that involves the volume of a random simplex. We will need the following estimate of the volume of a random simplex in a halfspace:
**Lemma 10**.: _Let \(H\subseteq\mathbb{R}^{d-1}\) be a halfspace that contains the origin. Let \(Z\in\mathbb{R}^{(d-1)\times d}\) be a random matrix with i.i.d. standard Gaussian entries truncated to be in \(H^{d}\). Then_
\[\mathbb{E}\big{(}|Z|\big{)}\geq\sqrt{1-\frac{2}{\pi}}\frac{\sqrt{d}}{2^{\frac{ d+5}{2}}\Gamma(\frac{d+1}{2})}=\left(\frac{e}{d}\right)^{d/2}2^{o(d)}\]
_(where \(o(d)\) does not depend on \(H\) and using abbreviated notation \(|Z|=\big{|}[Z_{1},\ldots,Z_{d}]\big{|}\))._
Proof.: The idea of the proof is to compare \(Z\) with the Gaussian case (namely, without truncation). It is easier to do this for the second moment instead of the first, and one can relate the first and the second moments via Jensen's inequality and a suitable reverse for our case, Lemma 6.
By applying a rotation it is enough to prove for \(H=\{x\in\mathbb{R}^{d-1}:x_{1}\leq t\}\) with \(t\geq 0\). Let \(W\) be \(Z\) with a row of ones appended. Then
\[|Z|/d=|\mathrm{det}(W)|/d!. \tag{3}\]
That is, \(|Z|=|\det(W)|/(d-1)!\). Let \(W_{1},\ldots,W_{d}\) be the rows of \(W\). Let \(A=\{x\in\mathbb{R}^{d}:(\forall i)x_{i}\leq t\}\). Note that \(W_{1}\) is distributed as standard Gaussian truncated to \(A\). We have \(|\det(W)|=\prod_{i=1}^{d}\mathrm{d}(W_{i},\operatorname{span}W_{(i+1)\ldots d})\) (where \(\mathrm{d}(\cdot,\cdot)\) denotes point-subspace distance) and
\[\mathbb{E}\big{(}|\det(W)|\big{)} =\mathbb{E}\Big{(}\mathrm{d}(W_{1},\operatorname{span}W_{2\ldots d })\prod_{i=2}^{d}\mathrm{d}(W_{i},\operatorname{span}W_{(i+1)\ldots d})\Big{)} \tag{4}\] \[=\mathbb{E}\Big{(}\mathbb{E}\big{(}\mathrm{d}(W_{1}, \operatorname{span}W_{2\ldots d})\bigm{|}W_{2\ldots d}\big{)}\prod_{i=2}^{d} \mathrm{d}(W_{i},\operatorname{span}W_{(i+1)\ldots d})\Big{)}.\]
Let \(v\in\mathbb{R}^{d}\) be such that \(\sum_{i=1}^{d}v_{i}=0\) and \(\|v\|=1\). Using Lemma 6, \(\mathbb{E}(v^{T}W_{1})=0\), and the fact that the variance of a Gaussian truncated to \((-\infty,t]\) with \(t\geq 0\) is at least \(1-2/\pi\) we get
\[\mathbb{E}(|v^{T}W_{1}|)\geq\frac{1}{8}\sqrt{\mathbb{E}((v^{T}W_{1})^{2})}= \frac{1}{8}\sqrt{\operatorname{var}(v^{T}W_{1})}\geq\frac{1}{8}\sqrt{1-\frac{ 2}{\pi}}:=c^{\prime}.\]
Now, to express \(\mathrm{d}(W_{1},\operatorname{span}W_{2\ldots d})\), let \(V\) be a random vector that is a unit vector normal to \(\operatorname{span}W_{2\ldots d}\) (sign will not matter) and let \(W_{1}^{\prime}\) be an independent standard Gaussian in \(\mathbb{R}^{d}\). We have the following comparison inequality between \(W_{1}\) (truncated Gaussian) and \(W_{1}^{\prime}\) (not truncated), using moment inequalities and the fact that, conditioning on \(W_{2\ldots d}\), the vector \(V\) is a fixed unit vector perpendicular to the all-ones vector \(W_{d}\), so that our analysis for \(v\) above applies:
\[\mathbb{E}\big{(}\mathrm{d}(W_{1},\operatorname{span}W_{2\ldots d })\bigm{|}W_{2\ldots d}\big{)} =\mathbb{E}\big{(}|V^{T}W_{1}|\bigm{|}W_{2\ldots d}\big{)}\] \[\geq c^{\prime}\] \[=c^{\prime}\sqrt{\mathbb{E}\big{(}\mathrm{d}(W_{1}^{\prime}, \operatorname{span}W_{2\ldots d})^{2}\bigm{|}W_{2\ldots d}\big{)}}\] \[\geq c^{\prime}\,\mathbb{E}\big{(}\mathrm{d}(W_{1}^{\prime}, \operatorname{span}W_{2\ldots d})\bigm{|}W_{2\ldots d}\big{)}.\]
This in (4) implies, defining \(W^{\prime}\) as \(W\) with the first row \(W_{1}\) substituted by \(W_{1}^{\prime}\):
\[\mathbb{E}(|\det(W)|) \geq c^{\prime}\,\mathbb{E}\Big{(}\mathbb{E}\big{(}\mathrm{d}(W_{1}^{\prime},\operatorname{span}W_{2\ldots d})\bigm{|}W_{2\ldots d}\big{)}\prod_{i=2}^{d}\mathrm{d}(W_{i},\operatorname{span}W_{(i+1)\ldots d})\Big{)}\] \[=c^{\prime}\,\mathbb{E}(|\det(W^{\prime})|)\] \[=c^{\prime}\,\frac{(d-1)!\sqrt{d}}{2^{\frac{d-1}{2}}\Gamma(\frac{d+1}{2})}\quad\text{(using Proposition 3 and the idea in Eq. (3)).}\]
Thus
\[\mathbb{E}\big{(}|Z|\big{)}=\frac{\mathbb{E}\big{(}|\det(W)|\big{)}}{(d-1)!} \geq\frac{c^{\prime}\sqrt{d}}{2^{\frac{d-1}{2}}\Gamma(\frac{d+1}{2})}.\qed\]
We will now complete the proof of Theorem 9. Most of the proof is in the following lemma (Lemma 11), which estimates the probability that a fixed partition of the random points is a pair of facets. Theorem 9 then follows by linearity of expectation. The proof of Lemma 11 is somewhat similar to the proof of [14, Theorem 1.3] which gives an upper bound for the variance of the number of facets of a Gaussian random polytope in the case where the dimension is fixed and the number of points increases. The main difficulty in the proof of both [14, Theorem 1.3] and Lemma 11 is to prove an upper bound for the probability that given pair of subsets of vertices are both facets of the polytope. In contrast to [14, Theorem 1.3], our Lemma 11 is meaningful when the dimension increases with the number of points. However, Lemma 11 does not give any bound on the variance because we only consider pairs of facets with no points in common.
Let \(F(P)\) be the set of facets (as a family of subsets of vertices) of polytope \(P\).
**Lemma 11**.: _Let \(X,Y\) be two independent sets of \(d\) i.i.d. Gaussian random points in \(\mathbb{R}^{d}\). Then_
\[\mathbb{P}\big{(}X,Y\in F([X,Y])\big{)}=C_{11}^{d+o(d)},\]
_where_
\[C_{11}:=\sup_{\begin{subarray}{c}\rho\geq 0\\ w\in[-1,1]\end{subarray}}e^{-\rho^{2}}\Phi\left(\frac{\rho(1-w)}{\sqrt{1-w^{2} }}\right)^{2}\sqrt{1-w^{2}}\approx 0.4424.\]
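As a numerical sanity check (not part of the proof), the constant \(C_{11}\) can be approximated by a grid search over \(\rho\) and \(w\); the grid ranges below are assumptions chosen for illustration.

```python
import numpy as np
from scipy.stats import norm

# Grid approximation of the expression defining C_11
rho = np.linspace(0.0, 5.0, 1001)[:, None]        # rho >= 0
w = np.linspace(-0.999, 0.999, 1001)[None, :]     # w in (-1, 1)
values = (np.exp(-rho**2)
          * norm.cdf(rho * (1 - w) / np.sqrt(1 - w**2))**2
          * np.sqrt(1 - w**2))
C11 = values.max()
print(C11, 4 * C11)   # roughly 0.4424 and 1.7696, cf. Theorem 9
```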
Proof.: Let \(H(\rho,\theta)=\{x\in\mathbb{R}^{d}:\theta\cdot x=\rho\}\), \(H_{-}(\rho,\theta)=\{x\in\mathbb{R}^{d}:\theta\cdot x<\rho\}\), and \(H_{+}(\rho,\theta)=\{x\in\mathbb{R}^{d}:\theta\cdot x>\rho\}\). Let \(f_{X}(\cdot)\) denote the density function of random variable \(X\). We will use the affine Blaschke-Petkantschin formula as stated in [24, Theorem 7.2.7]. Let \(c_{d}=(d-1)!^{2}\).
Upper bound.For the upper bound we continue from (5) as follows:
\[\begin{split}&\mathbb{P}\big{(}X,Y\in F([X,Y])\big{)}\\ &\quad=c_{d}\sum_{s,s^{\prime}\in\{-,+\}}\int_{\mathbb{R}_{+}^{2}} \int_{(S^{d-1})^{2}}\biggl{(}\int_{H(\rho_{1},\theta_{1})^{d}}1\{x\in H_{s}( \rho_{2},\theta_{2})^{d}\}|x|f_{X}(x)\,\mathrm{d}x\biggr{)}\\ &\qquad\biggl{(}\int_{H(\rho_{2},\theta_{2})^{d}}1\{y\in H_{s^{ \prime}}(\rho_{1},\theta_{1})^{d}\}|y|f_{Y}(y)\,\mathrm{d}y\biggr{)}\,\mathrm{ d}\theta_{1}\,\mathrm{d}\theta_{2}\,\mathrm{d}\rho_{1}\,\mathrm{d}\rho_{2}. \end{split} \tag{6}\]
For the next step we will need the following notation: \(Z=(Z_{1},\ldots,Z_{d})\in\mathbb{R}^{(d-1)\times d}\) is i.i.d. standard Gaussian (identifying \(H(\rho_{1},\theta_{1})\) with \(\mathbb{R}^{d-1}\)). Also, \(h_{s}(\rho_{1},\theta_{1},\rho_{2},\theta_{2})\) for \(s\in\{+,-\}\) is the halfspace \(H_{s}(\rho_{2},\theta_{2})\cap H(\rho_{1},\theta_{1})\) in \(\mathbb{R}^{d-1}\) (identifying \(H(\rho_{1},\theta_{1})\) with \(\mathbb{R}^{d-1}\), see Fig. 1). Finally, \(E\) is the event \(\{Z\in h_{-}(\rho_{1},\theta_{1},\rho_{2},\theta_{2})^{d}\}\), and \(\mu\) is the Gaussian probability measure in \(\mathbb{R}^{d-1}\).
We have
\[\begin{split}\int_{H(\rho_{1},\theta_{1})^{d}}& 1\{x\in H_{-}(\rho_{2}, \theta_{2})^{d}\}|x|f_{X}(x)\,\mathrm{d}x\\ &=\biggl{(}\int_{H(\rho_{1},\theta_{1})^{d}}f_{X}(x)\,\mathrm{d}x \biggr{)}\,\mathbb{E}_{Z}\bigl{(}|Z|\,1\,E\bigr{)}\\ &=\biggl{(}\int_{H(\rho_{1},\theta_{1})^{d}}f_{X}(x)\,\mathrm{d}x \biggr{)}\,\mathbb{P}_{Z}\bigl{(}E\bigr{)}\,\mathbb{E}_{Z}\bigl{(}|Z|\,\big{|} \,E\bigr{)}\\ &=\frac{e^{-d\rho_{1}^{2}/2}}{(2\pi)^{d/2}}\bigl{(}\mu(h_{-}( \rho_{1},\theta_{1},\rho_{2},\theta_{2}))\bigr{)}^{d}\,\mathbb{E}_{Z}\bigl{(}| Z|\,\big{|}\,E\bigr{)}.\end{split}\]
Figure 1: Halfspaces in the proof of Lemma 11
Let \(A\) be the covariance matrix of the Gaussian distribution truncated to \(h_{-}(\rho_{1},\theta_{1},\rho_{2},\theta_{2})\). Namely, \(A=\operatorname{cov}\bigl{(}Z_{1}\mid Z_{1}\in h_{-}(\rho_{1},\theta_{1},\rho_{2 },\theta_{2})\bigr{)}\). Note that the variance of any univariate marginal of \(Z_{1}\) conditioned on \(Z_{1}\in h_{-}(\rho_{1},\theta_{1},\rho_{2},\theta_{2})\) is at most \(1\) (say, by the Brascamp-Lieb inequality [7, Section 5]) and this implies \(\det A\leq 1\). Using moment inequalities and Proposition 2 (Blaschke's formula):
\[\mathbb{E}_{Z}\bigl{(}|Z|\mid E\bigr{)}\leq\sqrt{\mathbb{E}_{Z}\bigl{(}|Z|^{2} \bigm{|}E\bigr{)}}=\sqrt{\frac{d}{(d-1)!}\det A}\leq\sqrt{\frac{d}{(d-1)!}}.\]
Now, to express the Gaussian measure of \(h_{-}(\rho_{1},\theta_{1},\rho_{2},\theta_{2})\), we need the signed distance of its boundary to the origin of \(\mathbb{R}^{d-1}\) (the sign is positive if the halfspace contains the origin). The signed distance is \(t=t(\rho_{1},\theta_{1},\rho_{2},\theta_{2})=\frac{\rho_{2}-\rho_{1}\cos\alpha }{\sin\alpha}\), where \(\alpha\in[0,\pi]\) is the angle between \(\theta_{1}\) and \(\theta_{2}\) (see Fig. 1).1 In other words, \(t=\frac{\rho_{2}-\rho_{1}\theta_{1}\cdot\theta_{2}}{\sqrt{1-(\theta_{1}\cdot \theta_{2})^{2}}}\). To understand this quantity, it will be helpful in the next calculation to reinterpret certain integrals as expectations and to think of \(\theta_{1}\) and \(\theta_{2}\) as random unit vectors. With that interpretation, we will use the following fact: the distribution of \(W:=\theta_{1}\cdot\theta_{2}\) has density \(w\mapsto\frac{\Gamma(\frac{d}{2})}{\sqrt{\pi}\Gamma(\frac{d-1}{2})}(1-w^{2}) ^{\frac{d-3}{2}}\) with support \([-1,1]\).2
Footnote 1: To see this, note first that it is enough to perform this calculation in \(\mathbb{R}^{2}\). Assume without loss of generality that \(\theta_{1}=(1,0)\) and \(\theta_{2}=(\cos\alpha,\sin\alpha)\). Then \(t\) is the \(y\)-coordinate of the intersection point of the lines \((x,y)\cdot\theta_{1}=\rho_{1}\) and \((x,y)\cdot\theta_{2}=\rho_{2}\), which implies \(x=\rho_{1}\) and \(\rho_{1}\cos\alpha+y\sin\alpha=\rho_{2}\). The claim follows.
Footnote 2: To see this, without loss of generality we can assume that \(\theta_{2}=e_{1}\). Then use Archimedes’ idea, namely that the distribution of the first \(d-2\) coordinates of \(\theta_{1}\) is uniform in the unit \((d-2)\)-dimensional ball. The claim follows then up to the normalization constant. The constant can be obtained by integration.
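(As an aside, not part of the proof: the density claimed in Footnote 2 is easy to check numerically. The sketch below, with an arbitrary choice of dimension and sample size, compares an empirical histogram of \(W=\theta_{1}\cdot\theta_{2}\) for two independent uniform directions with the stated formula.)

```python
# Numerical sanity check (not part of the proof) of the density of
# W = theta_1 . theta_2 for two independent uniform directions on S^{d-1}.
# The dimension d and the sample size n are arbitrary illustrative choices.
import numpy as np
from scipy.special import gammaln

d, n = 6, 500_000
rng = np.random.default_rng(0)
t1 = rng.standard_normal((n, d)); t1 /= np.linalg.norm(t1, axis=1, keepdims=True)
t2 = rng.standard_normal((n, d)); t2 /= np.linalg.norm(t2, axis=1, keepdims=True)
w = np.sum(t1 * t2, axis=1)

hist, edges = np.histogram(w, bins=40, range=(-1.0, 1.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
# Claimed density: Gamma(d/2) / (sqrt(pi) Gamma((d-1)/2)) * (1 - w^2)^((d-3)/2)
log_c = gammaln(d / 2) - 0.5 * np.log(np.pi) - gammaln((d - 1) / 2)
claimed = np.exp(log_c) * (1.0 - centers**2) ** ((d - 3) / 2)
print(np.max(np.abs(hist - claimed)))  # small (Monte Carlo and binning error)
```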
Let \(\omega_{d}=2\pi^{d/2}/\Gamma(d/2)\) be the area of the unit sphere in \(\mathbb{R}^{d}\). We determine the asymptotics of the first term in the sum in Eq. (6) using Proposition 5 in
the last step:
\[c_{d}\int_{\mathbb{R}^{2}_{+}}\int_{(S^{d-1})^{2}}\biggl{(}\int_{H( \rho_{1},\theta_{1})^{d}}1\{x\in H_{-}(\rho_{2},\theta_{2})^{d}\}|x|f_{X}(x)\, \mathrm{d}x\biggr{)}\] \[\qquad\qquad\times\biggl{(}\int_{H(\rho_{2},\theta_{2})^{d}}1\{y \in H_{-}(\rho_{1},\theta_{1})^{d}\}|y|f_{Y}(y)\,\mathrm{d}y\biggr{)}\,\mathrm{ d}\theta_{1}\,\mathrm{d}\theta_{2}\,\mathrm{d}\rho_{1}\,\mathrm{d}\rho_{2}\] \[\leq\frac{c_{d}d}{(d-1)!(2\pi)^{d}}\int_{\mathbb{R}^{2}_{+}}\int _{(S^{d-1})^{2}}e^{-\frac{d(\rho_{1}^{2}+\rho_{2}^{2})}{2}}\bigl{(}\mu(h_{-}( \rho_{1},\theta_{1},\rho_{2},\theta_{2}))\bigr{)}^{d}\] \[\qquad\times\bigl{(}\mu(h_{-}(\rho_{2},\theta_{2},\rho_{1},\theta _{1}))\bigr{)}^{d}\mathrm{d}\theta_{1}\,\mathrm{d}\theta_{2}\,\mathrm{d}\rho_ {1}\,\mathrm{d}\rho_{2}\] \[=\frac{d!\omega_{d}^{2}}{(2\pi)^{d}}\int_{\mathbb{R}^{2}_{+}}e^{ -\frac{d(\rho_{1}^{2}+\rho_{2}^{2})}{2}}\,\mathbb{E}_{\theta_{1},\theta_{2}} \Bigl{(}\bigl{(}\Phi(t(\rho_{1},\theta_{1},\rho_{2},\theta_{2}))\bigr{)}^{d}\] \[\qquad\times\bigl{(}\Phi(t(\rho_{2},\theta_{2},\rho_{1},\theta_{1} ))\bigr{)}^{d}\bigr{)}\,\mathrm{d}\rho_{1}\,\mathrm{d}\rho_{2}\] \[=\frac{d!\omega_{d}^{2}}{(2\pi)^{d}}\int_{\mathbb{R}^{2}_{+}}e^{ -\frac{d(\rho_{1}^{2}+\rho_{2}^{2})}{2}}\,\mathbb{E}_{W}\biggl{(}\biggl{(}\Phi \Bigl{(}\frac{\rho_{2}-\rho_{1}W}{\sqrt{1-W^{2}}}\Bigr{)}\Phi\Bigl{(}\frac{ \rho_{1}-\rho_{2}W}{\sqrt{1-W^{2}}}\Bigr{)}\biggr{)}^{d}\biggr{)}\,\mathrm{d} \rho_{1}\,\mathrm{d}\rho_{2} \tag{7}\] \[=\frac{d!\omega_{d}^{2}\Gamma(\frac{d}{2})}{(2\pi)^{d}\sqrt{\pi} \Gamma(\frac{d-1}{2})}\int\limits_{\mathbb{R}^{2}_{+}}e^{-\frac{d(\rho_{1}^{2} +\rho_{2}^{2})}{2}}\int\limits_{-1}^{1}\biggl{(}\Phi\left(\frac{\rho_{2}-\rho _{1}w}{\sqrt{1-w^{2}}}\right)\Phi\left(\frac{\rho_{1}-\rho_{2}w}{\sqrt{1-w^{2 }}}\right)\biggr{)}^{d}\] \[\qquad\times(1-w^{2})^{\frac{d-3}{2}}\,\mathrm{d}w\,\mathrm{d} \rho_{1}\,\mathrm{d}\rho_{2}\] \[\leq 2^{o(d)}\!\!\!\int\limits_{\mathbb{R}^{2}_{+}}\int\limits_{-1} ^{1}\biggl{(}e^{-\frac{\rho_{1}^{2}-\rho_{2}^{2}}{2}}\Phi\Bigl{(}\frac{\rho_{ 2}-\rho_{1}w}{\sqrt{1-w^{2}}}\Bigr{)}\Phi\Bigl{(}\frac{\rho_{1}-\rho_{2}w}{ \sqrt{1-w^{2}}}\Bigr{)}\sqrt{1-w^{2}}\biggr{)}^{d-3}\!\!\!\mathrm{d}w\, \mathrm{d}\rho_{1}\,\mathrm{d}\rho_{2}\] \[=C_{11}^{d+o(d)},\]
where
\[C_{11}:=\sup_{\begin{subarray}{c}\rho_{1},\rho_{2}\geq 0\\ w\in[-1,1]\end{subarray}}e^{-\frac{\rho_{1}^{2}+\rho_{2}^{2}}{2}}\Phi\left( \frac{\rho_{2}-\rho_{1}w}{\sqrt{1-w^{2}}}\right)\Phi\left(\frac{\rho_{1}-\rho _{2}w}{\sqrt{1-w^{2}}}\right)\sqrt{1-w^{2}}\approx 0.4424. \tag{8}\]
The other three terms in Eq. (6) have similar asymptotics, with \(C_{11}\) replaced by
\[\sup_{\begin{subarray}{c}\rho_{1},\rho_{2}\geq 0\\ w\in[-1,1]\end{subarray}}e^{-\frac{\rho_{1}^{2}+\rho_{2}^{2}}{2}}\left(1-\Phi \left(\frac{\rho_{2}-\rho_{1}w}{\sqrt{1-w^{2}}}\right)\right)\Phi\left(\frac{ \rho_{1}-\rho_{2}w}{\sqrt{1-w^{2}}}\right)\sqrt{1-w^{2}}\approx 0.355\]
and
\[\sup_{\begin{subarray}{c}\rho_{1},\rho_{2}\geq 0\\ w\in[-1,1]\end{subarray}}e^{-\frac{\rho_{1}^{2}+\rho_{2}^{2}}{2}}\left(1-\Phi \left(\frac{\rho_{2}-\rho_{1}w}{\sqrt{1-w^{2}}}\right)\right)\left(1-\Phi \left(\frac{\rho_{1}-\rho_{2}w}{\sqrt{1-w^{2}}}\right)\right)\sqrt{1-w^{2}}=1/4.\]
Namely, the first of the four terms in Eq. (6) is asymptotically the largest and we have:
\[\mathbb{P}\big{(}X,Y\in F([X,Y])\big{)}\leq C_{11}^{d+o(d)}.\]
Finally, note that the argument of sup in Eq. (8) is logconcave and symmetric in \(\rho_{1},\rho_{2}\) for any fixed \(w\) (using the known fact that \(\Phi\) is logconcave). This implies that its value at \(\rho_{1},\rho_{2},w\) is less than or equal to its value at \((\rho_{1}+\rho_{2})/2,(\rho_{1}+\rho_{2})/2,w\) and therefore it is enough to maximize for \(\rho_{1}=\rho_{2}\) and we have the simplified expression in the statement of the theorem.
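(Again as an aside, not part of the proof: the numerical value in Eq. (8) can be reproduced by a crude grid search using exactly this reduction to \(\rho_{1}=\rho_{2}\); the grid resolution below is an arbitrary choice.)

```python
# Numerical check (not part of the proof) of C_11 from Eq. (8), using the
# reduction to rho_1 = rho_2 = rho justified above. Grid sizes are arbitrary.
import numpy as np
from scipy.stats import norm

rho = np.linspace(0.0, 3.0, 1201)[:, None]      # rho_1 = rho_2 = rho
w = np.linspace(-0.999, 0.999, 1999)[None, :]   # w = theta_1 . theta_2

vals = (np.exp(-rho**2)                                        # exp(-(rho_1^2 + rho_2^2)/2)
        * norm.cdf(rho * (1.0 - w) / np.sqrt(1.0 - w**2))**2   # the two (now equal) Phi factors
        * np.sqrt(1.0 - w**2))

print(round(float(vals.max()), 4))  # approximately 0.4424
```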
Lower bound. In Eq. (5), consider the term
\[1\{x\in H_{+}(\rho_{2},\theta_{2})^{d}\cup H_{-}(\rho_{2},\theta_{2})^{d}\}.\]
Note that (a.s.) one of \(H_{+}(\rho_{2},\theta_{2})\) and \(H_{-}(\rho_{2},\theta_{2})\) is the "biggest" in the particular sense that it contains in its interior the point in \(H(\rho_{1},\theta_{1})\) (the domain of the innermost integral) that is closest to the origin (namely point \(\rho_{1}\theta_{1}\)). More precisely, let \(H_{M}(\rho_{1},\theta_{1},\rho_{2},\theta_{2})\) be (a.s.) the halfspace among \(H_{+}(\rho_{2},\theta_{2})\) and \(H_{-}(\rho_{2},\theta_{2})\) that contains \(\rho_{1}\theta_{1}\) in its interior. Then
\[1\{x\in H_{+}(\rho_{2},\theta_{2})^{d}\cup H_{-}(\rho_{2},\theta_{2})^{d}\} \geq 1\{x\in H_{M}(\rho_{1},\theta_{1},\rho_{2},\theta_{2})^{d}\}. \tag{9}\]
Let \(Z=(Z_{1},\ldots,Z_{d})\in\mathbb{R}^{(d-1)\times d}\) be i.i.d. standard Gaussian (identifying \(H(\rho_{1},\theta_{1})\) with \(\mathbb{R}^{d-1}\)), let \(E^{\prime}\) be the event \(\{Z\in h_{M}(\rho_{1},\theta_{1},\rho_{2},\theta_{2})^{d}\}\), and let \(h_{M}(\rho_{1},\theta_{1},\rho_{2},\theta_{2})\) be the halfspace \(H_{M}(\rho_{1},\theta_{1},\rho_{2},\theta_{2})\cap H(\rho_{1},\theta_{1})\) in \(\mathbb{R}^{d-1}\) (identifying \(H(\rho_{1},\theta_{1})\) with \(\mathbb{R}^{d-1}\)). Now, using Lemma 10 (a lower bound on the expected volume of a random simplex in a halfspace), we have
\[\begin{split}\int_{H(\rho_{1},\theta_{1})^{d}}& 1\{x\in H_{M}(\rho_{1},\theta_{1},\rho_{2}, \theta_{2})^{d}\}|x|f_{X}(x)\,\mathrm{d}x\\ &=\mathbb{E}_{Z}(|Z|\,1\,E^{\prime})\int_{H(\rho_{1},\theta_{1})^ {d}}f_{X}(x)\,\mathrm{d}x\\ &=\mathbb{P}_{Z}(E^{\prime})\,\mathbb{E}_{Z}\big{(}|Z|\;\big{|}\;E ^{\prime}\big{)}\int_{H(\rho_{1},\theta_{1})^{d}}f_{X}(x)\,\mathrm{d}x\\ &=\frac{e^{-d\rho_{1}^{2}/2}}{(2\pi)^{d/2}}\big{(}\mu(h_{M}(\rho_{ 1},\theta_{1},\rho_{2},\theta_{2}))\big{)}^{d}\,\mathbb{E}_{Z}\big{(}|Z|\; \big{|}\;E^{\prime}\big{)}\\ &\geq 2^{o(d)}(e/d)^{d/2}\frac{e^{-d\rho_{1}^{2}/2}}{(2\pi)^{d/2}} \big{(}\mu(h_{M}(\rho_{1},\theta_{1},\rho_{2},\theta_{2}))\big{)}^{d}\\ &\geq 2^{o(d)}(e/d)^{d/2}\frac{e^{-d\rho_{1}^{2}/2}}{(2\pi)^{d/2}} \big{(}\mu(h_{-}(\rho_{1},\theta_{1},\rho_{2},\theta_{2}))\big{)}^{d}.\end{split} \tag{10}\]
Using a calculation similar to Eq. (7) but starting at Eq. (5) and using Eqs. (9)
and (10) twice we get
\[\mathbb{P}\big{(}X,Y\in F([X,Y])\big{)}\] \[=c_{d}\int\limits_{\mathbb{R}_{+}^{2}}\int\limits_{(S^{d-1})^{2}} \bigg{(}\int\limits_{H(\rho_{1},\theta_{1})^{d}}1\{x\in H_{+}(\rho_{2},\theta_{ 2})^{d}\cup H_{-}(\rho_{2},\theta_{2})^{d}\}|x|f_{X}(x)\,\mathrm{d}x\bigg{)}\] \[\quad\bigg{(}\int\limits_{H(\rho_{2},\theta_{2})^{d}}1\{y\in H_{+ }(\rho_{1},\theta_{1})^{d}\cup H_{-}(\rho_{1},\theta_{1})^{d}\}|y|f_{Y}(y)\, \mathrm{d}y\bigg{)}\,\mathrm{d}\theta_{1}\,\mathrm{d}\theta_{2}\,\mathrm{d} \rho_{1}\,\mathrm{d}\rho_{2}\] \[\geq c_{d}2^{o(d)}\left(\frac{e}{2\pi d}\right)^{d}\int_{\mathbb{ R}_{+}^{2}}\int_{(S^{d-1})^{2}}e^{-d(\rho_{1}^{2}+\rho_{2}^{2})/2}\] \[\quad\big{(}\mu(h_{-}(\rho_{1},\theta_{1},\rho_{2},\theta_{2})) \mu(h_{-}(\rho_{2},\theta_{2},\rho_{1},\theta_{1}))\big{)}^{d}\,\mathrm{d} \theta_{1}\,\mathrm{d}\theta_{2}\,\mathrm{d}\rho_{1}\,\mathrm{d}\rho_{2}\] \[=C_{11}^{d+o(d)}.\qed\]
Proof of Theorem 9.: Immediate from Lemma 11 and the fact that the number of \(d\)-subsets of \(X\) is \(\binom{2d}{d}=4^{d+o(d)}\).
Acknowledgments.We would like to thank Karoly J. Boroczky and Daniel Hug for helpful discussions. This material is based upon work supported by the National Science Foundation under Grants CCF-1657939, CCF-1934568 and CCF-2006994. This material is also based upon work supported by the National Science Foundation under Grant No. DMS-1929284 while the second author was in residence at the Institute for Computational and Experimental Research in Mathematics in Providence, RI, during the "Harmonic Analysis and Convexity" program.
|
2304.08953 | From Words to Music: A Study of Subword Tokenization Techniques in
Symbolic Music Generation | Subword tokenization has been widely successful in text-based natural
language processing (NLP) tasks with Transformer-based models. As Transformer
models become increasingly popular in symbolic music-related studies, it is
imperative to investigate the efficacy of subword tokenization in the symbolic
music domain. In this paper, we explore subword tokenization techniques, such
as byte-pair encoding (BPE), in symbolic music generation and its impact on the
overall structure of generated songs. Our experiments are based on three types
of MIDI datasets: single track-melody only, multi-track with a single
instrument, and multi-track and multi-instrument. We apply subword tokenization
on post-musical tokenization schemes and find that it enables the generation of
longer songs at the same time and improves the overall structure of the
generated music in terms of objective metrics like structure indicator (SI),
Pitch Class Entropy, etc. We also compare two subword tokenization methods, BPE
and Unigram, and observe that both methods lead to consistent improvements. Our
study suggests that subword tokenization is a promising technique for symbolic
music generation and may have broader implications for music composition,
particularly in cases involving complex data such as multi-track songs. | Adarsh Kumar, Pedro Sarmento | 2023-04-18T12:46:12Z | http://arxiv.org/abs/2304.08953v2 | # From Words to Music: A Study of Subword Tokenization Techniques in Symbolic Music Generation
###### Abstract
Subword tokenization has been widely successful in text-based natural language processing (NLP) tasks with Transformer-based models. As Transformer models become increasingly popular in symbolic music-related studies, it is imperative to investigate the efficacy of subword tokenization in the symbolic music domain. In this paper, we explore subword tokenization techniques, such as byte-pair encoding (BPE), in symbolic music generation and its impact on the overall structure of generated songs. Our experiments are based on three types of MIDI datasets: single track-melody only, multi-track with a single instrument, and multi-track and multi-instrument. We apply subword tokenization on post-musical tokenization schemes and find that it enables the generation of longer songs at the same time and improves the overall structure of the generated music in terms of objective metrics like structure indicator (SI), Pitch Class Entropy, etc. We also compare two subword tokenization methods, BPE and Unigram, and observe that both methods lead to consistent improvements. Our study suggests that subword tokenization is a promising technique for symbolic music generation and may have broader implications for music composition, particularly in cases involving complex data such as multi-track songs.
## 1 Introduction
Subword tokenization is a widely used technique for text representation in natural language processing (NLP). Tokenization techniques based on the creation of subword tokens, such as byte pair encoding (BPE) [15], Unigram [16] and WordPiece [21], have become ubiquitous in various NLP tasks. Owing to their efficiency in modeling longer patterns, rather than simply characters, these subword tokenization techniques became extremely successful with Transformer models like BERT [2] and GPT [14], achieving state-of-the-art results in multiple text-based NLP applications. Works like [17] and [18] have further shown the universality of their application across languages, not just English.
Inspired by the success of Transformer models in text-NLP,1 recent years have witnessed a shift in research towards leveraging Transformers [23] in the domain of symbolic music generation [14, 15, 16]. This shift can be ascribed to the resemblance of symbolic music after musical tokenization2 to sequences of text tokens. With Transformers' extraordinary capability to model longer sequences, we are able to generate coherent, adequate pieces of music end-to-end [11, 14, 15].
Footnote 1: For the sake of clarity and distinction, we will henceforth refer to tasks related to the text in NLP as text-NLP.
Footnote 2: Musical tokenization refers to the tokenization of symbolic formats like MIDI or GuitarPro with music tokenization schemes such as REMI or MIDI-like.
Despite all the success of the predecessors in improving the state of music generation, these models are often accused of failing to entirely capture the repetitive structure and overall musical development of songs [13, 14].
Figure 1: Example of similar frequently co-occurring structures within the text (at character level) and musical representations (MIDI-like, at event level).
This becomes more apparent as the structure of the music becomes complex, such as in the case of polyphonic or multi-track music. A reasonable explanation for this could be the significantly longer sequence of symbolic music tokens, which limits the segment of a song visible to the Transformer model, hindering its understanding of the overall musical structure. As an analogy, this would be equivalent to representing a piece of text as a sequence of individual characters.
One possible solution to this problem is at the token level. The idea is to group individual events into subgroups, similar to subwords in text-NLP. There have been works like [14] and [15], which tried to group musical events, exploiting the properties of musical structure for certain MIDI-based datasets. However, most of these works are dependent on the musical structure of certain datasets involved, and hence cannot be extrapolated to other formats easily. So, to the best of our knowledge, no work has been done so far to assess the use of subword tokenization techniques like BPE and Unigram that utilize the co-occurrence-based structure of musical events, independent of the musical structure of the training dataset itself.
We are therefore motivated to study whether the use of subword tokenization can improve the overall musical structure of generated songs, while at the same time being independent of the dataset or format of symbolic music involved. In this work, we specifically investigate the usefulness of subword tokenization techniques like BPE [11] and Unigram [16] in modeling the task of symbolic music generation. Through our experiments, we try to answer the following two primary questions:
* Q1: Can we use subword tokenization techniques to improve the overall musical structure and musical quality of the generated examples?
* Q2: How do these findings generalize between two different subword tokenization techniques, namely BPE and Unigram?
In an effort to answer the above questions, our main contributions through this paper are as follows:
* Creation and implementation of an evaluation environment to objectively assess the usefulness of subword tokenization techniques, namely BPE and Unigram, to improve overall musical structure (in terms of quantitative metrics like Pitch Class Entropy, Structureness Indicators etc.), independent of data-specific factors like music file formats (e.g. MIDI or GuitarPro);
* Establishing the efficiency of subword tokenization across datasets, data formats and musical tokenization techniques, with our study involving melody-only, polyphonic and multi-track datasets;
* Demonstrating the usefulness of subword tokens towards facilitating longer pieces of generated music within the same inference time.
## 2 Background and Related Work
### Subword Tokenization
Tokenization has become a fundamental process in NLP which involves breaking the macroscopic units of text, such as 'words' or 'sentences', into smaller units called tokens. Since representing much longer textual data as individual characters is highly ineffective, the concept of 'subword tokenization' was introduced [13]. It involves breaking a word or sentence into sub-words (subsequences of characters with length \(\geq 1\)), which are then used to represent the data. Over the years, several subword tokenization techniques have been introduced, such as BPE [1, 15], Unigram [16], WordPiece [17] and SentencePiece [16]. Most of these techniques involve the selection of subwords based on their frequency within the training data.
Byte-Pair-Encoding (BPE) [11] is a data compression technique [1] which involves the replacement of the most frequent pair of bytes by a single, unused byte. In text-NLP, BPE is applied to subword units, rather than bytes, by finding the most frequent character n-grams in a text corpus and merging them into a single token. This allows the model to learn a more fine-grained representation of the language, especially for rare or out-of-vocabulary words. The result is a variable-length subword vocabulary, which balances the ability to capture complex language structure with reducing the risk of overfitting.
In contrast to BPE, Unigram [16] is another subword tokenization technique, which starts from a large vocabulary and gradually trims the vocabulary towards a smaller one. It involves a probabilistic Unigram language model, which decides whether to keep a subword or not based on its likelihood and loss function. Both these models have been extensively used in various NLP applications such as [23], [15] and [14]. Furthermore, papers like [1] and [1] have shown that these subword tokenization techniques can be applied effectively across languages. This motivates us to explore whether these results can be extended to symbolic music as a language, and what impact it could have on the overall musical structure of generated songs.
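To make the mechanics concrete, the short sketch below trains both a BPE and a Unigram tokenizer with the HuggingFace `tokenizers` library on a toy corpus; the corpus, vocabulary size and special token are illustrative choices only and do not correspond to the settings used in our experiments.

```python
# Minimal sketch: train BPE and Unigram subword tokenizers on a toy corpus with
# the HuggingFace `tokenizers` library. All settings here are illustrative only;
# for very small corpora the vocabulary size may need to be reduced.
from tokenizers import Tokenizer, models, trainers

corpus = ["abcabcabd", "abcabd", "abdabdabc"]  # toy "songs" written as symbol strings

# BPE: start from single symbols and iteratively merge the most frequent pairs.
bpe = Tokenizer(models.BPE(unk_token="[UNK]"))
bpe.train_from_iterator(corpus, trainers.BpeTrainer(vocab_size=20, special_tokens=["[UNK]"]))

# Unigram: start from a large candidate vocabulary and prune it using a
# probabilistic unigram language model.
uni = Tokenizer(models.Unigram())
uni.train_from_iterator(
    corpus, trainers.UnigramTrainer(vocab_size=20, special_tokens=["[UNK]"], unk_token="[UNK]")
)

# Both tokenizers now split a new "song" into multi-symbol subword units.
print(bpe.encode("abcabd").tokens)
print(uni.encode("abcabd").tokens)
```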
### Symbolic Music Generation
Symbolic Music Generation involves representation of music data from formats like MIDI or GuitarPro with symbols or sequences of events which are then used to train generative models. This has been an extensively researched area, where researchers are trying to come up with algorithms and models that can generate music at par with human performance [13]. Recently, the domain of symbolic music generation has witnessed steady improvements, mostly driven by advances in deep learning architectures. Overall, approaches towards symbolic music generation with deep learning can be aggregated according to the architecture used, namely Variational Autoencoder (VAEs) models [10], Generative Adversarial Networks (GANs) [12], and models that stem from natural language processing (NLP) field, such as
Recurrent Neural Networks (RNNs) [14], Long Short-Term Memory (LSTMs) [15], or Transformers [21]. As stated before, the Transformer architecture [21] is suitable for generating longer sequences when compared to previous approaches based on RNNs. Transformer-like sequence-to-sequence models are able to learn the dependencies and patterns among elements of a given sequence. The work by [10], the Music Transformer, pioneered the application of the self-attention mechanism to generate longer sequences of symbolic piano music. Other examples include MuseNet [17], in which a large-scale Transformer model, GPT-2, was used to generate symbolic multi-instrument music from different musical genres, the Pop Music Transformer [10], which uses Transformer-XL [11] as a backbone architecture and is able to generate pop piano symbolic music with a better rhythmic structure, the Compound Word Transformer [12], which presents novel and more efficient ways of tokenizing symbolic music for training purposes, and GTR-CTRL [14], which explores genre and instrument conditioning using special control tokens with the multi-track music dataset DadaGP, thereby controlling the overall musical structure of generated songs.
## 3 Methodology
In order to evaluate the usefulness of subword tokenization and to answer the questions mentioned in Section 1, we designed an empirical study where we objectively assess the performance of models with and without subword tokenization methods, while keeping all the other factors identical. For clarification, we here refer to the model without subword tokens as the 'base model', while the other models are named according to the respective subword tokenization technique used while modeling (e.g. 'BPE model' as the model that used BPE for subword tokenization).
### Datasets and Music Tokenization Schemes
As mentioned earlier, we used three datasets for our experiments, namely the Folk Songs dataset [15], the MAESTRO dataset [15], and the DadaGP dataset [14]. Furthermore, for the music tokenization procedure, we utilized REMI [10], MIDI-like [1] and DadaGP tokenization [14] respectively, primarily to demonstrate the compatibility of subword tokenization with different existing music tokenization approaches. Given that training models such as the Music Transformer [10] and Transformer-XL [11, 12] on such large datasets is very resource intensive, we restricted our study to subsets of each dataset, using **1000** samples from the Folk Songs dataset, **400** samples from the MAESTRO dataset and **2000** songs from the Rock genre in DadaGP. However, since the same settings were kept for the models with and without subword tokenization, this restriction does not impact any conclusions from our results.
### Data Processing
While there are several subword tokenization methods available, we focused our experiments on two of the most common ones, BPE and Unigram, primarily because of their ease of applicability to any dataset. To leverage subword tokenization with symbolic music, we first converted songs from their symbolic format (MIDI or GuitarPro) to musical events using music tokenization schemes such as REMI [10], after which we created a mapping from musical events to unicode symbols. We thus obtained a relatively easy-to-process corpus of symbols, in which we assume each song is a single entity, similar to SentencePiece [12].
Figure 2: Symbolic music processing pipeline with subword tokenization. Please note that, here, musical symbols refer to mapping of musical events to unicode symbols or characters, which are then processed with subword tokenizer.
We then trained the subword tokenization methods on the respective datasets to create a larger vocabulary of subword tokens. Furthermore, we used this vocabulary to process the musical tokens of all songs in a given dataset. A statistical summary of the original vocabulary size of musical tokens and subword tokens used in these experiments is given in Table 1.
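For illustration, the sketch below walks through the pipeline of Figure 2 for the BPE case. The event names, the choice of unicode offset and the vocabulary size are placeholders, and the musical events are assumed to have already been produced by a music tokenizer such as REMI (via MidiTok); they are not the exact settings of our experiments.

```python
# Illustrative sketch of the pipeline in Figure 2: map musical events to unicode
# symbols, train a subword tokenizer on the symbol corpus, and encode each song.
# Event names, the unicode offset and the vocab size are placeholder assumptions.
from tokenizers import Tokenizer, models, trainers

songs_as_events = [  # assumed output of a music tokenizer (e.g. REMI via MidiTok)
    ["Bar_None", "Position_1/16", "Pitch_60", "Velocity_24", "Duration_4",
     "Position_5/16", "Pitch_64", "Velocity_24", "Duration_4"],
    ["Bar_None", "Position_1/16", "Pitch_62", "Velocity_24", "Duration_4"],
]

# One unicode character per distinct musical event (offset chosen to avoid whitespace).
event_vocab = sorted({e for song in songs_as_events for e in song})
event2char = {e: chr(0x4E00 + i) for i, e in enumerate(event_vocab)}
char2event = {c: e for e, c in event2char.items()}
corpus = ["".join(event2char[e] for e in song) for song in songs_as_events]

tok = Tokenizer(models.BPE(unk_token="[UNK]"))
tok.train_from_iterator(corpus, trainers.BpeTrainer(vocab_size=300, special_tokens=["[UNK]"]))

encoded = tok.encode(corpus[0])                                # subword tokens fed to the Transformer
recovered = [char2event[c] for c in "".join(encoded.tokens)]   # mapped back to musical events
print(len(corpus[0]), "symbols ->", len(encoded.ids), "subword tokens")
```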
### Experimental Configuration
As stated before, in order to assess the independence of results in terms of improvements while using subword tokenization methods against factors like type of dataset, music tokenization procedure, and model, we experimented with three different combinations:
1. Folk Songs (monophonic, single instrument, MIDI format) + REMI tokenization + Music Transformer;
2. MAESTRO (polyphonic, single instrument, MIDI format) + MIDI-Like tokenization + Music Transformer;
3. DadaGP (polyphonic, multi-instrument, GuitarPro format) + DadaGP tokenization + Transformer-XL;
The key idea here is to experiment with subword tokenization schemes in different settings and configurations and to see if the results in terms of improvement hold true irrespective of the rest of the factors. Furthermore, the choice of monophonic, polyphonic, and multi-track music, allows us to evaluate the usefulness of subword tokenization for music generation tasks with different levels of complexities.
## 4 Evaluation Metrics
To objectively evaluate the results of applying subword tokenization to music generation, we grouped the metrics into two categories: musical quality and structure, and efficiency in representation.
### Musical Quality and Structure
**Structureness Indicator (_SI_):** Proposed in [23], the structureness indicator (SI) is designed to capture the structureness of music induced by repetitions of musical content. It is based on the fitness scape plot algorithm [13] applied to a self-similarity matrix (SSM) [21], where the _fitness_ value measures the degree of repetition of a given segment \((i,j)\). Similar to [23], in our experiments we used \(SI_{3}^{8}\), \(SI_{8}^{15}\) and \(SI_{15}\), where \(l\) and \(u\) in \(SI_{l}^{u}\) denote the lower and upper bounds (in seconds) of the segment lengths considered. We refer to these as _SI-short_, _SI-medium_, and _SI-long_ respectively, as they examine the short-, medium-, and long-term structureness of generated songs. It is important to note that a higher _SI_ value does not necessarily mean 'better' music, since the generated samples may then be too repetitive. Instead, we take a value to be better the closer it is to that of the real music (i.e. the testing corpus), which is the basis of our evaluation.
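In practice we obtain the scape plots and SI values from the MusDr implementation; purely as an illustration of how \(SI_{l}^{u}\) is read off once a fitness scape plot is available, assuming a matrix indexed by segment length (rows, in seconds) and segment center (columns, in seconds):

```python
# Illustrative sketch only: reading SI_l^u off a precomputed fitness scape plot.
# `scape[seg_len_sec, seg_center_sec]` is an assumed layout; in practice we use
# the MusDr implementation to compute both the scape plot and the SI values.
import numpy as np

def structureness_indicator(scape, low, high=None):
    """Largest fitness among segments whose length (in seconds) lies in [low, high]."""
    high = scape.shape[0] - 1 if high is None else min(high, scape.shape[0] - 1)
    return float(scape[low:high + 1, :].max())

scape = np.random.rand(60, 180)                  # placeholder scape plot for a ~3-minute song
si_short = structureness_indicator(scape, 3, 8)  # SI_3^8
si_medium = structureness_indicator(scape, 8, 15)  # SI_8^15
si_long = structureness_indicator(scape, 15)     # SI_15
```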
**Pitch Class Entropy (\(\mathcal{H}\)):** Also described in [23], this metric gives an insight into the variety of pitches, and thereby the tonality, used in a song. The main idea is to compute the entropy of a normalized 12-dimensional pitch class histogram (corresponding to the 12 pitch classes C, C#, D, ... B) and analyze how close the resulting values are to those of the real data.
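A minimal sketch of this computation (the note list is a placeholder; in practice we rely on the MusPy implementation):

```python
# Sketch of pitch class entropy: entropy of the normalized 12-bin pitch class
# histogram of a song's notes. The note list below is a placeholder example.
import numpy as np

def pitch_class_entropy(midi_pitches):
    hist = np.bincount(np.asarray(midi_pitches) % 12, minlength=12).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                              # convention: 0 * log(0) = 0
    return float(-(p * np.log2(p)).sum())

print(pitch_class_entropy([60, 62, 64, 65, 67, 69, 71]))  # one octave of C major: log2(7) ~ 2.81
```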
**Groove Pattern Similarity (\(\mathcal{GS}\)):** Another metric defined in [23], groove pattern similarity measures the rhythmic consistency within a song. It computes the pairwise similarity of each bar's groove vector **g** (indicating the positions in a bar with at least one note onset) as \(1-HammingDist(g_{a},g_{b})\), across all pairs \(g_{a}\) and \(g_{b}\). As with the previous two metrics, the closer the groove similarity of the generated songs is to that of the real songs, the better.
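A corresponding sketch for the groove metric, assuming the Hamming distance is normalized by the length of the groove vectors (the toy bars below are placeholders; in practice we use the MusDr implementation):

```python
# Sketch of groove pattern similarity: mean pairwise similarity of per-bar binary
# onset vectors, with the Hamming distance normalized by the vector length
# (our reading of the metric; the MusDr implementation is used in practice).
from itertools import combinations
import numpy as np

def groove_similarity(grooves):
    """`grooves`: array of shape (num_bars, positions_per_bar) with 0/1 onset marks."""
    grooves = np.asarray(grooves)
    sims = [1.0 - np.mean(a != b) for a, b in combinations(grooves, 2)]
    return float(np.mean(sims))

bars = np.array([[1, 0, 1, 0], [1, 0, 1, 0], [1, 1, 0, 0]])  # toy bars with 4 onset positions
print(groove_similarity(bars))  # identical bars contribute a similarity of 1.0
```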
### Efficiency in Representation
**Average Number of Tokens per Song:** With this metric we measure the average number of tokens per song after data processing, i.e. the tokens that are fed into a Transformer model for training. A more efficient representation is one with a smaller average number of tokens per song.
**Average Number of Tokens per Song for the Same Inference Time:** For this metric, on a given dataset, we generate an equal number of tokens (i.e. the same inference budget) for each of the three models we experimented with. After converting the output back to the original musical tokens, we compare the average number of tokens generated in terms of the original tokens. This metric helps us assess how efficient a representation is at generating longer music within the same inference time; hence, for this particular metric, the larger the value, the better the representation of the data.
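Concretely, measuring this only requires mapping the generated subword tokens back to the original musical tokens and counting them. A sketch is given below, where `tok` and `char2event` are assumed to come from the data-processing sketch above and `generated_ids` from the trained model:

```python
# Sketch: count how many original musical tokens a fixed budget of generated
# subword tokens expands to. `tok` and `char2event` are assumed to come from the
# data-processing sketch above, and `generated_ids` from the trained model.
def num_original_tokens(tok, char2event, generated_ids):
    symbols = "".join(tok.id_to_token(i) for i in generated_ids)  # subword ids -> symbol string
    return sum(1 for c in symbols if c in char2event)             # one symbol per musical event

# Average over a batch of generations with the same subword-token budget:
# avg = sum(num_original_tokens(tok, char2event, ids) for ids in batch) / len(batch)
```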
### Other Metrics
**NLL Loss:** Negative log-likelihood is a common metric, often used to measure how well a model fits the training dataset [13, 14]. While a smaller NLL generally indicates a better-fitted model, in our case it is not a good metric for comparing performance across models, since the models with subword tokenization have a larger number of parameters. Furthermore, a lower NLL loss does not necessarily mean better generation quality. We still report it to give an idea of how well the BPE or Unigram model fits relative to the base model.
## 5 Results
### Experimental Settings
The experiments were conducted using HuggingFace [23] and PyTorch implementations of Music Transformer and Transformer-XL.
| Music Tokenization | Original | BPE | Unigram |
| --- | --- | --- | --- |
| REMI | 227 | 300 | 300 |
| MIDI-Like | 331 | 1000 | 1000 |
| DadaGP | 2104 | 5000 | 5000 |

Table 1: Vocabulary size of original tokens (using the respective musical tokenization) and after processing with BPE and Unigram.
For musical tokenization, we used the MidiTok [11] library, the output of which we further processed for subword tokenization using HuggingFace's tokenizers library. For the first two parts of our experiment (i.e. the Folk Songs and MAESTRO datasets with the Music Transformer model), we used a 3-layer Transformer architecture with an embedding dimension of 256, which we trained on the Google Colab Free Tier with a P100 16GB GPU machine. For the last part (i.e. DadaGP with Transformer-XL), we used the same architecture as in [10], training the model on a 24GB Quadro RTX 6000 GPU. For the evaluation, we used the implementations of the metrics described in the previous section provided by MusPy [12] and MusDr. Finally, we evaluated the model performance by generating 20 songs for each model configuration. Samples from the generation can be accessed here.
Footnote 3: [https://github.com/jason9693/MusicTransformer-pytorch](https://github.com/jason9693/MusicTransformer-pytorch)
Footnote 4: [https://github.com/YatingMusic/compound-word-transformer](https://github.com/YatingMusic/compound-word-transformer)
Footnote 5: [https://huggingface.co/docs/tokenizers/index](https://huggingface.co/docs/tokenizers/index)
Footnote 6: [https://github.com/slSeanWU/MusDr](https://github.com/slSeanWU/MusDr)
### Objective Evaluation
Results from the three distinct experiments can be seen in tables 2, 3 and 4.
As can be observed from the tables, the use of subword tokenization methods outperforms the base models by a significant margin in both cases (i.e. BPE and Unigram), in almost all the configurations, irrespective of the model, dataset, data format, or music tokenization scheme used. The values of SI-short, SI-medium, and SI-long, which closely resemble real song data, indicate an overall improvement in the musical structure of songs using subword tokenization. Additionally, longer repetitive structures exhibit more significant improvements compared to shorter ones. The results suggest that subword tokenization techniques have the potential to model the musical structure better and can leverage the latent co-occurring structure within musical tokens to improve the quality of generated music.
That being said, the relatively smaller improvements on the Folk Songs dataset suggest a correlation between the complexity of the dataset being modeled and the performance gain, with more room for improvement on datasets of higher complexity such as MAESTRO and DadaGP. In addition, the negligible SI improvements with Unigram on this dataset may be because the subword tokens produced by Unigram are harder for the model to learn, so that the model effectively collapses to simpler tokens (similar to the base model, which also works on simple tokens). This could again be attributed to the relative structural simplicity and the limited scope for co-occurrence when the dataset is melody-only. However, when we move to more complex datasets, co-occurring musical structures such as chords improve the feasibility of using subword tokenization techniques.
Furthermore, the figures in Tables 5 and 6 demonstrate the improved efficiency of representation with the use of subword tokenization. This adds to the advantages over the base model, as it allows longer sequences to be modeled within the same inference time. These results become particularly important in the case of complex symbolic music datasets such as DadaGP. This dataset, being complex in representation, requires much longer sequences to be generated even for a short song segment; using subword tokenization with such datasets can therefore shorten the sequences, thereby allowing longer music to be generated.
| Metrics | Real | Original | BPE | Unigram |
| --- | --- | --- | --- | --- |
| SI-short | 0.4637 | 0.2707 | **0.3376** | 0.2712 |
| SI-medium | 0.4959 | 0.2759 | **0.3379** | 0.2719 |
| SI-long | 0.4543 | 0.2583 | **0.3340** | 0.2451 |
| \(\mathcal{H}\) | 2.6011 | 2.6924 | **2.6754** | 2.6842 |
| \(\mathcal{GS}\) | 0.9987 | 0.9984 | **0.9986** | 0.9985 |

Table 2: Results with the Folk Songs dataset.
| Metrics | Real | Original | BPE | Unigram |
| --- | --- | --- | --- | --- |
| SI-short | 0.3228 | 0.5119 | 0.3880 | **0.3483** |
| SI-medium | 0.3066 | 0.4663 | 0.3334 | **0.2828** |
| SI-long | 0.2343 | 0.4173 | 0.3031 | **0.2205** |
| \(\mathcal{H}\) | 3.0555 | 2.5152 | 2.8705 | **2.9297** |
| \(\mathcal{GS}\) | 0.9917 | 0.9971 | 0.9942 | **0.9936** |

Table 3: Results with the MAESTRO songs dataset.
| Datasets | Base | BPE | Unigram |
| --- | --- | --- | --- |
| FOLK | 500 | **1307** | 994 |
| MAESTRO | 1000 | **1570** | 1534 |
| DadaGP | 1000 | 1437 | **1828** |

Table 6: Average number of tokens generated for the same inference time on each dataset (i.e. for the same time taken to generate \(x\) base-model tokens without subword tokenization, the corresponding numbers of tokens generated by the BPE and Unigram models, counted in terms of the original musical tokens).
| Datasets | Base | BPE | Unigram |
| --- | --- | --- | --- |
| FOLK | 796 | **359** | 436 |
| MAESTRO | 12925 | **8831** | 8618 |
| DadaGP | 5332 | **2875** | 2954 |

Table 5: Average number of tokens per song in each representation.
A comparison between the BPE and Unigram models throughout the results suggests that the improvements hold true for subword tokenization in general, leveraging the frequent co-occurrence of musical events within songs. Although there are some localized differences in which method is more effective for structural modeling or musical quality, with one working better than the other in certain cases or datasets, overall both perform better than the base model without subword tokenization. This answers our second question of how the results of our study generalize across two different subword tokenization methodologies. Furthermore, this generalization is independent of the constraints of the musical structure of any particular dataset.
Another interesting aspect of our results is the improvement observed in the experiments involving Transformer-XL, i.e. beyond fixed-length modeling. While the main purpose of using this model for music generation, as proposed in [11], is to model musical structure beyond a fixed input length, it is intriguing to see improvements with subword tokenization even in this case, particularly in terms of long-term structureness. This suggests that, while the dedicated architecture of Transformer-XL is capable of modeling shorter musical structure well, there is still a loss of information as the model propagates through the windows of the Transformer-XL input, leading to weaker long-term repetition. By reducing the sequence length with subword tokenization, this loss can be mitigated, allowing an improved representation of the musical structure that is closer to the real data we are trying to model.
Lastly, it is important to note the NLL loss of the models we trained, given in Table 7. While the similar loss values for the base, BPE and Unigram models suggest that all three fit the dataset almost equally well, the differences in the objective evaluation of the generated musical quality indicate that not all the information is captured by the NLL loss. This supports our earlier point (in Section 4.3) that NLL is not an appropriate metric for drawing conclusive comparisons between the models. However, it can still provide an overview of how well a model fits the dataset, which with subword tokenization is almost the same as without it.
## 6 Discussion
In order to provide some qualitative insights into the generated content, we here present an individual subjective analysis of some of the results. Despite the good results in terms of overall structureness presented in Table 3, we noticed that on some occasions the model resorts to rests (silence) for a few measures. This can of course be a desirable outcome at times, but as is observable in Figure 3, the rests from measure 24 to 28 seem to detract from the flow of the preceding musical idea.
Furthermore, despite the frequentist approach of subword tokenization procedures, which tends to emphasize combinations of words/subwords that are more common in a given corpus, it is interesting to observe that, for the particular case of guitar-focused symbolic music generation with the DadaGP dataset, tokens concerning guitar expressivity techniques are preserved despite their lower frequency compared to note tokens.
As we can see from Figure 4, guitar-specific expressivity techniques such as hammer-ons and pull-offs, bends, slides, and vibrato, are adequately used.
| Dataset | Model | Train Time | Base | BPE | Unigram |
| --- | --- | --- | --- | --- | --- |
| FOLK | MT | \(\sim\)20 mins | 0.08 | 0.13 | 0.11 |
| MAESTRO | MT | \(\sim\)3 hrs | 2.58 | 3.16 | 3.62 |
| DadaGP | Tr-XL | \(\sim\)3 days | 0.10 | 0.11 | 0.11 |

Table 7: Negative log-likelihood loss for the models (MT \(\rightarrow\) Music Transformer, Tr-XL \(\rightarrow\) Transformer-XL). As a design choice, the training time was set based on the complexity of the dataset and model in use, and kept the same across the base, BPE and Unigram models for each experiment.
Figure 4: GuitarPro screenshot of the first five measures from _sample 18_ from the DadaGP BPE experiment. Only the distorted guitar is visible.
Figure 3: MuseScore screenshot of measures 17 to 34 of _sample 19_ from the MAESTRO BPE experiment.
In order to visually support the arguments made in Section 5.2 regarding the improved 'structureness' of the examples generated by the subword tokenization models, in Figure 5 we can observe that the model is able to call back to the _motifs_ played in the first two measures (i.e. the same pattern repeating in measures seven and eight). From measure 5, it is also interesting to observe that the model was able to refer to the same pattern introduced in the first two measures, while also incorporating a few connecting notes in its first beats.
A similar observation about improvements in structural representation can be made if we analyze the fitness scape plots of the generated songs against those of real songs. An example is shown in Figure 6 for MAESTRO samples. Contours in the yellowish/brownish regions of the scape plots represent repetitive structure, with the lower part of a plot indicating short-term and the higher part indicating long-term repetitive structure. A better model is one that generates songs whose scape plots are similar to those of real songs, since the musical structure of the training dataset is then captured more accurately. As we can observe from the scape plots in Figure 6, there is much less long-term repetitive structure in the real data. Both BPE and Unigram have scape plots more similar to the real songs than the base model, with fewer yellowish regions in the upper part of the plots. This suggests that the BPE and Unigram models capture the overall musical structure of real songs better than the base model, with Unigram being the best among the three. This observation is consistent with Table 3, suggesting improvements with the use of subword tokenization methods.
## 7 Conclusion and Future Work
In this paper, we conducted an empirical study to assess the usefulness of subword tokenization methods in symbolic music generation. We objectively studied the change in performance of music generation models with the use of subword tokenization techniques such as BPE and Unigram, answering the two fundamental questions posed in Section 1. Our study shows that BPE and Unigram, as data compression techniques, are not only able to represent the data more efficiently but also improve the overall structure of the music generated by models incorporating subword tokenization. Furthermore, this result (and the trend of improvement) holds irrespective of the model, dataset, or musical tokenization used. Overall, from our study, we can conclude that the inclusion of subword tokenization in symbolic music generation has a positive impact on the performance of the model and the structure of the generated music, while allowing longer music to be generated in the same time.
From this point onwards, future work can be multifaceted. One direction could be to explore the impact of vocabulary size on model performance, i.e. how the vocabulary-size-performance trade-off behaves. Similar to text-NLP, where the vocabulary size is usually in the range of 50k, we could increase the vocabulary size here as well to see how performance varies. Another interesting direction could be to explore whether knowledge of music theory can be leveraged in coordination with BPE- or Unigram-type techniques to develop a hybrid form of subword tokenization that combines the concurrency of musical events (such as in a musical BPE) with the adjacency of tokens. In all, there is a large scope for future work beyond this study.
## Ethical Statement
The training of large language models is an intensive computational process that requires vast amounts of energy, resulting in a significant carbon footprint. Cloud providers are increasingly offering services for training and hosting these models, but not all of them have committed to being carbon neutral, meaning they may contribute to greenhouse gas emissions. This is an important consideration when selecting a cloud provider for training models, as it has both environmental and ethical implications.
Figure 5: GuitarPro screenshot of the first nine measures from _sample 10_ from the DadaGP Unigram experiment. Only the distorted guitar is visible.
Figure 6: Fitness scape plots for two generated samples from each of the Base, BPE and Unigram models, along with real songs, for the MAESTRO dataset. The x-axis represents the segment center (in sec) and the y-axis represents the segment length (in sec) of any repetitive structure in the music.
To minimize the impact of training large language models, one solution is to release pre-trained models. This approach enables others to use these models without having to undergo the energy-intensive process of training them from scratch. By releasing pre-trained models, we aim to reduce the carbon footprint associated with training and make it easier for others to use these models in a more sustainable manner. Additionally, releasing pre-trained models may also encourage collaboration and innovation in the field, further advancing the development of AI technology.
## Acknowledgement
We would like to express our sincere gratitude to Dr. Yi-Hsuan Yang, for his supervision and guidance throughout this research project. Dr. Yang's expert advice and feedback were invaluable in shaping the direction and outcome of this study. His constant support and motivation were crucial in keeping us focused and inspired. We would also like to thank Dr. Sourav Mukhopadhyay, IIT Kharagpur for giving us an opportunity to work on this project as Adarsh's Master's Thesis Project.
|